Hacker News new | past | comments | ask | show | jobs | submit | taftster's comments login

Are mobile apps the equivalent of that experience today? Cross-platform SDKs for mobile devices have been around for awhile. This might be the closest equivalent to a Flash application?


I wonder if it's only downhill once you have reached your own point of enlightenment. For me, that wasn't late 00's, but more like late 90's and early 00's. Maybe that was my coming of age.

To me, it's been downhill pretty much before it got started. I'm always feeling "behind" having missed the fun at any stage.


In the late 90's, I attended a talk by Ted Nelson, the guy who coined the term "hypertext." To him, things started going downhill with HTML and the URL. The gist of his complaint was that he wanted links to be bi-directional.

In the 80's, telecom operators were complaining that TCP/IP and packet switching were a regression from circuit switching.

So it looks like the internet has progressed through perpetual regression.


The internet is 30-40 years old, and has brought an entirely new paradigm to the world. It has abolished distances, disproportionately increasing the reach of a few.

I'd love to share your optimism that things will keep improving in the long run, but I don't see what you're basing that on.


I like the late 00s/10s because it represents a particular level of refinement and balance of functionality in contrast to earlier eras. As much as I enjoyed the web of the 90s and early 00s, it was still quite nascent and in some ways a bit too basic for my taste.


90s web was fun in a wild west kind of way. Sites were small, but the net in general was slow.

00-10 had a lot of forums, which I remember being very fun. At the same time it brought in the age of popups and ads everywhere.

10+ brought in the age of large social media and the feeling everything was trying to scam you. A lot of the forums that felt special and interesting started disappearing for multiple reasons, but mostly their userbase had moved to FB or something else huge. Then those large sites started moderating anything interesting away.


You're speaking my language here. I think this is exactly what happens when a company has cornered the market. We have completely stagnated, as you say, for at least a decade, maybe more.

Lots of innovation has happened, don't get me wrong. And maybe the web browser as we know it is "mature" and therefore lacks the need to evolve.

But I'd argue (as I did in a sibling comment) that maybe this drying up of funds could pave the way for new innovation. The web, the creative parts of the web, and definitely the internet as well, didn't originally have monster budgets driving their innovation. There was some funding (DARPA, et al.), but not like today.


"Almost at parity" in feature set, maybe.

But practically? How many sites actually offer an innovative and/or mobile-first version of their website anymore?

There was definitely a time when we had websites delivering various layouts based on the viewport size of the user agent. CSS media queries, flexible layouts, etc. were all very important innovations for a short-lived period of time.
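The pattern being described looked roughly like this (selectors and breakpoint are illustrative, not from any particular site):

```css
/* Default: single-column, mobile-first layout */
.content { display: flex; flex-direction: column; }

/* Wider viewports get a flexible multi-column layout */
@media (min-width: 768px) {
  .content { flex-direction: row; }
  .sidebar { flex: 1; }
  .main    { flex: 3; }
}
```

One responsive stylesheet served every device, instead of a separate mobile site or native app.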

Now, every serious web presence has moved on to offering their own mobile app, pushing users that direction. The browser was stubborn and erred on the side of privacy. So it didn't quite offer all the integrated (ahem, intrusive) means to interact with the user's device in order to bleed every penny and every bit of data mined from your usage and behaviors.

So I don't see anything lovely in the current situation at all. The traditional web -- you know the one where you surf with a web browser to discover the world -- has been dying for quite some time. It might even be dead and we just don't realize it yet.

We don't need web browser parity with mobile apps. We just need the web to be what the web is good at. It's a lost cause thinking that the web browser will ever integrate with a portable device quite the same as a native app. Those days are gone.


>Now, every serious web presence has moved on to offering their own mobile app, pushing users that direction.

If Apple didn't do everything in their power to slow down the adoption of PWAs you might have seen it take off by now. They still won't allow you to easily install a PWA to your homescreen, you basically have to be a power user (a reader here, and maybe not even then) to know about it.
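For context, the PWA install path hinges on a web app manifest the browser can pick up; a minimal sketch (field values are illustrative):

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Chromium-based browsers can surface an install prompt from this; on iOS Safari the user has to find "Add to Home Screen" buried in the share sheet themselves, which is the power-user hurdle described above.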


Going back to gopher as a starting point maybe. Let's innovate from there.



On the surface, it's easy to agree with your opinion.

But then I think: what would it have been like without this investment? Maybe browsers would have stayed buggy and we'd have an internet with much more diversity in protocols. The internet of today is monotone and subservient to its web master.

I wonder if innovation stagnated because of the extensive (ab)use of the web. Granted, early on, Google's contributions have been more than just pioneering. Both on the backend and the frontend, we all owe them a pint.

But recently, it feels it's just been self-serving. And the monopolistic overtones plus the loss of "do no evil" has arguably hurt us in recent years.

That being said, if the web browser isn't funded so deeply, maybe this is a good thing? Maybe that will give birth to fresh cycles again. I think of it like rotating the crop in a corn field to let the soil regenerate. It could usher in new innovations.


I'm not so sure about that; I bet we'd probably still have Flash, Java Applets, Silverlight, and ActiveX controls. The web was a mess before. The recent capture by big platforms is more about taking you out of the web, into their superapps.

edit: On a second thought, as a dev now, I look at React, Angular, all these mega frameworks... and wonder if we're just patching over problems big tech baked into the modern web. First point still stands tho.


Oh, that’s definitely revisionism. The iPhone killed Flash, and ActiveX (outside of South Korea / Silverlight) and applets were already dead at that point.


Yeah, true. I forgot that, even Steve's letter on why they wouldn't put flash on the iPhone.

That was the final blow, yup. But the web was still a clunky mess of plugins, broken standards, and browser-specific hacks.

Google pushed to make the web better. And through Chrome they helped bring WebKit to multiple platforms: I still remember I couldn't even get rounded corners or nice typography support across platforms, only in Safari.

It wasn’t until Chrome took off that the rest started paying attention.


The iPhone was undoubtedly the deciding factor, I agree - but interestingly, Netflix used to rely on Silverlight for DRM [1] until Google introduced video DRM to Chrome in ~2013 [2]. iPhone Netflix users had to use an app.

[1] https://www.engadget.com/2008-10-26-netflix-finally-brings-w... [2] https://netflixtechblog.com/html5-video-at-netflix-721d1f143...


Why was ActiveX dead? Why didn't it succeed when MS launched it at a time IE had 90%+ user share?


Because it required Windows at a time when iPhones were storming the market.

Also the ActiveX security model was pretty horrific.


I mean, I don't disagree with you. I think we needed Google and needed their investment to push forward past Applets, ActiveX, and Flash.

But now, we're stagnating again. So maybe drying up those funds will be part of the cure.


I think Apple also forced the world to move on from Flash. The iPhone didn't have Flash.


All the other plugins were dying on their own (for whatever reason). But Flash was a stubborn virus, to be sure.

Yes, it definitely took the big slap from Apple to kill Flash once and for all.


The web browser is an ugly mongrel that in a "sane" world would never exist. The only reason it is a platform is the immense wealth funneled into duct-taping and reinforcing it to hold.

It’s basically a Statue of Liberty made of duct tape and chewing gum, then reinforced with Formula 1-level engineering and novel materials research.

The building blocks and lessons learned could be used for something novel (nope, not gonna happen, it’s permanent now). WASM, JSON, the Skia renderer, the pretty awesome V8 virtual machine, etc. ... all of those are pretty neat.

I guess the key thing is what is the value of browser now?

It’s the UI to a bazillion networked business and government systems, productivity tools, etc.

I would argue the sticky moat here is not the web interface, though, but the data and the familiar usage patterns. _Theoretically_ the UX is portable to any system with a vector graphics renderer, and the data itself should be (a long stretch, right?) independent of the client UI.


The winning (marketing-wise) systems couldn't get sandboxing to work. You couldn't simply download software and run it.


Sure, but back then people were used to downloading and running random exe files (even from really untrusted sources such as torrent sites, eMule, etc.)


imho, there are no trusted sources nor should we need them. One can have a good track record for as long as it lasts.


I can see what you’re getting at but I think the monotonous, sterilized nature of the Web is really business driven, not technology driven.


The author mentioned this exact problem. Quoting:

> There was a problem that I noticed right away, though: this text was from the GPL v3, not the GPL v2. In my original request I had never mentioned the GPL version I was asking about.

> The original license notice makes no mention of GPL version either. Should the fact that the license notice contained an address have been enough metadata or a clue, that I was actually requesting the GPL v2 license? Or should I have mentioned that I was seeking the GPLv2 license?

This is seemingly a problem with the GPL text itself, in that it doesn't mention which license version to request when you mail the FSF.


A Sid Caesar skit showed doughboys celebrating, and one shouted "World War 1 is over!"... when they made GPLv2, maybe they didn't anticipate creating future versions (although yeah, if you're already on v2, you should foresee that).


There is a GPL v1, and it may have been so numbered at initial publication:

<https://www.gnu.org/licenses/old-licenses/gpl-1.0.html>


Well, to be fair, that's not the full license notice; that's only the last paragraph. There should be a couple more paragraphs above that one, and the first paragraph states the version of the GPL in use. That said, I think the license notice is also just a suggested one; it's not required that you use that _exact_ text.


Assuming that win.com is able to stand alone and doesn't require win.bat, then yes, Bill can license these two components separately, one under the GPL and the other proprietary.

The Free Software Foundation (FSF) describes this copyleft aspect of the GPL in terms of "derivative works" associated with GPL-licensed software. When two components are related to each other in a derivative way, then the GPL says that the derivative must likewise be licensed accordingly.

So in this example, does win.bat simply execute commands to get win.com started? Is win.bat a glorified shell script wrapper? If so, then win.com would NOT be derived from win.bat. The cart follows the horse. But if win.bat instead exposed some symbols or other binary API features that win.com was coupled to and depended on, then you could rightly argue that win.com would be a derivative of win.bat.
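To illustrate the wrapper case (file names from the example above; the contents are hypothetical):

```bat
@echo off
rem win.bat: a glorified wrapper. It only sets up the environment and
rem launches win.com; the two share no symbols or binary interfaces.
set WIN_HOME=%~dp0
"%WIN_HOME%win.com" %*
```

A wrapper like this just invokes the other program as a separate process, which is the kind of arms-length relationship that argues against win.com being a derivative work of win.bat.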

As a more practical example, if a database is licensed under the GPL, clients that connect to the database using the socket interface do not constitute a derived work. Likewise, components in a micro-service architecture do not necessarily all need to be licensed under the GPL when a single component is.

Pluggable architectures are possible with the GPL. And of course, your interpretation of what exactly that means is subjective and requires case law to help understand.

[edit]

And to reinforce what the parent of yours is saying, the author in the original example can do whatever they want with the software, since they own the copyright for both the GPL and proprietary components.

The GPL is simply a license for non-copyright holders. It allows others to use a piece of software without having to establish any additional agreement with the owner. I.e., it's the means to convey how others can use the software, and it does not constrain the owners/authors of the software. Other licensing options may be available, if the copyright holder allows.


The GP says:

>>> At this point I'd include some of the code as binary blobs and "pay me for the source!". In addition to GPL!

So, the proposal is to hide the source code, and IIUC, if someone does this, the whole project cannot be released as GPL.


That's incorrect. As the original author of the work, you can release the project under whatever license you choose. Doing so may make it impossible for someone else to meaningfully comply with it, but that's their problem, not yours. It doesn't stop you from choosing the GPL, even if it's a bizarre option for that particular project.


Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.

Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.


I don't think asking "what's wrong with my code" hurts the learning process. In fact, I would argue it helps it. I don't think you learn when you have reached your frustration point and you just want the dang assignment completed. But before reaching that point, having a tutor or assistant you could ask, "hey, I'm just not seeing my mistake, do you have ideas?" goes a long way toward fostering learning. ChatGPT, used in this way, can be extremely valuable and can definitely unlock learning in new ways which we probably haven't even seen yet.

That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.


No I think the struggle is essential. If you can just ask a tutor (real or electronic) what is wrong with your code, you stop thinking and become dependent on that. Learning to think your way through a roadblock that seems like a showstopper is huge.

It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weights.


If I were to go and try to bench 300lbs, I would absolutely need a spotter to rescue me. Taking on more weight than I can possibly achieve is a setup for failure.

Sure, I should probably practice benching 150lbs. That would be a good challenge for me and I would benefit from that experience. But 300lbs would crush me.


Sadly, ChatGPT is like a spotter that takes over at the smallest hint of struggle. Yes, you are not going to get crushed, but you won't get any workout done either.

You really want to start with a smaller weight, and increase it in steps as you progress. You know, like a class or something. And when you do those exercises, you really want to be lifting those weights yourself, and not rely on the spotter for every rep.


We're stretching the metaphor here. I know, kind of obnoxious.

If I have accidentally lifted too much weight, I want a spotter that can immediately give me relief. But yes, you're right. If I am always getting a spot, then I'm not really lifting my own weight and indeed not making any gains.

I think the question was, "I'm stuck on this code, and I don't see an obvious answer." Now, the lazy student is going to ask for help prematurely. But that doesn't restrict ChatGPT's use to only the lazy.

If I'm stuck and I'm asking for insight, I think it's brilliant that ChatGPT can act as a spotter and give some immediate relief. No different than asking for a tutor. Yes maybe ChatGPT gives away the whole answer when all you needed is a hint. That's the difference between pure human intelligence and just the glorified search engine that is AI.

And quite probably, this could be a really awesome way in which AI learning models could evolve in the context of education. Maybe ChatGPT doesn't give you the whole answer, instead it can just give you the hint you need to consider moving forward.

Microsoft put out a demo/video of a grad student using Copilot in very much this way. Basically the student was asking questions and Copilot was giving answers that were in the frame of "did you think about this approach?" or "consider that there are other possibilities", etc. Granted, mostly a marketing vibe from MSFT, but this really demonstrates a vision for using LLMs as a means for true learning, not just spoiling the answer.


Sure, this is possible. Also Chegg is an "innovative learning tool", not a way to cheat.

I agree that it's not that different from asking a tutor, though, assuming it's a personal tutor whom you are paying so they won't ever refuse to answer. I've never had access to someone like that, but I can totally believe that if I did, I would graduate without learning much.

Back to ChatGPT: during my college times I've had plenty of times when I was really struggling, I remember feeling extremely frustrated when my projects would not work, and spending long hours in the labs. I was able to solve this myself, without any outside help, be it tutors or AI - and I think this was the most important part of my education, probably at least as important as all the lectures I went to. As they say, "no pain, no gain".

That said, our discussion is kinda useless; it's not like we can convince college students to stop using AI. The bad colleges will pass everyone (this already happens), and the good colleges will adapt (probably by assigning less weight to homework and more weight to in-class exams). Students will have another reason to fail a class: in addition to the classic "I spent the whole semester partying/playing computer games instead of studying," they can also say "I never opened the books and had ChatGPT do all my assignments for me, why am I failing the tests?"

