I hope I get to see the next thing after browser applications in my lifetime. I fully understand the advantages, but it has grown so fast and so wild I think it has to eventually fall down by its own weight and complexity despite its immense success.
It's not that the presentation layer needs conditionals; it's that the layers under it have grown full of hacks that need more hacks to work around them, because the web was designed for documents, not programs.
If the web had been designed for applications in the first place, the presentation layer probably wouldn't need conditionals at all.
If life has adapted to the crushing pressure of the deep ocean, I have hopes that it can adapt to not-so-crushing gravity. I'm sure a lot of our current life could adapt if our gravity was doubled. I'd feel sorry for birds, though.
Quick googling tells me that trees move water internally by capillarity and by suction caused by leaf evaporation, both passive processes.
This puts limits on how high the column of water can be raised, yet at 1g we can have monstrous trees like sequoias, so maybe many kinds of trees would die, but the survivors would just grow shorter.
Abyssal creatures, who knows how much pressure they can adapt to? They have populated our oceans as deep as they can go; the planet has nothing stronger to challenge them.
you're focusing on sea-dwelling creatures. what about land-based? would animals get as large? would more calories need to be consumed for the extra effort of moving around in whatever >1g environment there is? some of these planets are between 1.9x and 10x the size of earth. working twice as hard every day for everything would be one thing, but 10x the effort?
what would the atmospheric pressure be at >1g? what effect would that have as well? not only would you be heavier, but you'd have to work harder to breathe.
again, lots of questions about these differences that make it a lot more complicated than the right amino acids floating around in space.
Woah, I'm not focusing on anything specific; I just tried to address the two observations in your previous comment. If you keep adding more we'll never end this thread.
It's not like I am a SuperEarther cultist or something, I just think life can adapt to a wider range of gravities. If you think about it, it's amazing that Earth life can withstand constant microgravity despite no evolutionary pressure in that direction. If microgravity is survivable, why not some degree of macrogravity?
RSS has been traditionally used like an email client rather than a streaming service. You don't read every email, some go straight to spam or the trash bin. RSS is a time saver, not a time waster.
I can see that some feeds, like serializations or low-volume/high-quality content, are worth consuming in their entirety, but the 80/20 principle seems to apply to RSS feeds in general too. Especially once your RSS list reaches double digits.
A bit weird to make blanket statements about a tool like that. Some people read all emails, some don’t. Just like some people only subscribe to people’s personal blogs and want to read all of them.
Some might want to use it as a news aggregator and quickly browse through headlines. There's no right or wrong usage of an RSS reader, and no "traditional usage".
As RSS was becoming widespread around 2010, this is what most people said they were using it for, at least in my experience. It was a time when we still didn't have great spam filters, and people were used to receiving and discarding many emails without reading them.
RSS was also frequently compared to discussion forums, where you also want to efficiently ignore non-relevant content. RSS gave us the power to ignore the budding information overload.
A common setup was to have a folder hierarchy similar to email. Blogs were in folders organized by topic using whatever approach you felt best. You'd then dip into parts of the hierarchy. There often wasn't an aggregated feed that you could use but you could see a list of all items per blog. Each blog would then be highlighted or show a count when there was new content.
I said blog instead of feed because social networks had a focus on the single scrolling feed as a list of content aggregated from different authors. Some RSS clients embraced this to a degree, but it didn't start out that way. Twitter was the first social network I really used in 2007 to follow bloggers I subscribed to, and it took a while to adjust to this firehose of interspersed content. That wasn't an uncommon sentiment from devs.
So what? It's not a democratic vote to decide what way is the right/wrong way to use RSS. Do as you please, it's a simple usable protocol that basically allows for different use cases.
I was a FeedDemon user. There are some videos of the experience, which was much closer to a Windows email client than Google Reader was: https://www.youtube.com/watch?v=MIz5u9T94K0. Google Reader was late-stage RSS for me, but it brought some of the benefits of having content download and aggregation done server-side, so the cost of adding new feeds was shared.
I just scroll over it. Only the newest 5000 items are preserved; by default I allow a maximum of 4 items per feed (some feeds more, some less), titles must be at least 3 words long, and I delete items whose title contains any of the badwords.
Now that I think of it, the mistake most people make is not having enough subscriptions. Somewhere around 1000 feeds the experience changes dramatically. You can afford to be less interested in things, as there is plenty more.
I think I find about one decent article per day for every 10,000 subs.
Disposing of crappy feeds isn't a lot of work and a word filter works really well because people want to stuff descriptive words into titles.
Business Insider amused me. They are so good at writing good titles that practically none of their countless worthless publications make it through my word filter. What remains would have one think it is a reasonable website.
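For what it's worth, the filtering rules I described can be sketched roughly like this (a hypothetical illustration only; the names, the badword list, and the `Item` structure are mine, not from any actual RSS reader):

```python
from collections import defaultdict
from dataclasses import dataclass

MAX_ITEMS = 5000        # only the newest 5000 items are preserved
MAX_PER_FEED = 4        # default per-feed cap (tunable per feed)
MIN_TITLE_WORDS = 3     # titles must be at least 3 words long
BADWORDS = {"sponsored", "giveaway"}  # placeholder list, not my real one

@dataclass
class Item:
    feed: str
    title: str
    timestamp: float

def filter_items(items):
    """Drop short titles and badword titles, cap items per feed,
    then keep only the newest MAX_ITEMS overall."""
    kept = []
    per_feed = defaultdict(int)
    # newest first, so the per-feed cap keeps the most recent items
    for item in sorted(items, key=lambda i: i.timestamp, reverse=True):
        words = item.title.split()
        if len(words) < MIN_TITLE_WORDS:
            continue
        if any(w.lower().strip(".,!?") in BADWORDS for w in words):
            continue
        if per_feed[item.feed] >= MAX_PER_FEED:
            continue
        per_feed[item.feed] += 1
        kept.append(item)
    return kept[:MAX_ITEMS]
```

The order matters a little: filtering before capping means a feed's quota isn't wasted on items that would be discarded anyway.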
It's more likely that overcapacity is put to work in a plan B, like cheap cloud virtual desktops. Why spend effort on spying and tracking users when their whole desktop computer is in your data center?
You couldn't write a book containing the context needed to qualify a factual statement about any of these cats. It seems even the article authors couldn't be bothered, writing only 5 pages after failing to meet their nonsensical objective.
If you ever want to play them, jump straight to Dragon Quest 4 on the NES. It's the evolutionary perfection of the JRPG formula of its time: basically what the article describes (turn-based combat, overworld, menus, extensive story, free exploration, no handholding, secrets). The storytelling, even if linear, is the most attractive part.
If you prefer 16-bit, the SNES has Chrono Trigger, still a cult classic (an innovator with its multiple endings), or the whole Final Fantasy series. The Sega Genesis has Phantasy Star IV, also an evolutionary improvement on the series.
Ah I never felt inspired to use it on a computer and always use physical volume controls in the car and through headphones, so I wouldn’t have run into that. It does seem like something that should be a day-one sort of feature.
> The Chinese Room is just a roundabout way of pleading human exceptionalism
Au contraire, LLMs have proven that Chinese Rooms that can casually fool humans do exist.
ELIZA could be considered a rudimentary Chinese Room, Markov chains a bit more advanced, but LLMs have proven that, given enough resources, a Chinese Room can be surprisingly convincing.
I agree that our consciousness might be fully explained by a long chain of deterministic electrochemical reactions, so we might not be that different; and until we can fully explain consciousness, we can't rule out the possibility that a statistical calculation is conscious to some degree. It just doesn't seem likely IMO right now.
Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?
> ELIZA could be considered a rudimentary Chinese Room, Markov chains a bit more advanced, but LLMs have proven that, given enough resources, a Chinese Room can be surprisingly convincing.
ELIZA is not a Chinese Room, because we know how it works. The whole point of the Chinese Room is that you don't. It is a thought experiment that says 'since we don't know how this is producing output, we should consider that it is just following rules (unless it is human).'
> Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?
Well, I never conceded that language models are thinking, all I did was say that the Chinese Room is a lazy way of concluding human exceptionalism.
But I would have to conclude that if you were able to produce output that was coherent and appropriate, and that exhibited all the signs of what I understand a thinking system to do, then thinking is a possibility.