
For 99% of extensions it is not lucrative and probably nets 99% of developers a fat $0, because people view extensions as shit-tier fun little hobby scripts, and so they build shit-tier extensions, or they build extensions that can't be monetised.

But then you have smarter players (Honey, Grammarly) who realise that if you build extensions to their full potential, you can have a packet sniffing, network orchestrating, data harvesting, insanely powerful privileged web app that can do things most developers have never thought of.

So if you build these uber-extensions (Grammarly is the best example, I think), you can make insane amounts of money with virtually no competition!

I'm all in on extensions. I'm currently building an LLM extension for developers that is a first in the market (a unique concept); it's free in beta, but will be a paid subscription once out of beta. It is basically a super-privileged web app performing a useful function for devs that would be impossible for a normal website/service.

This means it has login/signup as well, which is an extremely rare sight for an extension. (That's because it's much, much, much harder to handle auth in an extension. Seriously, extension auth is horrible to implement, and I think I'm maybe the only dev in the world with a working Google Sign-In inside an extension.)

But also the reason you don't see many monetised extensions is that it's extremely hard to securely set up auth and Stripe within an extension, as they are client-only.

So my powerful extensions are backed by things like cloud functions etc. for some functionality and for checking auth/db access/subscriptions.
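To make that concrete, here is a minimal sketch of what the server side of such a setup might look like. This is not the commenter's actual code: the endpoint name and environment variables are invented, and it assumes a Flask-style cloud function plus the google-auth and stripe Python libraries.

    # Hypothetical cloud function backing an extension: verify the Google ID
    # token the extension obtained (e.g. via chrome.identity), then check
    # that the user has an active Stripe subscription.
    import os

    import stripe
    from flask import Flask, jsonify, request
    from google.auth.transport import requests as google_requests
    from google.oauth2 import id_token

    app = Flask(__name__)
    stripe.api_key = os.environ["STRIPE_API_KEY"]      # assumed env var
    GOOGLE_CLIENT_ID = os.environ["GOOGLE_CLIENT_ID"]  # assumed env var

    @app.post("/api/check-entitlement")                # made-up endpoint
    def check_entitlement():
        # The extension sends the ID token it got from Google Sign-In.
        token = request.headers.get("Authorization", "").removeprefix("Bearer ")
        try:
            claims = id_token.verify_oauth2_token(
                token, google_requests.Request(), GOOGLE_CLIENT_ID
            )
        except ValueError:
            return jsonify({"error": "invalid token"}), 401

        # Link the two systems by the verified email (one of several options).
        customers = stripe.Customer.list(email=claims["email"], limit=1)
        if not customers.data:
            return jsonify({"subscribed": False})

        subs = stripe.Subscription.list(
            customer=customers.data[0].id, status="active", limit=1
        )
        return jsonify({"subscribed": bool(subs.data)})

The key point is that the subscription check lives on the server, where the Stripe key is safe, while the extension only ever holds a short-lived identity token.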


There are many others that are better.

1/ The Annotated Transformer (Attention Is All You Need) http://nlp.seas.harvard.edu/annotated-transformer/

2/ Transformers from Scratch https://e2eml.school/transformers.html

3/ Andrej Karpathy has a really good series of intros: https://karpathy.ai/zero-to-hero.html

Let's build GPT: from scratch, in code, spelled out. https://www.youtube.com/watch?v=kCc8FmEb1nY

GPT with Andrej Karpathy: Part 1 https://medium.com/@kdwa2404/gpt-with-andrej-karpathy-part-1...

4/ 3Blue1Brown:

But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning https://www.youtube.com/watch?v=wjZofJX0v4M

Attention in transformers, visually explained | Chapter 6, Deep Learning https://www.youtube.com/watch?v=eMlx5fFNoYc

Full 3Blue1Brown Neural Networks playlist https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_6700...


I'm bored too. I got into programming young so I'm only about 15 years into my career and it's been my hobby for 25 years.

Not only will I never learn everything about software, but I hit the point of negative returns a while ago. Every hour I spend on the computer is an hour I'm not spending on myself or my friends or on making new friends.

I just want to travel, exercise, practice non-monogamy, do the stuff I didn't do as a teenager because I was busy programming. The computer will be there when I'm done.


This advice is wrong on so many levels. What gets you promoted is being aligned with your leadership and being easygoing. You'll get bonus points if your skip level likes you; it usually makes promotions come quicker. Aside from my time at Google, where technical leadership was valued, being the "go-to" person usually gets you more work for the same pay as others at your level. In fact, the "go-to" person on my current team was taken for a ride, exploited for multiple years, and then sacked in a round of layoffs because he became "difficult" (which he had every right to be, given his treatment). My advice to young folks would be to join teams with a history of promotion (very important to ask about during interviews) and a low turnover rate.

It depends on which area of the business you're in.

The quants (people who do the analysis to give buy/sell signals, basically, typically using programming for the analysis) will generally need a strong grasp of statistics and finance.

The developers that build the trading platform will need a strong grasp of exchanges and the mechanics of trading, but not really of statistics or "how the market works/reacts".

The developers that work on the reporting side don't really need a strong grasp of either. The math is relatively simple; the difficult part is handling stuff like cross trades, i.e. internally trading stocks between portfolio managers without having to hit the actual exchange (PM A wants to sell Intel and PM B wants to buy, so you just transfer ownership instead of placing actual buy/sell orders). You still have to attribute a profit/loss to PM A even though there wasn't a "real" profit or loss.
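As a toy illustration of that attribution problem (all names and numbers invented): the seller still books realized P&L against their cost basis even though no order ever reaches the exchange, and the buyer's position starts fresh from the cross price.

    # Toy cross trade: PM A sells 1,000 shares to PM B internally at the
    # current market price. No exchange order is sent, but P&L must still
    # be attributed to each portfolio manager.
    def cross_trade(qty: int, market_price: float, seller_cost_basis: float):
        # PM A realizes P&L as if the shares had been sold on the exchange.
        seller_realized_pnl = (market_price - seller_cost_basis) * qty
        # PM B's new position uses the cross price as its cost basis, so
        # their future P&L is measured from here.
        buyer_cost_basis = market_price
        return seller_realized_pnl, buyer_cost_basis

    pnl, basis = cross_trade(qty=1_000, market_price=34.50, seller_cost_basis=30.00)
    print(pnl, basis)  # 4500.0 34.5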

The SRE/ops side often doesn't require much knowledge of finance or math. The apps aren't particularly unique, and the portions of the trading flow you're expected to know aren't hard to pick up in a month or two. You aren't typically expected to have an in-depth understanding of trading strategy, or a highly detailed understanding of how trading works mechanically. Knowing that stuff probably lets you command a higher salary and title, but it isn't a prerequisite.


Here's a link to the advanced search: https://www.google.com/advanced_search?hl=en&fg=1

So what the heck has happened with LK-99 really? (Disclaimer: I'm no physicist nor chemist, but I have co-written a report on three LK-99 papers [1] and am tracking the Twitter discussion as much as I can. I also got some help from knowledgable friends---much thanks for proof-reading.)

It turned out that LK folks were not talking about some stupid shit. Specifically, they were among the last believers in a long-forgotten Russian theory of superconductivity, pioneered by Nikolay Bogolyubov. The accepted theory is entirely based on Cooper pairs, but this theory suggests that a sufficient constraint on electrons may allow superconductivity without actual Cooper pairs. This requires carefully positioned point defects in the crystalline structure, which contemporary scientists consider unlikely, and such a mode of SC was never formally categorized, unlike type-I and type-II SC. Professor Tong-seek Chair (최동식) expressed regret about this status quo (in the 90s, but it still applies today): the theory was largely forgotten without proper assessment after the fall of the USSR. It was also a very interesting twist that Iris Alexandria, "that Russian catgirl chemist", had an advisor, a physicist-cum-biochemist, who studied this theory; as a result she was familiar enough with it to tell whether replications follow the theoretical prediction.

Fast forward to today: students of the late Chair continued the research and produced a possible superconducting substance, LK-99, based on the Russian theory. A lot can be said about the papers themselves, but it should first be noted that this substance is not a strict superconductor under the current theory. Prof. Chair once suggested that we need to trade off some (less desirable) properties of superconductors for room-temperature superconductivity, and that property seems to be isotropy. This particularly weakens the Meissner effect criterion due to the much reduced eddy current, so there is a possibility that LK-99, even if it's real, might not be accepted as a superconductor in the traditional sense. LK folks, on the other hand, think it should still be considered a superconductor, but they are probably already aware of this possibility.

If we allow anisotropy into this discussion, we already have lots of such things, most importantly carbon nanotubes. Scientists have even explored the possibility that they may function as typical superconductors [2], without any success though. So it might be appropriate to say that LK-99 is a substance that mimics them in one direction but is much more malleable. And that is an actually significant result (if true, of course), because for most uses a strict type-I superconductor is far more than sufficient, while the implications of superconductivity become more achievable. We so far looked for strict superconductors only because we didn't know an effective way to trigger superconductivity otherwise; LK-99 might change that situation.

This whole discourse should make you more careful about concluding whether LK-99 is a superconductor or not, because we may well end up with a revised definition of SC as a result. If LK-99 makes superconductivity much easier to trigger, it should be considered a superconductor in the macroscopic sense, the authors would argue. Only time will tell whether they indeed made such a substance and whether it is malleable enough to substitute for other superconductors, but they have a long history and have arguably received unfair treatment. And they are about to fight back.

[1] https://hackmd.io/@sanxiyn/S1hejVXo3 (Semi-automatically translated: https://hackmd.io/DMjYGOJFRheZw5XZU8kqKg)

[2] For example, https://twitter.com/MichaelSFuhrer/status/168696072754495897...

----

This post is now also available as a standalone version: https://hackmd.io/@lifthrasiir/lk-99-prehistory & https://twitter.com/senokay/status/1687360854315151360


Okay, here's my attempt!

First, we take a sequence of words and represent it as a grid of numbers: each column of the grid is a separate word, and each row of the grid is a measurement of some property of that word. Words with similar meanings are likely to have similar numerical values on a row-by-row basis.

(During the training process, we create a dictionary of all possible words, with a column of numbers for each of those words. More on this later!)

This grid is called the "context". Typical systems will have a context that spans several thousand columns and several thousand rows. Right now, context length (column count) is rapidly expanding (1k to 2k to 8k to 32k to 100k+!!) while the dimensionality of each word in the dictionary (row count) is pretty static at around 4k to 8k...
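Here's a tiny numpy sketch of that grid, with invented sizes (a six-word dictionary, four rows of "properties" per word), just to make the shapes concrete; real systems use the much larger dimensions mentioned above:

    import numpy as np

    rng = np.random.default_rng(0)

    # Dictionary learned during training: one column of numbers per word.
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    d_model = 4  # rows per word (toy size; real models use ~4k-8k)
    embedding = rng.normal(size=(d_model, len(vocab)))

    # The "context": look up one column per word in the sequence.
    sequence = ["the", "cat", "sat"]
    context = np.stack([embedding[:, vocab.index(w)] for w in sequence], axis=1)
    print(context.shape)  # (4, 3): d_model rows x sequence-length columns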

Anyhow, the Transformer architecture takes that grid and passes it through a multi-layer transformation algorithm. The functionality of each layer is identical: receive the grid of numbers as input, then perform a mathematical transformation on the grid of numbers, and pass it along to the next layer.

Most systems these days have around 64 or 96 layers.

After the grid of numbers has passed through all the layers, we can use it to generate a new column of numbers that predicts the properties of some word that would maximize the coherence of the sequence if we add it to the end of the grid. We take that new column of numbers and comb through our dictionary to find the actual word that most-closely matches the properties we're looking for.

That word is the winner! We add it to the sequence as a new column, remove the first column, and run the whole process again! That's how we generate long text-completions one word at a time :D
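Continuing the toy sketch above (and hedging: real models score words with a learned output layer, but a dot product against the dictionary captures the idea), the "comb through our dictionary" step can be as simple as:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "a", "mat"]
    d_model = 4
    embedding = rng.normal(size=(d_model, len(vocab)))  # same toy dictionary

    # new_column: the predicted "properties" of the next word, as produced
    # by the final layer (here just invented at random for illustration).
    new_column = rng.normal(size=d_model)

    # Score every dictionary word against the prediction; the closest
    # match is the winner.
    scores = embedding.T @ new_column  # one similarity score per word
    next_word = vocab[int(np.argmax(scores))]
    print(next_word)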

So the interesting bits are located within that stack of layers. This is why it's called "deep learning".

The mathematical transformation in each layer is called "self-attention", and it involves a lot of matrix multiplications and dot-product calculations with a learned set of "Query, Key and Value" matrixes.
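Concretely, the scaled dot-product attention at the heart of each layer looks like this in numpy (a single attention head; the learned Q/K/V projection matrices are random stand-ins for trained weights, and the context is laid out one row per word, i.e. the transpose of the grid described above, to match the usual formulation):

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 3, 4
    context = rng.normal(size=(seq_len, d_model))  # toy context, one row per word

    # Learned projection matrices (random stand-ins for trained weights).
    Wq = rng.normal(size=(d_model, d_model))
    Wk = rng.normal(size=(d_model, d_model))
    Wv = rng.normal(size=(d_model, d_model))

    Q, K, V = context @ Wq, context @ Wk, context @ Wv

    # Dot products between every pair of words, scaled, then softmaxed:
    # this is how each word gets "entangled" with its neighbors.
    scores = Q @ K.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    output = weights @ V  # same shape as the input context
    print(output.shape)   # (3, 4)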

It can be hard to understand what these layers are doing linguistically, but we can use image-processing and computer-vision as a good metaphor, since images are also grids of numbers, and we've all seen how photo-filters can transform that entire grid in lots of useful ways...

You can think of each layer in the transformer as being like a "mask" or "filter" that selects various interesting features from the grid, and then tweaks the image with respect to those masks and filters.

In image processing, you might apply a color-channel mask (chroma key) to select all the green pixels in the background, so that you can erase the background and replace it with other footage. Or you might apply a "gaussian blur" that mixes each pixel with its nearest neighbors, to create a blurring effect. Or you might do the inverse of a gaussian blur, to create a "sharpening" operation that helps you find edges...

But the basic idea is that you have a library of operations that you can apply to a grid of pixels, in order to transform the image (or part of the image) for a desired effect. And you can stack these transforms to create arbitrarily-complex effects.

The same thing is true in a linguistic transformer, where a text sequence is modeled as a matrix.

The language-model has a library of "Query, Key and Value" matrixes (which were learned during training) that are roughly analogous to the "Masks and Filters" we use on images.

Each layer in the Transformer architecture attempts to identify some features of the incoming linguistic data, and then, having identified those features, it can subtract them from the matrix, so that the next layer sees only the transformation rather than the original.

We don't know exactly what each of these layers is doing in a linguistic model, but we can imagine it's probably doing things like: performing part-of-speech identification (in this context, is the word "ring" a noun or a verb?), reference resolution (who does the word "he" refer to in this sentence?), etc, etc.

And the "dot-product" calculations in each attention layer are there to make each word "entangled" with its neighbors, so that we can discover all the ways that each word is connected to all the other words in its context.

So... that's how we generate word-predictions (aka "inference") at runtime!

But why does it work?

To understand why it's so effective, you have to understand a bit about the training process.

The flow of data during inference always flows in the same direction. It's called a "feed-forward" network.

But during training, there's another step called "back-propagation".

For each document in our training corpus, we go through all the steps I described above, passing each word into our feed-forward neural network and making word-predictions. We start out with a completely randomized set of QKV matrixes, so the results are often really bad!

During training, when we make a prediction, we KNOW what word is supposed to come next. And we have a numerical representation of each word (4096 numbers in a column!) so we can measure the error between our predictions and the actual next word. Those "error" measurements are also represented as columns of 4096 numbers (because we measure the error in every dimension).

So we take that error vector and pass it backward through the whole system! Each layer needs to take the back-propagated error matrix and perform tiny adjustments to its Query, Key, and Value matrixes. Having compensated for those errors, it reverses its calculations based on the new QKV, and passes the resultant matrix backward to the previous layer. So we make tiny corrections on all 96 layers, and eventually to the word-vectors in the dictionary itself!
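The error measurement at the output is easy to show concretely. For the usual cross-entropy loss, the gradient on the model's raw output scores is just the predicted probabilities minus a one-hot vector for the true next word (toy sizes again; this is the standard textbook result, not anything specific to this explanation):

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size = 6
    logits = rng.normal(size=vocab_size)  # model's raw scores for the next word

    # Softmax: turn scores into predicted probabilities over the dictionary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # During training we KNOW the right answer; say it's word index 2.
    target = np.zeros(vocab_size)
    target[2] = 1.0

    # Cross-entropy gradient w.r.t. the logits: the "error vector" that
    # gets passed backward through all the layers.
    error = probs - target
    print(error)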

Like I said earlier, we don't know exactly what those layers are doing. But we know that they're performing a hierarchical decomposition of concepts.

Hope that helps!


The best friend violated one of the fundamental rules of being a worker. They are:

1. Your employer is NEVER your friend. You might be friendly with them but they will replace you in a second so it's best to have the right mindset from the start.

2. Every important correspondence needs to be in writing. If you asked your boss in person whether you could take vacation days, follow up and have them confirm it in an email.

The friend violated rule 1 and in the end it won't have mattered once they get laid off.


I've generally given a lot of notice as an IC, 2-3 months in some cases. And I have to say, I think it's not been appreciated, not even once. I've tried to spend the time wrapping things up, communicating my tacit knowledge to my coworkers, and writing documentation for the things I've done and created and am responsible for; I'm fairly certain that no one has given my opinions and thoughts more than a cursory amount of attention.

Now, I absolutely loathe the modern corporate culture, which is happy to escort you out of the building the moment your employment is terminated, without giving you a chance to even say goodbye to your colleagues, who you might have been working with extensively for years. It's deeply traumatic and it contributes to an overall sense of fear and "screw teamwork, it's everyone for themselves".

But now when I "give notice" and they don't even let me try to work the next 2 weeks, I'm grateful. I don't want my coworkers to ignore or patronize me while I sit idle or do make-work. I don't want to have to put on a show about how wonderful the company and team are, and why I'm leaving anyways. Nor do I want to expose my true feelings to my co-workers and infect them with my bad attitude--even if the writing is on the wall for the entire enterprise. It's like a breakup: the best thing for everyone is to make it clean and crisp, say "it's not you, it's me", make a sincere statement to the effect of "let's be friends", and then see each other roughly never again.


> [10] Make your project robust to re-orgs. A company management hierarchy is inherently fragile (a tree is a 1-connected graph, after all); socialize the project continuously with managers who might take over in the future. Do whatever it takes to make sure that manager churn does not result in unfair career outcomes for ICs.

> [17] For storage systems, bias heavily in the beginning towards consistency and durability rather than availability; these are harder to measure and harder to fix if broken. Because availability is easier to measure, there will be external pressure to prioritize it first; push back.

It's kind of crazy how well the author is able to move from the very top to the very bottom of the technical project management stack.


LangChain is awesome. For people not sure what it's doing, large language models (LLMs) are very powerful but they're very general. As a common example for this limitation, imagine you want your LLM to answer questions over a large corpus.

You can't pass the entire corpus into the prompt. So you might:

- preprocess the corpus by iterating over documents, splitting them into chunks, and summarizing them
- embed those chunks/summaries in some vector space
- when you get a question, search your vector space for similar chunks
- pass those chunks to the LLM in the prompt, along with your question

This ends up being a very common pattern, where you need to do some preprocessing of some information, some real-time collecting of pieces, and then an interaction with the LLM (in some cases, you might go back and forth with the LLM). For instance, code and semantic search follows a similar pattern (preprocess -> embed -> nearest-neighbors at query time -> LLM).
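A bare-bones sketch of that pattern (embed() and llm() are hypothetical stand-ins for whatever embedding model and LLM you'd actually call; everything here is invented for illustration):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Hypothetical stand-in: a normalized character-frequency vector.
        # A real system would call an embedding model instead.
        v = np.zeros(128)
        for ch in text.lower():
            v[ord(ch) % 128] += 1
        return v / (np.linalg.norm(v) or 1.0)

    def llm(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call.
        return f"<answer based on: {prompt[:60]}...>"

    # 1. Preprocess: split documents into chunks.
    docs = ["LangChain composes LLM calls. " * 5, "Vector search finds similar text. " * 5]
    chunks = [d[i:i + 80] for d in docs for i in range(0, len(d), 80)]

    # 2. Embed each chunk ahead of time.
    index = np.stack([embed(c) for c in chunks])

    # 3. At question time, find the nearest chunks by cosine similarity.
    question = "How do I search for similar text?"
    scores = index @ embed(question)
    top = [chunks[i] for i in np.argsort(scores)[-2:]]

    # 4. Pass those chunks to the LLM along with the question.
    print(llm("Context:\n" + "\n".join(top) + "\n\nQuestion: " + question))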

LangChain provides a great abstraction for composing these pieces. IMO, this sort of "prompt plumbing" is far more important than all the slick (but somewhat gimmicky) "prompt engineering" examples we see.

I suspect this will get more important as the LLMs become more powerful and more integrated, requiring more data to be provided at prompt time.


I used to travel to China all the time for work during my first job in 2008. Those times were super optimistic. Hong Kong was filled with Chinese pride, and it seemed inevitable that China would absorb Taiwan. China had just hosted the Olympics.

I visited Hong Kong in late 2018 after many years of not having been there. It was a very different place from the Hong Kong I had visited in 2009-2011. The energy was a bit darker. It almost felt like just another Chinese city. I had also been to HK several times when I was in middle school in Taiwan (I was born in America, but was a "reverse import" to Taiwan), as well as when I was a Mandarin language tutor during university, and was always amazed by the richness of HK culture, from fishing villages in Sai Kung to bustling life in Tsim Sha Tsui and in Central with its relics of British colonialism. Now many unique elements of HK life had disappeared.

Meanwhile, Taiwan's value to the Chinese diaspora can't be overstated: it's a bastion of Mandarin-speaking democracy, or in software terms a hard fork of an alternate reality of what China could have been. It has cultivated its own culture, and retained elements of Chinese culture cancelled during the Cultural Revolution. It has its own identity drawn from the aboriginal population, the settlers of the dynastic period, Japanese colonization, and influences from the Republic of China refugees (or occupiers, depending on your POV, the post-1949 Chinese settlers) and American forces. And since 2018, I've seen the Taiwanese double down on their Taiwanese identity and pride, and in many ways Taiwan is the envy of China (also literally).

If I were to live in Asia, it might have once included Hong Kong because of its unique British history. Now I would probably live in Taiwan and Japan.

Edit: Taiwan hasn't always been a democracy, and the path to democracy hasn't been easy (just ask America). Like any other functioning democracy it's not perfect, but it's the closest paragon we have in the Chinese diaspora. The presidency has transitioned peacefully between different parties since the 1990s.


University interns are mostly 21 years old.

21 year olds won't know very much of anything in general.

A 21 year old 3rd year college intern is... 21 years old

Three quarters of a four year computer science degree doesn't change the fact they're a 21 year old.

Even in the topics they have covered, the knowledge won't be very deep.

A true mental model of concurrent programming is not something easily obtained.

Frankly, most 35 y/o engineers don't truly appreciate the intricacies of intra-thread concurrent algorithms, unless it's their specialized area.

Frankly, most engineers in the industry are too lazy to learn SQL well.

Lower your expectations of 21 year olds. Lower your expectations of the workforce in general. Hacker News is a self-selecting community of tech workers who study their job in their spare time as a hobby.

Most people I've worked with go home and watch football after 5pm.


"To put that in some perspective, a Roman legion (roughly 5,000 men) in the Late Republic might have carried into battle around 44,000kg (c. 48.5 tons) of iron – not counting pots, fittings, picks, shovels and other tools we know they used. That iron equipment in turn might represent the mining of around 541,200kg (c. 600 tons) of ore, smelted with 642,400kg (c. 710 tons) of charcoal, made from 4,620,000kg (c. 5,100 tons) of wood. Cutting the wood and making the charcoal alone, from our figures above, might represent something like (I am assuming our charcoal-burners are working in teams) 80,000 man-days of labor. For one legion."

Of course it's from https://acoup.blog/2020/09/25/collections-iron-how-did-they-...
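A quick sanity check of the per-stage multipliers implied by those numbers:

    # Figures from the quote above, in kg, for one legion.
    iron, ore, charcoal, wood = 44_000, 541_200, 642_400, 4_620_000

    print(round(ore / iron, 1))       # ~12.3 kg of ore mined per kg of finished iron
    print(round(charcoal / ore, 2))   # ~1.19 kg of charcoal per kg of ore smelted
    print(round(wood / charcoal, 2))  # ~7.19 kg of wood per kg of charcoal
    print(round(wood / iron))         # ~105 kg of wood per kg of iron, end to end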


I realize this isn't really the intent of the question, but I read "The Count of Monte Cristo" this year for the first time and it's now my favorite book. It's a classic that I just had never bothered with and the story sucked me in. The redemption, revenge, scheming, secrecy. It was phenomenal.

I've posted this elsewhere, but I'll repeat it.

First they send your work home, and then they send it next door.

Corporate learned during COVID that the metro premium isn't worth it any more. These layoff announcements--which have always occurred in the shadows--have three purposes:

1. Shift jobs to cheaper locales. In the US, many metro jobs are being moved to cheaper locations like the Midwest. Western European labor is also relatively cheap. Work from home works for you and your boss. I live in Raleigh-Durham, and things are on fire over here: Google, Apple, Oracle, Microsoft, and Amazon are all hiring at a rapid clip. My fiancée just finished her PhD in comp bio and she has competing offers... but last I heard, biotech is supposedly dead, along with some other story about long-term R&D slowing down due to rate hikes (more on why I think this is nonsense later).

2. Offset the massive over-hiring. The rush of cash during COVID and record profits lured many companies into growth-at-any-cost mode. While they still need the head count in many cases, people were willing to cut corners on hiring and project quality (i.e. does this really have returns to justify the investment?), so a cull is needed. Think about crypto, for example. I have a feeling a good number of companies are regretting jumping on the crypto/NFT train right about now.

3. I believe companies and the media loudly communicate layoffs in part to reduce labor's negotiating power. I can't prove this, but it seems about right. In December, Facebook was struggling to hire, and it was in the news. Right now, I'm sure many people are afraid to ask for more money, but I've managed to wring out 10-20% more than what recruiters initially offered, despite what everyone is hearing.

I know people think this is all Fed-induced, but you have to remember: the money that was spent during COVID hasn't gone anywhere. It's still circulating. Companies also borrowed record amounts of cash during ZIRP that's due in 30+ years. Many of these companies have returns in the 10-30% band. A bump to 4-5% is nowhere near enough to slow down business given how much cheap money was created.

For more evidence, go look at startup raises. In a recessionary environment, VC would be completely dead. Yes, dealmaking has slowed and garbage companies can't find financing, but let's be real: those companies should never have existed in the first place.


I fell into a sorta wikihole and now I have no one to present my findings to.

North Korea has an animation studio, and there's a good chance you've seen some of its work.

It's called SEK (or Korean April 26 Animation Studio). Among the things it was outsourced to work on were The Simpsons Movie and Futurama: Bender's Big Score. There's an episode of Avatar, and of Teenage Mutant Ninja Turtles (2003), in there too.

Which is interesting but what's really crazy is the other stuff they make. There's internal NK animation, mostly propaganda and children's works like Boy General.

MondoTV is an Italian animation company that used to import and dub anime but then decided to do original shows. Most are based on something historical (Ulysses, Genghis Khan, Pocahontas) or something out of copyright that Disney had done before (The Story of Cinderella, The Jungle Book, Pocahontas).

These two entities would collaborate on something of titanic proportions. Of course, it'd end up sinking into obscurity. The Legend of the Titanic is a 1999 animated movie about the Titanic in the same way Disney's Robin Hood is about medieval politics. That is to say, randomly filled with anthropomorphic animals.

This movie is not to be confused with the other animated Italian movie about the Titanic that is also full of talking animals, 2000's Titanic: The Legend Goes On.

And this is a concept with legs. Mondo and SEK worked together again on a sequel, In Search of the Titanic, in which the characters end up defending Atlantis while trying to find the wreck of the ship. Spoiler: they save the ship (no one died on it anyway, but they still lost the boat).

And because the Titanic was still red hot six years later, a TV series came out as a sequel to the sequel of a very strange Titanic movie: Fantasy Island (not to be confused with the other one). This one has the Titanic in it. And talking mice, and 26 episodes of misadventures and new friends.

Finding stuff like this reminds me of how absurd the world is. I know it's probably not like... what was expected here, but sometimes you just get really deep into weirdly specific things because you feel like you're witnessing history created via Mad Libs.


For the past five years or so, I've taken singing lessons. I really recommend it to anyone who has even the faintest interest, even if you feel like you can't carry a tune in a bucket. What I learned the first four to six lessons was enough to make a substantial difference both in my vocal quality and in my comfort in singing for long periods (one of the first things you learn, essentially, is how not to shred your vocal cords). One thing I love about singing is that it's one of the most democratic arts. You don't have to buy or maintain instruments -- you were born with it. Almost everybody is capable at least of some degree of singing. There's no gadgets to buy to improve it. And no matter what kind of music you like, there is a place for vocals. You can sing by yourself in the shower or in front of a band or in a chorus or in a congregation, if that's your thing.

Aside from the benefits of being able to produce aesthetically pleasant sounds and the fundamental pleasures of mastery of a skill, I recommend it to anybody who wants to become more aware of and comfortable with their body and/or with expressing their emotions.

I'm sure that there are good free online lessons for singing, and I've used a lot of videos for practice, but I really encourage seeking out a teacher if you can. Covid has been bad for their business, and there's no replacement for face-to-face instruction. (The good news is that, unlike something like the piano, it's absolutely feasible to get useful instruction over a video call!)


To anyone full of regret, I'd just like to give a quote by Marcus Aurelius (a Roman Emperor and Stoic philosopher):

Think of yourself as dead. You have lived your life. Now take what's left and live it properly.

Life likely hasn't been perfect for almost anyone, but would you rather die right now (with likely unfinished desires, wishes and more regrets) or would you try to make the best use of what you have?

(It may be a bit difficult to fully live as per the quote even if you're already familiar with stoicism - it's quite hard for me too - but something that sometimes helps me is to literally visualize yourself dead as of now.

Maybe a stroke.

Would you be happy?

If not, you should do something about it.)


Back in the mid 90's as the Internet was starting to become more accessible to the general public, I remember reading an article that made the following point:

The telegraph effectively killed many newspapers as everyone moved to the newspapers that printed national and global news. That was bad b/c it lowered the options people had for consuming news and dramatically reduced the diversity of ideas and opinions. The Internet was going to do the same, to the point that we would all read the same big websites and therefore have the same thoughts, opinions, etc. <end of the point>

I remember reading this and thinking "that kind of makes sense". Revisiting the idea recently, Pandora, YouTube, TikTok etc have all made a business out of building a customized feed for YOU. It's become the exact opposite of what the original article predicted. It's not surprising that we've become so divided when you can "build" your own reality by choosing the information streams that you want to consume.

PS In terms of mass media shared experiences, 120 MILLION people watched the series finale of the TV show MASH. Hard to imagine a similar event today.


>> There is literally no other option to watch YouTube on a TV nowadays

I use SmartTube Next (https://github.com/yuliskov/SmartTubeNext) on a Firestick. I sideloaded both the beta and release versions with adb. I have not seen a commercial since. If Release doesn't work, try Beta. One of them always seems to work.

>> And it's impossible to buy a dumb TV in 2022

Next best thing: I bought a Samsung TV last year and never accepted the Terms and Conditions. No ads.

Edit: fixed link syntax


I am a big fan of RSS. One of the reasons (alluded to by this article) is the type of content that tends to get exposed via RSS.

Frequently the type of content that gets syndicated via RSS is long form and non-commercial.

It turns out that these two properties produce a pretty good signal to noise ratio which filters out precisely the kind of trash that has ruined the web over the last decade or two (long form content at least has the possibility of teaching you something or presenting an idea with some kind of rigor; and since RSS isn't great at monetization, the worst offenders in media tend to deprioritize it).

Of course RSS is a very imprecise filter, but it's basically the antithesis of a Twitter or Facebook feed, where everything is short form and you tend to see whatever serves the platform's commercial interests (i.e. their definition of engagement).

This matters to me because at a certain point I realized -- I have basically never read anything short form on social media which enriched me in a meaningful way.

I have learned a lot from studies, reference works, long form analysis, and books -- basically all the quality knowledge I possess has come from one of these sources.

At best social media has given me occasional links to these things (scattered among an ocean of junk information).

It's mostly because of how RSS originated with blogs I guess, and who was involved in designing it. But for whatever reason it has been far more valuable to me than any other form of content syndication online.


I have started doing something completely different than using bookmarks. I set up yacy[1] on a personal, internal server at my home, which I can access from all my devices, since they are always on my wireguard vpn.

Yacy is actually a distributed search engine, but I run it in 'Robinson mode' as a private peer to keep it isolated, as I just want a personal search of only the sites I have indexed.

Anytime I come across something of interest, I index it with yacy, using a depth of 0 (since I only want to index that one page, not the whole site). This way, I can just go to my search site, search for something, and anything related that I've indexed before pops up. I find this works way better than trying to manage bookmarks with descriptions and tags.

Also, yacy will keep a cache of the content which is great if the site ever goes offline or changes.

If I need to browse, I can go use yacy's admin tools to see all the urls I have indexed.

I have been using this for several months and I am using this way more than I ever used my bookmarks.

[1] https://yacy.net/
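For anyone who wants to script the "index this one page" step: yacy's crawl starter is reachable over HTTP like the rest of its admin interface, so something along these lines should work. The parameter names below mirror yacy's crawl-start form (Crawler_p.html) but are assumptions; check them against your yacy version before relying on this.

    import requests

    YACY = "http://localhost:8090"  # your private yacy instance

    def index_page(url: str) -> None:
        # Start a depth-0 "crawl", i.e. index just this one page.
        resp = requests.get(
            f"{YACY}/Crawler_p.html",
            params={"crawlingMode": "url", "crawlingURL": url, "crawlingDepth": 0},
            auth=("admin", "yourpassword"),  # yacy admin credentials (assumed)
        )
        resp.raise_for_status()

    index_page("https://example.com/some-interesting-article")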


I really like this essay [0] by cryptographer Moxie, specifically this section about starting a new career:

"...simply observe the older people working there.

They are the future you. Do not think that you will be substantially different. Look carefully at how they spend their time at work and outside of work, because this is also almost certainly how your life will look. It sounds obvious, but it’s amazing how often young people imagine a different projection for themselves.

Look at the real people, and you’ll see the honest future for yourself."

I also think money is really nice, but it should not be an end unto itself.

I used to bartend in a wealthy area. Lots of folks on this little town would get tipsy and start talking about how much money they have.

One of my barometers for life is to have things I'm more passionate to talk about than wealth accumulation when I'm tipsy in a bar.

[0] https://moxie.org/2013/01/07/career-advice.html


From Bill Watterson's commencement speech at Kenyon College in 1990. I was lucky to read this when I was graduating and it changed my whole career path. I am now very happy and very boring:

"Creating a life that reflects your values and satisfies your soul is a rare achievement. In a culture that relentlessly promotes avarice and excess as the good life, a person happy doing his own work is usually considered an eccentric, if not a subversive. Ambition is only understood if it's to rise to the top of some imaginary ladder of success. Someone who takes an undemanding job because it affords him the time to pursue other interests and activities is considered a flake. A person who abandons a career in order to stay home and raise children is considered not to be living up to his potential-as if a job title and salary are the sole measure of human worth.

You'll be told in a hundred ways, some subtle and some not, to keep climbing, and never be satisfied with where you are, who you are, and what you're doing. There are a million ways to sell yourself out, and I guarantee you'll hear about them.

To invent your own life's meaning is not easy, but it's still allowed, and I think you'll be happier for the trouble."


Here's something that's fascinating.

A type of therapy that's become more popular recently is called Eye Movement Desensitisation and Reprocessing (EMDR).

You lie down, while the therapist asks you to follow their index finger with your eyes as they move it back and forth about 12 inches in front of you. So it's recreating those rapid eye movements. At the same time, you're asked to recall a traumatic event, bringing back the images, any sounds, thoughts, in detail.

This is a fairly new form of therapy, with both strong supporters and critics. But a lot of evidence points to the fact that, for whatever reason, this technique allows people to more safely access their trauma, process it, and take control of their thoughts.

I first read about it in the book The Body Keeps the Score (by Bessel van der Kolk, MD), which by the way is absolutely fascinating. In a nutshell it's about trauma. But it talks about different types of trauma, the effects it has on your brain, childhood vs adult trauma, treatments, his experience treating it, etc.

It's an exciting area of study and there's evidently something about REM (Rapid Eye Movement) and processing memories/trauma.


Key takeaways:

1. One of the best ways to improve your writing is to learn how to cut out words that are not necessary

2. Stuffy writing is bad writing! It lowers the power of your brain and mine!

3. What words should you never use in writing? Words whose exact meanings you don’t know! Never use a word unless you know EXACTLY what it means

4. If your writing is nonsense, maybe your thoughts are nonsense too!

5. To keep things clear and readable: Put the main point of each paragraph in its first sentence

6. Pretend you're writing a textbook! That's how I ended up writing so many books... Organizing knowledge while learning is a lot like writing a book

7. I often write the introduction last, after I know what it will introduce!

8. Never draw the reader’s eye to anything that is not the main point


Seems random developers were targeted as well as European Parliament members (and more):

> Jordi Baylina is the technology lead at Polygon, a popular decentralised Ethereum scaling platform. He is also an advisor on projects related to digital voting and decentralisation, and has built a widely-used privacy toolkit. He was extensively targeted with Pegasus, receiving at least 26 infection attempts. Ultimately, he was infected at least eight times between October 2019 and July 2020.

> Baylina received a text message masquerading as a boarding pass link for a Swiss International Air Lines flight he had purchased. Targeting in this case indicates that the Pegasus operator may have had access to Baylina’s Passenger Name Record (PNR) or other information collected from the carrier.

Scary stuff: not only can random text messages infect you (we knew this), but combined with harvesting other data (like PNR records), attackers can time the exploit messages to coincide with things you actually do (like buying a flight ticket) and get you that way.

I was scared of receiving random text messages already, but easy to just ignore them as they have nothing to do with me. But if I buy a flight ticket and receive a text message that looks relevant to me, I'm not sure I'd be able to guess it was actually malicious. Scary stuff.

Edit: The more I read, the worse it gets:

> Another common mode of targeting was to masquerade as official notifications from Spanish government entities, including the Tax and Social Security authorities. The messages also used SMS Sender IDs to masquerade as official agency accounts.

> Notably, fake official messages were sometimes highly personalized. For example, a message sent to Jordi Baylina included a portion of his actual official tax identification number, suggesting that the Pegasus operator had access to this information.

It seems clear at this point that the Spanish government was behind these attacks, or the official registries got hacked (together with various delivery companies). Both are bad, but that the signs point to the former makes it even worse.

It seems the Spanish government can't help giving more fuel to the fire that is the fight for Catalan independence. Who'd want to belong to a state that constantly suppresses and surveils you?


Seconded. As you grow older you may find that you actually did lift some of the subjects to a "marketable level". For example, I'm now 40, studied molecular biology but my dedication to my home server, HN and tech podcasts is now paying dividends because I can talk with the big boys about infra-as-code, higher level system architecture and software development. This is really nice as the healthcare company I work for is transforming more towards IT and away from the lab. It also helps me in talking to oncology professionals about their IT woes.

But my formal education always paid the bills, and I did enjoy it (although I really wanted to move away from the lab, and for as long as I can remember I enjoyed the data analysis parts more than the cell-culture/lab parts. I can still remember starting where I work now, 12 years ago, and saying: I want to get out of the lab! And my manager back then replying: But you have been educated and hired to be in the lab; start there, we'll see where it goes...).

Yes I like philosophy, economics, politics and math too, but I'll leave that for birthdays and late nights at conferences, keep it at the cocktail level so to speak (and watch some Stand-up Maths and 3Blue1Brown, read some Yuval Noah Harari).

One has to specialize in some things to make money. I always tell my kids: you need to learn a trick that you are better at than others, so someone will pay you to do it. Then you can pay others to do the things you find boring or difficult. Of course you want to focus on something you find fun. As they get older I'd recommend they keep consciously thinking about what gives them energy and what drains it. It's what my career coaches always told me (I was fortunate enough to work at a company that supplied coaches to everyone). Is it really more complex than that?

My tip would be to look for a company where you get the opportunity to "transform" (a large company's research department for example?), and then to sometimes just give it a year when you don't like your current position. Talk to people, go to conferences, try to learn if there is work out there with the right balance of old vs new for you.

