Denzel's comments

Thank you for narrowing your claims; you might want to update your post at the top of the thread to call out your ADV/ACV assumption.

I appreciate all the experience and advice you’re offering on this thread! Take my feedback as a nitpick: as I was reading through your top post, my initial thought was “this isn’t true all the time” because I spent 6 years in 2 separate startups with significant and successful outbound sales where our ADV > $100k.

One company stayed private and profitable while driving revenue north of $80M/yr; and the other company sold enough long-term enterprise contracts to be acquired by a bigger $B company.

Context is king.


Correct. Kinda like it suddenly came up when Facebook started showing memories of dead friends and relatives to people who neither wanted nor enjoyed it. There are many instances of humanity plowing headfirst into some technology thinking "this will be great!" only to haphazardly run into the unanticipated not-so-great parts.

Not to mention there are literally people creating tech out here _today_ that's recreating _exactly_ what some Black Mirror episodes were talking about years ago. Like interactive chatbots modeled after dead people, built from voice samples, videos, and messages.


Can you talk through specifically what sprint goals you’ve completed in an afternoon? Hopefully multiple examples.

Grounding these conversations in an actual reality affords more context for people to evaluate your claims. Otherwise it’s just “trust me bro”.

And I say this as a Senior SWE who’s successfully worked with ChatGPT to code up some prototype stuff, but hasn’t been able to dedicate 100+ hours to work through all the minutiae of learning how to drive daily with it.


If you do want to get more into it, I'd suggest something that plugs into your IDE instead of Copy/Paste with ChatGPT. Try Aider or Roo code. I've only used Aider, and run it in the VS terminal. It's much nicer to be able to leave comments to the AI and have it make the changes to discrete parts of the app.

I'm not the OP, but on your other point about completing sprint goals fast - I'm building a video library app for myself, and wanted to add tagging of videos. I was out dropping the kids at classes and waiting for them. Had 20 minutes and said to Aider/Claude - "Give me an implementation for tagging videos." It came back with the changes it would make across multiple files: Creating a new model, a service, configuring the DI container, updating the DB context, updating the UI to add tags to videos and created a basic search form to click on tags and filter the videos. I hit build before the kids had finished and it all worked. Later, I found a small bug - but it saved me a fair bit of time. I've never been a fast coder - I stare at the screen and think way too much (function and variable names are my doom ... and the hardest problem in programming, and AI fixes this for me).

Some developers may be able to do all this in 20 minutes, but I know that I never could have. I've programmed for 25 years across many languages and frameworks, and know my limitations. A terrible memory is one of them. I would normally spend a good chunk of time on StackOverflow and the documentation sites for whatever frameworks/libraries I'm using. The AI has reduced that reliance and keeps me in the zone for longer.


I think experiences vary. AI can work well with greenfield projects, small features, and helping solve annoying problems. I've tried using it on a large Python Django codebase and it works really well if I ask for help with a particular function AND I give it an example to model after for code consistency.

But I have also spent hours asking Claude and ChatGPT for help trying to solve several annoying Django problems, and I have reached the point multiple times where they circle back and give me answers that did not previously work in the same context window. Eventually, when I figure out the issue, I have fun and ask it "well, does it not work as expected because the existing code chained multiple filter calls in Django?" and all of a sudden the AI knows what is wrong! To be fair, there was only one sentence in the Django documentation that mentions not chaining filter calls on many-to-many relationships.
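For anyone who hasn't hit it before, here's roughly the gotcha, sketched with toy models rather than my actual code:

    # Toy models, purely illustrative (imagine they live in an installed app's models.py).
    from django.db import models

    class Tag(models.Model):
        name = models.CharField(max_length=50)

    class Article(models.Model):
        title = models.CharField(max_length=200)
        tags = models.ManyToManyField(Tag)

    # One filter() with both conditions: the SAME tag has to satisfy both.
    same_tag = Article.objects.filter(tags__name__startswith="python",
                                      tags__name__endswith="3")

    # Chained filter() calls: each call gets its own join, so two DIFFERENT
    # tags can satisfy the two conditions -- usually a broader, surprising
    # result set, and the extra joins can produce duplicate rows as well
    # (hence the frequent need for .distinct()).
    different_tags = (Article.objects
                      .filter(tags__name__startswith="python")
                      .filter(tags__name__endswith="3"))

That one sentence in the docs is doing a lot of heavy lifting here.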


In what specific way did this post misrepresent or abuse the Dunning-Kruger concept? (Btw, the graph used is the same one used on the Wikipedia page for DK.) If you’re able to explain what you understand to be misrepresented, you can clear up the misconception for others — like me.


You can find the original paper here: https://www.researchgate.net/publication/12688660_Unskilled_...

It's a mere 15.5 pages of actual text.


So, instead of engaging in a discussion and sharing your knowledge with someone genuinely interested in learning from you (to improve upon the seeming misconception that bothers you), you link to a paper and do nothing to correct your own pet peeve. Maybe consider that human life is finite and no person will ever be able to read or analyze everything, so you can help others when you have a piece of knowledge. Relevant: https://xkcd.com/1053/.


I don’t see that graph anywhere on the Wikipedia page

https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effec...

I do see other graphs that tell a different story. Namely, that confidence is a monotonically increasing function of competence. If the data supports the idea that there is a valley of despair where confidence decreases as competence increases, I must be missing it.


Here, from Wikipedia:

https://commons.wikimedia.org/wiki/File:Dunning%E2%80%93Krug...

[edit: yes, it isn't currently on the Wiki page. On the other hand, I've seen that graph associated with that work before]


This is Commons, not a Wikipedia article. This image is incorrect, has been removed from the enwiki article, and is in fact explicitly tagged with a disputed factual accuracy notice.

Dunning-Kruger described a relationship between people's subjective opinion of their skill, and their performance on a test. They find the subjective curve is less steep than the objective one (low performers believe they are closer to the center than they really are, and so do top performers). There's no "peak of stupid", or anything else on that graph.

Repeating vague associations you've seen on the Internet before is how misinformation spreads.


I dispute nothing you write. Looking at the paper, that graph is not within it.

Either my eyes skipped past it or that dispute notice was added after I linked the image. Regardless it belongs there.

I have previously seen a similarly shaped graph with Dunning-Kruger effect discussions many times, including on Wikipedia I believe. Now I'm curious what the source of the misrepresentation is since it does not appear quite derivable without artistic interpretation from the paper's data.

Regardless, I'm glad to update and add to my beliefs.

Please note that despite the implication that seems to be in your final statement, I did not mean to say the graph was correct, only that it is a graph commonly associated with the paper's message and thus understandable for the author to have used. From that, the use of it doesn't quite come from nowhere. In fact, I didn't really say much at all. While Wikipedia is the first search result, the Decision Lab is next, which has a similar, even more distorted graph on its page [0] and yet is a fairly well-esteemed organization.

Glad to improve my knowledge, but the fact that the graph is in common use is not misinformation, even if the graph itself misinforms and isn't from the paper.

[0] https://thedecisionlab.com/biases/dunning-kruger-effect


I am likewise baffled by this. The entire "Mount Stupid" theory of Dunning-Kruger is wrong, and the blog shows that same wrong graph for me.

Maybe the author is running some kind of A/B test between the actual Dunning-Kruger paper graph and the fake one?


Presumably you read the section where Brooks highlights all the forecasts executives were making in 2017? His NET predictions act as a sort of counter-prediction to those kinds of blindly optimistic, overconfident assertions.

In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.


In what sense is self-driving “here” if the economics alone prove that it can’t get “here”? It’s not just limited coverage, it’s practically non-existent coverage, both nationally and globally, with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in.


It's covering significant areas of 3 major metros, and the core of one minor, with testing deployments in several other major metros. Considering the top 10 metros are >70% of the US ridehail market, that seems like a long way beyond "non-existent" coverage nationally.


You’re narrowing the market for self-driving to the ridehail market in the top 10 US metros. That’s kinda moving the goal posts, my friend, and completely ignoring the promises made by self-driving companies.

The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.

In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) account for less than 1% of daily driving miles in the US (9B).

Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.

Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.

[1]: https://drive.google.com/file/d/1FIUskVkj9lsAnWJQ6kLhAhNoVLj...


There are, optimistically, significantly fewer than 10k Waymos operating today. There are a bit less than 300M registered vehicles in the US. If the entire US automotive production were devoted solely to Waymos, it'd still take years to produce enough vehicles to drive any meaningful percentage of the daily road miles in the US.

I think that's a bit of a silly standard to set for hopefully obvious reasons.


> ..is a tiny part of a tiny market in a single nation in the world.

The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.

If a self-driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc., then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that the EU has stricter rules/regulations while the US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).

When 'you guys' (US) nail self-driving, it will only be a matter of time til we (EU) allow it to cross the pond. I see this as a hockey-stick graph. We are still on the eraser/blade phase.


if you had read the F-ing article, which you clearly did not, you would see that you are committing the sin of exponentiation: assuming that all tech advances exponentially because microprocessor development did (for a while).

Development of this technology appears to be logarithmic, not exponential.


He's committing the "sin" of monotonicity, not exponentiation. You could quibble about whether progress is currently exponential, but Waymo has started limited deployments in 2-3 cities in 2024 and wide deployments in at least SF (its second city after Phoenix). I don't think you can reasonably say its progress is logarithmic at this point - maybe linear or quadratic.


Speaking for one of those metro areas I'm familiar with: maybe in SF city limits specifically (where they're still at half of Uber's share), but that's 10% of the population of the Bay Area metro. I'm very much looking forward to the day when I can take a robocab from where I live near Google to the airport - preferably, much cheaper than today's absurd Uber rates - but today it's just not present in the lives of 95+% of Bay Area residents.


> preferably, much cheaper than today's absurd Uber rates

I just want to highlight that the only mechanism by which this eventually produces cheaper rates is removing the need to pay a human driver.

I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.

I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.


People living on the margins is fundamentally a social problem, and we all know how amenable those are to technical solutions.

Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?


I'm not sure how I could have been more clear that I'm not suggesting we stop development on robotaxis or anything related to AI.

All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.

As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.


This is orthogonal. You're living in a society with no social safety net, one which leaves people with minimal options, and you're arguing for keeping at least those minimal options. Yes, that's better than nothing, but there are much better solutions.

The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.


I agree.

But this is the society we live in now. We don’t live in one where we take care of those whose jobs have been displaced.

I wish we did. But we don’t. So it’s hard for me to feel quite as excited these days for the next thing that will make the world worse for so many people, even if it is a technological marvel.

Just between trucking and rideshare drivers we’re talking over 10 million people. Maybe this will be the straw that breaks the camel’s back and finally gets us to take better care of our neighbors.


Yeah but it doesn't work to on the one hand campaign for not taking rideshare jobs away from people on an online forum, and on the other say "that's the society we live in now". If you're going to be defeatist, just accept those jobs might go away. If not, campaign for wealth redistribution and social safety nets.


I do?


Public transit would also remove a lot of jobs, and yet nobody is suggesting we shouldn't build more public transit because it will remove jobs.

This is just coming from using what we already know how to do better.


Public transit has a fundamentally local impact. It takes away some jobs but also provides a lot of jobs for a wide variety of skills and skill levels. It simultaneously provides an enormous number of benefits to nearby populations, including increased safety and reduced traffic.

Self-driving cars will be disruptive globally. So far they primarily drive employment in a small set of the technology industry. Yes, there are manufacturing jobs involved but those are overwhelmingly going to be jobs that were already building human-operated vehicles. Self-driving cars will save many lives. But not as many as public transit does (proportionally per user). And it is blindingly obvious they will make traffic worse.


Do you ever drive yourself or would you feel guilty not paying a driver?


> preferably, much cheaper than today's absurd Uber rates

You haven’t paid attention to how VC companies work.


Waymo has approval to operate in San Mateo County so it’s likely coming pretty soon.


Waymo's current operational area in the Bay runs from Sunnyvale to Fisherman's Wharf. I don't know how many people that is, but I'm pretty comfortable calling it a big chunk of the Bay.

They don't run to SFO because SF hasn't approved them for airport service.


I just opened the Waymo app and its service certainly doesn't extend to Sunnyvale. I just recently had an experience where I got a Waymo to drive me to a Caltrain station so I can actually get to Sunnyvale.


The public area is SF to Daly City. The employee-only area runs down the rest of the peninsula. Both of them together are the operational area.

Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay Area it's currently just the two divisions I'm aware of.


Why would you consider the employee-only area? For that categorization to exist, it must mean it's either unreliable for customers or too expensive because there are too many human drivers in the loop. Either way it would not be considered an area served by self-driving, imo.


There are alternative possibilities, like "we don't have enough vehicles to serve this area appropriately", "we don't have statistical power to ensure this area meets safety standards even though it looks fine", "there are missing features (like freeways) that would make public service uncompetitive in this area", or simply "the CPUC hasn't approved a fare area expansion".

It's an area they're operating legally, so it's part of their operational area. It's not part of their public service area, which I'd call that instead.


I wish! In Palo Alto the cars have been driving around for more than a decade and you still can't hail one. Lately I see them much less often than I used to, actually. I don't think occasional internal-only testing qualifies as "operational".


Where's the economic proof of impossibility? As far as I know Waymo has not published any official numbers, and any third party unit profitability analysis is going to be so sensitive to assumptions about e.g. exact depreciation schedules and utilization percentages that the error bars would inevitably be straddling both sides of the break-even line.

> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in

That argument doesn't seem horribly compelling given the regular expansions to new areas.


Analyzing Alphabet’s capital allocation decisions gives you all the evidence necessary.

It’s safe to assume that a company’s ownership makes the decisions that they believe will maximize the value of their company. Therefore, we can look at Alphabet’s capital allocation decisions, with respect to Waymo, to see what they think about Waymo’s opportunity.

In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.

Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).

This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.

Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.


Isn’t there a point of diminishing returns? Let’s assume they hand over $70B to Waymo today. Can Waymo even allocate that?

I view the bottlenecks as two things: producing the vehicles and establishing new markets.

My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.

Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.

I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.


Yes, correct, you’re restating the “risk/uncertainty” in the form of various concrete hypotheses. :)

Within the context of the original discussion around whether self-driving is here, today, or not, I think we can definitively see it’s not here.


To be clear, buying back stock is one of the ways they can invest in Waymo (and other business units).

Since Alphabet buybacks mostly just offset employee stock compensation, the main thing they are getting for this money is employees.


I would prefer if they just gave employee bonuses rather than this indirect form of compensation.


>believes their money is better spent buying back their stock,

Alphabet has to buy back their stock because of the massive amount of stock comp they award.


> Alphabet has to buy back their stock because of the massive amount of stock comp they award.

Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it in the public market?


It's much better comp if the value of the stock goes up.


They could issue more stock, but Alphabet has decided to keep the number of outstanding shares the same, it's a thing they do for shareholders.


This is just a quirk of the modern stock market capitalist system. Yes, stock buybacks are more lucrative than almost anything other than a blitz-scaling B2B SaaS. But for the good of society, I would prefer if Alphabet spent their money developing new technologies and not on stock buybacks / dividends. If they think every tech is a waste of money, then give it to charity, not stock buybacks. That said, Alphabet does develop new technologies regularly. Their track record before 2012 is stellar, their track record now is good (AlphaFold, Waymo, TensorFlow, TPU etc), and it is nowhere close to being the worst offender of stock buybacks (I’m looking at you Apple), but we should move away from stock price over everything as a mentality and force companies to use their profits for the common good.


That's a very hand wavy argument. How about starting here:

> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000

There are definitely some numbers out there that allow us to estimate within some standard deviations how unprofitable Waymo is


(That quote doesn't seem credible. It seems quite unlikely that Waymo would use H100s -- for one, they operate cars that predate the H100 release. And H100s sure as hell don't cost just $10k either.)

You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.

Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.

Do you know what those numbers actually are? I don't.
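Just to show how sensitive it is, here's a toy back-of-envelope where every number is invented for illustration (only the ~$150k vehicle cost comes from the quote above):

    # Every number below is made up, purely to show the sensitivity.
    vehicle_cost = 150_000        # retrofit cost from the quote above
    rides_per_day = 20            # assumed utilization
    revenue_per_ride = 15         # assumed average fare
    other_opex_per_day = 150      # assumed charging, cleaning, remote ops

    daily_margin = rides_per_day * revenue_per_ride - other_opex_per_day  # 150

    for years in (1, 3, 5):
        daily_depreciation = vehicle_cost / (years * 365)
        print(years, "yr:", round(daily_margin - daily_depreciation), "per day")
    # 1 yr: ~ -261/day, 3 yr: ~ +13/day, 5 yr: ~ +68/day

Flip the depreciation window and the same hardware goes from clearly unprofitable to comfortably profitable, which is exactly why third-party analyses end up straddling the break-even line.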


I know for a fact Waymo uses TPUs, not GPUs; maybe it is equivalent to 4 H100s, but TPU vs GPU is somewhat apples vs oranges.


It's here in the product/research sense, which is the hardest bar to cross. Making it cheaper takes time, but generally we have reduced the cost of everything by orders of magnitude when manufacturing ramps up, and I don't think self-driving hardware (sensors etc.) would be any different.


It’s not even here in the product/research sense. First, as the author points out, it’s better characterized as operator-assisted semi-autonomous driving in limited locations. That’s great but far from autonomous driving.

Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?

Waymo is nowhere near a turn-key system where they can setup in any city without investing in the infrastructure underlying Waymo’s system. See [1] which details the amount of manual work and coordination with local officials that Waymo has to do per city.

And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.

Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.

Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.

Far more than lowering the cost, there are fundamental technological problems that remain unsolved.

[1]: https://waymo.com/blog/2020/09/the-waymo-driver-handbook-map...


Thanks for linking me to the ICAP framework. ICAP and your “I do something else to learn” methodology generally jibe with the same thing I happened upon by chance after spending a lot of time trying to “learn how to learn” the best way; landing upon SRS and Anki specifically, as a tool; and then finding a much better process+system. Given how deeply involved you are in the space, I assume you’ve heard of, and possibly follow, some of Justin Sung’s videos and techniques?

He provides some scientific foundation behind the recommendations he makes, specifically his recommendations around mind mapping and _how_ to do it properly. His process puts mind mapping firmly in the _Interactive_ mode. The results are truly unbelievable.

So much so that after investing 20 hours to mind map a book for myself 7 months ago, I can recall practically all the information I mind mapped without rehearsal.

Mind mapping makes up probably 70% of my learning these days, then I have a long-form written system for the other 29%, and sometimes, when I have a little isolated fact that doesn’t fit in either system, I turn to SRS for memorization of the last 1%.


I'm not familiar with Sung's videos, but after a quick perusal of his thumbnails I have some knowledge in the various cognitive science concepts he covers (Cognitive Load Theory, Flow, Mindmaps). I didn't really dig into educational theory until I started teaching Computer Science. I wanted to figure out how to better instruct my students after learning about the high drop/fail rates in intro courses. Once I started to experiment in my classes, I decided I should get the PhD for a pay bump.

Once I was in the program, I focused most of my research on reading where cog sci was being used for STEM, but also on general practice research. I've been training martial arts for almost 20 years now, so some of the research was me double dipping in how to improve teaching CS and punching people.

Honestly, I still argue that martial arts' spaced repetition was a bigger influence on how I view learning. I need to allocate 2-4 hours 1-4 times a week for practice (4 when I was younger and could get away with it); correct due to immediate feedback from my partners; and have a giant support network of people throughout the US that makes me vested in not only the art but their lives as well. I acknowledge the benefits of meta-cognitive methods like planning and self-reflection, but they feel more theory than application.

Planning is great until you're a novice that doesn't know what to train next. Then you are just a struggling student receiving negative reinforcement, which only amplifies any imposter syndrome you already have. Sadly, there isn't much research exploring how physical athletes learn beyond simple spaced repetition. There's some work in interleaved practice [1] but since physical training is more or less "solved", progress is slow.

Instead, I focus on the various lower-level practice activities so students can acquire subskills without needing to program. Then, I heavily encourage building a 'sense of community' [2], not through group projects (which have their own faults) but rather in simply "giving a damn" about your classmates' progress.

At the end of the day, I think learning is heavily a "time on task" [3] problem, and a matter of determining how to structure lower-level practice and toy examples that encourage you to keep with it and break Carol Dweck's "fixed mindset" [4].

I'd like to dig deeper into how to properly structure practice across ICAP modalities, but the sheer number of variables and even determining how many activities should be in a practice is too complex of a problem without a very large sample size.

[1] https://effectiviology.com/interleaving/

[2] https://en.wikipedia.org/wiki/Sense_of_community

[3] https://www.thisiscalmer.com/blog/time-on-task-learning-stra...

[4] https://learning-theories.com/mindset-theory-fixed-vs-growth...


Do you have any book recommendations to help understand the electrical grid and why you can’t just start pumping electricity into it? I find these type of systems fascinating.


Fundamentally, grid operators have to match demand and generation in real time, and they have a bunch of grid-scale devices along the way to sink excess generation temporarily, or (expensive) peaker plants to ramp up generation temporarily. For example, natural gas can respond more quickly to demand, but nuclear power plants typically cannot. Typically nuclear plants will feed the average 'base load' and the more responsive plants will handle temporary peaks.

In order to financially make this work, there are a whole litany of agreements + commitments in place, as well as some "free market magic". (Remember a few years ago when spot prices for electricity spiked into obscene territory?)

Renewables provide unique challenges for these operations, as you cannot simply turn sun and wind off and on. Similarly, you can't just pump uncontrolled electricity into the grid w/o the operator's coordination.
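A toy sketch of that balancing act, with all numbers invented for illustration:

    # All numbers invented, just to illustrate the real-time matching problem.
    base_load = 800           # MW from plants that run flat (e.g. nuclear)
    peaker_capacity = 300     # MW of fast-ramping generation (e.g. gas)
    storage_headroom = 100    # MW that batteries/pumped hydro can absorb

    for demand in (750, 900, 1150, 1250):      # MW, shifts minute to minute
        gap = demand - base_load
        if gap >= 0:
            peaker = min(gap, peaker_capacity)
            print(demand, "MW: peakers supply", peaker, "MW, unmet", gap - peaker)
        else:
            stored = min(-gap, storage_headroom)
            print(demand, "MW: store", stored, "MW, curtail", -gap - stored)

Anything landing in the "unmet" or "curtail" bucket is where prices spike or generators get told to back off, which is why uncoordinated generation feeding the grid is a problem for the operator.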


Huh, thanks for that explanation, I didn't consider what happens when a grid generates excess energy that has nowhere to go. Makes sense that bad things would happen once you've exceeded your storage capacity, hence the real-time matching.


You may have already seen it, but this timely story from Grady/Practical Engineering also goes into these details: https://news.ycombinator.com/item?id=42183747


You don’t provide any data to support your premise. There’s an intervening 10 years between acceptance into medical school and showing up at a surgery table. Show the connection between MCAT scores and surgery outcomes.


> You don’t provide any data to support your premise

lol, the burden is not on the world to prove that 10 years of med school can turn anyone into a good doctor.

It's on the people that decided to not take in the best students in the first place.


I've met a few BIPOC Ivy docs over the last few years.

Some of them basically felt burned by the education system that accepted them into undergrad, where they performed at the bottom of the class because they were let in with the lowest standards. Then the process repeated in med school, where they again felt burned after being let in with the lowest scores. Some of them would assume: these schools let me in hoping I would fail out so their diversity numbers look good; instead I graduated at the bottom of my class and had a terrible experience.

The psychological effect of being at the bottom of your class at an Ivy, vs top of your class at a public state university, is an interesting way to start your career in any field.


Surely the number of people at the bottom of their class at an Ivy didn’t change though? Just (possibly) the race of those at the bottom? So any psychological effect of being at the bottom seems… constant?


Of course there is always a bottom 20% of the Ivy League class. What differs is:

If you lower academic metrics to admit racially diverse students, these students will all be in the bottom 20% unless the university changes other factors.

To prevent the bottom 20% from all being racially preferred students, the university or professor has the choice of either lowering the level of instruction so these students can compete, giving them better scores based on race, or letting them be the bottom 20% of the class because they can't keep up with the other merit-based students.

Some Ivy professors, such as Amy Wax, have discussed this, mentioning that racially preferred admits are always at the bottom of the class, that the current process sets them up to fail rather than pushing them to a state school where they might be at the top of the class, and that such students have never been at the top of her classes. As expected, the university is trying to revoke her tenure.


Pushing them to state schools where they might be at the top of their class has psychological benefits for the student. If I am #1 in level 2 math vs. the worst student in level 1 math, I can take pride in my academic success, and this will encourage me to study more rather than feeling like the worst student in class. If I knew my high school put me in level 1 math intentionally, knowing I would be the weakest student, I would be mad at the high school.


> It's on the people that decided to not take in the best students in the first place.

The reason all students at this level look exactly the same is that everyone in the pool at this point has essentially unlimited potential.


Relax, you're on the internet not in a courtroom. u/sandspar made a claim, and it's well within reason to point out that their claim has no basis in the data they referenced.


In his specific example, a comment is not the best solution. You can create self-documenting code easily with an interface and an implementation, also commonly referred to as the strategy pattern.

Interface expresses “what the function does” while the implementation expresses “how it’s done with what tradeoffs”.

Bonus, in the future, if performance ever does become a problem, you swap in the new optimized implementation behind the existing interface.
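A minimal sketch of the shape I mean (names invented, not from the article):

    # Names invented, purely illustrative.
    from abc import ABC, abstractmethod

    class DuplicateFinder(ABC):
        """WHAT: return indices of records that duplicate an earlier record."""
        @abstractmethod
        def find(self, records: list) -> list:
            ...

    class NaiveDuplicateFinder(DuplicateFinder):
        """HOW: O(n^2) pairwise scan. Obviously correct, fine for small data."""
        def find(self, records: list) -> list:
            return [i for i, r in enumerate(records) if r in records[:i]]

    class HashedDuplicateFinder(DuplicateFinder):
        """HOW: one pass with a seen-set. O(n), but records must be hashable."""
        def find(self, records: list) -> list:
            seen, dupes = set(), []
            for i, r in enumerate(records):
                if r in seen:
                    dupes.append(i)
                seen.add(r)
            return dupes

    # Callers only ever see DuplicateFinder, so swapping in the faster
    # implementation later doesn't touch them.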


Interfaces are not free. Depending on the implementation and how you use them, they can have significant runtime costs, some compile-time cost (though usually negligible), and, most importantly, code complexity.

When you see a call to an interface, you don't know what the code does concretely, you only know what it is supposed to do. The actual implementation and how it ties to the interface may be in a completely different place. It is one of my biggest sources of headaches when debugging code.

Interfaces are useful, the strategy pattern is useful, but overuse is harmful. My idea is to not use an abstraction unless I know I am going to need it. For example, let's say I need to decode video, and I have a hardware decoder and a software decoder. Here, an abstraction makes sense, as I know the software decoder will be prohibitively slow on some platforms, and the hardware decoder won't always be supported. But if the optimized version is sufficiently better so that it makes the old version obsolete, just change the code.

And if I don't know if I will need to change strategies later, I just write the first strategy, and if it calls for an interface later, only then I will do the abstraction.
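Roughly the decoder shape I mean, as a sketch (hypothetical API; the decode bodies are placeholders):

    # Hypothetical API, purely illustrative; decode bodies are stubs.
    from abc import ABC, abstractmethod

    class VideoDecoder(ABC):
        @abstractmethod
        def decode(self, chunk: bytes) -> bytes:
            ...

    class HardwareDecoder(VideoDecoder):
        """Fast, but only available where the platform exposes a codec."""
        def decode(self, chunk: bytes) -> bytes:
            return chunk  # platform codec call would go here

    class SoftwareDecoder(VideoDecoder):
        """Works everywhere, prohibitively slow on weak hardware."""
        def decode(self, chunk: bytes) -> bytes:
            return chunk  # pure-software decode would go here

    def make_decoder(hardware_available: bool) -> VideoDecoder:
        # The one place that knows about both strategies.
        return HardwareDecoder() if hardware_available else SoftwareDecoder()

Everywhere else just takes a VideoDecoder; if the second strategy never materializes, I'd collapse this back down to a plain function.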

