Hacker News | new | past | comments | ask | show | jobs | submit | xvector's comments

They are solving for privacy before solving for the UX.

They should actually make something useful first, and then work backwards to make it private before releasing it.


With 1B+ users, Apple isn't in a position to do the typical startup fast-and-loose order of operations. Apple has (rightly) given itself the responsibility to protect people's privacy, and a lot of people rely on that. It'd be a really bad look if it turned out they made Siri really, really useful but then hostile governments all got access to the data and cracked down on a bunch of vulnerable people.

Making privacy some end-goal that PMs cut to meet targets is how you end up with Google redefining privacy to mean "only we have access to every aspect of your life, now and in the future".

If Apple takes the position that the UX has to fit in around the privacy requirements, so what? Privacy is a core pillar of their product identity—a built-in hallucinating compliments machine isn't.


To be fair, Google have always treated privacy that way, long before they used any of that data.

Big players don't want this either, we rely on open source software and frequently contribute back

This is just another dumb EU reg that hurts everyone


Let's turn that around. What personal data wouldn't help train an AI model?


Where can I read more about this? Quick search turns up nothing for me


https://www.theverge.com/news/823191/meta-ftc-antitrust-tria...

It is actually a monumental ruling, and for some reason it wasn't reported or discussed here. Lina Khan's FTC has lost both of its marquee cases now (Google, Meta).

> Meta won a landmark antitrust battle with the Federal Trade Commission on Tuesday after a federal judge ruled it has not monopolized the social media market at the center of the case.


Wasn't the case here really weak to begin with? I remember reading the FTC's initial filings and they just sounded absurd. The very premise that Meta didn't face meaningful competition from TikTok was a farce.

I'm not very happy with Lina Khan after she killed our only remaining low-cost airline carrier. And killed iRobot to let Roborock, a Chinese company, take over.

She "stood up" to big tech, failed, and her remaining legacy is destroying American businesses that people actually relied on. Literally no value was added, but a bunch was subtracted. I never understood the hype for her.


> The very premise that Meta didn't face meaningful competition from TikTok was a farce.

The original claim was centered around the timeline of purchasing Instagram and Whatsapp. TikTok came much, much later.


If this is true, the case then becomes "Meta was a monopoly from start_date-tiktok_date" which isn't a very meaningful claim since they are not arguing it is a monopoly to be broken up.

Anyways, I disagree - this is not the case. If you read the filings and their slides, the FTC argues Meta is a monopoly in the personal networking space.

They essentially carve a market out of thin air to selectively exclude Snapchat, TikTok, and Shorts. The judge has understandably called this for what it is.

It was a phenomenally poorly litigated case; most experts at the time doubted it would succeed, but it did wonders for Lina Khan's popularity. Seems to have served her well with NYC and all.


Just to be clear, when you say Khan "killed our remaining low cost airline carrier", are you referring to when the DOJ blocked the JetBlue-Spirit Airlines merger? Not arguing, I just want to understand.


Correct, yeah.



They also have plenty of domestic and foreign intelligence agents literally working with sensitive systems at the company.


Total non-issue. Parenting has never been about what the child wants, only what is best for them.


> But in this case i feel the wider harm to society outweighs the potential good to the individual.

This is where you have it wrong. The risk is not to society, it is to the individual. One family can take on immense risk to discover something that benefits all of humanity - whether it makes us live better, cure a disease, etc.

Yes, there are society-wide upheavals that a new technology like this might create, which you might be referring to as a "risk" - but upheavals are a fact of life with all major technologies throughout human history. We will adapt.


It's not a simple debate, but you are suggesting unprecedented levels of medical intervention. It's an ethical minefield. Firstly, I'm sure this is not your intention, but you are basically suggesting we should test genetic experiments on human guinea pigs. I'm not an expert in medical ethics, but I'm pretty sure it's a major no-go however noble the intention (I know new treatments get tested all the time, but this is a level up from that). You are also suggesting we should use it to solve problems as trivial as colour blindness, even without fully understanding the moral, ethical, and social impacts of using gene editing in such a way.


We have become too risk averse as a species to make any real progress on this front.

Our ancestors would make the most daring bets in pursuit of a better life for their children. Hunter-gatherers setting off in an unknown direction in search of more abundant pastures, knowing that their survival was unlikely.

Everything we have is thanks to them.

Today we rest on our laurels, unwilling to take trajectory-changing bets because things might go wrong. In our risk paralysis, human evolution will come to a standstill, and that is a disservice to all future humans.

No longer can an individual family or group of humans set out in that direction in search of a better future. They will be thrown in prison for daring to instead.


There aren't any risks to take. Modern society is approaching a steady-state solution.

Eugenics and artificial selection result in monocultures. In the long run they have the opposite effect of what you're describing.


Maybe it's not risk-aversion, but an adjacent concept I'll call stifled freedom of action.

It's very hard to just do stuff nowadays. For example, building something on your land, selling stuff to other humans, marrying someone, immigrating somewhere, renewing your id, paying your taxes.

The immense burden of paperwork and the knowledge required to navigate it all, and the paralysis that comes from just being aware of the burden, is not trivial.

The individual really ought to stay in their lane and fit into the template that's expected of them by the systems they are subject to.

It legitimately wasn't like this a century ago. We were oppressed by nature (disease, material poverty), but in many real ways we had more freedom of action to just do life stuff.


I think it is fairly shortsighted to think that modern society is approaching "steady state" when we are on the "stick" part of the hockey stick curve of progress.

There are plenty of risks to take today (with things like gene editing - which does not mean "monoculture") and there will be plenty of trajectory-changing risks to take tomorrow.


Steady state solution? Christ, imagine if they had decided that's where they were at in 1800.


> Our ancestors would make the most daring bets in pursuit of a better life for their children.

There are numerous counterexamples to this, and plenty of them worked out fine. The speed and enthusiasm with which we adopt new technology is unmatched by any culture with a surviving literary tradition that I'm aware of.


I often think about what would happen if somebody were to engineer some sort of quasi-universal cure for cancer, and they were to do it out of desperation. Say the cure works, and then this person wanted it to reach more people right now. Would they become fugitives? Would the long arm of the law chase them to the confines of the world? What would the drug lobby do if the billions of investment they must throw into drug certification were jeopardized by some Rambo?


They shouldn't be banned, but regulators would regulate their own shadow if they could.

People are allowed to mutilate their babies, raise them in whatever destructive fashion they please, avoid vaccinating them in an environment where they will be exposed to deadly viruses.

But god forbid someone try to make their baby immune to AIDS, some other genetic disease, or reduce the likelihood of psychosis given family history.

There is no world in which regulators will let this happen. There is no way to test this in a manner that will satisfy them, because babies can't consent to a trial. If it was up to regulators, human evolution ends here. No group should have that power over our species.

It is the same problem as modern medicine being so prohibitively expensive to test, that most ideas go to the bin. We need a deregulated zone to allow for progress to actually happen.


Genetic engineering is banned because people will almost certainly use it for something more nefarious than curing AIDS the first chance they get.


The same can be said of things like mRNA vaccines, but they have done good for society.

You're also just wrong - the first scientist to genetically edit human embryos edited in immunity to AIDS.


the first, but not the last


the methods of cleaning up messes introduced are also kind of disgusting


Citation needed.


Some things don’t need citation. Nuclear energy is a great example. You don’t need citations to explain why allowing every country to pursue it is a bad idea.


Again: citation needed.

A huge chunk of the environmental disaster we are facing is because Europe and the US didn't go the nuclear route like France did in the '60s. We could have pushed the crisis we're having now a few hundred years into the future instead.


You need citation on why authoritarian dictatorships shouldn’t have access to nuclear power?


Yea. Nuclear power is not the same thing as nuclear weapons.


Of course it isn’t. But once you have access to nuclear power, you can have access to nuclear weapons very quickly.


People aren’t allowed to mutilate babies, what the hell are you going on about?

Genetic tampering can lead to all kinds of unknowable nightmares.


> People aren’t allowed to mutilate babies

Circumcision?


> People aren’t allowed to mutilate babies, what the hell are you going on about?

Circumcision is absolutely mutilation.

> Genetic tampering can lead to all kinds of unknowable nightmares.

You can "tamper with your kid's DNA" just by having kids with the wrong person and passing down a genetic disease.

There are plenty of unknowable things about life. You could die in a car crash. You certainly will die eventually.

Should we avoid taking risks entirely because they might result in bad outcomes? With this mindset, humanity would have never progressed. We would have never left our caves if we were paralyzed by our own fear.

Humanity is still early stage. We are not so different from those that once ventured out of their caves. To them, we owe everything. It is a disservice to all future humans that will ever live if we stop taking trajectory-changing bets because things could go wrong.


I agree on circumcision, but you made it out that all kinds of mutilation are perfectly acceptable. That one should definitely be banned; idk why (non-Jewish) Americans are so obsessed with it.

> There are plenty of unknowable things about life.

I agree but I know that I’m going to die someday.

As for where genetic engineering can lead I recommend the book “All Tomorrows”.

In any case I broadly agree with you - however, there should still be guardrails, and until we can safely and reliably manipulate the genetics of “less complex” animals we shouldn’t experiment with humans.

However, you can probably do it if you really want! There are lots of countries that have fewer guardrails in place - but I would assume you don’t want to take the risk when it comes to your own life/offspring, or am I wrong?

Take some trajectory-changing bets yourself and then I’ll believe that what you are saying is not just posturing


> I would assume you don’t want to take the risk when it comes to your own life/offspring or am I wrong?

I would, but that's mainly because of congenital psychosis that runs in my partner's family. Would gladly take the chance at editing that out of any embryo if there were targeted therapies.

If you know of any, please let me know - my understanding is that psychosis has not been isolated as well as Down's and blindness have, so you cannot genetically screen an embryo for it.


Incidentally there is another thread where just this is being discussed: https://news.ycombinator.com/item?id=45867125

Not sure if it can help in your case but definitely interesting.


I think OP might be referring to circumcision.

And just as a small aside, not really related to OP's points, I'd just like to point out that nature pretty consistently tampers with everyone's kids' DNA, which quite regularly leads to absolute nightmare fuel. Whatever those unknowable nightmares may be, they have to be pretty gruesome in order to compete.


You can't simultaneously expect people to learn from AI when it's right, and magically recognize when it's wrong.


But you can expect to learn in both cases. Just like you often learn from your own failures. Learning doesn’t require that you’re given the right answer, just that it’s possible for you to obtain the right answer


Hopefully you're mixing chemicals, diagnosing a personal health issue or resolving a legal dispute when you do that learning!


We’ve been down this road before. Wikipedia was going to be the knowledge apocalypse. How were you going to be able to trust what you read, when anyone can edit it, if you don’t already know the truth?

And we learned the limits. Broadly verifiable, non-controversial items are reasonably reliable (or at least no worse than classic encyclopedias). And highly technical or controversial items may have some useful information but you should definitely follow up with the source material. And you probably shouldn’t substitute Wikipedia for seeing a doctor either.

We’ll learn the same boundaries with AI. It will be fine to use for learning in some contexts and awful for learning in others. Maybe we should spend some energy on teaching people how to identify those contexts instead of trying to put the genie back in the bottle.


If you can't discern the difference between a LAMP stack returning UGC and an RNG-seeded matmul across the same UGC fine-tuned by sycophants, then I think we're just going to end up disagreeing.


> You can't simultaneously expect people to learn from AI when it's right, and magically recognize when it's wrong.

You are misconstruing the point I was making.

My point is that DK is about developing competence and the relationship between competence and confidence (which I am also claiming evolves over time). My whole point is that the DK effect is not as relevant to LLMs giving wrong answers and people believing them as the author is claiming.

As someone else pointed out in another comment, the effect of people believing falsehoods from LLMs has more to do with Gell-Mann amnesia.

Tangentially, it actually is possible to learn from AI when it's right and recognize when it's wrong, but it's not magic - it's just being patient, checking sources, and thinking critically. It's how all of humanity has learned pretty much everything, because most people have been wrong about most things for most of time, and yet we still learn from each other.

