Hacker News | materielle's comments

Why isn’t it the government’s role?

Because you think it’s not?

What if I, and many other people, think that it is?


Because it's ultimately a form of censorship. Governments shouldn't be in the business of shutting down speech some people don't like, and in the same way shouldn't be in the business of shutting down software features some people don't like. As long as nobody is being harmed, censorship is bad and anti-democratic. (And we make exceptions for cases of actual harm, like libelous or threatening speech, or a product that injures or defrauds its users.) Freedom is a fundamental aspect of democracy, which is why freedoms are written into constitutions so simple majority vote can't remove them.

1) Integration or removal of features isn't speech. And has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).


I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

And legally, code and software are considered a form of speech in many contexts.

Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.


Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.

I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.

But I think, given that AI has encouraged people toward suicide, and would give minors the ability to circumvent parental controls, as examples, that regulations pertaining to AI integration in software, including mandates that let users disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Beyond that, the leaking of personally identifiable information causes material harm every day. So there needs to be proactive control available to the end user over what AI does on their computer, and over how easy it is to accidentally enable information-gathering AI when that was not intended.

I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.


Those examples of harm are not good ones.

The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.

As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.

But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.

(Also, regarding sonograms, I typed without thinking -- yes of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors not lawmakers.)


> Those examples of harm are not good ones.

I emphatically disagree. See you at the ballot box.

> but it's not a major practical concern for any end users so far.

My wife came across a post or comment by a person considering preemptive suicide in fear that their ChatGPT logs will ever get leaked. Yes, fear of leaks is a major practical concern for at least that user.


Fear of leaks, or the other harms you mention, have nothing to do with the question at hand, which is whether these features are enabled by default.

If someone is using ChatGPT, they're using ChatGPT. They're not inputting sensitive personal secrets by accident. Turning Gemini off by default in Gmail isn't going to change whether someone is using ChatGPT as a therapist or something.

You seem to simply be arguing that you don't like LLMs. To which I'll reply: if they do turn out to present substantial harms that need to be regulated, then so be it, and regulate them appropriately.

But that applies to all of them, and has nothing to do with the question at hand, which is whether they can be enabled by default in consumer products. As long as chatgpt.com and gemini.google.com exist, there's no basis for asking the government to turn off LLM features by default in Gmail or Calendar, while making them freely available as standalone products. Does that make sense?


I think investors would certainly love this. So why hasn’t it already happened?

My guess: they would lose a ton of cultural cachet.

Turning OpenAI into an ads business is basically admitting that AGI isn’t coming down the pipeline anytime soon. Yes, I know people will make some cost-based argument that ads + agi is perfectly logical.

But that’s not how people will perceive things, and OpenAI knows this. And I think the masses have a point: if we are really a few years away from AGI replacing the entire labor force, then there’s surely higher margin businesses they can engage in compared to ads. Especially since they are allegedly a non-profit.

After Google and Facebook, nobody is buying the “just a few ads to fund operating costs” argument either.


Yup, it’s essentially an admission of failure. I think the people who were expecting AI to improve exponentially are disappointed by its current state, where it’s basically just a useful tool to assist workers in some highly specific fields.

Highly specific fields? They are trying to get you to reach for AI when an emailed “ok, thanks” would do. They want you to lose your ability to write and formulate thoughts without the tool. Then it is really over. That is the golden goose. Not a couple data scientists.

> it’s essentially an admission of failure

A multibillion dollar failure is fine by investors. Altman hasn’t been peddling the AGI BS to them. That’s aimed at the public and policymakers.


Is a trillion dollar failure okay with investors?

Aka you need them deep enough into the trap they can’t escape, before you trigger it.

Yes and there are layers. Remember when google ads had yellow backgrounds? I'm sure OpenAI will find a way to do ads "ethically"... for a while, until people get comfortable, and that's when they will start to make ChatGPT increasingly manipulative.

Gotta make that line go up and to the right!

> The goals of the advertising business model do not always correspond to providing quality search to users.

- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine


Can anyone explain to me what ChatGPT does that traps people? I get the value as tools, I like using copilot, but ChatGPT doesn't offer me value that any other LLM can't. Given that everyone is quickly rolling "AI" into their own stuff, I don't see what's ChatGPT's killer app. If anything, I think Gemini is better positioned to capture the general user market.

They make it a habit to use them, by offloading that part of their thinking/process to them. It’s similar to Google Maps, or even Google itself.

When was the last time you went to an actual physical library, for instance? Or pulled out a paper map?

Gemini is a competitor, yes. But most people still go to Google at this point, even if there are a ton of competitors.

That is what the race is about (in large part), who can become ‘the norm’.


I also wouldn't underestimate Google's ability to nudge regular users towards whichever AI surface they want to promote. My highly non-technical mom recently told me she started using Google's "AI Mode" or whatever it's called for most of her searches (she says she likes how it can search/compare multiple sites for browsing house listings and stuff).

She doesn't really install apps and never felt a need to learn a new tool like "ChatGPT" but the transition from regular Google search to "AI Search" felt really natural and also made it easy to switch back to regular search if it wasn't useful for specific types of searches.

It definitely reduces cognitive load for an average user not needing to switch between multiple apps/websites to lookup hours/reviews via Google Maps, search for "facebook.com" to navigate to a site and now run AI searches all in the same familiar places on her phone. So I think Google is still pretty "sticky" despite ChatGPT being a buzzword everyone hears now that they caught up to OpenAI in terms of model capability/features.


> When was the last time you went to an actual physical library

My eyesight is making paper books harder and harder to read, so I don't go to libraries and bookstores as much as I used to. But I think libraries are still relatively popular with families, because they're sites of various community activities as well as safe, quiet places to let kids roam and entertain themselves while the parents are nearby.

When I was a kid, my parents went to the library much more often than they do now, because they were taking me and my sister there. And then we would all get books before we came home.

Not saying you're entirely wrong, but there's a significant part of this that is "changing rhythms of life as we age", not just "changing times".


It used to be, people went to the library to look things up, and as a primary source for finding information they needed. Not just as a community center.

That is my point.


> Gemini is a competitor, yes. But most people still go to Google at this point, even if there are a ton of competitors.

Yeah, that's my point. If Google is good enough, I don't think people are going to want to do those extra steps, just as in your Google Maps example. There might be better services out there, but Google Maps is just too convenient.


The branding is so strong and it works well enough (I’d say, according to the perception of most people) that it’s just the first “obvious” choice.

Akin to nobody getting fired for choosing AWS, nobody would think poorly of you using ChatGPT.

I don’t think Claude has that same presence yet.

Google has a reputation for being a risk to develop with, and I think they flopped on marketing for general users. It’s hard to compete with “ChatGPT”, where there’s a perceived call to action right in the name; you don’t really know what Gemini is for until it’s explained.


Would've happened if Claude and Gemini weren't things. But they are.

Regardless of AGI, being known as the only LLM that introduced ads sounds very bad.


It also is impossible to work properly: either they also screw up the entire API, breaking everyone's programmatic access for coding and regular apps, or everyone just starts making wrappers around the API to build ad-free chatbots.

Why would they need ads on the API, though? API usage is paid. They just need a few years of scaling for it to be profitable. Some models are already a net profit on API usage.

I agree but even if AGI is possible within 5-10 years it must be hard to justify maintaining or even increasing this level of burn for much longer.

I’m actually not a huge Zig person.

But yes, avoiding arcaneness for the sake of arcaneness will earn you more users.

A big success of Rust has nothing to do with systems programming or the borrow checker.

But just that it brings ML ideas to the masses without having to learn a completely new syntax and fight with idiosyncratic toolchains and design decisions.


Or it is another example of younger generations unaware of our computing history, celebrating something that they think is totally new.


I actually think this is a pretty good argument against AI dooming that I don’t hear that often.

Sam Altman doesn’t own AI. His investors actually own most of the actual assets.

Eventually there is going to be pressure for OpenAI to deliver returns to investors. Given that the majority of the US economy is consumer spending, the incentive is going to be for OpenAI to increase consumer spending in some way.

That’s essentially what happened to Google during the 2000s. I know everyone is negative about social media right now. But one could envision an alternative reality where Google explicitly controls and censors all information, took over roadways with their self-driving cars, completely merged with the government, etc. Basically a doomsday scenario.

What actually happened is Google was incentivized by capital to narrow the scope of their vision. Today, the company mainly sells ads to increase consumer spending.


I'd agree. The logical fallacy I always observe in (what I call) the marxist-nihilist AI doom scenario is that it assumes that the top N% of people perfectly cooperate in a way that the remaining 100-N% cannot. Even a stratified social structure is far too muddled for a "mass-replacement" scenario to not cause the elites to factionalize across different plans that would be best for them, which in turn prevents the kind of unified coherent action that the doom scenario hinges on (e.g. they'll gun down the proles with robodogs).


Technofeudalism by Varoufakis is about this N% cooperation. Growing wealth concentration means this collusion becomes possible with smaller and smaller N% cooperating. If it's game theory optimal to cooperate I have no doubt Thiel will be releasing the robo hounds the minute he can.


When every industry is dominated by 1 or 2 players, collusion becomes a lot easier. This concentration has been slowly happening for decades now, and we're pretty much at the end. Every industry is dominated by what is essentially a monopoly, but because they keep at least 1 competitor alive, the public and the FTC are fine with it.


Devils advocate: is it really such a problem? Perhaps it should be banned simply on moralistic grounds.

But I fail to see how a hundred or so buildings sold to millionaires and billionaires numbering in the thousands has any effect at all in a city with 20 million people.

Again, surely it’s not the best nor most democratic thing that these buildings exist at all.

But I don’t see how it can impact the bread and butter real estate and rental market. Surely this is caused by the city’s numerous bad housing policies like rent control, zoning, public transportation, education.


NYC metro area has fewer than 400 skyscrapers, so a hundred is quite a lot.


My problem with ORMs is that they are a solution in search of a problem most of the time.

We already have an abstraction for interfacing with the DBMS. It’s called SQL, and it works perfectly fine.


> We already have an abstraction for interfacing with the DBMS. It’s called SQL, and it works perfectly fine.

ORMs are not an abstraction to interface with the DBMS. They are an abstraction to map the data in your database to objects in your code and vice versa. It's literally in the name.

Feels like a lot of anti-ORM sentiment originates from people who literally don't know what the acronym means.


> They are an abstraction to map the data in your database to objects in your code and vice versa.

Maybe that's part of the problem - you're trying to map tabular data in your database to hierarchical data in your programming language.

Of course there's going to be all kinds of pain when pounding square pegs into round holes. Getting a better hammer (i.e. a better ORM) isn't necessarily going to help.


Okay, so what's the round peg that goes in the round hole, here? Forgetting about objects and just passing around dicts or whatever with no type information?


> Forgetting about objects and just passing around dicts or whatever with no type information?

Why would you need to drop the type information when you stop using hierarchical structures for your data?


You're working with bits. It's turtles all the way down.


The way it integrates into Django is more than just an abstraction to SQL. It's also an abstraction to your table schema, mapped to your model. In short, it's the Pythonic way of fetching data from your models in Django.

It allows for functional programming, as in building queries upon other queries. And predefined filters, easily combining queries, etc. And much more.

Of course you don't need all of that. But in a big project, where you might query some particular tables a lot of the time, and there are common joins you make between tables, then sometimes it is nice to have predefined models and columns and relations, so you need less verbosity when building the queries.

You do of course need to learn a new tool to build queries, but it does pay off in some cases.
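The "queries built upon other queries" idea can be sketched in plain Python. This is a toy stand-in, not actual Django (real QuerySets lazily compile to SQL), but it shows why composable, reusable query objects cut down verbosity:

```python
# Hypothetical sketch of lazy, composable queries in the Django QuerySet style.
# Each .filter() returns a NEW query object; nothing is evaluated until
# you iterate, so base queries can be defined once and refined many times.
class Query:
    def __init__(self, rows, predicates=()):
        self.rows = rows
        self.predicates = predicates

    def filter(self, pred):
        # Composition: build a new query on top of this one.
        return Query(self.rows, self.predicates + (pred,))

    def __iter__(self):
        # Evaluation happens only here.
        return (r for r in self.rows if all(p(r) for p in self.predicates))


orders = Query([
    {"status": "paid", "amount": 10},
    {"status": "open", "amount": 5},
])

paid = orders.filter(lambda r: r["status"] == "paid")   # reusable base query
big_paid = paid.filter(lambda r: r["amount"] >= 10)     # refined further
```

In Django itself the equivalent would be chained `.filter(...)` calls on a model's manager, with the ORM translating the accumulated predicates into a single SQL `WHERE` clause.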


Mostly, I think, the problem is SQL injection, and raw SQL is a great place for people to forget to escape their strings.


ORMs are not the only solution to SQL injection; psycopg, for example, handles parameterization and string escaping for you.
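For illustration, here is the parameterization idea using the stdlib `sqlite3` driver so it runs anywhere (psycopg uses `%s` placeholders instead of `?`, but the principle is the same: values are bound separately from the SQL text, so input can never change the query structure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection attempt, passed as a bound parameter.
malicious = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT id FROM users WHERE name = ?", (malicious,)
).fetchall()
# The driver treats it as a literal string, so nothing matches.

# Normal lookups work as expected.
ok = conn.execute(
    "SELECT id FROM users WHERE name = ?", ("alice",)
).fetchall()
```

The danger with raw SQL is string formatting (`f"... WHERE name = '{name}'"`), which is exactly what parameter binding exists to replace.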


Yeah, if you remember to use it properly. SQL injection was pretty rampant before ORMs and web frameworks started being used everywhere.

ORMs let anyone make CRUD apps without needing to worry about that sort of thing. Also helps prevent issues from slipping through on larger teams with more junior developers. Or, frankly, even “senior” developers that don’t really understand web security.


That’s certainly an advantage, but I’m not sure that’s the value proposition.

It’s that Chrome and V8’s implementation has grown to match resourcing. You probably can’t maintain a fork of their engine long-term without Google level funding.


It doesn’t even have to be a bug. Having some rule like “invalidate all data older than 6 months” makes it easier to reason about and test for backwards compatibility.

I’m sure the data format of Apple Maps is constantly changing to support new features and optimizations.


Apple Maps data expires after 30 days.

If you create offline maps for a vacation away from reliable cell service, the (gigabytes of) maps just disappear once 30 days pass. Unusable even if you are in a remote village.


Wait, so if we give the foreign workers the same at will employment rights as Americans, then they are no longer interested?

I thought they needed these foreign workers because no American could do the job?


No, what they wouldn't be interested in is paying $100,000 to help someone enter the country, with no compensation if they ditch you on day one.


The idea would be that you would pay that employee at above market rates, so they wouldn't ditch you on day one because you pay them more than any of their other alternatives.

Right now, the H1B system is used to bring over cheap labor willing to work for compensation and conditions worse than native labor. This is not the stated goal of the program; the idea was to bring over highly skilled labor doing jobs that no one native is able to do. The system detailed above is supposed to be a way to change it from how it currently is to what it was supposed to be.


It should be marshaled into [] by default, with an opt-out for special use cases.

There’s a simple reason: most JavaScript parsers reject null. At least in the slice case.


Not sure what you mean here by "most JavaScript parser rejects null" - did you mean "JSON parsers"? And why would they reject null, which is a valid JSON value?

It's more that when building an API that adheres to a specification, whether formal or informal, if the field is supposed to be a JSON array then it should be a JSON array. Not _sometimes_ a JSON array and _sometimes_ null, but always an array. That way clients consuming the JSON output can write code consuming that array without needing to be overly defensive
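A small Python sketch of why the sometimes-null field forces defensive client code (the field name `items` is illustrative):

```python
import json

# A server that marshals an empty/nil slice as null produces this:
payload = json.loads('{"items": null}')

# The client must guard against None before iterating.
items = payload["items"] or []
count = len(items)

# If the server always emits an array, the guard is unnecessary:
payload2 = json.loads('{"items": []}')
count2 = len(payload2["items"])  # safe without any None check
```

Multiply that `or []` guard across every list-valued field in an API and the "always emit an array" convention starts to look much friendlier to consumers.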

