They extend it in some ways, but I'm not sure if they do in this way. They do sound kind of terrible, but I always assumed that was due to the microphones being way back by your ears. I'm not sure, though.
You know… I'm grateful that they moved quickly and decisively to bridge businesses through that time. We'd all be worse off without this having happened. I'm willing to accept a certain amount of waste for the importance of speed.
Mm. It’s certainly good to work at the other end of the funnel (thank you!) but it also won’t help address the pattern matching that people do in hiring.
It’s an incredibly natural thing for people to hire people like themselves, or people who match their image of what a top-notch software dev looks like. It requires active effort to counteract this. One can definitely argue about the efficacy of DEI approaches, but I disagree that JUST increasing the strength of applicants will address the issue.
Yes it will! That pattern matching is based on prior experience, and if the entire makeup of candidates changes, that'll cause people to pattern match differently. If old prejudices are taking a while to die out, it won't be long until someone smart realizes there are whole groups of qualified candidates who aren't getting the same offers as others and hires them.
> it won't be long until someone smart realizes there are whole groups of qualified candidates who aren't getting the same offers as others and hires them
There's an argument to be made that this is exactly what pipeline-level DEI programs are!
If the goal is to prevent people from being biased, why not anonymize candidate packets? Zoom interviews can also be anonymized easily. If it's the case that equally strong, or stronger, candidates are being passed over, anonymization should solve this.
Rather than working to anonymize candidates, every DEI policy I've witnessed sought to incentivize increasing the representation of specific demographics: bonuses for hitting thresholds of X% of one gender or Y% of one race, or even outright reserving headcount on the basis of race and gender. This is likely because the target levels of representation are considerably higher than those groups' representation in the workforce. At Dropbox the target was 33% women in software developer roles. Hard to do when ~20% of software developers are women.
Anonymization is probably an under-tried idea. Various orchestras switched to blind auditions and significantly increased the number of women they hired.
They can cheat non-anonymous interviews too. An alternative is to have candidates go in person to an office to interview, but the grading and hiring panel only sees anonymized recordings of the interview.
I don't think they're advocating not doing defer in C? They're saying you can backport the functionality if needed, or if you want to start using it now.
They're recommending changes to the proposal though, such as requiring a trailing semicolon after the close brace. Their suggestion also changes the syntactic category of the defer statement, though it's not clear to me what that actually affects.
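For anyone curious what "backporting" the functionality might look like in practice, a rough sketch is the GCC/Clang cleanup attribute, which runs a function when a variable leaves scope. To be clear, this is a compiler extension, not the proposal's syntax, and the helper name below is made up:

    /* Illustrative only: defer-like cleanup in today's C via the
       GCC/Clang cleanup extension (not standard C, not the proposal). */
    #include <stdio.h>
    #include <stdlib.h>

    static void free_charp(char **p) { free(*p); }

    int main(void) {
        /* free_charp(&buf) runs automatically when buf goes out of scope */
        __attribute__((cleanup(free_charp))) char *buf = malloc(64);
        if (!buf) return 1;
        snprintf(buf, 64, "hello");
        puts(buf);
        return 0;
    }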
As is relevant to the Ninth Circuit’s opinion, the AADC creates two sets of requirements. First, the AADC requires businesses to complete a data protection impact assessment (DPIA) before a business offers to the public any new online services, products, or features likely to be accessed by children. The DPIA must “identify the purpose of the online service, product, or feature, how it uses children’s personal information, and the risks of material detriment to children that arise from the data management practices of the business.” The DPIA also must address, to the extent applicable, eight factors, including “whether the design of the online product, service, or feature could harm children, including by exposing children to harmful, or potentially harmful, content on the online product, service, or feature.” Businesses must document the risks and “create a timed plan to mitigate or eliminate” the risks before the online product, service or feature is accessed by children.
Second, the AADC contains a list of prescriptive requirements in sections 1798.99.31(a)(5)-(10) and (b). Specifically, sections (a)(5)-(10) require businesses to:
Estimate the age of child users or apply the privacy and data protections afforded to children to all users.
Configure all default privacy settings provided to children to the settings that offer a high level of privacy, unless the business can demonstrate a compelling reason that the different setting is in the best interests of children.
Provide privacy information and other documents such as terms of service in language suited to the age of children likely to access the product, service, or feature.
Provide an “obvious signal” to a child if they are being monitored or tracked by a parent, guardian, or any other consumer.
Enforce published terms and other documents.
Provide prominent tools to allow children or, if applicable, their parents or guardians, to exercise their privacy rights.
Section (b) then provides that businesses cannot:
Use children’s personal information in a way that the “business knows, or has reason to know, is materially detrimental to the physical health, mental health, or well-being of a child.”
Profile a child unless certain criteria are met.
Collect, sell, share, or retain any personal information that is not necessary to provide an online service, product, or feature, unless the business can demonstrate a compelling reason that doing so is in the best interests of children likely to access the product, service, or feature.
Collect, sell, or share a child’s precise geolocation information by default unless strictly necessary to provide the requested service.
Use dark patterns to lead or encourage children to provide personal information beyond what is reasonably expected.
Use personal information collected to estimate age or age range for any other purpose or retain that personal information for longer than necessary to estimate age.
I stand with the parent commenter: maybe Techdirt could actually hyperlink to this kind of information, instead of hyperlinking to three other articles pretending to answer the question but not doing so.
I found the law's text via hyperlinks. It wasn't directly linked in the article, and it wasn't one of the article's direct link targets either, but it was linked from within an article that this article linked to.
Given that it's part of an ongoing series of articles, it's not surprising that the law itself isn't directly linked (the author clearly expects that you've already been following this saga to some degree).
I mean, you don’t have to care about this unless you have an application where you do. And if you do, there is enough transparency (i.e., the ability to inspect the assembly and ask questions) that you can solve this one issue without knowing everything under the sun.
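To make that concrete, "inspect the assembly" can be as simple as asking the compiler for its output. A made-up example (file and function names are hypothetical):

    /* sum.c -- compile with `gcc -O2 -S sum.c` and read sum.s (or run
       objdump -d on the object file) to see exactly what was emitted,
       e.g. whether the loop was vectorized. */
    #include <stddef.h>

    long sum(const long *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }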
If you had an application where this sort of thing made a difference in JavaScript, the problem would likely still be there; you’d just have a lot less visibility into it.
I guess you’re still right - at the end of the day you see discussions like this far more often in C, so it impacts the feel of programming in C more.
You should be able to judge whether something is a copyright violation based on the resulting work. Whether a work was produced with or without computer assistance, why would that change whether it infringes?
It helps. If it's in question whether there is infringement or not, and it comes out that you were looking at a photograph of the protected work while working on yours (or used any other type of "computer assistance"), do you think that wouldn't make for a more clear-cut case?
That's why clean-room reverse engineering and all of that even exists.
As a normative claim, this is interesting; perhaps this should be the rule.
As a descriptive claim, it isn't correct. Several lawsuits relating to sampling in hip-hop have hinged on whether the sounds in the recording were, in fact, sampled, or instead, recreated independently.
This is interesting from the legal point of view, because AI service providers like OpenAI give you "rights" to the output produced by their systems. E.g. see the "Content" section of https://openai.com/policies/eu-terms-of-use/
Given that output cannot be produced without input, and models have to be trained on something, one could argue that the original IP owners have a reasonable claim against people and entities who use their content without permission.