I predict the vast majority of people will never again understand that new decades, centuries, and millennia begin on January 1st of years ending in '1' and not '0', because there never was a Year 0. Everyone knew that 50 years ago, which is why Kubrick didn't call it '2000: A Space Odyssey.'
Everyone needs to stop this hypercorrection nonsense. That only applies to centuries and millennia because they're ordinal numbers. You say "20th century" and "21st century". But you say "1990s" and "2020s". Unless you start saying "201st decade" you are wrong.
Also, nobody ever complained about decades until just this year, because the complaint is obviously wrong. I don't even remember it in 2010. I think it's a bunch of kids who vaguely remember the 21st century starting in 2001 (which is correct, of course) but never realized the reason, and are having hypercorrection issues now that they're older.
You all are correct, but no one cares. Numbers ending in 0 are what get attention. Going back to the year 1 doesn't add much anyway, since we are pretty sure that the count of years since the birth of Jesus (whoever he was historically) is off.
I used to joke with my colleagues that eventually everything will converge and become systemd running jupyterhub (now jupyterlab) running "apps" that are actually notebooks on some kind of Node.js kernel.
Iowa is a terrible place to build wind farms since it still has some of the most valuable soil on the planet.
Better to build these things on land with steady wind, less agricultural value and nearer big population centers, such as Wyoming and Colorado (near the I-25 corridor from Cheyenne to Colorado Springs) or the barren hills on either side of the Bay Area.
I grew up in Iowa. I inherited 160 acres of prime farmland from my grandparents and parents. I actively participated in the successful lobbying of the county commissioners in the county where my property is located to change the offset rules for wind turbines to make life livable for the people directly affected by them. I personally turned down the offer to build turbines on my land.
That's astonishing, then, that you weren't aware that wind farms share land with productive farms just fine.
I live on a planet and consume food. My family has rural acreage in Wisconsin, too, but that's irrelevant. Privilege doesn't have to stop the ability to think critically.
That you're lobbying to make wind harder to deploy doesn't give you better moral or intellectual clarity, here...
When you do the research, you discover that wind is not the panacea it has been portrayed to be. Building and placing turbines is not carbon neutral, the cleanup costs of decommissioned turbines are completely ignored because they are 40 years out, and they are not very profitable without subsidies. There are better alternatives that are less destructive to the environment.
This also ignores the other part of my original comment, which was to build turbines nearer to population centers. Why aren't the hills east of Oakland and southwest of SV filled with wind turbines?
We need to fire on all cylinders, here, to fight climate change.
And as far as non-wind clean energy, there are literally zero options that are immune to NIMBYism like you're engaged in here. Not hydro, not geothermal, not even solar... And SURE as heck not nuclear.
Svierge: I don't own the land yet (parents are still alive) and no one has approached us, but I absolutely would put wind turbines on it if there was an opportunity. My father and I have regularly discussed putting solar and wind and possibly some form of microhydro on the land at our own expense. We have planted trees on it.
> Better to build these things on land with steady wind, less agricultural value and nearer big population centers
I'm sure developers would be happy to if that were the case, but it turns out the valuable locations simply tend to be in the west-central states and the easternmost regions of the mountain states: https://windexchange.energy.gov/maps-data/319
I thought something was awry with your link until I realized that from overhead they’re basically invisible; that’s how efficiently the land is also being used for farming. Point well made.
The pads used for the turbines are quite large; they require an access road and power connections, and the turbines create substantial shade, reducing the productivity of the ground below.
All of that is practically a rounding error compared to the actual area used for crops. Seriously, look at any satellite image of a wind farm in Iowa. It's swamped by 2 or 3 years of annual crop yield growth: https://ourworldindata.org/exports/average-corn-yields-in-th...
Did you know farmers in Iowa cut down trees around their fields to minimize shade? The graph you provided is completely irrelevant to this discussion. Corn yield growth results from political decisions to fund biofuels.
The yield increases have mainly come from two sources: increased use of chemicals for pest control and fertilizer, and genetic engineering. Both of those are problematic, with consequences only now slowly dawning on most people, and heavy resistance by the industry to any naysaying in public places.
One day, we may recognize the problem and pass laws to limit or prohibit the use of chemicals. (I think the horse is out of the barn on genetic modification.) That will mean lower yields. Further, if California's droughts continue, the Midwest may need to start growing a greater variety of crops beyond corn and soybeans. There is no other place in the U.S. with soil as potentially productive and useful. Wasting it on wind farms when literally any other place will have less potential impact on agriculture is ridiculous.
Each turbine consumes a minimum of one acre of farmland, and should be offset by at least a half mile from any homes. The problem there is that there are often homes every mile in rural Iowa.
Why can't we build them nearer to the population centers with high energy demands, then? Why not build them on vacant land near big cities?
There is a lot of prime wind farm land on the hills east, southwest, and northwest of the Bay Area. There is a lot of vacant land in the hills above Los Angeles. The Olympic Peninsula and the western slopes of the Cascades have plenty of room for wind farms to serve Seattle. Same on the east coast: Cape Cod actually had a proposed wind farm that was stopped by the moneyed interests that live there, but that could supply Boston with all the power they need.
> Same on the east coast: Cape Cod actually had a proposed wind farm that was stopped by the moneyed interests that live there, but that could supply Boston with all the power they need.
> The project is expected to produce an average of 170 MW of electricity, about 75% of the average electricity demand for Cape Cod, Martha's Vineyard, and Nantucket island combined.
Those three areas have ~250,000 people, and are largely residential. Boston has ~685,000, likely with larger per-capita electrical demand due to industry/office space.
And even Ireland still remains one of those jurisdictions: while it was successfully pressured into passing this new general legislation, Ireland has refused to accept Apple's unpaid taxes and is spending millions fighting the EU ruling in court[0]
Yes, but that takes time, and it involves the company trusting their money to a third party that is trustworthy in inverse proportion to how much tax leniency they are willing to grant.
"You can't stop us from dodging taxes so don't even try" is what they want us to believe, because if we believe it then they can dodge taxes every year without inconvenience.
Do you think that the accountants at Google, Apple, et al. have failed to foresee this sort of pushback? I personally don't doubt they've been planning alternative tax-avoidance scenarios for years. That would include lobbying other jurisdictions for favorable tax treatment in any rational scenario.
To date, they have suffered no serious financial or reputational consequences for tax evasion, so there's no reason to think they won't continue to avoid taxes as part of their overall financial strategy.
I just gave up on discord last night. Third or fourth time I came across a link to what sounded like an interesting server, then couldn't log in with existing credentials. I want to like it and use it, but frankly it's very tiresome to get going.
I'm not clicking on the link inviting me to your server any more. Have fun with those who fight through the silly spinning spider thing.
This is a welcome development. ProtonMail has worked well for me. Now if I could only find a way to make a Pixel phone accept that email address instead of one of my several one-off fake name gmail addresses that I use for such things.
Don't integrate privacy-focused email service (hushmail/proton etc) into a non-private phone. Access it via the webmail interface.
I've been asked several times to decrypt my phone at international borders. If you leave things to webmail, unlocking your phone doesn't give them access to your email account, or even tell them where it is. All the TSA/cops get is my "gmail-for-phone-2018@gmail.com" address that I haven't checked since day one with the phone. My access to my real email goes through a web browser that doesn't keep records.
If memory serves you can create a Google account with your existing email address. They won’t create a gmail account for you, but you can still use other Google services with it. I’m guessing it’s worth trying with your Android phone?
Now, now, don't get too frisky there or these same people might get the idea to uninstall the spyware put on their phones by the manufacturer, the carrier, the OS provider, and the app devs. Then where would we end up?
[/s]
Edit: You'll have to navigate to the Morse Trainer link from the main page.
Also, it's the best because it has knobs for all the variables -- speed, number of characters sent, even options to add QRM (interference) and variable sending speed (imitating very well someone with an unsteady fist).
As an aside, it's pointless to learn code visually. It's only really useable as an auditory messaging system.
only really useable as an auditory messaging system
What? When I was a kid, it was common for camping flashlights to have pressure buttons on the side, which was intended for (and we used for) morse code. Ships used bright, shuttered lights for Morse code, soldiers used it, scouts used it, kids communicated with friends in the neighborhood at night.
Security is distinct from privacy. The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software, regardless of their producers' business models and data hygiene.
> The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software
I disagree that they are "the most secure" browsers, let alone software. They fail to isolate remote scripts properly; that people were capable of executing timing attacks against the CPU (Spectre et al.) shows that they are not really very secure.
Browsers which don't execute Javascript and advanced CSS (Lynx being one extreme example) are going to be much more secure by default.
There are four major dimensions to security: attack surface; depth of defense, or how much an attacker can do once they're in; proactive measures to find security bugs (e.g., fuzzing); and code quality.
You're focusing on attack surface. But from a security standpoint, attack surface is probably the least important factor. Every sufficiently large application has a hole in it, and all attack surface does is crudely control how likely it is to stumble across that hole. Defense in depth, by contrast, lets you keep the attacker from doing bad things such as installing ransomware on your computer just because your HTML parser had a buffer overflow.
The major browsers spend a lot of time sandboxing their scripts in separate processes, and then disabling capabilities of those processes using techniques such as pledge(1), giving them much better defense in depth. They also put a lot more effort into finding and closing security bugs through techniques such as fuzzing. No one questions their much larger attack surface, but they have put much more effort into ameliorating attack vulnerabilities.
I should also bring up Spectre, because you did. At its core, Spectre allows you to read arbitrary memory in your current memory space, nothing more. As a result, it basically means that you can't build an effective in-process sandbox... which everyone already knew to begin with. What Spectre did was show how easy it was to do such arbitrary memory reads, since you can proxy them through code as innocent as an array bounds check. There are mitigations for this, which require rebuilding your entire application and all libraries with special mitigation flags... guess which browser is more likely to do that?
This is kind of a strange analysis. Sort of infamously, Dan Bernstein, who is sort of a pioneer in these privilege-separated defensive designs, forswore them in a retrospective paper about qmail. Really, though, I'm not sure I'm clear on the distinction you're drawing between attack surface reduction and privilege separation, since both techniques are essentially about reducing the impact of bugs without eliminating the bugs themselves.
You might more coherently reduce security to "mitigation" and "prevention", but then that doesn't make much of an argument about the topic at hand.
What I meant by "attack surface" here is probably a lot narrower than what you're used to. I'm using it to focus on the code size concern. I was trying to visualize it in terms of "how many opportunities do you have to try to break the system" (as surface area) versus "what can you actually do once you've made the first breach" (as volume), and didn't fully coherently rewrite the explanation to excise the surface area/volume distinction I originally made.
No, it's not. Security is not a goal in itself; it cannot be. Security is only about guaranteeing other goals; there is no security absent all other goals. What it means for software to be insecure is that it doesn't ensure your goals are met. For many, privacy is an important goal. If the software that you are using compromises the privacy that you value, then that software is not secure.
I am much more concerned about someone being able to impersonate me (security) than to know what I'm doing (privacy). This doesn't mean I'm unconcerned about the latter.
If secure software compromises privacy in ways that concern you, it may not be the right software for you to use, but it is still secure (and potentially more secure than other software that you feel better protects your privacy).
> I am much more concerned about someone being able to impersonate me
Well, great?!
> (security)
Erm ... no?
> than to know what I'm doing (privacy)
Privacy is not about what your software knows, it's about who else gets access to that information. Software allowing access to your information to parties other than the ones that you intended is a vulnerability class commonly called "information leak".
> This doesn't mean im unconcerned about the latter.
And thus it is, as per the common understanding of the word, a security concern.
> If secure software compromises privacy in ways that concern you
That's just logical nonsense. You might as well be saying "If secure software kills you in ways that concern you, [...]".
> it may not be the right software for you to use, but it is still secure
So, let's assume your browser had a bug where for some reason, every website could read all the data in the browser. Like, could access the storage, cookies, cache, history, page contents, everything. But no write access. This is obviously purely a privacy violation ... but, according to your definition, not a security problem, right?
> And thus it is, as per the common understanding of the word, a security concern
Yes, but not when talking about cyber-things. Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.
> Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.
So, you are telling me the user is intending the information leak? I'm not sure I understand: You say it's not a security matter if the "leak" is intentional. But then, if a user is transmitting information intentionally ... why would you call that a leak?
Or do you mean the leak is intended by Google or whoever and that is why it's not a security problem?! But then, what if a hacker intentionally installs a back door on your system and uses that to leak your information ... then that wouldn't be a security problem either, would it? Or is that where the "secret" part comes in, and it would only be a security problem if the hacker didn't tell you that they stole all your data?
Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here). If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.
> Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here).
Well, but do they actually have your permission?
> If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.
Well, for one, are they not doing their things secretly? Is the mere fact that you can find out about it enough to call it "not secret"? Is the mere fact that you didn't refuse, when you didn't even really have an option to refuse, enough to count as permission?
Let's suppose a food manufacturer put a new pudding on the market. Included with the package is a 500-page explanation of everything that you need to know about it. Somewhere in those 500 pages, all ingredients are listed. Most are mentioned using the most unusual names. Among the ingredients is a strong carcinogen. A carcinogen that doesn't contribute anything to the taste, the look, or anything else you would value. All it does is make the pudding cheaper to produce.
Now, a biochemist could obviously know what is going on if they were to read the 500 pages, so it's not secret that the carcinogen is in the pudding. Also, the packaging says that you agree to the conditions of use in the 500 pages if you open the package, so you gave them permission to feed you that carcinogen.
Would you agree, then, that this pudding is not a health safety risk, it's simply the manufacturer not living up to your health preferences?
Also, I don't really understand how permission can make something not a security problem. It seems like that's all backwards?! I generally would first check a product for security problems, and then give permission based on the presence or absence of security problems. And one of the security risks to check for would be software leaking information to wherever I don't want information to leak. Why should the fact that the manufacturer of some piece of software announces or doesn't announce that they leak certain information have any relevance to whether I consider the leak a security problem? If I don't want my information in the hands of Google, then how am I any more secure against that leak just because Google told me about it?
Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.
It's not a good analogy for a whole host of other reasons, but that's one of them.
You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.
> Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.
1. Well, one thing that IT security professionals surely don't use is "cyber", that's a term from the snake oil corner of the industry.
2. People in IT security most definitely do not distinguish between security problems that the manufacturer intended as a feature and security problems that were caused any other way. You create a model of things you want to protect, and if a property of a product violates that, then that is the definition of a security problem in your overall system, obviously. The only difference would be whether you report it as a vulnerability or not, as that would obviously be pointless for intentional, publicly announced features.
> It's not a good analogy for a whole host of other reasons, but that's one of them.
Really, even that would not be a good reason, as it smells of essentialism.
> You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.
No, I am using the exact standard definition, and the only sensible one at that. It obviously makes no sense to have a definition of "security" that says nothing about whether your system is secure. If you consider Google having access to your data a threat in your threat model, then whatever properties of your system that give Google access to your data is a security problem in your system, it's as simple as that.
The only thing that matters is whether your overall system reaches its protection goals or not, not whether some component by itself would be considered vulnerable in some abstract sense. And that obviously applies in the opposite direction as well: If you run some old software with known vulnerabilities that you can not patch, but you somehow isolate it sufficiently that those vulnerabilities can not be used by an attacker to violate your protection goals, then that system is considered secure despite the presence of vulnerabilities.
The six sides of the three-record album 'Sandinista!' had a quote from the movie Alien written in the runout: "IN SPACE ... NO ONE ... CAN ... HEAR ... YOU ... CLASH!"
Not a program, but I remember being oddly satisfied when I noticed it the first or second time I played it.