Voting machines that saw more voters had a larger skew. Up to about 300 votes per machine, the results look random and natural. From there on, the pattern changes drastically and shows unexpected clustering. It's unexpected because larger samples should converge toward the true mean (law of large numbers) while the per-machine results remain roughly normally distributed around it (central limit theorem). Instead of matching a bell curve, the distribution shows a "Russian tail" (search for it on the page), a sign of vote manipulation.
All this is only for the early votes. If you compare the scatter plots and distributions of early votes vs. election day, they look completely different.
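The statistical expectation above can be sketched with a quick simulation (a hypothetical model, assuming each vote is an independent draw for candidate A with a fixed probability p = 0.6; real elections violate independence somewhat, but not enough to produce a one-sided tail):

```typescript
// Each machine records `votesPerMachine` votes; each vote goes to A with
// probability p. As votes per machine grow, the share per machine stays
// centered on p (law of large numbers) and its spread shrinks like
// sqrt(p*(1-p)/n), but the shares remain roughly bell-shaped (central
// limit theorem) -- they should not cluster or grow a one-sided tail.
function simulateShares(machines: number, votesPerMachine: number, p: number): number[] {
  const shares: number[] = [];
  for (let m = 0; m < machines; m++) {
    let votesForA = 0;
    for (let v = 0; v < votesPerMachine; v++) {
      if (Math.random() < p) votesForA++;
    }
    shares.push(votesForA / votesPerMachine);
  }
  return shares;
}

function stddev(xs: number[]): number {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  return Math.sqrt(xs.reduce((a, x) => a + (x - mean) ** 2, 0) / xs.length);
}

// Spread shrinks as machines see more voters; the center stays put.
console.log(stddev(simulateShares(1000, 100, 0.6))); // ≈ sqrt(0.24/100) ≈ 0.049, up to sampling noise
console.log(stddev(simulateShares(1000, 900, 0.6))); // ≈ sqrt(0.24/900) ≈ 0.016, up to sampling noise
```

A skew that *grows* with votes per machine is the opposite of what this null model produces, which is what makes the observed pattern suspicious.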
eID is a federated identity system that uses national electronic ID systems as its identity sources. That's especially useful for apps at the EU level, e.g. for customs, to submit what you're importing as a company.
In practice, I won't be able to use my phone as a replacement for my passport when traveling internationally. But I might be able to use my phone, in combination with my national ID card, which has an NFC chip inside, to submit a government form in another EU country.
While I don’t have hard evidence to support the idea, I believe this is highly variable between individuals.
I’m not a heavy coffee drinker, but I do have a cup in the mornings, sometime between 6:30 and 8:00. This makes it easy to stop as needed; there are some days where I skip it out of necessity (too busy), and occasionally I’ll go a weekend without it just to keep my tolerance in check. During these periods, I never experience adverse effects.
The main benefit I glean from coffee is not feeling awake, but notably improved ability to focus. In my experience, my baseline ability is just not as good; there’s no “bounce back” after extended periods of going without, it just stays somewhere between bad and average indefinitely.
I strongly suspect this manifests more in folks who consistently take in more than 100mg per day. I used to regularly take in 200mg total across ~2 beverages (mostly because I just enjoyed the flavor), and suffered mild headaches if I skipped my schedule. I know others who start their day with 200mg in a single quad-shot beverage.
3 years ago I took a summer and then autumn off of caffeine; my intent was to quit and only use caffeine in "times of need". This would be an on-brand thing for me to do as a person, and something I wanted pretty badly. But after 4 full months without touching any caffeine, my mood/energy levels never returned, or even really rebounded any further than they did after the first month. I'm just a different person with caffeine, even after I'm addicted/habituated/tolerant. Notably, I have a lot more energy, and I'm not a high-energy person to start with. This energy allows me to better take care of myself, exercise, and be social, all things that I struggle with in its absence.
From experience, I can function fine without caffeine, even under an acute withdrawal scenario. In fact, some of my greatest achievements were done without caffeine for one reason or another. Not having caffeine just makes those things more miserable in the moment, at least for me.
This makes me so tempted to get one of those whole-genome tests. I had issues with anxiety from age 20 to 38, and I was a regular coffee drinker over that same span. About a month after quitting, I was finally able to yawn properly, like I remember being able to when I was young. Coffee overstimulates me, I figure.
Shape.getOfType(type) would also be a factory. A factory does not need to be a class, it can also just be a function.
I see two reasons for using a factory:
1. Reducing the scope of what a piece of code is responsible for (a.k.a. "Single-Responsibility-Principle").
Assume you are working on a graphics app that can draw 50 different shapes. The currently selected tool is stored in a variable, and there is a long switch statement that returns a new Shape depending on that variable. You wouldn't want the switch statement to dominate the rest of the code:
handleClick(x, y) {
    selectedTool = getSelectedTool();
    shape = switch (selectedTool) {
        case "CIRCLE" -> new Circle();
        // 49 more cases here...
        default -> throw new IllegalArgumentException(selectedTool);
    };
    drawShape(shape, x, y);
}
Not only would it get more difficult to read the code, but also the test for handleClick() would have to test for all 50 shapes. If the instantiation of the shape is separated out into its own function, handleClick() can be shorter. If the factory function is injectable, the tests for handleClick() can focus on the coordination work that the function does: it asks the factory for the shape and draws it in the right place.
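Here is a minimal sketch of that injectable-factory setup (the names and the stub-based test are illustrative, not from any real codebase):

```typescript
interface Shape { draw(x: number, y: number): void; }

class Circle implements Shape {
  draw(x: number, y: number) { /* render a circle at (x, y) */ }
}

type ShapeFactory = (tool: string) => Shape;

// The 50-case switch is isolated here; handleClick never sees it.
const createShape: ShapeFactory = (tool) => {
  switch (tool) {
    case "CIRCLE": return new Circle();
    // 49 more cases here...
    default: throw new Error(`unknown tool: ${tool}`);
  }
};

// handleClick only coordinates: ask the injected factory, then draw.
function handleClick(x: number, y: number, factory: ShapeFactory, selectedTool: string) {
  const shape = factory(selectedTool);
  shape.draw(x, y);
}

// In a test, a stub factory records what was requested,
// so the test never has to exercise all 50 shapes.
let requested: string | null = null;
const stubFactory: ShapeFactory = (tool) => {
  requested = tool;
  return { draw: () => {} };
};
handleClick(10, 20, stubFactory, "CIRCLE");
// requested is now "CIRCLE"
```

The test only verifies the coordination: the right tool was requested and the result was drawn.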
2. Allowing you to reconfigure what kind of objects are created, through Dependency Injection. For instance:
class CommentRepository {
    constructor(dataSource, queryFactory) { ... }
    getComments(articleId) {
        commentsQuery = this.queryFactory.getCommentsQuery(articleId);
        return this.dataSource.query(commentsQuery).map(toResponseType);
    }
}
class PostgresQueryFactory { ... }
class RedisQueryFactory { ... }
Something has to map your click events to each of the button classes, and you have to test that code anyway. So I would just put that delegation in the Button class, I don't see the point in the extra abstraction to the factory yet. But to each their own I guess :)
Also the fact that you use the repository pattern, which I would absolutely never do even though I know it's fairly common, shows that we have different ideals (no offense). :)
Maybe you can go into more detail about what kind of modules you mean, but generally Dependency Injection allows using different implementations of a thing (module?) in the same place.
The most common use case for me is test doubles (mocks). People who are serious about tests usually use some kind of Dependency Injection.
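A minimal sketch of that test-double use case (the service and mailer names are invented for illustration, not a real API):

```typescript
interface Mailer { send(to: string, body: string): void; }

class SignupService {
  // The dependency is injected, not constructed inside the class.
  constructor(private mailer: Mailer) {}
  register(email: string) {
    // ...persist the user, then notify them...
    this.mailer.send(email, "Welcome!");
  }
}

// Production wires in a real mailer; the test injects a recording fake,
// so no mail server is needed to verify the behavior.
const sent: string[] = [];
const fakeMailer: Mailer = { send: (to) => { sent.push(to); } };
new SignupService(fakeMailer).register("a@example.com");
// sent is now ["a@example.com"]
```

Because the class only depends on the Mailer interface, swapping the implementation requires no change to SignupService itself.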
"Vegetable oils require an industrial process of extraction - with high temperatures and hexane"
You're talking about refined oils. This method of extraction has a higher yield and is therefore less expensive. It also increases the smoke point of the oil, making it more suitable for high-temperature frying and deep frying.
While I don't know what's available to consumers where you live, solvents and other chemical treatments are not required for vegetable oil production. Seeds can be cold extracted. The result is much more expensive, though.
"the output is novel to humans on evolutionary scale"
This is true for all oils and all other highly processed foods.
"they come from inedible waste which for millions of years has been discarded - seeds etc"
Seeds are not waste. Do you think bread is made from waste?
The notion of using "waste" comes from the production of refined oils. The seeds are first pressed, yielding oil and, as a byproduct, meal. The meal still contains some oil, which is extracted with solvents. After the solvent treatment, the meal is used as animal feed.
There is a detailed four article series about this which was recently shared here on Hacker News. The first article is here: https://www.jeffnobbs.com/posts/what-causes-chronic-disease , but the second and third articles probably address this most directly.
>> "the output is novel to humans on evolutionary scale"
>
> This is true for all oils and all other highly processed foods.
That is not true. There is evidence that humans have sought out fats and oils for most of our history. Lard and other rendered fats are made by heating animal fats directly, with modest heat being sufficient to liquefy them. Butter is made by churning milk. Olive oil and coconut oil are made by pressing fruits. All of that is quite different from modern vegetable oil processing, which requires high temperatures that in turn convert a fraction of the oil into trans fats. When oils refined in these ways are then used for cooking, they become increasingly dangerous as they are heated.
What you are describing makes sense within Scrum, because Scrum asks for commitment.
It has nothing to do with "agile" though. In fact, the agile manifesto clearly says "individuals and interactions over processes and tools". An organization that follows processes that don't make sense to its employees is not agile, despite what they claim.
This is what I often describe as doing agile vs. being agile. Most large companies are constitutionally incapable of being agile, for a whole host of reasons I don’t feel like enumerating here, but I’m sure you can fill in the blanks.
But they are more than capable of putting in rules and processes that allow them to do agile. The problem is that doing agile without being agile does far more harm than good.
> Most large companies are constitutionally incapable of being agile, for a whole host of reasons I don’t feel like enumerating here, but I’m sure you can fill in the blanks.
There's a long list but only one really needs to be mentioned:
Payroll costs $XX,XXX,XXX and is due every 2 weeks.
I've sat through "agile training" and it's the most plain grift I've ever seen.
Selling developers on a deadline-less utopia, but walking it back just enough to not make management types paying for the whole thing balk. Switching which side of the scale their finger is on based on who seems the most engaged at a given moment.
But at the end of the day, it's always predicated on the idea that developers can always provide backpressure against clients.
Well you're free to do that, and clients are free to not pay, and unless your developers will work for client IOUs that's the end of the game.
Agile mentality is great. The idea of rapidly iterating and rapidly producing feedback is great. But "Agile methodology" has evolved into a productivity Ponzi scheme, meant to add a place for consultants and trainers to insert themselves for $$$.
If the execs and managers can't get out of the mentality of planning X deliverables for Q3, Y deliverables for Q4 and Z feature by December 10th to satisfy a $10m customer then it becomes hollow and fake no matter how genuine and well meaning the consultants are.
I don't really blame them. They've gotta eat, and occasionally they probably do get a client who is genuinely willing to enact a real transformation. Dysfunctional upper management is the real problem.
> If the execs and managers can't get out of the mentality of planning X deliverables for Q3, Y deliverables for Q4 and Z feature by December 10th to satisfy a $10m customer then it becomes hollow and fake no matter how genuine and well meaning the consultants are.
Dysfunctional upper management: maybe.
Customers don't pay money today for products delivered on an indefinite, unspecified future timeline: definitely.
More software engineers should have to be involved with customer conversations to internalize this...or just, I dunno...do a construction project or something. See how much you like it when you've paid money for something bespoke, and the people doing the work refuse to set schedules or tell you what you're getting. Suddenly you'll be a dysfunctional manager, too.
Agile works best when you have a consultancy situation, and the customer can be directly involved in the construction of the product while it is being built. This rarely applies. But even then, customers tend to make ridiculous demands like "advance notice before you change the software from under us so that we can re-train our large team of users", and as soon as you do that, you've got a calendar, a deliverable and a deadline. Probably some Gantt charts in there too, because only trivial projects have a single deliverable.
Then you say "OK, let's actually be agile, break down the project timeline using iterative deliverables and clear development cycles"...now you have sprints.
I'm not defending "Scrum" -- just saying that deliverables and deadlines are part of life.
>Customers don't pay money today for products delivered on an indefinite, unspecified future timeline
They frequently do for features though. I happily waited a year for my bank to implement virtual credit cards. As a corporate user of software I have waited similar lengths of time for features I really wanted.
This reality usually doesn't stop some layer(s) of management from waterfalling the shit out of everything, simply because they can't conceive of an alternative.
>See how much you like it when you've paid money for something bespoke, and the people doing the work refuse to set schedules or tell you what you're getting.
Yeah, this is why I try to avoid that type of software development at all costs. I got burned on it when I was young, thinking that nothing was more natural than treating software like a construction project.
Practically speaking, when you do ask for something bespoke - whether it's a skyscraper, crossrail or a bathroom remodeling or software, you've gotta be prepared for delays and budget overruns. Software is much the same, and waterfalling the shit out of this type of thing may be unavoidable, but so is the inevitable shitty result. There are entire sub industries that can't seem to produce anything good (e.g. healthcare software) and I think this model of operation is largely why.
This is why it's better for execs to treat bespoke deliveries as something radioactive and try to minimize them as much as possible even if that means saying the scariest six words in an exec's vocab "sorry, we can't take your money".
> I happily waited a year for my bank to implement virtual credit cards.
Not the kind of feature I'm talking about -- your bank isn't contracting with you, personally, to make a credit card. Apple doesn't care at all about my opinions on when to release their next phone. Google doesn't ask me what they should name Chat this week.
Nonetheless, the stakeholders for such a project are going to be internal to the company, and there will be many: customer support, billing, compliance, marketing and legal, just to name a few. There will also be many deadlines, simply because huge numbers of people have to coordinate to turn out a complicated project.
> As a corporate user of software I have waited similar lengths of time for features I really wanted.
Again, unless you are signing the purchase order, you're not the customer, and you don't set the deadlines. Someone else is setting the deadlines for you. Also, just because you had to wait a long time doesn't mean that deadlines didn't exist for the project.
> This is why it's better for execs to treat bespoke deliveries as something radioactive
All software projects are bespoke. Some are smaller and more tightly scoped than others, but if you didn't need custom work done, you wouldn't pay an engineer to do it.
I know. I expressly declared the type of feature you were talking about radioactive.
The tragedy isn't that this type of feature exists; it's that some managers can't conceive of there being any other kind and will turn every feature radioactive. That's how we get shit software.
> This is what I often describe as doing agile vs being agile.
I just started a new role leading a 30-person engineering team at a company doing Scrum. My very first question to everyone was: why are we doing this? Answers ranged from reasonable expectations, like regular/predictable delivery of product, to "because that's how software is done." From there, it's figuring out a process that makes sense and actually achieves the objectives. Scrum provides a nice framework, but everyone needs to be bought into its goals first. (My next actions were to get rid of most of the useless workflow rules in Jira and build that accountability at the team level...it works amazingly well that way)
I agree that it's much more doable at a small company. Once companies get big enough, the need to centrally plan and coordinate seems to create an irresistible urge to apply identical processes to all teams, compare velocities, and all sorts of other bad habits.
All of the route-planning tools have their advantages and disadvantages. I frequently create routes for cycling, and a couple of times I used Strava, Komoot, and Garmin Connect for the same route. Garmin Connect always works because it can ignore map data if necessary. Strava and Komoot were not always able to create the route I wanted (it looks like Strava has added a "manual mode" as well).
Most of the people around me use Komoot, it's also my preferred tool now. The information density on the map is great, the POIs shown on the map can be customized and editing works well enough.
Strava is okay. It definitely looks sleek. The biggest issue is that routes cannot be edited in the mobile app, only created. Editing is possible in the browser but really awkward. Try rerouting parts of an existing route.
Heatmaps are useful sometimes, e.g. when I'm abroad and I don't know if a road can be cycled on at all. But the most used roads are not necessarily the best ones. Komoot doesn't have a heatmap visualization but there are comments, ratings and photos of segments.
I'm never really sure about Komoot either. In the UK it struggles with the difference between a really big town and a city.
It seems too obsessed with what things are called and takes no notice of how big, in population density, they actually are. That's a pain when planning touring routes in unfamiliar areas, because it will exaggerate the importance of tiny cities and completely hide the presence of massive towns.
I have all three services, and I too think Strava is by far better than all the others (I do around 20k cycling a year and I am the one in charge of making routes for my cycling club).