The Linux Backdoor Attempt of 2003 (freedom-to-tinker.com)
141 points by sytse on Oct 9, 2018 | hide | past | favorite | 28 comments


"But the attempt didn’t work, because the Linux team was careful enough to notice that this code was in the CVS repository without having gone through the normal approval process. Score one for Linux."

Actually, they were just lucky that the attacker wasn't able to break into the main BitKeeper repository. It is highly unlikely that any private organization could withstand a state-sponsored intrusion from the likes of the US, Russia, or China. And that's assuming they'd even need to: they have probably already found bugs that would let them in without going to such lengths. Also take into consideration that "Linux" by itself is useless; there will be a lot of other third-party applications installed on the system, with much less strict security over their source code.

The particular bug in question doesn't actually provide an exploit by itself; it just sets the current UID to 0. An attacker would still need some other method of executing their own code under that UID, which would require the ability to create new processes (e.g. a command line), or a method of altering the code of the current process through some other bug. The fact that no accompanying alteration was found to allow that is a good hint that there are a lot more problems.


disclaimer: I was one of the BitKeeper developers in 2003

> Actually, they were just lucky enough that someone wasn't able to break into the main BitKeeper repository.

That is kinda the point of BitKeeper (and yes, git as well). The "main" repository is just a convention, and everybody has a copy of it. If you changed a line of code in the history, the checksums wouldn't match, and this checksum difference would propagate to all following csets. BitKeeper would give an error when talking to a server with this change.

This was actually one of our complaints about git. It is entirely too trusting of blobs of data sent from another git repository. Files are stored and indexed by their SHA-1 hash, but that hash is not revalidated when the files are read again later. We could mangle git objects and git would then check out bad files without complaining. This would even happen for objects sent over the wire: only the enclosing checksum was validated, and the corrupted objects were not detected. A 'git fsck' would notice, but it is really expensive and so rarely used. It has been a long time, so git may have improved.


Are there any calls that check the UID before performing a privileged operation? It seems to me like this would be exploitable, since modifying a kernel data structure like this is surely not intended.


I think the other commenter is trying to say that this is only a local exploit, and an attacker would also need a remote exploit (or be allowed remote access) to use it.


> Are there any calls that check the UID before performing a privileged operation?

All of them; that's what the UID is for.


That's what I thought: this basically gives you root access. So it's just a privilege escalation, plain and simple.


I enjoy the idea that we can feel safe from attackers because we caught an attacker once, that time back in 2003.


Definitely not safe:

https://events.linuxfoundation.org/wp-content/uploads/2017/1...

As far as backdoors go, the most common way professionals do it is to disguise them as normal bugs/vulnerabilities. They weren't saboteurs: they just slipped up like everyone does. ;)


Your comment reminds me of the Underhanded C Contest. Some of the submissions are absolutely dirty, and it goes to show how feasible it is to slip in these kinds of bugs.

http://underhanded-c.org/


That's a nice collection of sneaky subversions, with many examples. I do want to emphasize even more that subversions with low consequences will look like the most common, vanilla stuff you see every day. Most of them wouldn't make the UCC since they look too accidental. The article included a common source of problems and subversion opportunities: replacing == with =, or the other way around. Few would bat an eye, since that could just be a finger slip by a hurried programmer.

The other part of things is you don't want to just introduce the defect. It's better to meaningfully improve something so the contributors' image stays good despite problems. So, you want to improve it for performance, readability, or something like that. Then, make sure most of your contributions don't introduce problems. A high-quantity, mostly-positive contributor is a much better saboteur since the bad stuff is both (a) tiny portion of contributions and (b) stuff others screw up on. Therefore, it looks like random defects not worth blocking that contributor.

The fact that something like the Linux kernel has hundreds of vulnerabilities a year isn't helping. The baseline is way worse than anything a saboteur would be contributing. Such insecure-by-default projects make their job much easier. Hell, they can look pretty skilled while only adding a few vulnerabilities to a project with that many.



If memory serves me right the CVS bug was originally discovered and exploited by a member of an infamous file sharing site. After descriptions(?) of that bug were leaked in underground circles, an east European hacker wrote up his own exploit for it. This second exploit was eventually traded for hatorihanzo.c, a kernel exploit, which was also a 0-day at the time.

The recipient of the hatorihanzo.c then tried to backdoor the kernel after first owning the CVS server and subsequently getting root on it.

The hatorihanzo exploit was left on the kernel.org server, but encrypted with an (at the time) popular ELF encrypting tool. Ironically the author of that same tool was on the forensic team and managed to crack the password, which turned out to be a really lame throwaway password.

And that's the story of how two fine 0-days were killed in the blink of an eye.

(The other funny kernel.org story is when a Google security researcher found his own name in the master boot record of a misbehaving server.)


>(The other funny kernel.org story is when a Google security researcher found his own name in the master boot record of a misbehaving server.)

Do you have a link to this story? I tried googling but couldn't find it.


No.


Why is it that people with brains write l33t like kids and have such dirty names and ASCII drawings?


no no, its NSA, people are incapable. only 3 letter agencies can find da bugz ;D


While I love, loved, and will always love C, it has too many security dangers like these. I know it's not intrinsically C's fault, and that you can mitigate lots of issues with better tooling, but many of the issues with C are due either to poor design choices (just look at the state of string.h) or to things nobody could foresee decades ago. While being close to the metal is of paramount importance for tasks such as writing kernels, we shouldn't be forced to pick between safety and simplicity; I think C needs a treatment like the one C++11 gave C++. Lots of people will stubbornly stick with C89, but a boy can hope, I guess.


Published 2013.


The check should be written:

if ((options == (__WCLONE|__WALL)) && (0 == current->uid)) retval = -EINVAL;

With the constant on the left, the naughty version's lone '=' would cause a compile error instead of silently assigning.


"Yoda conditions" are controversial. (Personally, I hate them.)

Some people find conditions with the constant on the left, like `(0 == foo)`, jarring and difficult to read. Other people apparently aren't bothered by them. In my experience, neither group of people will be convinced by the other side's arguments.

https://en.wikipedia.org/wiki/Yoda_conditions


I sometimes do this. I find that for some expressions, especially long ones, it reads better to put the shorter operand on the left. It can help establish context earlier in the statement so you can more quickly scan the code.

As for it being jarring, well, that's sometimes a good thing. Narrow city roads with heavy pedestrian traffic can make a driver feel anxious and convinced the road is unsafe. But making the driver uncomfortable is precisely what makes the road objectively safer.

If you want a reader to slow down and appreciate a critical statement, then swapping the operands can sometimes accomplish that in a way that comments may not.


Put me in the haters' group, especially because this relies on a side effect of C's expressions.


Yoda conditions are smart!


> Personally, I hate them

I'm nearly ambivalent on them; I'm no programmer. On the one hand, it looks like relying on behaviour that is correct now but may not be so in the future. On the other, it is a little extra safety. It looks horrid. It's a nice mini hack.

Nahh: Too specific to one small potential source of error. Rely on this sort of nonsense and before you know it you'll be invoking magic. Use a decent IDE that looks out for it if you need reminding about this class of bug.


> On the one hand it looks like relying on behaviour that is correct now but may not be so in the future.

> Use a decent IDE that looks out for it if you need reminding about this class of bug.

And when your next environment/plugin/employer's setup changes or doesn't offer this warning? Or the day that getting your critical system back up ASAP leaves you no choice but to SSH in from your phone and edit your source in Nano?

Learning and practicing defensive programming as a professional programmer is like defensive driving for a taxi driver, or survival skills for a long-distance hiker. It's a part of what someone who considers themselves a serious professional should maintain.

Relying on a 'decent IDE' to manage your code quality for you is an excellent way to blunt your skills. I'm not saying we should all throw IDEs away or turn off their warnings, but to rely on them is an awful substitute for maintaining good practice. Granted, crossing every t and dotting every i is much, much harder in C than most other languages, but every compiler warning you have to hunt down and fix is another little slice of your day shaved off and gone.


... or you can keep the code readable and just pay attention to the relevant compiler warning.


Maybe, or have a linter that checks for things like this.

But it doesn't really reduce readability IMHO, although if you're not used to seeing it then it may jump out at you as odd.


Also do the same for the options == comparison.

This is something I do in my own code: when checking, always put the variable after the value you're checking against.



