In 1984, Ken Thompson asked in his Turing Award lecture:
> To what extent should one trust a statement that a program is free of Trojan horses?
He proposed a method for corrupting a compiler binary; the details are in his paper, "Reflections on Trusting Trust".
Here is a short description of the attack. Rewrite the compiler so that it contains two flaws:
- When compiling its own source code, it must inject these same flaws into the resulting binary
- When compiling some other preselected code (e.g., the login program), it must inject a backdoor

The compiler otherwise works normally. When it compiles the login program or similar, it creates a security backdoor, and when it compiles newer versions of itself in the future, it re-inserts the flaws. The flaws exist only in the compiler binary, never in any source code, so they are extremely difficult to detect.
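The two flaws can be sketched in miniature. The following is a hypothetical Python model, not Thompson's actual code: "compilation" is just an identity transform on source text, the function names and the `"backdoor123"` master password are invented for illustration, and the quine trick that makes the second flaw truly self-reproducing is only gestured at in a comment.

```python
# Toy model of the trojaned compiler (hypothetical sketch; all names
# here are invented). Real compilation is replaced by an identity
# transform on source text.

LOGIN_SIGNATURE = "def check_password"
COMPILER_SIGNATURE = "def trojaned_compile"

BACKDOOR = (
    '\n    if password == "backdoor123":  # injected; absent from the source'
    "\n        return True"
)

def trojaned_compile(source: str) -> str:
    out = source
    # Flaw 2: if this looks like the compiler's own source, the real
    # attack re-inserts the whole trojan (via a quine construction), so
    # the next compiler binary carries both flaws even when built from
    # perfectly clean source. Here it is only marked with a comment.
    if COMPILER_SIGNATURE in source and "BACKDOOR" not in source:
        out += "\n# (trojan logic would be re-inserted here)\n"
    # Flaw 1: if this looks like the login program, add a master password.
    if LOGIN_SIGNATURE in source:
        out = out.replace(
            "def check_password(user, password):",
            "def check_password(user, password):" + BACKDOOR,
        )
    return out

# The login source is clean, but its "compiled" output has a backdoor:
login_src = (
    "def check_password(user, password):\n"
    "    return password == lookup(user)\n"
)
compiled = trojaned_compile(login_src)
```

Inspecting `login_src` reveals nothing, which is the point of the attack: the backdoor appears only in the compiler's output.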
Here is an excerpt from Michael Borgwardt's answer on Stack Exchange:
> What made the attack so scary was that the C compiler was the central piece of software for these systems. Almost everything in the system went through the compiler when it was first installed (binary distributions were rare due to the heterogeneous hardware). Everyone compiled stuff all the time. People regularly inspected source code (they often had to make adjustments to get it to compile at all), so having the compiler inject backdoors seemed to be a kind of "perfect crime" scenario where you could not be caught.
>
> Nowadays, hardware is much more compatible, and compilers therefore have a much smaller role in the day-to-day operation of a system. A compromised compiler is not the most scary scenario anymore: rootkits and a compromised BIOS are even harder to detect and get rid of.
In the end, I think the highlight of Thompson's lecture is this: you always have to trust someone.