BlastDoor, or how iMessage closes many doors to 'hacking' with iOS 14

May 7, 2022 · Julio Cesar Fernandez @jcfmunoz

We have repeated it countless times: the most important thing about updating the operating system is not getting new features or emojis, it is security. Even updates that seem to bring little news include security patches that close off ways of exploiting holes that could compromise our devices.

The big problem is that there is no adequate culture or education around these concepts because, make no mistake, they are quite complex in themselves. Without going any further, the latest iOS 14.4 update resolved three fairly serious security bugs that Apple explicitly said "may have been actively exploited": problems that allowed an attacker to take control of our smartphone or tablet without our consent or knowledge.

The first lesson to learn in security is to update (when possible) to the latest versions that Apple releases of its systems (and this applies to any other system such as Windows, Linux, Android...).

To illustrate the importance of updates, let's look at the findings of Google Project Zero (a security team that actively hunts for security problems in all current systems and devices), which discovered that Apple has substantially improved security in iMessage, historically one of the most active sources of 0-click bugs (errors that do not require the victim to interact with the device at all). To do so, Apple created a new component called BlastDoor, which closes many of the important security problems iMessage presented before iOS 14.

As usual in my articles, though, we are first going to clarify some underlying concepts, and of course I will add my own knowledge so that you can better understand everything that follows. The first thing to understand is: why do security flaws exist at all? What are they, exactly? Let's try to shed light on this as simply as its complexity allows.

An operating system is a program

This seems very obvious, but sometimes we lose perspective on it. When I run an app on my device, it is nothing more than a program that executes on top of another program. The main program, the highest-privilege level of any operating system, is the kernel or core. It coordinates the operation of the whole system and is therefore the target any attacker wants to control.

By controlling and executing code at the kernel level, you can control anything on the device and modify it as you please without the user knowing what is happening.

Not only can the file system be accessed: services and peripherals can also be activated, such as GPS, cameras and microphones, and services like phone calls and messaging apps can be controlled. In short: the device can be turned into a zombie with which the attacker can do anything, without the user knowing what is being done.

For this reason, the kernel has protections that prevent it from being attacked. But we must be very clear about one thing from the start: there is NO 100% infallible method to protect any software in the world. No matter how many doors are put in place, others can always be found, and 100% secure software is impossible to achieve. So the best we can do is update, closing the discovered holes through which a malicious program could sneak in to control the kernel.

And why is there no 100% secure system? Because all software is made by humans, and humans make mistakes. All of them. And because an operating system is made up of layers of programs that execute other programs, which execute libraries (which are more programs), which in turn execute other programs executing... guess what? More programs. Those layers have to communicate with each other (with security measures in between) for it all to work. But if there is an error in one of them and someone discovers it, we are in trouble.

All system kernels and many low-level components that need good performance are programmed in C. And C is an excellent language: fast, versatile... but very insecure.


The basic problem is C, the language itself. Its design prioritized performance over security and left safety in the hands of developers. Essential things are simply not checked in C: nothing stops you from writing more data into a buffer than the size it was declared with; nothing verifies that a value declared as one type actually is that type and not another; nothing guarantees that a place that is supposed to contain data actually contains data rather than being empty. Those are just a few basic examples. All of those checks are left to the assumption that "the programmer will get it right and not make those mistakes".


What does that cause? Runtime errors. When execution reaches code where a human error has not been handled, the resulting failure can let an attacker learn where the error occurred and what data was in memory at that moment, and even exploit the "vulnerable" state the error leaves the system in to move down through the layers (toward the kernel) and execute arbitrary malicious code at kernel level: code that should not be there and whose objective is to do "something bad".

This whole structure is so deeply rooted in today's systems that solving it would mean rebuilding everything from scratch, which is unthinkable. And even if we did start over, we would end up back in the same situation, still suffering security errors caused by human mistakes: errors in code that cannot be detected until someone creates the specific conditions that trigger them. If we don't know an error is there, or how to provoke it, finding it is quite complex.

BlastDoor, a data parser

At this point, we can talk about BlastDoor, the new component Apple has included in iOS 14 and which, being a security system, Apple has understandably not advertised in any way.

One of the coolest features iMessage has as a messaging platform is its ability to display rich content. If I share a link to a website, it builds a box with the site's preview image, adds the title, and creates an elegant way to jump to that content from the preview. If we share a YouTube video, it creates a YouTube mini-player that lets us watch it right there. Like other messaging apps, it turns a plain link into rich content so we know what's behind it.


The problem is that this excellent functionality has historically had a myriad of security issues. When I share a link to a page, the system parses the data: it analyzes it, captures what it is interested in, and transforms it into the elegant visual representation we see. But in that process it has to activate the HTML rendering engine to fetch the page's meta information and retrieve its featured image, its excerpt, its title... That requires downloading the page itself and assuming that data like the title and image are where they are expected to be. But what if someone slips something that shouldn't be there into a place where something else is expected? The system could be cheated.

How? Imagine I forgot to check that a downloaded piece of data, which I expect to be an image and am going to treat as such, actually is an image. It looks like an image, but it is really malicious code. As a developer, I forgot to verify the image's meta information and correct format for one specific image type. Someone could take advantage of that vulnerability (the lack of verification that a piece of data is what we expect and nothing else) and sneak in something that it is not. Perhaps the program that processes the image has another error that fails to validate the format, or that allows some code to be executed which the attacker can swap for something else. One ajar door after another lets the attacker slip through, reaching "lower" layers until the system is controlled at kernel level. The system is compromised.
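The kind of check the paragraph says was missing can be sketched in a few lines of Swift. This is an illustrative example, not Apple's code: instead of trusting a payload's claim that it carries an image, we look at the file's "magic bytes" and reject anything whose signature we don't recognize.

```swift
import Foundation

// Hypothetical format detector: trust the bytes, not the metadata.
enum ImageFormat {
    case png, jpeg

    static func detect(in data: Data) -> ImageFormat? {
        // Known file signatures for the two formats we accept.
        let pngMagic: [UInt8] = [0x89, 0x50, 0x4E, 0x47]
        let jpegMagic: [UInt8] = [0xFF, 0xD8, 0xFF]
        if data.count >= 4, Array(data.prefix(4)) == pngMagic { return .png }
        if data.count >= 3, Array(data.prefix(3)) == jpegMagic { return .jpeg }
        return nil // not a format we recognize: reject it outright
    }
}

let realPNG = Data([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])
let fake = Data("not an image".utf8) // claims to be an image, isn't

print(ImageFormat.detect(in: realPNG) == .png) // true
print(ImageFormat.detect(in: fake) == nil)     // true
```

The point is that the decision "is this really an image?" is made before the data ever reaches the code that renders images, so a disguised payload is discarded instead of processed.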

Not only that: before being processed, the message itself is just a set of data (its payload), and that data may contain things designed to cause errors in the code and crash the process that interprets it before it is converted into a message. A crash is an opportunity to sneak in and create a controlled process capable of placing code in a specific spot where it breaks security and allows the attacker to descend through the layers, compromising the system.

For this reason Apple has created a new program called BlastDoor, which is programmed in Swift. And why Swift? Because Swift sits on the same low-level foundations as C, but as a language it corrects C's most common errors. In Swift, it is impossible for a piece of data to be anything other than the type it was declared to be. In Swift, accesses are bounds-checked, so you can never store more information than a container was declared to hold. Empty (null) values are not allowed unless explicitly marked as optional. And there is no dynamic execution engine that would allow processes to be injected. In fact, since the compiled code is static and its symbols are mangled (the names of methods and components do not appear as-is in the binary), reverse engineering it to understand what the code is doing becomes a much more complex process for any attacker.
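Two of the guarantees just listed can be seen directly in plain Swift. In this small sketch (the `safe` subscript is a common community idiom, not part of the standard library), an out-of-range access yields `nil` instead of reading adjacent memory, and a failed parse yields `nil` instead of garbage or a crash:

```swift
// A bounds-checked subscript: returns nil instead of trapping or
// reading past the end of the array, as C would happily do.
extension Array {
    subscript(safe index: Int) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}

let bytes = [1, 2, 3]
print(bytes[safe: 1] as Any)  // Optional(2)
print(bytes[safe: 99] as Any) // nil: no overflow, no garbage read

// Optionals make "there may be no value here" explicit in the type system:
let maybeNumber: Int? = Int("42abc") // fails to parse, so nil, not a crash
print(maybeNumber == nil)            // true
```

Even without the custom subscript, a plain out-of-bounds access in Swift traps deterministically at the faulty index; it never silently corrupts neighboring memory the way an unchecked C write can.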

There are many controls within Swift itself where security is paramount, so it is one more layer an attacker would have to bypass, and a pretty hard one. Again, nothing is 100% secure, but the stronger the layers and the more of them there are to get through, the harder it is to compromise a system.

And what does this new BlastDoor module do? It is a component integrated into the messaging pipeline, in charge of safely parsing the message itself, and then any links or shared content it may contain. It runs a series of checks that reject any content suspected of being code, or of claiming to be one thing while actually being another.

When our device receives a new message, it arrives as a payload. BlastDoor inspects that payload to verify that nothing strange is coming in and unpacks each piece of contained data safely. Then, when the message has been processed and is sent on to be decoded into its content, it passes through BlastDoor again. To give a clear example of handling content in Swift, consider how an image is loaded: the load is done through an optional. That is, I check whether the image type's constructor, given the raw data I pass it, is capable of building an image. If it can, the image is passed on. If not, it returns a controlled failure and rejects the data, preventing something that shouldn't happen from happening where it shouldn't, and preventing the process from crashing due to corrupted data (a very common way to cause errors and hunt for vulnerabilities).
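The optional-construction pattern described above maps onto Swift's failable initializers. The type and payload below are hypothetical stand-ins, not Apple's actual API: the initializer either builds a valid value or returns `nil`, so corrupt data produces a controlled failure instead of a crash.

```swift
import Foundation

// Hypothetical preview-image type with a failable initializer (init?):
// construction succeeds only if the raw bytes pass validation.
struct PreviewImage {
    let pixels: Data

    init?(raw: Data) {
        // Reject anything that doesn't carry a PNG signature.
        let pngMagic: [UInt8] = [0x89, 0x50, 0x4E, 0x47]
        guard raw.count >= 4, Array(raw.prefix(4)) == pngMagic else {
            return nil // controlled failure: no partial object exists
        }
        self.pixels = raw
    }
}

let corrupt = Data("garbage".utf8)
if let image = PreviewImage(raw: corrupt) {
    print("render \(image.pixels.count) bytes")
} else {
    print("rejected") // the bad payload never reaches the renderer
}
```

This mirrors how real APIs like `UIImage(data:)` behave: the caller is forced by the type system to handle the `nil` case, so forgetting the check is a compile-time error rather than a latent vulnerability.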

Apple has also made some important additions. If the program that processes a message crashes (which shouldn't happen, but can be caused, above all, by an attack), the system imposes a pause that prevents the process from restarting for a while, and that delay keeps growing. This way, if someone attempts a brute-force attack or repeatedly crashes the process to probe attack scenarios, each attempt takes longer and longer to get a response. It doesn't prevent a sustained attack, but it makes the process so slow and tedious that the attacker may well give up trying to open the locked door.

Security, the silent watchman

Security is essential in any software, because the amount of information that can be extracted from the permanently connected devices we carry in our pockets and keep in our homes is worth a lot of money. We may think we are nobody and have nothing to lose, but we would be wrong. We are a link in a chain: our devices contain (for example) a contact list of family and friends, plus photographs, usage history, browsing history, preferences... data that reveals who we are, what we do, what we like and dislike, where we work, what kind of life we lead, where our children study (if we have them), whether we have a pet...

Our devices are loaded with data from which a multitude of conclusions can be drawn that make us the target of companies that want us as customers or people who want to deceive us.

Maybe you think it's not that big a deal and don't update your device. But it could be that your iPhone or iPad is vulnerable and your contacts are stolen (something that may seem harmless). Among those contacts are your elderly parents. Then someone calls them trying to trick them, or sends them a phishing email, gets their bank details and takes their money (this happens every day, far more often than we think). And all because we thought: what could anyone possibly steal from me?


We have a responsibility too. Companies are fulfilling theirs: they have highly specialized teams working to guarantee that we are as safe as possible. Using the internet is like getting into a car. It is impossible to guarantee that you will never have an accident; every time you get in, there is a chance of one, through your own fault or someone else's. I lost my car more than a year ago when, properly stopped in a traffic jam, a driver distracted by her daughter's crying rammed several cars behind mine and, domino-style, they hit me. Thankfully nothing happened to me. The internet is the same: simply entering and browsing it already carries the risk of being deceived, of suffering an attack, of having our security compromised...

As we can see, companies work to make us safer every day, but we still need to do our part and update our systems. We will be the first to benefit.
