Good evening from Loose Leaf Security! We're enjoying the last week of iced tea weather here, but remember, while there's always time for a tea break, there's never time for a break from your personal security!
-Liz & Geoffrey
P.S. If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
In the news
The "Simjacker" attack: There's a new attack on cell phones in the news - all types of cell phones this time, unfortunately. The security research firm that found it is calling it "Simjacker," but to be clear, it has no relation to the practice of fraudulently acquiring a SIM card for someone else's account known as "SIM-jacking." The "Simjacker" attack uses SMS to send commands to a particular application running on the SIM card itself (SIM cards are in fact very tiny computers), which can then send commands to the phone. Many carriers have filters or firewalls for these sorts of SMS messages, and in particular, the four major US carriers (AT&T, Sprint, T-Mobile, and Verizon) have confirmed that they are immune to the attack. Unfortunately, many other carriers don't filter these messages, and AdaptiveMobile Security, the research firm that found the attack, says it's being used in the wild by a company that specializes in government surveillance.
Since the place to fix the attack is filtering these messages on the network and there's no way for you as an individual to make changes to your SIM card, there's not much to do about it, but if your carrier isn't one of those who have confirmed they're doing this filtering, you might want to ask them about it. There aren't many details on the Simjacker website (and it's not worth giving them your contact info just to see their briefing paper, which doesn't explain much more and mostly advertises the company's security services to cellular carriers). The SIM Alliance has released a very brief recommendation to carriers to filter these messages and also cryptographically verify them on the SIM itself, so let's hope carriers take that advice soon.
NOPORT, no way: In actual SIM-jacking news, Motherboard reports on a secret setting from T-Mobile to protect your account from unauthorized changes by requiring additional checks for reissuing a SIM card or other types of changes. While their official "Port Validation" feature lets you set a PIN for making account changes like porting your number to another carrier, the undocumented "NOPORT" feature marks your account as requiring you to visit a store in person with government-issued ID. Other providers have similar features - for instance, AT&T has both an "extra security" passcode and a "PIC freeze" to prevent "unauthorized changes" without "proper approval," and Sprint has a feature called "Security Plus." Like T-Mobile, Sprint has no public web pages about how to enable "Security Plus."
If you're heavily reliant on your phone number staying secure - for instance, you receive business calls, you use iMessage or Signal, or you have important accounts that only offer SMS-based two-factor authentication - it might be a good idea to call up your cell phone carrier's customer service and ask them about enabling these features. It's a bit more hassle if you want to switch to another cell phone provider - you'll have to swing by your current provider's store with ID to release your account - but the additional security could be well worth it.
Also, if you're deciding between carriers, it's worth considering which ones have decent port-out protection and generally protect you against social engineering attacks like SIM-jacking. A few of the smaller ones are unfortunately pretty bad at securing your account: they might not let you set a PIN to protect your account or might let anyone who knows basic info about you call up and change your account settings.
Sweet homing, Alabama: The University of Alabama encourages students to install an app on their phone and have it run while they're at sports games to show that they've stayed for the whole game. Why? The postseason games sell out, and if you demonstrate your loyalty by staying for the less-well-attended games earlier in the season, you'll get a better shot in the ticket lottery. On one hand, this tracking is pretty limited and opt-in - it's sort of like how you need to give your phone access to your location for it to give you driving directions, and students can completely delete the app after each game. However, we can't help but think there must be a simpler way for the school to see who stays for the whole game that doesn't involve storing anyone's location data with a third party. (For example, the school could scan student IDs on the way out, which doesn't trust the creator of the app, FanMaker, with anything.)
Additionally, it's difficult to say that giving up your privacy is ever entirely consensual - as commoditization of our location information becomes increasingly normalized, the consequences of opting out become more severe. Adam Schwartz, a lawyer for the EFF, argues it's particularly "inappropriate" for a public university, which has the role of "teacher," to be normalizing this sort of privacy invasion. Better odds of getting tickets to sports games may not seem like a big deal, but it's a slippery slope towards letting how long you stay at the gym affect your health insurance rates or using how little time you spend at home to adjust your rent.
As more organizations and companies realize how easy it is to get people to voluntarily subject themselves to tracking, expect to see more attempts like this - but that's not to say that a world of pervasive monitoring is inevitable. While this burden shouldn't be on individuals, we can still push back in small ways: don't install apps that trade your privacy for slightly increased convenience (and by the way, mobile websites are often just as functional and easy to use) and speak up when organizations that serve the public think that the easiest route is to monitor people.
Your data should be yours, period: BuzzFeed News reports that some mobile apps for period tracking are sending private health data and metrics to Facebook because they happen to use the Facebook SDK, which makes recorded data available to Facebook on both a technical and contractual level. We've discussed our worries about Facebook advertisers getting access to health data before in the news section of our episode "Using a password manager effectively," but we wanted to explicitly note that while the information you share with your healthcare provider is typically covered by privacy laws, e.g. HIPAA in the US, health apps are not subject to the same privacy regulations. While this isn't as headline-friendly a privacy story as apps tracking how long you've yelled "Roll Tide," it's probably actually a bigger privacy concern, and it's unfortunate that this same thing has happened before, also with little attention.
Another bug in LastPass: Google's Project Zero has disclosed a few new vulnerabilities in the LastPass browser extension which were fixed last week; Ars Technica has some slightly less technical reporting on the issue. The LastPass extension works by injecting a popup into web pages with password fields that prompts you to fill in your saved password. Project Zero's research found that a malicious web page could add that popup to itself, without waiting for you to click on the extension - and when they did that, the password information for some previously-visited website was available in the popup, because the extension's code to properly set up the popup never actually ran. Then the site could get that password filled in with a "clickjacking" attack - they could mostly cover the popup with parts of a button that looked like something you genuinely wanted to click, but leave enough uncovered that you actually clicked on the LastPass popup. There was one final safety check to make sure that the password was being filled in on a related website, even if not the same origin, but Project Zero researcher Tavis Ormandy found a clever workaround for Google logins in particular: by running a malicious web page through Google Translate, an attacker could convince LastPass not to warn about sending accounts.google.com passwords to translate.google.com.
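If the overlay trick is hard to picture, here's a toy model of the geometry (all the element names, coordinates, and layering below are our own invention for illustration, not anything from LastPass's code or the Project Zero writeup): the attacker's decoy button sits on top of the injected popup but leaves a small strip of it exposed, so a click in that strip lands on the popup instead of the decoy.

```javascript
// Toy hit-test model of a clickjacking layout: each "element" is a
// rectangle with a z-index, and the topmost rectangle containing a
// point receives the click, just like stacked elements on a web page.
function hitTest(elements, x, y) {
  let top = null;
  for (const el of elements) {
    const inside =
      x >= el.x && x < el.x + el.w && y >= el.y && y < el.y + el.h;
    if (inside && (top === null || el.z > top.z)) top = el;
  }
  return top ? top.name : null;
}

const layout = [
  // The password-manager popup injected into the page.
  { name: "popup", x: 0, y: 0, w: 200, h: 50, z: 1 },
  // The attacker's decoy button covers all but the rightmost 10px.
  { name: "decoy", x: 0, y: 0, w: 190, h: 50, z: 2 },
];

console.log(hitTest(layout, 100, 25)); // "decoy" - the covered region
console.log(hitTest(layout, 195, 25)); // "popup" - the exposed strip
```

The attack works because the victim perceives one big button, but a carefully placed click (or an unlucky one, if the exposed strip sits where the user is likely to click) falls through to the real popup underneath.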
Nonetheless, Ormandy still endorses LastPass as the right password manager extension to use on the strength of their security team and response process, even after he found this vulnerability. We're particularly surprised by this because he previously found the serious LastPass vulnerabilities we discussed in our episode "Password managers: how they should work and when they didn't", and he even got an error from LastPass's servers while trying to contact their security team about this bug. In particular, though we had identified 1Password as doing well in Project Zero's analysis, he recommends against it, calling their security response "astonishingly bad." We will very cautiously disagree with Ormandy here, in part because his recommendation is to use a non-browser password manager or a hand-written book. We strongly favor password manager extensions, despite the inherent and empirical difficulty of getting them right, because they offer effective protection against phishing, which neither of his suggestions provides. It's likely his threat model for password managers is a fair bit different from ours.
Ultimately, it comes down to your personal needs and likely threats. For most of us, non-targeted phishing is a more serious concern than someone finding an undisclosed vulnerability in a password manager, and making it easy to use new, random, strong passwords on every website is well worth the risks of an extension. For companies with centralized login systems, it probably makes more sense to keep passwords away from an extension, and it's more bearable if there aren't many of them. If your important accounts all have two-factor auth, you might decide that the risk of an extension bug leaking your password is acceptable - or that the risk of using memorable passwords is acceptable. Our recommendation for most people is still to use a browser password manager with a good track record of not having security issues in the first place, and that's what we ourselves do, but you should carefully evaluate your own needs (and a good place to start is our password manager reference page, of course!).
Fingerprinting iOS via fonts: A sharp-eyed developer noticed that Crashlytics, a tool for reporting and analyzing crashes in iOS apps, installs a custom font on your phone that seems to only contain the product's logo. Crashlytics responded that they only do this for users using pre-release beta apps, and the purpose of the font is to track beta testers: when you get the configuration profile to install beta apps via Crashlytics, the profile includes a custom font with a unique ID, which lets them identify which crash or error reports come from the same user. The feature's developer added that they did this specifically to avoid beta testers having to log in to use the app.
There's been some concern over this feature being used inappropriately for tracking (some of which is complaints about Google "being evil," though the feature was written years before Google acquired Crashlytics, and the new Google-developed feature for beta apps doesn't use this approach anyway). In this case, we think Crashlytics' behavior is pretty reasonable, since it is specifically for beta testers - we're more concerned about Apple making this privacy compromise possible. Using information about installed fonts to "fingerprint" users has long been a concern on desktop web browsers, especially in the days when Flash and Java were popular, and it's a little unfortunate to see it coming to mobile.
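To give a flavor of how classic desktop font fingerprinting works, here's a minimal sketch (the font names, widths, and hash below are purely illustrative - real trackers use much larger font lists and better hashes): a page renders the same test string in each candidate font with a generic fallback, and if the measured width differs from the fallback's width, that font must be installed. The resulting set of installed fonts becomes part of a browser fingerprint.

```javascript
// In a real browser, widths would come from DOM or canvas text
// measurement (e.g. ctx.measureText(str).width); here we pass the
// measurements in directly so the logic is easy to see.
function detectFonts(baselineWidth, measuredWidths) {
  return Object.keys(measuredWidths)
    .filter((font) => measuredWidths[font] !== baselineWidth)
    .sort();
}

// A toy (non-cryptographic) hash that turns the detected font list
// into a single stable identifier.
function fingerprint(fonts) {
  let h = 0;
  for (const ch of fonts.join(";")) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h.toString(16);
}

// Hypothetical measurements: fonts whose width differs from the
// 100px fallback rendering are presumed installed.
const widths = { Helvetica: 98, "Comic Sans MS": 100, Futura: 103 };
const installed = detectFonts(100, widths);
console.log(installed); // → [ 'Futura', 'Helvetica' ]
console.log(fingerprint(installed));
```

The Crashlytics case is more direct - the installed font carries a unique ID outright rather than being inferred - but the underlying privacy property is the same: your set of installed fonts is observable and can distinguish you from other users.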
There is one major mitigation, though. The only way to install a custom font for the entire phone (instead of for just one specific app) is to use a configuration profile, a feature primarily built for large companies or organizations to manage their phones or their employees' phones. Configuration profiles are quite powerful, and malicious ones can impact your security quite easily - for instance, custom certificate authorities can issue trusted certificates for any HTTPS website, and custom VPNs can redirect all of your web browsing traffic. Configuration profiles can also enable installing apps from outside the App Store, which is exactly what Crashlytics needs. If you get an unexpected prompt to install a configuration profile, say no, and if you're intending to install one, check carefully to make sure it's only configuring the things you expect to need configured.
On the other hand, one of the features in the upcoming iOS 13 is installing custom fonts without a profile. That's probably a good idea, because there are already a few apps that generate custom configuration profiles just so you can install fonts, and it's much less risky to have a feature for installing fonts alone than one which gets you in the habit of installing configuration profiles. Still, let's hope Apple adds some safeguards against using this for fingerprinting, and if an iOS 13 app is asking you to install a font when it doesn't really make sense, don't let it!
In other news...
Maybe don't DontDuo: Hopefully DontDuo, which "for the cost of a small coffee, automatically accepts incoming Duo authentication requests," is a troll and not a real service, because push notification authentication requests are one of the easiest-to-use second factors. A couple of extra seconds of your time isn't a bad tradeoff for an extra layer of defense against password breaches, and also, we both welcome the occasional break to take a couple sips of tea.
That wraps it up for this week - thanks so much for subscribing to our new newsletter! If there's a story you'd like us to cover, send us an email at email@example.com. See y'all next week!
-Liz & Geoffrey