"Skimmed" may be what you're looking for when selecting milk for your tea, but probably isn't something you want to hear happened to your credit card. We talk about skimming attacks in our episode "Credit and debit card security," but since a similar attack has been making the rounds lately, we figured today's newsletter would be a good time to highlight one of our favorite tips for minimizing damages if your card number gets stolen.
If someone forwarded this to you, you can sign up yourself at https://looseleafsecurity.com/newsletter.
Tip of the week
Most credit and debit cards have a way to notify you of each transaction. If your card has a mobile app, it almost certainly has this feature, and if not, you can usually sign up for email or text message notifications on your card's website. (If you opt into text message notifications, don't trust phone numbers or links in those messages - SMS messages are easily spoofed. If you can, look up your bank's phone number yourself and call instead of replying.) The faster you know about your card being misused, the more likely you can get the charge reversed and stop further misuse of the card. You can also look at your statement at the end of the month, but in our experience, it's much easier to keep up with notifications that show up in real time, while the transaction is fresh in your mind - we usually get them before we've even left the register.
Large-scale breaches of credit card information are unfortunately common. As recently as last month, malware installed on the payment devices of Wawa, a convenience store/gas station chain on the East Coast of the US, captured more than 30 million credit card numbers - giving criminals a stash almost as big as the one from the 2013 Target breach, which exposed 40 million card numbers. While newer technologies like EMV (chip) cards make it harder to use stolen numbers, criminals are still interested in stolen credit card numbers so they can use them at online merchants or other places that haven't made the switch.
Wawa made a statement about the breach which said, in part, "Under federal law and card company rules, customers who notify their payment card issuer in a timely manner of fraudulent charges will not be responsible for those charges." It's easiest to notify your card company "in a timely manner" of unwanted charges if you get timely notifications of card activity yourself.
In the news
Ain't no party like a tracking third party, 'cause third-party tracking don't stop: This week's security news included a number of stories about the overwhelming pervasiveness of third-party tracking in mobile apps. Most commonly, we think of third-party trackers as advertisers, but features like third-party login buttons can identify you across apps, even if you don't actually log in with them. There are also a few companies that specialize in monitoring how you use apps, both to help the app's developers fix bugs (like Google's Crashlytics, which monitors what you did before a crash) and simply to help the company sell their products better (like Branch, which can "unify fragmented data to show you each customer's full journey").
The Electronic Frontier Foundation analyzed the behavior of the mobile app for Ring's smart doorbells and found it "packed with third-party trackers," including Crashlytics, Facebook, Branch, and two other mobile analytics companies. The EFF found that Facebook got information as soon as you opened the app, even if you weren't logged in, and that the data sent to these third parties included things like your device model and various attempts to fingerprint your device by identifying its physical quirks (like the calibration of its motion sensors). Meanwhile, Gizmodo investigated the Noonlight app, which powers the new "panic button" in the dating app Tinder. Noonlight bills itself as a safety service that works by linking as many of your "smart devices" as possible to detect when you're in danger, but all those linked services are collecting data about you. Noonlight's app integrates code from Facebook and YouTube, which can identify you to those companies, and also uses mobile analytics services like Branch and Kochava. A Noonlight spokesperson defended their use of these services by saying they're just trying to improve their own app, but Kochava in particular works closely with mobile advertisers to identify real people when serving ads (and in general many of these services do similar things). While pervasive tracking is generally frustrating, it's particularly frustrating in the case of Tinder and Noonlight, as Noonlight is specifically marketed to users who are worried about their safety when going on a date with a stranger - and they have to give up their privacy to use it.
Last month, the Norwegian Consumer Council published a report (in English) entitled "Out of Control" about tracking in mobile apps, including Tinder, OKCupid, Grindr, a makeup app, and a Muslim prayer reminder app. The council, which is funded by Norway's consumer affairs ministry but is politically independent, is accusing advertisers of "systematically breaking the law" - specifically, they say the industry collects data in violation of the GDPR even though it's been almost two years since the law took full effect. In particular, the revelation that gay dating app Grindr was automatically sending personal identifiers, ages, and locations to Twitter's advertising subsidiary MoPub led Twitter to suspend Grindr from their platform and investigate "the sufficiency of Grindr's consent mechanism." However, the EFF argues that Grindr is just using MoPub "almost exactly as intended" - sending information about you to Twitter in real time to get a targeted ad back within milliseconds - and that thousands upon thousands of other apps do exactly the same, so suspending Grindr means very little. The report also looked at data collection in period-tracking apps, a concern we've discussed before.
To some extent, part of the problem here is the all-seeing capabilities of major tech companies like Twitter, Facebook, and Google. While we as consumers often think of them as fairly different companies because they run very different products, all of them make a significant amount of money by being centralized advertising platforms that can track us across myriad websites and mobile apps. MoPub, for instance, was an independent startup that Twitter acquired in 2013 for $350M. In an interview with TechCrunch, the CEO at the time of acquisition said, "Google was doing a great job monetizing their own properties, but in order to really be a significant player with scale, breadth, reach, and frequency [on other sites], they needed DoubleClick. We can provide that level of scale and breadth to the Twitter folk." Twitter's most recent earnings report, covering the third quarter of 2019, says they made $824M in revenue, of which $702M came from advertising. (The remainder is listed as just one category, "data licensing and other revenue.")
Facebook has taken steps towards at least being transparent about the amount of data it collects: they've introduced a tool to show the data about you that other apps and websites have sent Facebook. The tool was originally piloted this past August with customers in Ireland, South Korea, and Spain, but as of last week, all Facebook users can visit the Off-Facebook Activity page. The EFF calls it a "welcome but incomplete move" and has a quick guide on how to opt out of associating data with your account - though it won't prevent Facebook from collecting the data. (Also, note that this option will prevent you from using Facebook logins on other sites or apps, which you should be careful about if you have accounts you can only get to via Facebook. Besides tracking, there's another reason we're not a fan of "Login with Facebook" or similar features: if you lose access to your Facebook account for any reason, intentionally or unintentionally, you'll also have a difficult time logging into the other accounts that depend on it.)
The Washington Post has a column discussing one reporter's experience with the new Off-Facebook Activity page. Facebook has information about everything from when he gets coffee - because he uses the Peet's Coffee app to pay for it - to when his Ring doorbell detects motion - because he opens the Ring app every time the doorbell notifies him. The column suggests using a browser extension or browser configuration to block trackers on websites, which is a great idea, but unfortunately, there's currently no comparable tool for blocking trackers in mobile apps.
The internet of things that badly need regulation: The UK government may have been busy last month, but they managed to find time to release a report on regulating Internet of Things devices for security. A few years ago, they had tried to work with the industry to adopt better security practices, but seeing no real improvement, they're now proposing regulation to require that IoT devices conform to certain security standards - or at least carry labels indicating their conformance. The report focuses on the "top three" security practices (as identified by both a previous UK project and a European standards organization): don't have default passwords that are the same for each device, have a procedure for security researchers to report a vulnerability, and make it clear to customers how long a device will get software updates.
Until the UK adopts these regulations (and for those of us not in the UK), these "top three" practices are things that we as customers can look for when buying devices. If you're installing a device with a default password, whether it's a wireless router or a security camera, change it immediately. When considering whether to buy a device, see if the manufacturer's website lists a clear way for researchers to report security bugs. ("Researchers" could be anyone from professionals investigating products for their day jobs to 14-year-olds who notice something funny about FaceTime.) Also, see if the company makes any promises about how long they'll keep issuing security updates for the device. If they don't have this information, contact them and ask - or find another seller.
They're no longer selling Avast amount of personal data: Back in December, we discussed a story about how antivirus company Avast was collecting data about browsing history through their product and selling it. Motherboard and PCMag recently came into possession of leaked documents from Avast subsidiary Jumpshot and reported on how Jumpshot sells highly detailed browsing data to companies from Pepsi to Yelp to McKinsey, taking advantage of the antivirus software's privileged access to its users' web browsers. In response to the reporting, Avast's CEO immediately shut down Jumpshot and ended the practice of reselling data. This is undoubtedly good news for Avast users, but we remain skeptical of antivirus software that gives itself that level of access, even if it isn't abusing that access to sell your data for profit.
What we're reading
The changing face of recognition technologies: In a cross-posted blog post and New York Times opinion piece, Bruce Schneier, a cryptographer who's been writing about computer security for decades, argues that facial recognition is just one of many tools used in modern surveillance to "identify" individuals - the first step toward what he terms "correlating" and then "discriminating" - and that regulatory proposals against facial recognition would be better directed at correlation and discrimination themselves, regardless of the specific technology. Facial recognition often feels unique in the tracking landscape because it is biometric in nature and already widely deployed, but Schneier points out that it is just one biometric identification technology among many: heartbeats, gaits, fingerprints, and irises can all be identified at range.
Another reason facial recognition technologies are getting a lot of scrutiny is that other, unrelated video systems - private surveillance systems, or even government cameras installed so that police could review footage to solve a past crime - could easily be converted into always-on surveillance networks that recognize people's faces as they go about their day. (The same argument could be made for the other video-based identification systems Schneier mentions.) Schneier agrees with proponents of facial-recognition bans that your physical identity deserves privacy, but he argues that we also need to focus on regulating "correlation" and "discrimination" in order to protect our privacy in the face of ever-advancing technologies. In particular, Schneier notes that while discrimination based on protected classes is illegal, new technologies enable discrimination on data that is one degree removed from whether or not an individual is a member of a protected class (e.g. substituting gait for gender identification with high accuracy), and we need legislation that gets in front of discrimination generally instead of constantly playing catch-up to new technologies as they're deployed.
The hellscape [Flagged: Bad, Reason: Language] of unnecessary online surveillance: Twitter user @kmlefranc recently discovered part of a background check for their job involved a report of Twitter likes that were flagged as bad by "talent screening software" company Fama. Fama's homepage says their product "helps identify problematic behavior among potential hires and current employees by analyzing publicly available online information," and one of their recent blog posts touts the value of company "culture." "Culture fit" as a hiring criterion is often criticized for its potential to give cover to discrimination, and it's not hard to imagine companies selectively caring about an employee's "negative" social media history. It's clear that neither our social norms nor our laws have come to terms with our new reality of being able to find tons of data about someone instantly, be it their Twitter likes or their route home past street cameras or how often they buy a coffee, and even if all of these technologies were 100% accurate - and Fama, at least, is not - the dangers Schneier mentions of automated discrimination reinforcing existing discrimination are real.
The SERPent deceived me, and I did click: Two weeks ago, Google redesigned their search engine results page (or "SERP," in the lingo of marketers who gain and lose millions based on whether they're on page one or page two) to make paid ads look almost identical to genuine ("organic") search results. Organic results showed the site's favicon (the icon a website displays in your browser's tab bar), whereas ads showed the word "Ad" in its place but otherwise looked just the same. The Verge has a deep look at the change and the pressures on Google to make its advertising product effective - and at the possibility that Google ads are profitable not because they're well-matched to what you're searching for but because so many people use Google.
We were originally thinking of making this week's tip about how to tell ads and real results apart, but Google has now backtracked on the redesign, making ads a bit more visually distinct again. It's still not quite as obvious as the colored backgrounds of years past, but it's fairly clear where the ads stop and the organic results begin. Still, the first Google search result may not always be the site you intend to visit - for sensitive sites, if your password manager has the URL saved, it effectively serves as a trustworthy bookmarking service, and relying on it is a lot better than relying on Google.
Until next time...
We hear today's an excellent day for watching a superb owl. Here's a superb owl taking a bath. (Or was there something about a bowl? The owl is taking a bath in a bowl...)
That wraps it up for this week - thanks so much for subscribing to our newsletter! If there's a story you'd like us to cover, send us an email at email@example.com. See y'all next week!
-Liz & Geoffrey