Researchers go hunting for Netflix’s Bandersnatch

A new research paper from the Indian Institute of Technology Madras explains how popular Netflix interactive show Bandersnatch could fall victim to a side-channel attack.

In 2016, Netflix began adding TLS (Transport Layer Security) to their video content to ensure strangers couldn’t eavesdrop on viewer habits. Essentially, now the videos on Netflix are hidden away behind HTTPS—encrypted and compressed.

Previously, Netflix had run into some optimisation issues when trialling the new security boost, but they got there in the end—which is great for subscribers. However, this new research illustrates that even with such measures in place, snoopers can still make accurate observations about their targets.

What is Bandersnatch?

Bandersnatch is a 2018 film on Netflix that is part of the science fiction series Black Mirror, an anthology about the ways technology can have unforeseen consequences. Bandersnatch gives viewers a choose-your-own-adventure-style experience, presenting choices at key moments throughout the story. Not all of them are important, but you’ll never quite be sure which will steer you to one of 10 endings.

Charlie Brooker, the brains behind Bandersnatch and Black Mirror, was entirely aware of the new, incredibly popular wave of full motion video (FMV) games on platforms such as Steam [1], [2], [3]. Familiarity with Scott Adams text adventures and the choose your own adventure books of the ’70s and ’80s would also be a given.

No surprise, then, that Bandersnatch—essentially an interactive FMV game as a movie—became a smash hit. Also notable, continuing the video game link: It was built using Twine, a common method for piecing together interactive fiction in gaming circles.

What’s the problem?

Researchers figured out a way to determine which options were selected in any given play-through across multiple network environments. Browsers, networks, operating systems, connection types, and more were varied across the 100 people who took part in testing.

Bandersnatch presents two options at multiple points throughout the story. There’s a 10-second window to make a choice. If nothing is selected, it defaults to one of the options and continues on.

Under the hood, Bandersnatch is divided into multiple pieces, like a flowchart. Larger, overarching slices of script go about their business, while within those slices are smaller fragments where storyline can potentially branch out.

This is where we take a quick commercial break and introduce ourselves to JSON.

Who is JSON?

He won’t be joining us. However, JavaScript Object Notation will.

Put simply, JSON is an easily readable format for sending data between servers and web applications. In fact, it more closely resembles a notepad file than a pile of obscure code.
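To make that concrete, here is a toy payload serialized with Python’s standard json module. The field names are invented for illustration and are not Netflix’s actual schema.

```python
import json

# A hypothetical choice payload. The field names are invented for
# illustration; they are not Netflix's actual schema.
choice = {
    "segment": "kitchen_scene",
    "choice": "frosties",
    "timestamp_ms": 734_000,
}

# Serialized JSON reads almost like a plain notepad file.
print(json.dumps(choice, indent=2))
```

Even without documentation, the keys and values above are legible at a glance, which is exactly why JSON is so widely used for browser-to-server traffic.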

In Bandersnatch, there is a set of choices considered to be the default flow of the story. That data is prefetched, allowing users who choose the default or do nothing to stream continuously.

When a viewer reaches the point in the story where they must make a choice, the browser sends a JSON file to let the Netflix server know. Do nothing in the 10-second window? Under the hood, the prefetched data continues to stream, and viewers continue their journey with the default storyline.

If the viewer chooses the other, non-default option, however, then the prefetched data is abandoned and a second, different type of JSON file is sent out requesting the alternate story path.

What we have here is a tale of two JSONs.

Although the traffic between the viewer’s browser and Netflix’s servers is encrypted, researchers in this latest study were able to decipher which choices participants made 96 percent of the time by determining the number and type of JSON files sent.
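The two-JSON tell is simple enough to sketch. In this hypothetical illustration (the logic is heavily simplified; the researchers’ actual classifier is more sophisticated), an on-path observer who can count encrypted bursts, but not read them, infers the choice:

```python
# Hypothetical sketch of the side channel. An on-path snooper cannot read
# the encrypted JSON, but can count the JSON-sized traffic bursts seen at
# a choice point. The paper's real classifier is more sophisticated.

def infer_choice(json_bursts_observed: int) -> str:
    """One burst: the prefetched default path simply continued.
    Two bursts: the prefetch was abandoned and an alternate path
    was requested, implying the non-default choice."""
    return "default" if json_bursts_observed == 1 else "non-default"

print(infer_choice(1))  # default
print(infer_choice(2))  # non-default
```

The point is that encryption hides the content of each message, but not its existence, size, or timing, and that metadata alone is enough to leak the viewer’s decision.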

Should we be worried?

This may not be a particularly big problem for Netflix viewers, yet. However, if threat actors could intercept and follow user choices using a similar side channel, they could build reasonable behavioral profiles of their victims.

For instance, viewers of Bandersnatch are asked questions like “Frosties or sugar-puffs?”, “Visit therapist or follow Colin?”, and “Throw tea over computer or shout at dad?”. The choices made could potentially reveal benign information, such as food and music preferences, or more sensitive intel, such as a penchant for violence or political leanings.

Just as we can’t second-guess everyone’s threat model (even for Netflix viewers), we also shouldn’t dismiss this. There are plenty of dangerous ways monitoring along these lines could be abused, whether the traffic is encrypted or not. Additionally, this is something most of us going about our business probably haven’t accounted for, much less know how to address.

What we do know is that it’s important that content providers—such as gaming studios or streaming services—affected by this research account for it, and look at ways of obfuscating data still further.

After all, a world where your supposedly private choices are actually parseable feels very much like a Black Mirror episode waiting to happen.

The post Researchers go hunting for Netflix’s Bandersnatch appeared first on Malwarebytes Labs.

Are hackers gonna hack anymore? Not if we keep reusing passwords

Enterprises have a password problem, and it’s one that is making the work of hackers a lot easier. From credential stuffing to brute force and password spraying attacks, modern hackers don’t have to do much hacking in order to compromise internal corporate networks. Instead, they log in using weak, stolen, or otherwise compromised credentials.

Take the recent case of Citrix as an example. The FBI informed Citrix that a nation-state actor had likely gained access to the company’s internal network, news that came only months after Citrix forced a password reset because it had suffered a credential-stuffing attack.

“While not confirmed, the FBI has advised that the hackers likely used a tactic known as password spraying, a technique that exploits weak passwords. Once they gained a foothold with limited access, they worked to circumvent additional layers of security,” Citrix wrote in a March 6th blog post.

Password problems abound

While a recent data privacy survey conducted by Malwarebytes found that an overwhelming majority (96 percent) of the 4,000 cross-generational respondents said online privacy is crucial, nearly a third (29 percent) admitted to reusing passwords across multiple accounts.

Survey after survey shows that passwords are the bane of enterprise security. In a recent survey conducted by Centrify, 52 percent of respondents said their organizations do not have a password vault, and one in five still aren’t using MFA for administrative privileged access.

“That’s too easy for a modern hacker,” said Torsten George, Cybersecurity Evangelist at Centrify. “Organizations can significantly harden their security posture by adopting a Zero Trust Privilege approach to secure the modern threatscape and granting least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.”

How hackers attack without hacking

The problem with password reuse is that attackers don’t have to use advanced tactics to gain a foothold in your network. “In many cases, first stage attacks are simple vectors such as password spraying and credential stuffing and could be avoided with proper password hygiene,” according to Daniel Smith, head of threat research at Radware.

When cybercriminals are conducting password spraying attacks, they typically scan an organization’s infrastructure for externally-facing applications and network services, such as webmail, SSO, and VPN gateways.

Because these interfaces typically have strict lockout policies, malicious actors will opt for password spraying over brute force attacks, allowing them to avoid being locked out or triggering an alert to administrators.

“Password spraying is a technique that involves using a limited set of passwords like Unidesk1, test, C1trix32 or nsroot that are discovered during the recon phase and used in attempted logins for known usernames,” Smith said. “Once the user is compromised, the actors will then employ advanced techniques to deploy and spread malware to gain persistence in the network.”
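The pattern Smith describes, a handful of recon-derived passwords tried against many known usernames, is also what defenders can look for. Here is a toy detection sketch; the thresholds are invented for illustration, and real detection would also correlate source IPs, timing, and geography:

```python
# Toy detector for the spraying pattern: many distinct usernames, very
# few distinct passwords. Thresholds are invented for illustration; real
# detection also correlates source IPs, timing, and geography.

def looks_like_spray(attempts, min_users=10, max_passwords=3):
    """attempts: iterable of (username, password) pairs from failed logins."""
    users, passwords = set(), set()
    for user, password in attempts:
        users.add(user)
        passwords.add(password)
    return len(users) >= min_users and len(passwords) <= max_passwords

# 50 usernames, one recon-derived password: a classic spray signature.
spray = [(f"user{i}", "C1trix32") for i in range(50)]
print(looks_like_spray(spray))  # True
```

Inverting the usual brute-force detection logic like this matters because spraying deliberately stays under per-account lockout thresholds; only an account-spanning view reveals it.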

Cybercriminals have also been targeting cloud-based accounts by leveraging Internet Message Access Protocol (IMAP) for password-spray attacks, according to Proofpoint. One tricky hitch with IMAP is that the legacy protocol doesn’t support multi-factor authentication, so it is effectively bypassed when authenticating, said Justin Jett, director of audit and compliance for Plixer.

“Because password-spraying attacks don’t generate an alarm or lock out a user account, a hacker can continually attempt logging in until they succeed. Once they succeed, they may try to use the credentials they found for other purposes,” Jett said.

Tightening up password security

The reality is that guessing passwords is easier for hackers than going up against security technology head-on. If we’re being honest, there is a strong chance that an attacker is already in your network, given the widespread problem of password reuse. Because passwords are used to authenticate users, any conversation about augmenting password security has to look at the bigger picture of authentication strategies.

On the one hand, it’s true that password length and complexity are critical to creating strong passwords, but making each password unique has its challenges. Password managers have proven to address the problem of remembering credentials for multiple accounts, and these tools are indeed an important piece of an overall password security strategy.
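Password managers solve the remembering problem by generating and storing credentials the user never has to memorize. A minimal sketch of the generation half, using Python’s standard secrets module:

```python
import secrets
import string

# Draw each character from a large alphabet so every account can get a
# unique, high-entropy credential the user never has to memorize.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'k%T9v...' -- different every run
```

Because each password is random and unique, a credential stolen from one breached site is useless for stuffing attacks against any other account.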

“The pervasiveness of password stuffing, brute force and other similar attacks shows that password length is no longer a deterrent,” said Fausto Oliveira, principal security architect at Acceptto.

Instead, Oliveira said enabling continuous authentication on privileged employee, client, and consumer accounts is one preemptive approach that can stop an attacker from gaining access to sensitive information—even if they breach the system with a brute force attack.

“It is not about a simple 123456, obvious P@55word password versus a complicated passphrase, but recognizing that all of your passwords are compromised. This includes those passwords you have not yet created, you just don’t know it yet.”

Passwords continue to be a problem because their creation and maintenance are largely the responsibility of the user. There’s no technology to change human behavior, which only exacerbates the issues of password reuse and overall poor password hygiene.

Organizations that want to tighten up their password security need to look seriously at more viable solutions than trusting users, which may include eliminating passwords altogether.


Facebook’s history betrays its privacy pivot

Facebook CEO Mark Zuckerberg proposed a radical pivot for his company this month: it would start caring—really—about privacy, building out a new version of the platform that turns Facebook less into a public, open “town square” and more into a private, intimate “living room.”

Zuckerberg promised end-to-end encryption across the company’s messaging platforms, interoperability, disappearing messages, posts, and photos for users, and a commitment to store less user data, while also refusing to put that data in countries with poor human rights records.

If carried out, these promises could bring user privacy front and center.

But Zuckerberg’s promises have exhausted users, privacy advocates, technologists, and industry experts, including those of us at Malwarebytes. Respecting user privacy makes for a better Internet, period. And Zuckerberg’s proposals are absolutely a step in the right direction. Unfortunately, there is a chasm between Zuckerberg’s privacy proposal and Facebook’s privacy success. Given Zuckerberg’s past performance, we doubt that he will actually deliver, and we blame no user who feels the same way.

The outside response to Zuckerberg’s announcement was swift and critical.

One early Facebook investor called the move a PR stunt. Veteran tech journalist Kara Swisher jabbed Facebook for a “shoplift” of a competitor’s better idea. Digital rights group Electronic Frontier Foundation said it would believe in a truly private Facebook when it sees it, and Austrian online privacy rights activist (and thorn in Facebook’s side) Max Schrems laughed at what he saw as hypocrisy: merging users’ metadata across WhatsApp, Facebook, and Instagram, and telling users it was for their own, private good.

The biggest obstacle to believing Zuckerberg’s words? For many, it’s Facebook’s history.

The very idea of a privacy-protective Facebook goes so against the public’s understanding of the company that Zuckerberg’s comments taste downright unpalatable. These promises are coming from a man whose crisis-management statements often lack the words “sorry” or “apology.” A man who, when his company was trying to contain its own understanding of a foreign intelligence disinformation campaign, played would-be president, touring America for a so-called “listening tour.”

Users, understandably, expect better. They expect companies to protect their privacy. But can Facebook actually live up to that?

“The future of the Internet”

Zuckerberg opens his appeal with a shaky claim—that he has focused his attention in recent years on “understanding and addressing the biggest challenges facing Facebook.” According to Zuckerberg, “this means taking positions on important issues concerning the future of the Internet.”

Facebook’s vision of the future of the Internet has, at times, been largely positive. Facebook routinely supports net neutrality, and last year, the company opposed a dangerous, anti-encryption, anti-security law in Australia that could force companies around the world to comply with secret government orders to spy on users.

But Facebook’s lobbying record also reveals a future of the Internet that is, for some, less secure.

Last year, Facebook supported one half of a pair of sibling bills that eventually merged into one law. The law followed a convoluted, circuitous route, but its impact today is clear: Consensual sex workers have found their online communities wiped out, and are once again pushed into the streets, away from guidance and support, and potentially back into the hands of predators.

“The bill is killing us,” said one sex worker to The Huffington Post.

Though the law was ostensibly meant to protect sex trafficking victims, it has only made their lives worse, according to some sex worker advocates.

On March 21, 2018, the US Senate passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) bill. The bill was the product of an earlier version of its own namesake, and a separate, related bill, called the Stop Enabling Sex Traffickers Act (SESTA). Despite clear warnings from digital rights groups and sex positive advocates, Facebook supported SESTA in November 2017. According to the New York Times, Facebook made this calculated move to curry favor amongst some of its fiercest critics in US politics.

“[The] sex trafficking bill was championed by Senator John Thune, a Republican of South Dakota who had pummeled Facebook over accusations that it censored conservative content, and Senator Richard Blumenthal, a Connecticut Democrat and senior commerce committee member who was a frequent critic of Facebook,” the article said. “Facebook broke ranks with other tech companies, hoping the move would help repair relations on both sides of the aisle, said two congressional staffers and three tech industry officials.”

Last October, the bill came back to haunt the social media giant: a Jane Doe plaintiff in Texas sued Facebook for failing to protect her from sex traffickers.

Further in Zuckerberg’s essay, he promises that Facebook will continue to refuse to build data centers in countries with poor human rights records.

Zuckerberg’s concern is welcome and his cautions are well-placed. As the Internet has evolved, so has data storage. Users’ online profiles, photos, videos, and messages can travel across various servers located in countries around the world, away from a company’s headquarters. But this development poses a challenge. Placing people’s data in countries with fewer privacy protections—and potentially oppressive government regimes—puts everyone’s private, online lives at risk. As Zuckerberg said:

“[S]toring data in more countries also establishes a precedent that emboldens other governments to seek greater access to their citizens’ data and therefore weakens privacy and security protections for people around the world,” Zuckerberg said.

But what Zuckerberg says and what Facebook supports are at odds.

Last year, Facebook supported the CLOUD Act, a law that lowered privacy protections around the world by allowing foreign governments to request their citizens’ online data directly from companies. It is a law that, according to Electronic Frontier Foundation, could result in UK police inadvertently getting their hands on Slack messages written by an American, and then forwarding those messages to US police, who could then charge that American with a crime—all without a warrant.

The same day that the CLOUD Act was first introduced as a bill, it received immediate support from Facebook, Google, Microsoft, Apple, and Oath (formerly Yahoo). Digital rights groups, civil liberties advocates, and human rights organizations directly opposed the bill soon after. None of their efforts swayed the technology giants. The CLOUD Act became law just months after its introduction.

While Zuckerberg’s push to keep data out of human-rights-abusing countries is a step in the right direction for protecting global privacy, his company supported a law that could result in the opposite. The CLOUD Act does not meaningfully hinge on a country’s human rights record. Instead, it hinges on backroom negotiations between governments, away from public view.

The future of the Internet is already here, and Facebook is partially responsible for the way it looks.

Skepticism over Facebook’s origin story 2.0

For years, Zuckerberg told anyone who would listen—including US Senators hungry for answers—that he started Facebook in his Harvard dorm room. This innocent retelling involves a young, doe-eyed Zuckerberg who doesn’t care about starting a business, but rather, about connecting people.

Connection, Zuckerberg has repeated, was the ultimate mission. This singular vision was once employed by a company executive to hand-wave away human death for the “de facto good” of connecting people.

But Zuckerberg’s latest statement adds a new purpose, or wrinkle, to the Facebook mission: privacy.

“Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks,” Zuckerberg said.

Several experts see ulterior motives.

Kara Swisher, the executive editor of Recode, said that Facebook’s re-steering is probably an attempt to remain relevant with younger users. Online privacy, data shows, is a top concern for that demographic. But caring about privacy, Swisher said, “was never part of [Facebook’s] DNA, except perhaps as a throwaway line in a news release.”

Ashkan Soltani, former chief technology officer of the Federal Trade Commission, said that Zuckerberg’s ideas were obvious attempts to leverage privacy as a competitive edge.

“I strongly support consumer privacy when communicating online but this move is entirely a strategic play to use privacy as a competitive advantage and further lock-in Facebook as the dominant messaging platform,” Soltani said on Twitter.

As to the commitment to staying out of countries that violate human rights, Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford Law School’s Center for Internet and Society, pressed harder.

“I don’t know what standards they’re using to determine who are human rights abusers,” Pfefferkorn said in a phone interview. “If it’s the list of countries that the US has sanctioned, where they won’t allow exports, that’s a short list. But if you have every country that’s ever put dissidents in prison, then that starts some much harder questions.”

For instance, what will Facebook do if it wants to enter a country that, on paper, protects human rights, but in practice, utilizes oppressive laws against its citizens? Will Facebook preserve its new privacy model and forgo the market entirely? Or will it bend?

“We’ll see about that,” Pfefferkorn said in an earlier email. “[Zuckerberg] is answerable to shareholders and to the tyranny of the #1 rule: growth, growth, growth.”

Asked whether Facebook’s pivot will succeed, Pfefferkorn said the company has definitely made some important hires to help out. In the past year, Facebook brought aboard three critics and digital rights experts—one from EFF, one from New America’s Open Technology Institute, and another from AccessNow—into lead policy roles. Further, Pfefferkorn said, Facebook has successfully pushed out enormous, privacy-forward projects before.

“They rolled out end-to-end encryption and made it happen for a billion people in WhatsApp,” Pfefferkorn said. “It’s not necessarily impossible.”

WhatsApp’s past is now Facebook’s future

In looking to the future, Zuckerberg first looks back.

To lend some authenticity to this new-and-improved private Facebook, Zuckerberg repeatedly invokes a previously-acquired company’s reputation to bolster Facebook’s own.

WhatsApp, Zuckerberg said, should be the model for the all new Facebook.

“We plan to build this [privacy-focused platform] the way we’ve developed WhatsApp: focus on the most fundamental and private use case—messaging—make it as secure as possible, and then build more ways for people to interact on top of that,” Zuckerberg said.

The secure messenger, which Facebook purchased in 2014 for $19 billion, is a privacy exemplar. It developed default end-to-end encryption for users in 2016 (under Facebook’s ownership), refuses to store keys that would grant access to users’ messages, and tries to limit user data collection as much as possible.

Still, several users believed that WhatsApp joining Facebook represented a death knell for user privacy. One month after the sale, WhatsApp’s co-founder Jan Koum tried to dispel any misinformation about WhatsApp’s compromised vision.

“If partnering with Facebook meant that we had to change our values, we wouldn’t have done it,” Koum wrote.

Four years after the sale, something changed.

Koum left Facebook in March 2018, reportedly troubled by Facebook’s approach to privacy and data collection. Koum’s departure followed that of his co-founder Brian Acton the year before.

In an exclusive interview with Forbes, Acton explained his decision to leave Facebook. It was, he said, very much about privacy.

“I sold my users’ privacy to a larger benefit,” Acton said. “I made a choice and a compromise. And I live with that every day.”

Strangely, in defending Facebook’s privacy record, Zuckerberg avoids a recent pro-encryption episode. Last year, Facebook fought—and prevailed—against a US government request to reportedly “break the encryption” in its Facebook Messenger app. Zuckerberg also neglects to mention Facebook’s successful roll-out of optional end-to-end encryption in its Messenger app.

Further, relying so heavily on WhatsApp as a symbol of privacy is tricky. After all, Facebook didn’t purchase the company because of its philosophy. Facebook purchased WhatsApp because it was a threat. 

Facebook’s history of missed promises

Zuckerberg’s statement promises users an entirely new Facebook, complete with end-to-end encryption, ephemeral messages and posts, less intrusive, permanent data collection, and no data storage in countries that have abused human rights.

These are strong ideas. End-to-end encryption is a crucial security measure for protecting people’s private lives, and Facebook’s promise to refuse to store encryption keys only further buttresses that security. Ephemeral messages, posts, photos, and videos give users the opportunity to share their lives on their own terms. Refusing to put data in known human-rights-abusing regimes could represent a potentially significant market share sacrifice, giving Facebook a chance to prove its commitment to user privacy.

But Facebook’s promise-keeping record is far lighter than its promise-making record. In the past, whether Facebook promised a new product feature or better responsibility to its users, the company has repeatedly missed its own mark.

In April 2018, TechCrunch revealed that, as far back as 2010, Facebook deleted some of Zuckerberg’s private conversations and any record of his participation—retracting his sent messages from both his inbox and from the inboxes of his friends. The company also performed this deletion, which is unavailable to users, for other executives.

Following the news, Facebook announced a plan to give its users an “unsend” feature.

But nearly six months later, the company had failed to deliver on its promise. It wasn’t until February of this year that Facebook produced a half-measure: instead of giving users the ability to actually delete sent messages, like Facebook did for Zuckerberg, users could “unsend” an accidental message in the Messenger app within 10 minutes of the initial sending time.

Gizmodo labeled it a “bait-and-switch.”

In October 2016, ProPublica purchased an advertisement in Facebook’s “housing categories” that excluded groups of users who were potentially African-American, Asian American, or Hispanic. One civil rights lawyer called this exclusionary function “horrifying.”

Facebook quickly promised to improve its advertising platform by removing exclusionary options for housing, credit, and employment ads, and by rolling out better auto-detection technology to stop potentially discriminatory ads before they published.

One year later, in November 2017, ProPublica ran its experiment again. Discrimination, again, proved possible. The anti-discriminatory tools Facebook announced the year earlier caught nothing.

“Every single ad was approved within minutes,” the article said.

This time, Facebook shut the entire functionality down, according to a letter from Chief Operating Officer Sheryl Sandberg to the Congressional Black Caucus. (Facebook also announced the changes on its website.)

More recently, Facebook failed to deliver on a promise that users’ phone numbers would be protected from search. Today, through a strange workaround, users can still be “found” through the phone number that Facebook asked them to provide specifically for two-factor authentication.

Away from product changes, Facebook has repeatedly told users that it would commit itself to user safety, security, and privacy. The actual track record following those statements tells a different story, though.

In 2013, an Australian documentary filmmaker met with Facebook’s public policy and communications lead and warned him of the rising hate speech problem on Facebook’s platform in Myanmar. The country’s ultranationalist Buddhists were making false, inflammatory posts about the local Rohingya Muslim population, sometimes demanding violence against them. Riots had taken 80 people’s lives the year before, and thousands of Rohingya were forced into internment camps.

Facebook’s public policy and communications lead, Elliot Schrage, sent the Australian filmmaker, Aela Callan, down a dead end.

“He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” Callan told Reuters.

By November 2017, the problem had exploded, with Myanmar torn and its government engaging in what the United States called “ethnic cleansing” against the Rohingya. In 2018, investigators from the United Nations placed blame on Facebook.

“I’m afraid that Facebook has now turned into a beast,” said one investigator.

During the years before, Facebook made no visible effort to fix the problem. By 2015, the company employed just two content moderators who spoke Burmese—the primary language in Myanmar. By mid-2018, the company’s content reporting tools were still not translated into Burmese, handicapping the population’s ability to protect itself online. Facebook had also not hired a single employee in Myanmar at that time.

In April 2018, Zuckerberg promised to do better. Four months later, Reuters discovered that hate speech still ran rampant on the platform and that hateful posts as far back as six years had not been removed.

The international crises continued.

In March 2018, The Guardian revealed that a European data analytics company had harvested the Facebook profiles of tens of millions of users. This was the Cambridge Analytica scandal, and, for the first time, it directly implicated Facebook in an international campaign to sway the US presidential election.

Buffeted on all sides, Facebook released … an ad campaign. Drenched in sentimentality and barren of culpability, a campaign commercial vaguely said that “something happened” on Facebook: “spam, clickbait, fake news, and data misuse.”

“That’s going to change,” the commercial promised. “From now on, Facebook will do more to keep you safe and protect your privacy.”

Here’s what happened since that ad aired in April 2018.

The New York Times revealed that, throughout the past 10 years, Facebook shared data with at least 60 device makers, including Apple, Samsung, Amazon, Microsoft, and Blackberry. The New York Times also published an investigatory bombshell into Facebook’s corporate culture, showing that, time and again, Zuckerberg and Sandberg responded to corporate crises with obfuscation, deflection, and, in the case of one transparency-focused project, outright anger.

A British parliamentary committee released documents that showed how Facebook gave some companies, including Airbnb and Netflix, access to its platform in exchange for favors. (More documents released this year showed prior attempts by Facebook to sell user data.) Facebook’s Onavo app got kicked off the Apple App Store for gathering user data. Facebook also reportedly paid users as young as 13 years old to install the “Facebook Research” app on their own devices, sideloading it through a distribution mechanism intended strictly for internal, employee-only apps.

Oh, and Facebook suffered a data breach that potentially affected up to 50 million users.

While the substance of Zuckerberg’s promises could protect user privacy, the execution of those promises is still up in the air. It’s not that users don’t want what Zuckerberg is describing—it’s that they’re burnt out on him. How many times will they be forced to hear about another change of heart before Facebook actually changes for good?

Tomorrow’s Facebook

Changing the direction of a multibillion-dollar, international company is tough work, though several experts sound optimistic about Zuckerberg’s privacy roadmap. But just as many experts have depleted their faith in the company. If anything, Facebook’s public pressures might be at their lowest—detractors have removed themselves from the platform entirely, and supporters will continue to dig deep into their own good will.

What Facebook does with this opportunity is entirely under its own control. Users around the world will be better off if the company decides that, this time, it’s serious about change. User privacy is worth the effort.


New research finds hospitals are easy targets for phishing attacks

New research from Brigham and Women’s Hospital in Boston finds hospital employees are extremely vulnerable to phishing attacks. The study highlights just how effective phishing remains as a tactic—the need for defense against and awareness of email scams is more critical than ever.

The research was a multi-center exercise that looked at the results of phishing simulations at six anonymous healthcare facilities in the US. Research coordinators ran phishing simulations for close to seven years and analyzed click rates for more than 2.9 million simulated emails. Results revealed that 422,052 of the simulated phishing emails (14.2 percent) were clicked, a rate of roughly one in seven.

Patient data at risk

Security professionals are acutely aware of the intense scrutiny placed on patient data and the regulatory requirements around HIPAA (Health Insurance Portability and Accountability Act). This new research on phishing in healthcare puts a spotlight on the vulnerability of this kind of data.

“Patient data, patient care, patient trust and financial stability may be on the line,” said study author William Gordon, MD, MBI, of the Brigham’s Division of General Internal Medicine and Primary Care. “Understanding susceptibility, but also what steps can be taken to mitigate it, are critical as cyberattacks continue to rise.”

Odds of clicks decreased with time

There was a positive finding in the study. Researchers noted that clicks on phishing emails went down as institutions ran more campaigns. After institutions had run 10 or more phishing simulation campaigns, the odds of users clicking on fraudulent emails went down by more than one-third.

The findings make the case for solid awareness efforts to educate about the dangers of phishing, said Gordon.

“Things get better over time with awareness, education, and training,” he said. “Our study suggests that while the risk is high, there is an opportunity to mitigate it.”

Healthcare industry struggles with breach rate

Chris Carmody, senior vice president of enterprise technology and services at the University of Pittsburgh Medical Center (UPMC) and president of Clinical Connect Health Information Exchange, noted in an interview with Reuters Health News that phishing is a challenge in an increasingly digital healthcare environment.

“This is definitely a problem in all industries where people rely on e-communications, especially email,” Carmody said in the interview. “And health care is no different. We see clinical users whose primary focus is on patient care, and we’re trying to do our best to help them develop the knowhow to know what to look for so they can identify phishing attempts and report them to us.”

Carmody estimates that his security group at UPMC, which also runs phishing simulations, gets about 7,500 suspect emails forwarded to it each month, with about 12.5 percent of them actually malicious.

But any number puts a healthcare facility at risk, as these kinds of institutions are particularly vulnerable to breach. A separate report from Beazley Breach Response finds that healthcare organizations suffered the highest number of data breaches in 2018 across any sector of the US economy. Healthcare institutions have a 41 percent reported breach rate, the highest of any industry.

Other figures from ratings firm SecurityScorecard find the healthcare industry is one of the lowest ranked industries when it comes to security practices. The report, titled SecurityScorecard 2018 Healthcare Report: A Pulse on The Healthcare Industry’s Cybersecurity Risk, looked at data from 1,200 healthcare entities and ranked healthcare 15th out of 17 industries for overall cybersecurity posture.

The SecurityScorecard report noted the healthcare industry is one of the lowest performing industries in terms of endpoint security, posing a threat to patient data and potentially patient lives. In addition, 60 percent of the most common cybersecurity issues in the healthcare industry relate to poor patching cadence.

Healthcare phishing in the headlines

Healthcare phishing attempts that devastate facilities and lead to patient data leaks regularly make news headlines. In December 2018, an employee of Memorial Hospital at Gulfport, Mississippi was tricked by a phishing scheme, resulting in the breach of 30,000 patients’ data.

The breach was discovered when investigators noticed an unauthorized party had gained access to an employee email account earlier in the month. Among the patient data leaked were emails, names, dates of birth, health data, and information about services patients had received at MHG. Social Security numbers were also leaked for some patients.

Phishing on the rise all over

Massive malware campaigns like Emotet and TrickBot have pushed phishing levels higher this year in many industries. Kaspersky Lab’s most recent Spam and Phishing in 2018 report finds the number of phishing attacks that took place in 2018 more than doubled from the previous year.

Research from Sophos finds that 45 percent of UK businesses were hit by phishing attacks between 2016 and 2018. The study also revealed 54 percent had identified instances of employees replying to unsolicited emails or clicking the links in them.

The Malwarebytes 2019 State of Malware report finds all sectors are impacted by the kind of malware served up in phishing emails. Trojans like Emotet and TrickBot are particularly problematic in education, manufacturing, and retail. While healthcare fared poorly in the Brigham and Women’s study, every vertical is plagued by phishing.

How can businesses defend against phishing attacks?

Of all of the cybersecurity risks to organizations, the human element is always the toughest to mitigate. But, as the healthcare phishing study shows, user awareness does have a positive impact on click rates: the more campaigns an organization launched, the fewer employees fell prey to fake emails.

There are plenty of free awareness and anti-phishing resources available that businesses can tap for training internally. For example, our anti-phishing guide offers suggestions and awareness tips for both employees and customers. And Google has an anti-phishing test you can access online to familiarize users with common phishing techniques. Of course, there are also many companies that offer training products for purchase.

However businesses choose to train employees, it’s important to have regular access to information and tools that promote awareness of evolving phishing techniques. In the healthcare industry, it’s not just about the bottom line—it could actually save lives.

The post New research finds hospitals are easy targets for phishing attacks appeared first on Malwarebytes Labs.

A week in security (March 11 – 17)

Last week on Malwarebytes Labs, we looked at the Lazarus group in our series about APT groups, we discussed the introduction of Payment Service Directive 2 (PSD2) in the EU, we tackled Google’s Nest fiasco, and the launch of Mozilla’s Firefox Send. In addition, we gave you an overview of the pervasive threat, Emotet, and we discussed reputation management in the age of cyberattacks against businesses.

Other security news

  • A new phishing campaign targeting mainly iOS users is asking them to log in with their Facebook account and give away their credentials. The technique the threat actors are using can easily be ported over to scam Android users. (Source: SC Magazine)
  • Iranian hackers have stolen between six and 10 terabytes of data from Citrix. The hack was focused on assets related to NASA, aerospace contracts, Saudi Arabia’s state oil company, and the FBI. (Source: The Inquirer)
  • Up to 150 million users might have downloaded and installed an Android app on their phones that contained a new strain of adware named SimBad. The malicious advertising kit was found inside 210 Android apps that had been uploaded on the official Google Play Store. (Source: ZDNet)
  • The popularity of the Apex Legends game and its absence on the Android Play store have attracted the attention of many malware writers who exploited this opportunity to spread malicious versions for Android. (Source: Security Affairs)
  • A new, insidious malware dubbed GlitchPOS, bent on siphoning credit card numbers from point-of-sale (PoS) systems, has recently been spotted on a crimeware forum. GlitchPOS joins other recently developed malware targeting the retail and hospitality space. (Source: ThreatPost)
  • A partial Facebook outage affecting users around the world and stretching beyond 14 hours is believed to be the biggest interruption ever suffered by the social network. (Source: CNN)
    Telegram reported it received 3 million signups during this Facebook outage. (Source: CNet)
  • A 21-year-old Australian man was arrested after earning over $200,000 from stolen Spotify and Netflix accounts. Allegedly, he sold the stolen accounts through an “account generator” website. (Source: TechSpot)
  • A code execution vulnerability in WinRAR (CVE-2018-20250) generated over a hundred distinct exploits in the first week since its disclosure, and the number of exploits keeps on swelling. (Source: BleepingComputer)
  • A new flaw in the content management software (CMS) WordPress has been discovered that could potentially lead to remote code execution attacks. Users are advised to update to the latest version, which was at 5.1.1 at the time of writing. (Source: The Hacker News)
  • The Chinese authorities are collecting DNA as a means to track their people. And it seems they got unlikely corporate and academic help from the United States. (Source: The New York Times)

Stay safe, everyone!

The post A week in security (March 11 – 17) appeared first on Malwarebytes Labs.

Mozilla launches Firefox Send for private file sharing

Mozilla look to reclaim some ground from the all-powerful Chrome with a new way to send and receive files securely from inside the browser. Firefox Send first emerged in 2017, promising an easy way to send documents without fuss. The training wheels have now come off, and Send is ready for primetime. Will it catch on with the masses, or will only a small, niche group use it to play document tennis?

How does it work?

Firefox Send allows for files up to 1GB to be sent to others via any web browser (2.5GB if you sign in with a Firefox account). The files are encrypted after a key is generated, at which point a URL is created containing said key. You send this URL to the recipient, who is able to then download and access the file securely. Mozilla can’t access the key, as the JavaScript code powering things only runs locally.

Before sending, a number of security settings come into play. You can set the link expiration to include number of downloads, from one to 200, or number of days the link is live (up to seven). Passwords are also available for additional security.
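The reason Mozilla can’t access the key is where the key lives in the link: Send-style services put the key material in the URL fragment (the part after #), which browsers never transmit to the server. A minimal sketch of that property, using a hypothetical share link (the URL and key below are made up for illustration):

```python
from urllib.parse import urlsplit

def server_visible_part(share_url: str) -> str:
    """Return the portion of a URL that a browser actually sends to the server.

    The fragment (everything after '#') stays client-side, which is how
    Send-style services can embed a decryption key in the link without
    the server ever seeing it.
    """
    parts = urlsplit(share_url)
    # Rebuild the request target from scheme/host/path/query only.
    request = f"{parts.scheme}://{parts.netloc}{parts.path}"
    if parts.query:
        request += f"?{parts.query}"
    return request

def client_side_key(share_url: str) -> str:
    """Extract the key material that never leaves the browser."""
    return urlsplit(share_url).fragment

# Hypothetical Send-style link: file ID in the path, key in the fragment.
url = "https://send.example/download/abc123/#secret-key-material"
print(server_visible_part(url))  # https://send.example/download/abc123/
print(client_side_key(url))      # secret-key-material
```

Anyone who obtains the full link can decrypt the file, which is why the expiration and password options above matter.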

It’s not for everyone

The process isn’t 100 percent anonymous, as per the Send privacy page:

IP addresses: We receive IP addresses of downloaders and uploaders as part of our standard server logs. These are retained for 90 days, and for that period, may be connected to activity of a file’s download URL. Although we develop our services in ways that minimize identification, you should know that it may be possible to correlate the IP address of a Send user to the IP address of other Mozilla services with accounts; and if there is a match, this could identify the account email address.

Of course, there may be even less anonymity if you use the service while signed into a Firefox account to make use of the greater send allowance of 2.5GB.

As a result, this might not be something you wish to use if absolute anonymity is your primary concern.

Who is likely to make use of this?

Send is for situations where you need to get an important file to someone but:

  1. The recipient isn’t massively tech-savvy. If you’re dealing with applications involving a drip feed of documents over time, this can get messy. Eventually, the person at the other end will have had enough of multiple AES-256 encrypted zip files hosted on Box where the password never seems to work, or they don’t have the right zip tool to extract the file. Send will simplify that process.
  2. The person at the other end is tech-savvy. However, they’re not necessarily aware that sending bank details or passport photos in plaintext emails is a bad idea.

A Mozilla project manager mentioned issues involving visa-related documents in the cloud, and this is definitely where a service like Send can flourish. Multiple uploads over time usually end up in a game of “hunt the files.” Did you delete everything? Maybe you should leave some of it online in case a problem arises? Are the files really gone if you delete them all, or is it as simple as flipping a “Whoops, didn’t mean it” switch and watching them all come back?

These are real-world, practical problems that people run into on a daily basis. The duct tape, multiple service/program approach works up to a point—and then it doesn’t. Firefox Send is perhaps a bit niche, but there’s nothing wrong with that. Not everyone is a fan of leaving important documents scattered across Google Drive or Dropbox, and this is a handy alternative. We’ll have to see what impact this product has long-term, but having more privacy options available is never a bad thing.

The post Mozilla launches Firefox Send for private file sharing appeared first on Malwarebytes Labs.

Reputation management in the age of cyberattacks against businesses

Avid readers of the Malwarebytes Labs blog would know that we strive to prepare businesses of all sizes for the inevitability of cyberattacks. From effectively training employees about basic cybersecurity hygiene to guiding organizations in formulating an incident response (IR) program, a cybersecurity policy, and introducing an intentional culture of security, we aim to promote proactive prevention.

However, there are times when organizations need to be reactive. One of these involves business reputation management (BRM), a buzzword for the practice of ensuring that organizations always put their best foot forward, online and offline, by constantly monitoring and managing the information and communications that shape public perception. This is a process executives must not overlook, especially when the company has found itself at the center of a media storm after disclosing a cybersecurity incident that has potentially affected millions of its customers.

In this post, we look at why companies of all sizes should have such a system in place by having a refresher on what forms a reputation and how much consumer trust and loyalty have evolved. We’ll also show you what proactive and reactive BRM would look like before, during, and after a cybersecurity fallout.

Reputation, like beauty, is in the eye of the beholder

A company’s reputation—how clients, investors, employees, suppliers, and partners perceive it—is its most valuable, intangible asset. Gideon Spanier, Global Head of Media at Campaign, has said in his Raconteur piece that it is built on three things: what you say, what you do, and what others say about you when you’re not in the room. Because of the highly digitized and networked world we live in, the walls of this room have become imaginary, with everyone now hearing what you have to say.

Looking up organizations and brands online has become part of a consumer’s decision-making process, so having a strong, positive online presence is more important than ever. But given that only 15 percent of executives are addressing the need to manage their business’s reputation, there’s clearly work to be done.

Consumer trust and loyalty evolved

Brand trust has grown up. Before, we relied on word of mouth—commendations and condemnations alike—from friends and family, the positivity or the negativity of our own and others’ experiences about a product or service, and endorsements from someone we look up to (like celebrities and athletes). Nowadays, many of us tend to believe what strangers say about a brand, product, or service; read the news about what is going on with institutions; and follow social media chatter about them.

The relationship between consumer trust and brand reputation has changed as well. While mainstream names are still favored over new or unfamiliar brands (even if they offer a similar product or service at a cheaper cost), connected consumers have learned the value of their data. Not only do they want their needs met, but they also expect companies to take care of them—and by extension, the information they choose to give away—so they can feel safe and happy.

Of course, with trust comes loyalty. In its report, The Company behind the Brand: In Reputation We Trust [PDF], global PR firm Weber Shandwick found that consumers in the UK tend to associate themselves with a product; if the company producing that product falls short of what is expected of it, they bail in search of a better one, usually offered by a competing brand. It’s not hard to imagine the same reaction from consumers in the United States in the context of customer data stolen in a company-wide data breach.

Business reputation management in action

Finding their business in the crosshairs of threat actors is no longer just a possibility, but something executives should always be prepared for. The good news is that it is possible to protect your business reputation from these risks.

In this section, we outline what businesses can do in three phases—before, during, and after an attack—illustrated with a real-world scenario, to give organizations an idea of how they can formulate a game plan to manage their reputation now or in the future. Note that we have framed our pointers in the context of cybersecurity and privacy incidents.

Before an attack: Be prepared for a breach

  • Identify and secure your company’s most sensitive data. This includes intellectual property (IP) and your customers’ personally identifiable information (PII).
  • Back up your data. We have a practical guide for that.
  • Patch everything. It may take a while, and it may cause some disruption, but it’ll be worth it.
  • Educate employees on basic data security measures, social engineering tactics, and how to identify red flags of a potential breach.
  • Put together a team of incident responders. That is, if the company has decided to handle incidents in-house. If this is the case:
    • Provide them the tools they will need for the job.
    • Train them on how to use these tools and on established processes of proper evidence collection and storage.
  • Create a data breach response plan. This is a set of actions an organization takes to quickly and effectively address a security or privacy incident. Sadly, according to PwC’s 2018 Global Economic Crime and Fraud Survey, only 30 percent of companies have this plan in place.
    • Once created, make sure that all internal stakeholders—your employees, executives, business units, investors, and B2B contacts—are informed about this plan, so they know what to do and what to expect.
  • Learn the security breach notification laws in the state your business is based in. Make sure that your company complies with the legislation.
  • Establish an alert and follow-through process. This includes maintaining a communication channel that is accessible 24/7. In the event of an attack, internal stakeholders must be informed first.
  • On a similar note, create a notification process. Involve relevant key departments, such as marketing and legal, in coming up with what to say to customers (if the breach involves PII theft), regulators, and law enforcement, and how to best notify them.
  • Depending on the nature of your company and the assets that may be affected by a breach, prepare a list of special services your company can offer to affected clients. For example, if your company stores credit card information, you can provide identity protection, with a contact number clients can call to sign up for the service. This is what Home Depot did when it was breached in 2014.

Read: How to browse the Internet safely at work

During an attack: Be strategic

  • Keep internal stakeholders updated on developments and steps your company has taken to mitigate and remedy the severity of the situation. Keep phone lines open, but it would be more efficient to send periodic email updates. Create a timeline of events as you go along.
  • Identify and document the following information and evidence as much as you can, as these are needed when the time comes to notify clients and the public about the breach:
    • Compromised systems, assets, and networks
    • Patient zero, or how the breach happened
    • Information in affected machines that has been disclosed, taken, deleted, or corrupted.
  • If your company has a blog or a page where you can post company news, draft an account of the events from start to finish, along with what you plan to do in the weeks following the breach. Be transparent and effective. This is a good opportunity to show clients that the company is not just talking the talk but also walking the walk. The Chief Marketing Officer (CMO) should take the lead on this.

After an attack: Be excellent to your stakeholders

  • Notify your clients and other entities that may have been affected by the breach.
    • Put out the company news or blog post the company has drafted about the cybersecurity incident.
    • Send out breach notifications via email, linking back to the blog, and social media.
  • Prepare to receive questions from clients and anyone who is interested in learning more about what happened. Expect to have uncomfortable conversations.
  • Offer additional services to your clients, which you have already thought out and prepared for in the first phase of this BRM exercise.
  • Continue accepting and addressing concerns and questions from clients for an extended period after the incident.
  • Implement new processes and use new products based on post-incident discussions to reduce the likelihood of future breaches.
  • Rejuvenate stakeholders’ confidence and trust by focusing on breach preparedness, containment, and mitigation strategies as proof of the company’s commitment to its clients. This can turn the stigma of data breaches on its head. Remember that a breach can happen to any company in any industry. How the company acted before, during, and after the incident is what will be remembered. So use that to your advantage.
  • Audit the information your company collects and stores to see if you have data that is not strictly needed to fulfill your product and service obligations to clients. The logic is simple: the less data you keep about customers, the less data is at risk. Make sure that all your stakeholders, especially your customers, know which data you will no longer be collecting and storing.
  • Recognize the hard work of your employees and reward them for it. Yes, they’re your stakeholders, too, and shouldn’t be forgotten, especially in the aftermath of a cybersecurity incident.

Business reputation management is the new black

Indeed, businesses remain a favorite target of today’s threat actors and nation-states. It’s the new normal at this point—something that many organizations are still choosing to deny.

Knowing how to manage your business’s reputation is seen as a competitive advantage. Sure, it’s one thing to know how to recover from a cybersecurity incident. But it’s quite another to know what to do to keep the brand’s image intact amidst the negative attention and what to say to those who have been affected by the attack—your stakeholders—and to the public at large.

The post Reputation management in the age of cyberattacks against businesses appeared first on Malwarebytes Labs.

Emotet revisited: pervasive threat still a danger to businesses

One of the most common and pervasive threats for businesses today is Emotet, a banking Trojan turned downloader that has been on our list of top 10 detections for many months in a row. Emotet, which Malwarebytes detects as Trojan.Emotet, has been leveled at consumers and organizations across the globe, fooling users into infecting endpoints through phishing emails, and then spreading laterally through networks using stolen NSA exploits. Its modular, polymorphic form, and ability to drop multiple, changing payloads have made Emotet a thorn in the side of cybersecurity researchers and IT teams alike.

Emotet first appeared on the scene as a banking Trojan, but its effective combination of persistence and network propagation has turned it into a popular infection mechanism for other forms of malware, such as TrickBot and Ryuk ransomware. It has also earned a reputation as one of the hardest-to-remediate infections once it has infiltrated an organization’s network.

Emotet Graph

Emotet detections March 12, 2018 – February 23, 2019

In July 2018, the US Department of Homeland Security issued a Technical Alert through CISA (the Cybersecurity and Infrastructure Security Agency) about Emotet, warning that:

“Emotet continues to be among the most costly and destructive malware affecting SLTT governments. Its worm-like features result in rapidly spreading network-wide infection, which are difficult to combat. Emotet infections have cost SLTT governments up to $1 million per incident to remediate.”

From banking Trojan to botnet

Emotet started out in 2014 as an information-stealing banking Trojan that scoured infected systems for sensitive financial information (which is why Malwarebytes detects some components as Spyware.Emotet). However, over time Emotet and its business model evolved, switching from a singular threat leveled at specific targets to a botnet that distributes multiple malware payloads to industry verticals ranging from governments to schools.

Emotet was designed to be modular, with each module having a designated task. One of its modules is a Trojan downloader that downloads and runs additional malware. At first, Emotet started delivering other banking Trojans on the side. However, its modular design made it easier for its authors—a group called Mealybug—to adapt the malware or swap functionality between variants. Later versions began dropping newer and more sophisticated payloads that held files for ransom, stole personally identifiable information (PII), spammed other users with phishing emails, and even cleaned out cryptocurrency wallets. All of these sidekicks were happy and eager to make use of the stubborn nature of this threat.

Infection mechanism

We have discussed some of the structure and flow of Emotet’s infection vectors in detail here and here by decoding an example. What most Emotet variants have in common is that the initial infection mechanism is malspam. At first, infections were initiated from JavaScript files attached to emails; later (and still true today), infection came via weaponized Word documents that downloaded and executed the payload.

A considerable portion of Emotet malspam is generated by the malware’s own spam module that sends out malicious emails to the contacts it finds on an infected system. This makes the emails appear as though they’re coming from a known sender. Recipients of email from a known contact are more likely to open the attachment and become the next victim—a classic social engineering technique.
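Defenders sometimes counter this with a coarse triage heuristic (our own illustration, not something described in the article): flag mail whose display name matches a known contact while the underlying address does not, since Emotet-style malspam often borrows a victim’s name while sending from an unrelated, compromised account.

```python
from email.utils import parseaddr

def looks_spoofed(from_header: str, known_contacts: dict) -> bool:
    """Flag a From: header whose display name matches a known contact
    but whose actual address differs from the one on file.

    known_contacts maps display names to their expected email addresses.
    """
    name, addr = parseaddr(from_header)
    expected = known_contacts.get(name)
    return expected is not None and addr.lower() != expected.lower()

contacts = {"Jane Doe": "jane.doe@example.com"}
print(looks_spoofed('"Jane Doe" <jane.doe@example.com>', contacts))      # False
print(looks_spoofed('"Jane Doe" <xk42@compromised.example>', contacts))  # True
```

This check is easy to evade and misfires when contacts legitimately change addresses, so treat it as a triage signal, not a verdict.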

Besides spamming other endpoints, Emotet also propagates using EternalBlue, the exploit stolen from the NSA and released by the Shadow Brokers group. This functionality allows the infection to spread laterally across a network of unpatched systems, which makes it even more dangerous to businesses that have hundreds or thousands of endpoints linked together.

Difficult to detect and remove

Emotet has several methods for maintaining persistence, including auto-start registry keys and services, and it uses modular Dynamic Link Libraries (DLLs) to continuously evolve. Because Emotet is polymorphic and modular, it can evade typical signature-based detection.

In fact, not only is Emotet difficult to detect, but also to remediate.

A major factor that frustrates remediation is the aforementioned lateral movement via EternalBlue. This particular exploit requires that admins follow a strict policy of isolating infected endpoints from the network, patching, disabling administrative shares, and ultimately removing the Trojan before reconnecting to the network. Otherwise, cleaned endpoints will simply be re-infected over and over by infected peers.

Add to that mix the ongoing development of new capabilities, including the ability to be VM-aware, evade spam filters, and uninstall security programs, and you’ll begin to understand why Emotet is every network administrator’s worst nightmare.

Recommended remediation steps

An effective, though time-consuming method for disinfecting networked systems has been established. The recommended steps for remediation are as follows:

  • Identify the infected systems by looking for Indicators of Compromise (IOCs)
  • Disconnect the infected endpoints from the network. Treat systems where you have even the slightest doubt as infected.
  • Patch the system for EternalBlue. Patches for many Windows versions can be found through this Microsoft Security Bulletin about MS17-010.
  • Disable administrative shares, because Emotet also spreads itself over the network through default admin shares. TrickBot, one of Emotet’s trusty sidekicks, also uses the Admin$ shares once it has brute-forced the local administrator password. A file share server has an IPC$ share that TrickBot queries to get a list of all endpoints that connect to it.
  • Scan the system and clean the Emotet infection.
  • Change account credentials, including all local and domain administrator passwords, as well as passwords for email accounts to stop the system from being accessible to the Trojan.
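For the “disable administrative shares” step, one common approach on Windows is to stop the Server service from recreating the default shares via the LanmanServer AutoShare registry values. A sketch of that change (verify the implications for your environment first; the IPC$ share is not controlled by these values, and the change takes effect after the Server service restarts):

```reg
Windows Registry Editor Version 5.00

; Stop Windows from recreating the default admin shares (C$, ADMIN$).
; AutoShareWks applies to workstations, AutoShareServer to servers.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"AutoShareWks"=dword:00000000
"AutoShareServer"=dword:00000000
```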


Obviously, it’s preferable for businesses to avoid Emotet infections in the first place, as remediation is often costly and time-consuming. Here are some things you can do to prevent getting infected with Emotet:

  • Educate users: Make sure end users are aware of the dangers of Emotet and know how to recognize malspam—its primary infection vector. Train users on how to detect phishing attempts, especially those that are spoofed or more sophisticated than, say, the Nigerian Prince.
  • Update software regularly: Applying the latest updates and patches reduces the chances of Emotet infections spreading laterally through networks via EternalBlue vulnerabilities. If not already implemented, consider automating those updates.
  • Limit administrative shares: Keep them to the absolute minimum for Emotet damage control.
  • Use safe passwords: Yes, it really is that important to use unique, strong passwords for each online account. Investigate, adopt, and roll out a single password manager for all of the organization’s users.
  • Back up files: Some variants of Emotet also download ransomware, which can hold now-encrypted files hostage, rendering them useless unless a ransom is paid. Since we and the FBI recommend never paying the ransom—as it simply finances future attacks and paints a target on an organization’s back—having recent and easy-to-deploy backups is always a good idea.



File in the startup folder

%appdata%\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\[Randomname].LNK

Registry keys

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services {Random Hexadecimal Numbers}
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run {Random Names} with value c:\users\admin\appdata\roaming\{Random}{Legitimate Filename}.exe

Filename examples




Subject Filters

“UPS Ship Notification, Tracking Number”
“UPS Express Domestic”
“Tracking Number *”

Trick to check whether a UPS tracking number is real: a legitimate UPS tracking number contains 18 alphanumeric characters, starts with ‘1Z’, and ends with a check digit.

A number matching this format may still be false, but one that doesn’t match is certainly not real.
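That format rule translates directly into a quick sanity check. A small sketch (the regex is our own encoding of the rule above, and the sample numbers are made up for illustration):

```python
import re

# Format described above: 18 characters, '1Z' prefix, trailing check digit.
# We validate shape only; computing the actual check digit is out of scope.
UPS_1Z = re.compile(r"^1Z[A-Z0-9]{15}[0-9]$")

def plausible_ups_tracking(number: str) -> bool:
    """True if the string matches the 1Z tracking-number shape.

    A match can still be fake; a non-match is certainly not a real
    1Z-style UPS tracking number.
    """
    return bool(UPS_1Z.fullmatch(number.strip().upper()))

print(plausible_ups_tracking("1Z12345E0205271688"))  # True  (18 chars, 1Z prefix)
print(plausible_ups_tracking("1Z12345E020527168"))   # False (too short)
print(plausible_ups_tracking("UPS-TRACK-0001"))      # False (wrong shape)
```

A check like this can be wired into mail-filter rules to flag “UPS” subject lines whose embedded tracking numbers don’t even match the shape.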

The post Emotet revisited: pervasive threat still a danger to businesses appeared first on Malwarebytes Labs.

Google’s Nest fiasco harms user trust and invades their privacy

Technology companies, lawmakers, privacy advocates, and everyday consumers likely disagree about exactly how a company should go about collecting user data. But, following a trust-shattering move by Google last month regarding its Nest Secure product, consensus on one issue has emerged: Companies shouldn’t ship products that can surreptitiously spy on users.

Failing to disclose that a product can collect information from users in ways they couldn’t have reasonably expected is bad form. It invades privacy, breaks trust, and robs consumers of the ability to make informed choices.

While collecting data on users is nearly inevitable in today’s corporate world, secret, undisclosed, or unpredictable data collection—or data collection abilities—is another problem.

A smart-home speaker shouldn’t be secretly hiding a video camera. A secure messaging platform shouldn’t have a government-operated backdoor. And a home security hub that controls an alarm, keypad, and motion detector shouldn’t include a clandestine microphone feature—especially one that was never announced to customers.

And yet, that is precisely what Google’s home security product includes.

Google fumbles once again

Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. Following the update, users could simply utter “Hey Google” to access voice controls on the product line-up’s “Nest Guard” device.

The main problem, though, is that Google never told users that its product had an internal microphone to begin with. Nowhere inside the Nest Guard’s hardware specs, or in its marketing materials, could users find evidence of an installed microphone.

When Business Insider broke the news, Google fumbled ownership of the problem: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” a Google spokesperson said. “That was an error on our part.”

Customers, academics, and privacy advocates balked at this explanation.

“This is deliberately misleading and lying to your customers about your product,” wrote Eva Galperin, director of cybersecurity at Electronic Frontier Foundation.

“Oops! We neglected to mention we’re recording everything you do while fronting as a security device,” wrote Scott Galloway, professor of marketing at the New York University Stern School of Business.

The Electronic Privacy Information Center (EPIC) spoke in harsher terms: Google’s disclosure failure wasn’t just bad corporate behavior, it was downright criminal.

“It is a federal crime to intercept private communications or to plant a listening device in a private residence,” EPIC said in a statement. In a letter, the organization urged the Federal Trade Commission to take “enforcement action” against Google, with the hope of eventually separating Nest from its parent. (Google purchased Nest in 2014 for $3.2 billion.)

Days later, the US government stepped in. The Senate Select Committee on Commerce sent a letter to Google CEO Sundar Pichai, demanding answers about the company’s disclosure failure. Whether Google was actually recording voice data didn’t matter, the senators said, because hackers could still have taken advantage of the microphone’s capability.

“As consumer technology becomes ever more advanced, it is essential that consumers know the capabilities of the devices they are bringing into their homes so they can make informed choices,” the letter said.

This isn’t just about user data

Collecting user data is essential to today’s technology companies. It powers Yelp recommendations based on a user’s location, product recommendations based on an Amazon user’s prior purchases, and search results based on a Google user’s history. Collecting user data also helps companies find bugs, patch software, and retool their products to their users’ needs.

But some of that data collection is visible to the user. And when it isn’t, it can at least be learned by savvy consumers who research privacy policies, read tech specs, and compare similar products. Other home security devices, for example, advertise the ability to trigger alarms at the sound of broken windows—a functionality that demands a working microphone.

Google’s failure to disclose its microphone prevented even the most privacy-conscious consumers from knowing what they were getting in the box. It is nearly the exact opposite of the approach that rival home speaker maker Sonos took when it installed a microphone in its own device.

Sonos does it better

In 2017, Sonos revealed that its newest line of products would eventually integrate with voice-controlled smart assistants. The company opted for transparency.

Sonos updated its privacy policy and published a blog about the update, telling users: “The most important thing for you to know is that Sonos does not keep recordings of your voice data.” Further, Sonos eventually designed its speaker so that, if an internal microphone is turned on, so is a small LED light on the device’s control panel. These two functions cannot be separated—the LED light and the internal microphone are hardwired together. If one receives power, so does the other.

While this function has upset some Sonos users who want to turn off the microphone light, the company hasn’t budged.

A Sonos spokesperson said the company values its customers’ privacy because it understands that people are bringing Sonos products into their homes. Adding a voice assistant to those products, the spokesperson said, resulted in Sonos taking a transparent and plain-spoken approach.

Now compare this approach to Google’s.

Consumers purchased a product that they trusted—quite ironically—with the security of their homes, only to realize that, by purchasing the product itself, their personal lives could have become less secure. This isn’t just a company failing to disclose the truth about its products. It’s a company failing to respect the privacy of its users.

A microphone in a home security product may well be a useful feature that many consumers will not only endure but embrace. In fact, internal microphones are available in many competitor products today, proving their popularity. But a secret microphone installed without user knowledge instantly erodes trust.

As we showed in our recent data privacy report, users care a great deal about protecting their personal information online and take many steps to secure it. To win over their trust, businesses need to responsibly disclose features included in their services and products—especially those that impact the security and privacy of their customers’ lives. Transparency is key to establishing and maintaining trust online.

The post Google’s Nest fiasco harms user trust and invades their privacy appeared first on Malwarebytes Labs.

Explained: Payment Service Directive 2 (PSD2)

Payment Service Directive 2 (PSD2) is the implementation of a European guideline designed to further harmonize money transfers inside the EU. The ultimate goal of this directive is to make payments across borders as easy as transferring money within the same country. Since the EU was set up to diminish the borders between its member states, this makes sense. The implementation offers a legal framework for all payments made within the EU.

After the introduction of PSD in 2009, and with the Single Euro Payments Area (SEPA) migration completed, the EU introduced PSD2 on January 13, 2018. However, this new harmonizing plan came with a catch: new online payment and account information services provided by third parties, such as financial institutions, which need to be able to access the bank accounts of EU users. While they first need to obtain users’ consent to do so, we all know consent is not always freely given, or given with a full understanding of the implications. Still, it must be noted: Nothing will change if you don’t give your consent, and you are not obliged to do so.

Which providers

Before these institutions are allowed to ask for consent, they have to be authorized and registered under PSD2. The PSD2 already sets out information requirements for the application as payment institution and for the registration as account information services provider (AISP). The European Banking Authority (EBA) published guidelines on the information to be provided by applicants intending to obtain authorization as payment and electronic money institutions, as well as to register as an AISP.

From the pages of the Dutch National Bank (De Nederlandsche Bank):

“In this register are also (foreign) Account information service providers based upon the European Passport. These Account information service providers are supervised by the home supervisor. Account information service providers from other countries of the European Economic Area (EEA) could issue Account information services based upon the European Passport through an Agent in the Netherlands. DNB registers these agents of foreign Account information service providers without obligation to register. The registration of these agents are an extra service to the public. However the possibility may exist that the registration of incoming agents differs from the registration of the home supervisor.”

So, an AISP can obtain a European Passport to conduct its services across the entire EU, while only being obligated to register in its country of origin. And even though the European Union is supposed to be equal across the board, the reality is, in some countries, it’s easier to worm yourself into a comfortable position than in others.

Access to bank account = more services

Wait a minute. What exactly does all of this mean? Third parties often live under a separate set of rules and are not always subject to the same scrutiny. (Case in point: AISPs can move to register in “easier” countries and get away with much more.) So while that offers an AISP better flexibility to provide smooth transfer services, it would also allow those payment institutions to offer new services based on their view into your bank account. That includes a wealth of information, such as:

  • How much money is coming into and out of the account each month
  • Spending habits: what you spend money on and where you spend it
  • Payment habits: Do you pay bills well ahead of the deadline, or are you often late?

AISPs can check your balance, request your bank to initiate a payment (transfer) on your behalf, or create a comprehensive overview of your balances for you.

Simple example: There is an AISP service that keeps tabs on your payments and income and shows you how much you can spend freely until your next payment is expected to come in. This is useful information to have when you are wondering if you can make your money last until the end of the month if you buy that dress.
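
The arithmetic behind such a service is simple once it can read the account. A hypothetical sketch (the function name and figures are our own; a real AISP would model recurring transactions rather than take a flat list of bills):

```python
def free_to_spend(balance: float, expected_bills: list[float]) -> float:
    """What an account-information service might show as 'safe to spend':
    the current balance minus bills expected before the next income arrives.
    Illustrative only -- real services infer the bills from transaction history."""
    return balance - sum(expected_bills)

# EUR 1,200 in the account, with rent and utilities still due this month
print(free_to_spend(1200.00, [800.00, 120.00]))  # 280.0
```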

However, imagine this information in the hands of a commercial party that wants to sell you something. They would be able to figure out how much you are spending with their competitors and make you a better offer. Or pepper you with ads tailored to your spending habits. Is that a problem? Yes, because why did you choose your current provider in the first place? Better service or product? Customer friendliness? Exactly what you needed? In short, the competitor might use your information to help themselves, and not necessarily you.

What is worrying about PSD2?

Consumer consent is a good thing. But if we can learn from history, as we should, it will not be too long before consumers are being tricked into clicking a big green button that gives a less trustworthy provider access to their banking information. Maybe they don’t even have to click it themselves. We can imagine Man-in-the-Middle attacks that sign you up for such a service.

Any offer of a service that requires your consent to access banking information should be carefully examined. How will AISPs that work for free make money? Likely by advertising to you or selling your data.

And then there is the possibility of “soft extortion”: a mortgage provider, say, that won’t do business with you unless you give it access to your banking information, or that offers you a better deal if you do.

In all of these scenarios, consent was given in one way or another, but is the deal really all that beneficial for the customer?

What we’d like to see

Some of the points below may already be under consideration in some or all of the EU member states, but we think they offer a good framework for the implementation of these new services.

  • We only want AISPs that work for the consumer and not for commercial third parties. Ideally, the consumer would pay the AISP for its services, so that the abuse and misuse that come with free-product business models do not take place.
  • AISPs that want to do business in a country should be registered in that country, as well as in other countries where they want to do business.
  • AISPs should be constantly monitored, with the option to revoke their license if they misbehave. Note that GDPR already requires companies to delete data after services have stopped or when consent is withdrawn.
  • Access to banking information should not be used as a requirement for unrelated business models, or be traded for a discount on certain products.
  • GDPR regulations should be applied with extra care in this sensitive area. Some data- and privacy-related bodies have already expressed concerns about the discrepancies between GDPR and PSD2, even though they come from the same source.
  • An obligatory double-check by the AISP, through another medium, that the customer signed up of their own free will, with a cooling-off period during which they can withdraw their permission.

Would anyone consent to PSD2 access?

For the moment, it’s hard to imagine a reason for allowing another financial institution or other business access to personal banking information. But despite the obvious red flags, it’s possible that people might be convinced with discounts, denials of service, or appealing benefits to give their consent.

And some of our wishes could very well be implemented as some kinks are still being ironed out. The Dutch Data Protection Authority (DPA) has pointed out that there are discrepancies between GDPR and PSD2 and expressed their concern about them. The DPA acknowledges this in their recommendation on the Implementation Act, and most recently in the Implementation Decree.

In both recommendations, the DPA concludes, in essence, that the GDPR has not been adequately taken into consideration in the course of the Dutch implementation of PSD2. The same may happen in other EU member states. Of course, the financial world tells us that licenses will not be issued to just anybody, but the public has not entirely forgotten the global 2008 banking crisis.

On top of that, there are major lawsuits in progress against insurance companies and other companies that sold products constructed in a way the general public could not possibly understand. These products are now considered misleading, and some even fraudulent. To put it mildly, the trust of the European public in financials is not high at the moment.

And we are not just looking at traditional financials.

Did you know that Google has obtained an eMoney license in Lithuania and that Facebook did the same in Ireland?

Are you worried now? To be fair, all of these concerns have been raised before, and the general consensus is that the regulations are strict enough to ensure that PSD2 will only admit trustworthy partners that have been vetted and will be monitored by the authorities.

Nevertheless, you can rest assured that we will keep an eye on this development. When the time comes that PSD2 is introduced to the public, it might also turn out to be a subject that phishers are interested in. We can already imagine the “Thank you for allowing us to access your bank account; click here to revoke permission” email buried in junk mail.

Stay safe, everyone!

The post Explained: Payment Service Directive 2 (PSD2) appeared first on Malwarebytes Labs.