Vital infrastructure: Threats target financial institutions, fintech, and cryptocurrencies

News of a malware attack on software and services firm Wolters Kluwer caused a “quiet panic” in the accounting world this week, solidifying our assertion that financial institutions—from banks to brokers—are part of the vital infrastructure of society.

According to its website, Wolters Kluwer provides software and services to all of the top 100 accounting firms in the United States, 90 percent of the top global banks, and 93 percent of Fortune 500 companies. With many of its tax, accounting, and vital storage services down since Monday, employees and customers have been unable to access data during a busy filing period (taxes for non-profits are due May 15).

It is unknown at this time whether personally identifiable information was taken in the attack, or whether the infection spread to any of Wolters Kluwer’s customers. The company released a statement saying it had no reason to believe either was true, but that the investigation is still ongoing.

In the meantime, communication with the firm is spotty and day-to-day work operations have been impeded. Up against a deadline, some accountants are having to complete tax returns for their clients by hand.

And that’s just one attack on one firm.

When we lose trust in our financial institutions, it turns our society upside down. When paper money is no longer worth the number printed on it, or you cannot withdraw funds from your account, that rattles the foundations of our economy. And in a capitalist society, that means nearly everything changes.

Whether attacks empty our accounts or expose our data, how can we feel comfortable investing in the future? And if everyone turned their back on financial institutions because of lack of security, what would happen? Would we have to turn back the clocks to a more primitive age when currency was forged from precious metals or we bartered for goods?

Financial institutions

For further discussion, it makes sense to define what we consider to be financial institutions. Also called financials or banking institutions, they are the corporations that act as intermediaries of financial markets or money management. These businesses can be:

  • Banks
  • Insurance companies
  • Stock traders and other brokers
  • Pension funds
  • Mortgage companies
  • Digital currency markets
  • Accounting firms

The digital era

Not only has the digital world introduced new financial institutions, it has also changed the way existing ones work. The hardware and software used in the financial world is generally referred to as fintech. Needless to say, fintech attracts special interest from the malware community at large. In fact, as we’ve already mentioned on Labs, 25 percent of all malware targets financial institutions.

Banks were more or less forced to develop new standards and technologies to keep up with modern demands. It is no longer acceptable to wait days for a money transfer to clear when we can purchase goods online and receive them at home a day later.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) has taken up an important role in establishing quick and secure money transfers. SWIFT aims not only to enable speedy identification, but also to eliminate errors and omissions in payment data, such as missing or incorrect beneficiary information or incomplete regulatory information.

In addition, there are banks that only exist online and use no brick-and-mortar branches at all. These banks use websites and apps only to provide their customers with the means to make transactions. This makes them and their customers possible targets for fake malicious websites or malware that takes advantage of vulnerabilities in apps. But the same is true nowadays for most older, established banks. They have all set up a digital infrastructure to keep up with the competition—and all of that infrastructure is open to attack.

Old school malware

Some of the oldest malware around the block was created to target financial institutions. However, calling it old school does not mean this malware is no longer effective. While many families have been around for years, they are under constant development to keep up with the latest methods of distribution and gathering financial data.

Banking Trojans are one of the first forms of malware that come to mind when considering which threats target our financial institutions. Nothing frightens us more than threat actors who can get ahold of enough personal information to clean out our bank accounts. Famous banking Trojan families include:

  • Emotet was originally designed as a banking Trojan that attempted to sneak onto computers and steal sensitive, private information. Later versions of this malicious software saw the addition of spamming and malware delivery services—including other banking Trojans and cryptowallet stealers.
  • Ursnif is one of the most popular forms of information-stealing malware targeting Windows PCs, and it has existed in one form or another since at least 2007.
  • Zeus has been around in many forms for a long time and has a wide variety of offspring. This is because its source code was leaked in 2011, and many other threat actors have built on it since.
  • Kronos was first discovered in 2014 and quickly made a name for itself as an adept malware capable of stealing credentials and using web injects for banking websites. It is also believed to be marketed and rebranded as Osiris.

But PCs are not the only target modern threat actors are after. Odinaff is the name generally used for a malware strain that performs targeted attacks on SWIFT software to inject fraudulent money transactions. In February 2016, attackers successfully stole $81 million from Bangladesh Bank using custom malware that allowed them to hack into the bank’s SWIFT software, transfer money into their own accounts, and hide their tracks.

Also, with the introduction of banking apps, we saw the simultaneous introduction of Android banking malware. For example, take Gustuff, a Trojan equipped with web fakes designed to target Android app users of many top international banks. This mobile malware is also after crypto services, fintech companies’ Android programs, marketplace apps, online stores, payment systems, and messengers.


It’s not just consumers who are wary of being robbed by malware authors. So are many traders in digital currencies—and for good reason. Many of them have been robbed. Other trading platforms have been accused of exit scams, where the transactions are frozen in the platforms’ intermediate account under false pretenses, and eventually all the funds are funneled into the account of the perpetrator(s).

Cryptocurrencies have also introduced new types of crime. Blockchain technology allows threat actors to perform a Sybil attack, in which an attacker controls enough nodes to sway any decision that requires majority consensus on the blockchain. Some networks make this easier than others because they are small or because they use only selected nodes as public peers. Electrum, for example, was confronted with malicious versions of its wallet after attackers DDoS’ed legitimate nodes, forcing older clients to connect to malicious ones.

Exploit kits

Banking malware and exploit kits have a long-standing relationship. Traditionally, exploit kits like RIG have been involved in the distribution of banking Trojans and other information stealers. EKs make their way onto machines via malvertising, malspam, drive-by downloads, or as part of a Trojan-turned-downloader such as Emotet, helping to spread malware laterally throughout networks.

Banks are a target for malware dropped by exploit kits not only because they are the shortest route to the money, but also because disrupting the financial sector of a country or region could be a useful card to play in a game of cyberwar.

APTs against banks

It is rare for an APT attack against a bank to be discovered, but Carbanak is probably the most famous—and successful—example. Even Carbanak was not particularly advanced, as no zero-days were used, although it was rather persistent.

The threat actors behind Carbanak managed to steal as much as $1 billion from roughly 100 banks. They did this by infecting the banks’ systems with spyware using spear phishing techniques. By analyzing the data the spyware sent back (screenshots and keylogger logs), they learned enough to take over a bank’s systems, create fake accounts with large balances, manipulate SWIFT transactions, and manipulate ATM payouts. A single attack could last a few months and involved the use of many money mules.


Phishing

The type of phishing we see most is an email supposedly from a bank, asking us to log in to perform some urgent action—resetting passwords and verifying account information are common requests. But the links provided in the email lead to a malicious copy of the bank’s website set up by the threat actor. If the victim logs in there, the threat actor can use the captured credentials to perform unauthorized withdrawals to an account under their control.
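
The defensive habit here is simple enough to automate: never trust the link text, check the actual hostname. A minimal Python sketch of that check (the bank domains below are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hostnames the bank actually uses.
TRUSTED_HOSTS = {"examplebank.com", "login.examplebank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's full hostname matches a trusted host.

    Comparing the entire hostname (not a substring) defeats lookalike
    URLs such as https://login.examplebank.com.evil.net/reset.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_HOSTS

print(is_trusted_link("https://login.examplebank.com/reset"))     # True
print(is_trusted_link("https://examplebank.com.evil.net/reset"))  # False
```

Real email filters go much further (punycode lookalikes, redirect chains), but whole-hostname matching already catches the most common phishing trick of embedding the bank's name inside a longer attacker-controlled domain.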

Users should also be aware of the dangers of phishing attempts on mobile devices, and of spoofed banking apps. In fact, even legitimate banking apps are quite vulnerable to attack.


As we have seen, financial institutions are targeted in many ways. What consumers can do to protect themselves and their financial accounts is both obvious and difficult to adhere to:

  • Don’t fall for the temptation to become a money mule.
  • Think before you click a link in an email. Better yet, bookmark the website for your bank and only use that site to log in.
  • Use a clean and protected device to make any financial transactions.
  • Use a safe and protected browser or banking app to check your accounts, deposit, or transfer money.
  • Be careful when you choose your cryptocurrency trading platform.

Financial institutions can follow a few ground rules to avoid attacks on their infrastructure:

  • Implement an anti-phishing plan.
  • Use specialized cybersecurity techniques to detect and thwart attacks, including a comprehensive cybersecurity solution, a well-trained IT staff, and an extensive cybersecurity policy/plan.
  • Limit permissions over the network to the minimum that is necessary to function.
  • Have an emergency plan in place for data breaches. Financial institutions traditionally store a lot of personal and sensitive information about their customers. Needless to say, this data should be stored and handled with care (encrypted both at rest and in transit).
  • Use trusted third-party or in-house developers to create secure banking apps and websites.
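
On the careful-storage point in particular, one concrete practice is worth spelling out: credentials should never be stored in plaintext or hashed with a fast algorithm. A minimal sketch of the standard approach (a salted, deliberately slow key-derivation function) using only Python's standard library; the iteration count is illustrative, not a production policy:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted digest suitable for storage (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)  # unique per password, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The per-password salt stops precomputed rainbow-table attacks, and the high iteration count makes brute-forcing a stolen database expensive even after a breach.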

Money makes the world go round

As much as the landscape for financial institutions has changed, their importance to our functioning infrastructure remains intact. Where once bank robbers and con artists could rip off individuals and institutions, now cybercriminals, too, target our banks and other financial systems. It is key that our financial institutions protect our dollars and our data so that we can keep investing our money and our trust in them.

Stay safe, everyone!

The post Vital infrastructure: Threats target financial institutions, fintech, and cryptocurrencies appeared first on Malwarebytes Labs.

How 5G could impact cybersecurity strategy

With the recent news that South Korea has rolled out the world’s first 5G network, it’s clear that we’re on the precipice of the wireless technology’s widespread launch.

Offering speeds anywhere from 20 to 100 times faster than 4G long-term evolution (LTE), the next generation of wireless networks will also support higher capacities of wireless devices. That’s a huge deal considering the rise of IoT and similar technologies, all of which require a high-speed, active connection.

But along with the network upgrade—which will surely bring with it a boost in users relying on wireless frequencies—there are security concerns, some new.

Lucky for South Korea, this is something local telecom companies are not particularly concerned about. Park Jin-Hyo, head of SK Telecom’s Information and Communication Tech Research Center, says, “I don’t think we have a security issue in South Korea.”

However, the reality is that 5G introduces a variety of new cybersecurity concerns, particularly when it comes to intensified attacks.

As more and more devices are powered on and synced up, each one becomes a potential security vulnerability for the wider network. More specifically, many organizations will have to change or restructure their cybersecurity strategies to deal with the new platform.

Here are four ways that the rise of 5G can and will impact a company’s cybersecurity.

1. New risks will surface

In 2016, an incredibly dangerous distributed denial-of-service (DDoS) attack took down much of the Internet on the US east coast. Initially, authorities believed that a hostile nation-state was targeting the country. As it turned out, the Mirai botnet was actually to blame, and it involved thousands of insecure IoT devices, including security cameras and similar tech.

More alarming is the fact that its creator originally only developed the system to take down rival Minecraft servers as a means to make some extra cash. The original intention was never to unload on the Internet as a whole, which shows that not all cybersecurity problems stem from mastermind criminals.

What does any of this have to do with 5G? Anything and everything. As soon as 5G networks are rolled out to the greater public, devices will be powered on and connected from a variety of mediums.

Everything from smart home security cameras to smart refrigerators to industrial-grade smart sensors can and will tap into the higher-performance networks. That presents a whole slew of new devices, tools, and systems that hackers can use to their advantage. From there, it’s not a stretch to predict the rise of another botnet targeting vulnerable and insecure devices, bringing another series of attacks like the Mirai event.

2. More devices will necessitate smarter security solutions

As more devices are introduced, the security landscape becomes broader than ever before. Where once cybersecurity was concerned with internal computers and a handful of authorized mobile devices, it now extends to every connected device.

Install smart coffee makers in the company office? There needs to be a new set of security solutions administered to protect any incoming and outgoing connections related to that device. Install new machine sensors and remote-operation tools for industrial equipment? The same is true.

Security solutions will need to become just as broad to account for all the new network channels and devices, as a means to protect an entire operation. Not only will this facilitate new security requirements—like outsourcing to a more capable provider—but it will also have sweeping implications on the privacy and security of organizations as a whole.

Take that smart coffee maker, for instance. One might not think it’s transmitting or sharing sensitive data—it’s a simple coffee maker. But that doesn’t matter. Hackers could reverse engineer the device to serve more nefarious purposes. For example, they could tap into a microphone intended for voice commands and use it to spy on sensitive communications or events.

3. Increased bandwidth will raise capability concerns

Many security solutions involve monitoring traffic in real time to identify potential threats based on activity and sniffed data. Someone in-house visiting a flagged URL, for example, might reveal an inside man, so to speak. They might also discover that a device or machine has been infected, which warrants further investigation.
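
At its core, that kind of monitoring is just matching observed traffic against threat intelligence. A toy Python sketch of the idea (the log format and blocklist are invented for illustration):

```python
# Hypothetical blocklist; real deployments pull these from threat-intel feeds.
FLAGGED_HOSTS = {"c2.badhost.example", "evil-updates.example"}

def flag_visits(log_lines):
    """Yield (user, host) pairs for every visit to a flagged host.

    Assumes a simple "user host" log line format for illustration.
    """
    for line in log_lines:
        user, host = line.split()[:2]
        if host in FLAGGED_HOSTS:
            yield user, host

logs = [
    "alice news.example.com",
    "bob c2.badhost.example",
    "carol intranet.example.com",
]
print(list(flag_visits(logs)))  # [('bob', 'c2.badhost.example')]
```

The matching itself is trivial; the point of this section is that at 5G speeds the volume of traffic flowing through a loop like this grows by an order of magnitude or more, and the tooling has to keep up.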

In any case, these systems have largely been able to keep up because of bandwidth limitations: a network can only handle so much traffic at once. That is bad for user performance but good for managing security and traffic. With 5G, which offers dramatically higher speeds and capacity, all of that goes out the window.

Security solutions must be upgraded to deal with these new capabilities, particularly when it comes to monitoring, encryption, and prevention—the latter being handled by firewalls. A majority of legacy solutions may no longer work because of the increased capacity, higher speeds, and reduced latency that 5G offers.

The frightening element is that, with so few 5G networks available to test today, no one truly knows what the network upgrade will require of security professionals. To achieve the higher capabilities, hardware will need to become more powerful, and the solutions themselves may need to be redeveloped for the new state of networks. What that looks like exactly, we won’t know until 5G is here.

4. Integration and automation will be a must

We’ve been on the verge of widespread security automation for some time. The current landscape has helped push the need for it, as organizations must be ready to deal with security threats at all hours of the day and night.

But integration has been optional, at least until recently. Integration simply means that the security architecture and system in use is connected across the entire operation. Data must correlate and sync even between security layers, and that’s true whether those divides are physical or digital in nature.

For example, someone trying to force their way into a physical security facility should be flagged, and any further data related to their actions should be monitored digitally. That same person might try to find another way inside company infrastructure, using various digital or physical systems and vulnerabilities. But integration extends beyond this quick example: security data and the overall architecture must evolve to handle the same kinds of threats developing in the real world.

A digital-centric hacker might move to physical means and vice versa, at any time. They might use a combination of strategies and attacks to gain unauthorized access—as they’re already showing with Emotet’s polymorphic, multiple module attacks or CrySIS ransomware’s versatile attack vectors. They will constantly be looking for ways in, which requires using automation to keep things running during the off-hours, too.

5G is coming

Advanced 5G and wireless networks are coming, and they will bring a huge selection of benefits, including higher traffic capacities, lower latency, and increased reliability. Naturally, that means more people and more organizations will rely on the new system for their devices.

Unfortunately, it also introduces a slew of cybersecurity concerns and problems, particularly as it relates to current security solutions.

Organizations will need to be prepared and should already have plans in place to upgrade and augment their existing security solutions. Failing to do so could have serious implications, not just for the organization itself but for the world at large. Sensitive data pertaining to the company and its customers could be stolen, and vulnerable devices could be used for nefarious deeds—just like we saw with Mirai botnet.

As we inch ever closer to the launch of next-gen wireless, we must continue to ask ourselves if we are truly prepared.


Vulnerabilities in financial mobile apps put consumers and businesses at risk

Security hubris. It’s the phrase we use to refer to our feeling of confidence grounded on assumptions we all have (but may not be aware of or care to admit) about cybersecurity—and, at times, privacy.

It rears its ugly head when (1) we share the common notion that programmers know how to code securely; (2) we cherry-pick perceived-as-easier security and privacy practices over difficult and cumbersome ones, thinking that will be enough to keep our data secure; and (3) we find ourselves signing up to services owned by big-named institutions, believing that—given their strong branding, influence, and seemingly infinite resources—they are securing the privacy of their users’ data by default.

Point three, in particular, applies to how we perceive official mobile apps of financial institutions: We believe they are inherently secure. In a study called “In Plain Sight: The Vulnerability Epidemic in Financial Mobile Apps” [PDF], application security company Arxan Technologies looked to see if this perception is founded. Alas, what they found proved that it is not.

Understanding mobile app vulnerabilities

The overall lack of security in financial mobile apps stems from poor or weak app development practices. According to the study, Arxan found 11 types of vulnerabilities resulting from these practices. They are:

  • Lack of binary protections. Binary protection is the same as binary hardening or application hardening. It’s the process of making a finished app difficult to tamper with or reverse engineer. Source code obfuscation is a way to harden an app’s security, for example. Unfortunately, the study found that all the financial institution apps they tested had no application security, making it easy for threat actors to decompile the app, find its weaknesses, and create an attack.
  • Insecure data storage. Financial mobile apps aren’t particularly good at storing users’ data. They usually store sensitive data in the mobile device’s local or external storage, outside of the sandbox environment, allowing other users to access and exploit it.
  • Unintended data leakage. The majority of financial apps share services with other apps on the mobile device, therefore leaving user data accessible to other apps on the device.
  • Client-side injection. This high-risk vulnerability, when exploited, allows malicious code to execute on the mobile device via the app itself. This could also allow threat actors to access various functions of the mobile device, adjust trust settings for apps, or, if the owner has put a sandbox in place for added protection, break out of it.
  • Weak encryption. An overwhelming number of financial institutions are either using the broken MD5 hashing algorithm or have implemented a strong cipher incorrectly. This allows for the easy decryption of sensitive data, which threat actors can steal or manipulate.
  • Implicit trust of all certificates. Financial apps do not implement checks when presented with web certificates. This makes the app susceptible to man-in-the-middle (MiTM) attacks, especially when fake certificates are involved. Attackers can intercept an exchange between the app and the financial institution, for example, by changing the bank account number from the original owner’s to the criminal’s in the middle of a money transfer transaction without anyone noticing.
  • Execution of activities using root. A considerable number of the mobile apps tested could conduct tasks on devices with elevated privileges. Much like a computer administrator, who has free rein over the machine, criminals gain similar privileges over the app if it is compromised. Elevated privileges can grant access to normally restricted data and the ability to manipulate settings that are otherwise off-limits to normal users.
  • World readable/writable files and directories. A small number of financial apps allowed the reading and writing of their files, even when stored in a private data directory. Not only does this cause a degree of data leakage, but compromised apps could allow criminals to manipulate said files to change the way the app behaves.
  • Private key exposure. Some apps have hard-coded API keys and private certificates either in their code or in one or more of their component files. Since these can be retrieved easily due to the app’s lack of binary protection, attackers could steal and use them to crack encrypted sessions and sensitive data, such as login credentials.
  • Exposure of database parameters and SQL queries. As financial apps show readable code when decompiled, attackers with a trained eye could readily know important code bits like sensitive database parameters, SQL queries, and configurations. This allows the attacker to perform SQL injection and database manipulation.
  • Insecure random number generator. Apps use random number generation for encryption or as part of their function. The better the generator, the higher its unpredictability, and the stronger the encryption. Most financial apps, however, rely on sub-par generators that make guessing easy for attackers.
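
The fix for that last item is straightforward in most languages: use a cryptographically secure generator for anything security-sensitive. In Python, for instance, the `random` module is predictable by design, while `secrets` draws from the operating system's CSPRNG (a general illustration, not code from the study):

```python
import random
import secrets

# random.Random is a Mersenne Twister: anyone who knows (or recovers) the
# seed can reproduce every "random" value; fine for simulations, fatal for
# session tokens or key material.
predictable = random.Random(42).getrandbits(128)
assert random.Random(42).getrandbits(128) == predictable  # fully reproducible

# secrets draws from the operating system's CSPRNG, so its output cannot
# be reproduced from any observable seed.
session_token = secrets.token_urlsafe(32)  # 32 random bytes as URL-safe text
print(len(session_token))  # 43 characters (base64 of 32 bytes, no padding)
```

Seed-recovery attacks against Mersenne Twister are practical (a few hundred observed outputs suffice), which is exactly why predictable generators turn token guessing into an easy challenge for attackers.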

Small organizations are big on security

When it comes to creating secure financial mobile apps, medium- to large-sized companies could learn a thing or two from smaller organizations. According to the report, “Surprisingly, the smaller companies had the most secure development hygiene, while the larger companies produced the most vulnerable apps.”

Nathan Collier, Senior Malware Intelligence Analyst at Malwarebytes and principal contributor to our Mobile Menace Monday series, felt positive about this finding. “I love that smaller companies that care about their customers did better,” Collier said. “I checked my own credit union’s app, and they seem to be up-to-snuff with most of the things in the report.”

There’s room for improvement

In a recent report from Forbes, researchers found that 25 percent of all malware targets financial institutions. Other attacks related to financial services, such as fraud, are also on the uptick.

Given this trend, financial institutions must not only act to protect themselves from direct attacks, but also scrutinize how they develop the products they offer to clients. Whether apps are made in-house or by a third party, leaving security out of software development and letting programmers continue to write insecure code will cause more harm than good in the end.

Developers do care about security, and vulnerable software is the bane of every business organization. So why not make this an opportunity to innovate and adapt new practices based on the current threat landscape? After all, there’s always room for improvement.


The top six takeaways for user privacy

Last week, Malwarebytes Labs began closing out our data privacy and cybersecurity law blog series, a two-month long exploration spanning five continents, 50 states, just as many data breach notification laws, three non-universal definitions of personal information and personal data, five pending US data protection laws, and one hypothetical startup’s efforts to just make sense of it all.

We published six high-level takeaways from that series, focusing on what companies can and should do for data privacy compliance in the US and around the world.

Today, we bring the focus back to users. Amidst never-ending data breaches and constantly-surprising company fiascos, here are six takeaways for anyone in the US who cares about protecting their online privacy, whether in a court of law or in a web browser.

1. You are not alone

From January 14 through February 15, 2019, Malwarebytes surveyed nearly 4,000 individuals across 66 countries, asking them about their approaches to online privacy and cybersecurity. Do they care about online privacy? Do they do anything to protect their information online? Where do they admittedly fail?

The results were clear: Almost everyone, no matter their age or postal code, cares about online privacy.

A full 96 percent of respondents said they care about protecting their personal information, while 97 percent said they take steps to protect their online data. Those steps include refraining from posting sensitive personal data online, using cybersecurity software on their machines, running software updates regularly, and verifying the security of websites before making any purchases.

2. In the US, you have few legal options to assert your data privacy rights in court

Historically, the United States has approached data privacy legislation on a case-by-case basis, writing and passing laws that protect specific types of data collected by industry-specific companies.

There’s a law that protects health care data handled by health care providers (HIPAA). There’s a law protecting children’s data that applies to companies that knowingly market their products toward children (COPPA). There’s a law for video rental history, another for credit information, and another for banks, insurance companies, and certain financial institutions that collect personal information.

However, the sheer volume of these sector-specific data privacy laws never coalesces into comprehensive, legal data protection for Americans. Instead, the laws interlink to form more of a net—holes included.

As we wrote before:

“If a company gives intimate menstrual tracking info to Facebook? Tough luck. If a flashlight app gathers users’ phone contacts? Too bad. If a vast network of online advertising companies and data brokers build a corporate surveillance regime that profiles, monitors, and follows users across websites, devices, and apps, delivering ads that never disappear? Welcome to the real world.”

When a certain type of data isn’t regulated by a certain law, consumers are left with little legal recourse, said Lee Tien, senior staff attorney for Electronic Frontier Foundation.

“In general, unless there is specific, sectoral legislation, you don’t have much of a right to do anything with respect to [data privacy],” Tien said.


There is one caveat though…

3. Companies cannot legally lie about how they handle your data

In the US, companies are bound by laws that prohibit “unlawful, unfair, or fraudulent” business practices, along with “unfair, deceptive, untrue, or misleading” advertising. Those laws also cover data protection practices.

So, if a company says it will not sell your data, but it does, that company has broken the law, and it can be hit with a lawsuit. This same principle applies when a German automaker lies to the public about its “clean diesel” engines, or when the world’s largest social media company allegedly violates a privacy decree it made many years prior.

While these types of lawsuits can be filed by individuals, their success is limited. If, say, an individual wants to sue a company over a data breach, that individual must first show that they personally suffered harm. Because of the myriad variables involved in any data breach—the actual criminals who stole the data, the tenuous link between a breach and any economic injury—such harm is exceedingly difficult to prove.

In 2017, an Uber driver failed to meet just this requirement when he sued the company for a data breach that affected up to 50,000 drivers.

The judge at his hearing told him:

“It’s not there. It’s just not what you think it is…It really isn’t enough to allege a case.”

Fortunately, there is yet another caveat. State Attorneys General, county District Attorneys, and city attorneys can sue a company for its deceitful business practices without having to show personal harm. 

Those lawsuits have worked.

4. Take data privacy into your own hands with online tech tools

Filing a successful lawsuit—or waiting around for a government attorney to file one for you—is not the only way to protect your online privacy. Today, there are multiple online privacy tools that protect users from invasive online tracking, helping to put a wall between users and persistent online ads.

Paul Stephens, director of policy and advocacy for Privacy Rights Clearinghouse, said that users can protect their online activity by using a number of both privacy-focused web browsers and tracker-blocking browser extensions. Though Privacy Rights Clearinghouse does not endorse any products, Stephens mentioned the web browsers Brave and Firefox Focus—which both automatically block online tracking—and the browser extension Disconnect, which the New York Times chose as its favored anti-tracking tool.  

5. Beware of “data leakage”

Stephens had more advice for users that want to protect their online information: Do not trust any app to leave your private data alone.

“We have this naïve conception that the information we’re giving an app, that what we’re doing with that app, is staying with that app,” Stephens said. “That’s really not true in most situations.”

Stephens pointed to several examples of mobile apps that have, for no discernible reason, vacuumed up user data, like the flashlight app that collected mobile contacts. To avoid this problem, Stephens suggested users navigate the Internet on their mobile devices with a privacy-focused browser and not through any company-developed app.

“Quite frankly,” Stephens said, “I would not trust any app to not leak my data.”

6. You might gain more legal data protections in the next two years

Data privacy is, finally, a hot topic for US Congress members.

Last year, after the Guardian revealed how a political consultancy harvested the Facebook profiles of millions of unwitting users in a covert operation to sway the 2016 US presidential election, Congress responded. They called in Facebook CEO Mark Zuckerberg to testify. They peppered him with questions. They told him to his face that they would regulate his lurching social media behemoth.

Since then, they’ve kept up the pursuit.

They invited Google, Alphabet, Twitter, and Facebook executives to explain what their companies were doing to curb Russian disinformation campaigns, and they balked at Google’s self-branded “error” in failing to disclose the microphones installed in its Nest home security products.

This new Congressional temperament has resulted in multiple legislative efforts to protect Americans’ data. Four US Senators and one digital rights nonprofit have all proposed individual federal bills that would regulate how companies collect, store, share, or sell user data. Even the private search engine DuckDuckGo threw its idea into the ring earlier this month.

Though the bills lack a clear frontrunner, data privacy itself could remain an important topic in the 2020 presidential election. Three Democratic candidates—Senators Amy Klobuchar of Minnesota, Cory Booker of New Jersey, and Michael Bennet of Colorado—have authored or co-sponsored data privacy legislation in the past year.

The post The top six takeaways for user privacy appeared first on Malwarebytes Labs.

What to do when you discover a data breach?

Your cell phone goes off in the middle of your well-deserved sleep and you try to find it before your partner wakes up as well.

“What could be wrong? Why would they page me in the middle of the night?”

More asleep than awake, you stumble down the stairs and call the number on the screen, which you already recognize as the one in use by the chief of the night shift. When you ask why you were called, he tells you it’s because you are part of the data breach incident response team.

Couldn’t it wait until morning?

The chief doesn’t know, that’s above his pay grade. You are the one who gets to decide whether it’s urgent enough to wake up the entire response team, so you’d better hurry over there.

On scene, one of the IT staff shows you two files on a server that shouldn’t be there: a zip archive and a copy of mimikatz. The hairs on the back of your neck stand up in reflex. Without further investigation, you have to assume that a database was zipped and transferred to an unauthorized machine, and that someone got their hands on some passwords, or at least tried to retrieve them.

Your company has been breached.

You’ve been breached: now what?

The first point of attention is to figure out which type of information was stolen. So, you try to open the zip in an attempt to get a better idea about the content. Alas, the file is password protected, so you give up none the wiser.

The next item on your to-do list is to find out how the threat actors got in and how to keep them out. Since that is not your field of expertise, you ping the next person on your list.

You decide that it is of no use to assemble the rest of the team until you know more. Even though you have customers in every imaginable time zone, the rest of the research will have to wait until you can get ahold of the firm you contracted for forensic investigations.

While waiting for the night to pass, you prepare a press statement and, together with the system administrator, you prepare a preliminary report for the proper law enforcement authorities.

Be prepared

Data breaches do happen, as has been demonstrated over and over. We wish we could give you a foolproof method to prevent them, but since such a thing doesn’t exist, the next best steps to take are:

  • To limit the possibilities of breaches happening again
  • To protect any sensitive data that could be stolen
  • To limit the usefulness of the stored data for a thief (e.g. by encrypting the data)
  • To be prepared for another eventual data breach
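On the third point, stored data can be made far less useful to a thief by never keeping sensitive fields in the clear. Below is a minimal, hypothetical sketch using only Python’s standard library; the record shape, field names, and values are invented for illustration:

```python
import hashlib
import hmac
import os

# Hypothetical sketch: pseudonymize a sensitive field before storing it,
# so a stolen copy of the table is far less useful to a thief.
# In practice the key would come from a secrets manager, not os.urandom.
SECRET_KEY = os.urandom(32)

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Return a keyed, irreversible token for `value`."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# The stored record no longer contains the raw address, but equal inputs
# map to equal tokens, so lookups and joins still work.
record = {"customer_id": 1042, "email": pseudonymize("jane@example.com")}
```

A real deployment would keep the key in a vault or HSM and combine tokenization like this with encryption of the data at rest.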

Our main character was fairly prepared, better than most organizations are in reality, I’m afraid. Having a detailed response plan enables security teams to reduce stress and makes sure that they don’t skip any steps. Without a script to follow, important steps could be forgotten or urgent tasks could be delayed while less compelling work is completed.

The steps outlined in our story are not necessarily right for every use case or organization, but they demonstrate that it helps if everyone knows who to contact, how to get in touch, and how to proceed in the face of an obstacle. A big part of setting up such a plan is to make sure that you follow obligations dictated by law and customer agreements.

Dealing with data breaches

How an organization manages a data breach is of the utmost importance. Going about it in the wrong way can break a company, while being open, transparent, and honest about it with the public can ultimately even improve customer trust.

It is imperative to figure out how the breach happened—not only to prevent it from happening again, but also to inform the public. Not knowing what happened means that it can happen again at any given time, since you will not have discovered which precautions were rendered useless, and which actually stopped the attack from doing further damage.


Our main character did some preliminary investigation but ultimately had to give up and wait for other professionals. It is advisable to hire an outside consultancy to help you with investigations if your internal team does not have the skills. They offer a professional viewpoint that is not too close to the target.

Inside eyes are sometimes too close to the problem, or may be reluctant to point out the true cause. Hiring an outside consultancy also improves the public’s view of your organization, as they see you have gone through the trouble and cost of trying to keep their data safe.

Informing the public

Before you inform the public, it makes sense to get the full picture about what, exactly, was stolen. You don’t want to cause a panic over a couple emails discussing Friday night plans.

But don’t wait too long, or that could backfire. Sometimes it’s better to give out a quick statement and let the public know that you are investigating the matter further. If they somehow find out before you have issued a statement, that will make your organization look like it has something to hide.

What customers want to know:

  • Which data were stolen? And was I affected?
  • Can the stolen data easily lead back to a person? Is it personal information?
  • What do I need to do if I was affected? Is it a matter of simply changing a password or do I need to worry about identity theft?

What the press wants to know:

The press will have some extra questions, which usually boil down to:

  • How did it happen?
  • What are you going to do to prevent it from happening again?

Be open about all of the above, unless you haven’t been able to close the hole in your defenses. It may help other organizations and it will highlight your transparency. It might also help law enforcement with their investigation. Even when the damage is already done, you will still want the threat actors to be brought to justice, if possible.

General advice on data breaches

Of course, we hope you’ll never need these tips, but many have wished they’d thought of them beforehand:

  • Be prepared. Make sure everyone knows who to inform and those involved know how to act. An emergency plan will never be a perfect fit, but it should at least outline the order and importance of actions.
  • Don’t run the risk of legal implications to add to your burden. Know what your obligations are and fulfill them.
  • Be open and transparent about what happened and what was stolen.
  • Hire outside specialists to assist in your investigations.
  • Learn from the incident to prevent a repeat.

Stay safe, everyone!


A week in security (April 29 – May 5)

Last week on Labs, we discussed the possible exit scam of dark net market Wall Street Market, reported on how the Electrum DDoS botnet reached 152,000 infected hosts, looked at the sophisticated threats plaguing the ailing healthcare industry, examined a mysterious database that exposed personal information of 80 million US households, covered how Mozilla urges Apple to make privacy a team sport, surveyed the state of cryptojacking in the post-Coinhive era, and digested the top six takeaways for corporate data privacy compliance.

Other cybersecurity news

  • The news that Europol shut down two prolific dark web marketplaces in simultaneous global operations, one of which was Wall Street Market, shed new light on the possible exit scam. The other marketplace was Silkkitie, aka the Valhalla Marketplace. (Source: Europol)
  • Scammers are now sending sextortion emails claiming to have an intimate tape of the recipient and threatening to release it unless they are sent $1,500 in Bitcoin. (Source: Bleeping Computer)
  • Mozilla has released a Firefox update that fixes the expired signing certificate that disabled add-ons for the vast majority of its userbase over the weekend. (Source: ZDNet)
  • A Pennsylvania credit union is suing financial industry technology giant Fiserv, alleging that security vulnerabilities in the company’s software are wreaking havoc on its customers. (Source: Krebs on Security)
  • A researcher has discovered vulnerabilities in more than 100 plugins designed for the Jenkins open source software development automation server and many of them have yet to be patched. (Source: SecurityWeek)
  • Facebook has been hit with three new separate investigations from various governmental authorities—both in the United States and abroad—over the company’s mishandling of its users’ data. (Source: The Hacker News)
  • NIST tool uses updated combinatorial testing to enable more comprehensive tests on high-risk software to reduce potential errors. (Source: NIST)
  • A hacker exploited the fact that some botnet operators had used weak or default credentials to secure the backend panels of their command and control (C&C) servers and was able to take over the IoT DDoS botnets of 29 other hackers. (Source: ZDNet)
  • Programmers say they’ve been hit by ransomware that seemingly wipes their Git repositories’ commits and replaces them with a ransom note demanding Bitcoin. (Source: The Register)
  • Mirrorthief group uses Magecart skimming attack to hit hundreds of campus online stores in US and Canada. (Source: Trendlabs)

Stay safe everyone!


The top six takeaways for corporate data privacy compliance

For nearly two months, Malwarebytes Labs has led readers on a journey through data privacy laws around the world, exploring the nuances between “personal information” and “personal data,” as well as between data breach notification laws in Florida, Utah, California, and Iowa.

We explored the risks of jumping into the global data privacy game, comparing the European Union’s laws with the laws in China, South Korea, and Japan. And we also examined current legislative proposals in the United States to better protect Americans’ data.

But all that information was delivered across five separate blogs of more than 10,000 collective words. Look, we get it—it’s a lot to read through. So, we’re offering some help.

Before fully closing out our data privacy and cybersecurity law series, we are providing the top six takeaways for corporate data privacy compliance. From emerging startups to burgeoning enterprises, these rules should help businesses not just with legal liability, but also to better understand—and gain—user trust.

Here we go.

1. Write and post a privacy policy

In 2004, California changed the online privacy landscape for companies everywhere. The Golden State—which would soon become a pioneer in data privacy law—passed the California Online Privacy Protection Act.

The law is simple. Any company, organization, or entity that runs a website collecting the personally identifiable information of California residents must post a privacy policy on that site.

The privacy policy must explain the types of information collected from users, the types of information that may be shared with third parties, the effective date of the privacy policy, and the process—if any—for a user to review and request changes to their collected information.

Because the law applies to any website that collects Californians’ information, it applies far beyond the state’s geographic borders. This isn’t just for California-based companies like Apple, Google, Twitter, and LinkedIn. It’s also for Washington-based Microsoft, New York-based Verizon, and Texas-based Dell.

Also, the law requires that every privacy policy be easy to find. Even Big Tech doesn’t challenge this requirement: In 2007, after reporting by the New York Times, Google decided to more prominently display its privacy policy on its website.

2. Do not lie in your privacy policy

This should be obvious, but in case it is not: Do not lie to your users about what you do with their data. You can collect their data, store their data, share their data, even sell their data, so long as you tell them the truth.

Any company that lies about its data protection practices could be hit with a lawsuit from a state Attorney General or, pending some legal hoops to jump through, an individual user. That’s because, in the US, data protection rights can still be asserted under an area of the law that prohibits “unlawful, unfair, or fraudulent” business practices, along with “unfair, deceptive, untrue, or misleading” advertising.

Lee Tien, senior staff attorney at Electronic Frontier Foundation, explained this area of consumer privacy law.

“Most of consumer privacy that’s not already controlled by a statute lives in this space of ‘Oh, you made a promise about privacy, and then you broke it,’” Tien said. “Maybe you said you don’t share information, or you said that when you store information at rest, you store it in air-gapped computers, using encryption. If you say something like that, but it’s not true, you can get into trouble.”

These lawsuits have been successfully filed against companies before. Last year, Uber agreed to pay $148 million to settle a lawsuit alleging the company’s misconduct when covering up a 2016 data breach. The lawsuit was brought by every single state Attorney General in the United States, plus the Attorney General for Washington, DC.

3. If you want to expand beyond the US market, consult a data privacy lawyer first

Data privacy and cybersecurity laws abroad are not like the laws in the US.

For example, the European Union recently bestowed upon its citizens the new rights to access, control, transport, and delete information that companies collect on them. China’s cybersecurity law grants its government the right to inspect and even copy the source code of incoming software products. South Korea’s cybersecurity laws include fierce penalties and even possible jail time. Singapore, often viewed as a friendly country for US expansion, has its own cybersecurity law that protects “essential” services, a definition that does not exist here in the US.

Expanding into a new country is, most of all, a question of risk: Can you afford—quite literally—the cost of compliance? 

4. Personal information is not the same as personal data

The terms “personal information,” “personal data,” and “personally identifiable information” get thrown around a lot, sometimes even interchangeably, but these terms have specific legal definitions that do not carry over so easily from one to another. The definitions for the terms do vary, however, depending on which law in which state or country you consult.

The important thing to remember is that these terms describe types of information that companies are legally required to protect. Protecting one law’s definition of “personal information” is not the same as protecting another law’s definition of “personal data,” and mixing the two up could lead to compliance mishaps.

The best advice is to, once again, consult a data privacy lawyer. Getting lost in an array of country-specific, legal rabbit holes does not help anyone.

Michelle Donovan, intellectual property and cyber law partner at Duane Morris LLP, put it clearly:

“What it comes down to, is, it doesn’t matter what the rules are in China if you’re not doing business in China. Companies need to figure out what jurisdictions apply, what information are they collecting, where do their data subjects reside, and based on that, figure out what law applies.”

5. Get ready for comprehensive data privacy legislation in the US

In the past year, at least four US Senators have proposed comprehensive, federal data privacy legislation. Each bill seeks to improve Americans’ online privacy.

Sen. Ron Wyden’s bill, for example, proposes that dishonest tech executives face potential jail time. Sen. Amy Klobuchar’s bill, on the other hand, focuses on making corporate privacy policies clear and understandable. Sen. Marco Rubio’s bill would ask the country’s trade enforcement agency, the Federal Trade Commission (FTC), to propose its own rules on data privacy, which Congress would later vote on. And Sen. Brian Schatz’s bill would place a new “duty to care” requirement on companies handling user data.

None of the above-mentioned bills have received a vote in Congress, but this area could move fast, and many assume that data privacy will become a linchpin issue in the 2020 presidential election.

6. Respect and protect your users’ data

Your users have few legal options in asserting their data privacy rights. Despite this, your company should take it upon itself to treat user privacy with respect.

You will not be alone in this proactive decision. Apple, Mozilla, Signal, WhatsApp, CREDO Mobile, ProtonMail, Helix DNA, and several other companies already understand that meaningful user privacy can serve as a competitive advantage.

As Malwarebytes Labs showed this year, people care immensely about online privacy. Listening to your users should not be a matter of legal compliance, but a matter of respect.

Join us next week for another set of data privacy takeaways, this time for consumers in the US.


Cryptojacking in the post-Coinhive era

September 2017 is widely recognized as the month in which the phenomenon that became cryptojacking began. The idea that website owners could monetize their traffic by having visitors mine for cryptocurrencies in their browser was not new, but this time around it became mainstream, thanks to an entity known as Coinhive.

The mining service became a household name overnight, and quickly drew ire for its original API, whose implementation failed to take into account user approval and CPU consumption. As a result, threat actors were quick to abuse it by turning compromised sites and routers into a large illegal mining business.

The ride was wild but, as we came to see, short-lived, as Coinhive shut its doors in March 2019 following months of steady decline and loss of interest in browser-based mining.

As such, this blog will strictly focus on web-based miners, which were impacted the most by Coinhive’s closure. It will not cover malware (binary-based) coin miners that are still infecting PCs, Macs, and servers.

Coinhive relics left behind

Interestingly, we still detect thousands of blocks for Coinhive-related domain requests, even though the service announced it was shutting down on March 8. Over the past week, our telemetry recorded an average of 50,000 blocks per day.

A spike in traffic just days after the service shut down, followed by decline and plateau

Digging deeper, we see that a large number of websites and routers have never been cleaned, and the bits of JavaScript requesting the Coinhive library are still there. Evidently, with the service down, the necessary WebSocket that sends and receives data between client and server will fail to connect to the server, resulting in zero mining activity or gain.
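One rough way to spot such relics is simply to scan saved page source for the dead service’s footprints. The sketch below is illustrative only; the patterns are assumptions and far from exhaustive:

```python
import re

# Illustrative signatures of leftover Coinhive-era miner code that may
# still linger in a hacked page's source; not an exhaustive list.
MINER_PATTERNS = [
    re.compile(r"coinhive\.min\.js", re.I),
    re.compile(r"CoinHive\.(?:Anonymous|User)\s*\(", re.I),
    re.compile(r"authedmine\.com", re.I),
]

def find_miner_relics(html: str) -> list:
    """Return the signature patterns that match a page's HTML source."""
    return [p.pattern for p in MINER_PATTERNS if p.search(html)]

page = '<script src="https://coinhive.com/lib/coinhive.min.js"></script>'
relics = find_miner_relics(page)
```

A match confirms only that the dead snippet is still embedded; with the backend gone, the script can no longer mine anything.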

Hacked site makes web request for Coinhive but fails to connect to the backend

Is cryptojacking still a thing?

To answer that question, we go back to the early adopters of browser-based mining: torrent sites. In the screenshot below, we can see something familiar enough—CPU usage maxed out at 100 percent while visiting a proxy for The Pirate Bay.

Torrent portals are still running cryptojacking code

This is exactly what started the cryptojacking trend back in 2017, when users weren’t told about this code running on their machine, let alone that it was hijacking their processor for maximum usage.

In this instance, the mining API was provided by CryptoLoot, which was one of Coinhive’s competitors at the time. While we are nowhere near the same levels of activity as we saw during fall 2017 and early 2018, according to our telemetry, we detect and block over 1 million requests to CryptoLoot each day.

There are a few other services out there, and it’s worth mentioning CoinIMP, which we’ve seen used more sensibly on file-sharing sites.

Router-based mining still going

While the number of compromised sites loading web miners was going down in 2018, a fresh opportunity presented itself, thanks to serious vulnerabilities affecting MikroTik routers worldwide.

By injecting mining code from a router and serving it to any connected devices behind it, criminals could finally scale the process so it was not limited to visitors of a particular website, thereby generating decent revenues.

The number of hacked routers running a miner has greatly decreased. However, today we can still find several hundred that are harboring the old (inactive) Coinhive code, and have also been injected with a newer miner (WebMinePool).

Campaigns gone missing

Perhaps the biggest change in cryptojacking-related activity is the lack of new attacks and campaigns in the wild targeting vulnerable websites. For example, in spring 2018, we saw waves of attacks against Drupal sites where web miners were one of the primary payloads.

These days, hacked sites are leveraged in various traffic monetization schemes that include browlocks, fake updates, and malvertising. If the Content Management System (CMS) is Magento or another e-commerce platform, the primary payload is going to be a web skimmer.

We might compare cryptojacking to a gold rush that didn’t last too long, as criminals sought more rewarding opportunities. However, we wouldn’t rush to call it fully extinct.

We can certainly expect web miners to stick around, especially for sites that generate a lot of traffic. Indeed, miners can provide an additional revenue stream that is, as concluded in this Virus Bulletin paper, “depend[ent] on various factors, including, of course, the value of cryptocurrencies, which historically has been volatile.”

The next time cryptocurrencies see an upturn in the market, expect threat actors to do what they do best: exploit the situation for their own profit.


Mozilla urges Apple to make privacy a team sport

We often say cybersecurity is a team sport, but, pending a public advocacy campaign from one major tech developer to another, the same might be true for online privacy.

Mozilla is currently rallying people around the world to petition Apple, asking that the company place some extra barriers between iPhone users and online advertisers. Though cybersecurity researchers disagree about the technology behind the request, the campaign has proved popular: In little over a week, more than 11,000 individuals have put their names to the cause.

Public advocacy campaigns, common amongst digital rights groups, are a tried-and-true practice for Mozilla, which racked up a couple wins in the past year-and-a-half. And, while such campaigns often target privacy abusers, Mozilla’s petition to Apple is different—it puts the pressure on another privacy champion.

So, why spend the time to push Apple to raise the bar? Because, according to Mozilla, it could work, which could then lead to an outsized benefit for users everywhere.  

“Apple’s track record of protecting user privacy was actually a motivation, and not a deterrent, for launching this campaign,” said a spokesperson from Mozilla’s advocacy team. “It’s an issue they clearly care about, so we’re encouraging them to do better.”

Apple has not yet responded to the petition, and it did not respond to a request for comment, but if Mozilla succeeds, it will have made an important point: When the technology industry pushes itself to better respect user privacy, we all win.

The petition and the tech

In mid-April, Firefox developer Mozilla launched a public petition aimed at Apple. The browser-making nonprofit asked Internet users around the world to push the world’s richest company into making one small change to its iPhones—regularly rotating an internal ID that lets advertisers track users’ online behavior.

“There is a unique ID living on your iPhone right now that allows advertisers to track the ads you click on, the videos you play, and the apps you install,” Mozilla wrote about the iPhone ID code, which is called an “ID for Advertisers,” or IDFA. Though the ID cannot reveal an iPhone user’s identity—and users can actually turn the identifying feature off—Mozilla argued that it still poses a roadblock to privacy.

“It’s like a salesperson following you from store to store while you shop and recording each thing you look at,” wrote Mozilla Vice President of Advocacy Ashley Boyd in a related blog.  Pushing back against Apple’s recent advertising campaign that bills the iPhone as the near-definition of privacy, Boyd wrote: “Not very private at all.”

Cybersecurity researchers are split on the idea. Some experts—including Thomas Reed, director of Mac and mobile at Malwarebytes—actually called for even tougher privacy controls.

“I think that Apple should disable ad tracking and location-based ads by default, rather than the user having to opt out,” Reed said, referring to users’ ability to turn off the IDFA capabilities. “That would provide way more benefit than what Mozilla proposes.”

Forrester Research senior analyst John Zelonis, in speaking to ThreatPost, shared Reed’s sentiment, explaining that monthly IDFA changes—as Mozilla proposed—would not meaningfully impede advertisers’ ability to track users online.

“Rolling the IDFA on a monthly basis would only be an effective anonymizer if the app owners weren’t able to track a user across those newly-generated IDFAs using login sessions or other methods of associating a user to an IDFA,” Zelonis told the outlet. “The impact of making this change would likely only increase the value of the data collected by apps that are finding ways to track across IDFA, not necessarily solve the problem at hand.”

However, a separate researcher also told ThreatPost that Apple should not have to change a thing.

“Apple’s current way of handling the IDFA is the correct one,” the researcher said.

Despite the researchers’ disagreements, there’s a separate story here. It’s about privacy champions pushing one another to do better.

Privacy vs. privacy

For years, Mozilla has not only advocated for privacy, it has also developed it into online tools.

In 2017, Mozilla released its privacy-focused Android web browser, Firefox Focus, earning more than one million downloads in the first month. In 2018, Mozilla developed a browser add-on to give users a more private experience when using Facebook, making it harder for the social media giant to collect information away from the platform itself. In the past two months, Mozilla has also released a secure file transfer service and a password manager.

The nonprofit then pivoted, using its earned reputation in privacy to push others to do better.

In 2018, before the release of Amazon’s “Echo Dot Kids Edition”—which includes a version of the smart assistant Alexa that tells children “wake-wakey, eggs and bakey”—Mozilla asked the retail giant to open up about how it would collect children’s data.

Months later, Mozilla launched a public campaign about the payment processing app Venmo, gathering 25,000 signatures to steer the company into making users’ payment transactions private by default.

“It’s a tactic we use often,” said the Mozilla spokesperson. “We’ve learned that when companies hear from consumers, they act.”

As an example, the spokesperson pointed to Mozilla’s success in getting Target and Walmart to stop selling a hackable children’s toy last summer.

Despite Mozilla’s familiarity with this turf, the target is new: Apple has a far better track record than Amazon or Venmo in defending user privacy.

In 2016, Apple began its famous fight against a government request to build a workaround to its secure mobile operating system. The workaround—which many in the technology community called a “backdoor”—would have let the FBI access encrypted data on a suspected terrorist’s iPhone. But the demand pushed too far, said Apple CEO Tim Cook in an open letter published the day after his company received the legal order.

“Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation,” Cook wrote. “In the wrong hands, this software—which does not exist today—would have the potential to unlock any iPhone in someone’s physical possession.”

Apple’s stance won the approval of many privacy rights advocates, including the American Civil Liberties Union, Electronic Frontier Foundation, and Center for Democracy and Technology. The move also won the approval of Mozilla, conjuring executive-penned op-eds in both Time and CNN.

It is these two tech developers’ strong privacy records that make Mozilla’s petition seem more like a friendly reminder than a stern warning. But no matter the tone, if Mozilla gets the iPhone maker to move, the impact could go beyond Apple’s ecosystem.

As Mozilla’s Boyd wrote:

“If Apple makes this change, it won’t just improve the privacy of iPhones—it will send Silicon Valley the message that users want companies to safeguard their privacy by default.”

We agree.


Mysterious database exposed personal information of 80 million US households

Word has broken of yet another massive data trove exposed for anyone to see. A research team from vpnMentor discovered an exposed 24GB database hosted on a Microsoft cloud server containing the addresses, income levels, and marital statuses of members of 80 million US households.

As we’ve seen recently, many organizations aren’t taking steps to secure their customer data, and every so often one makes the news. Some may have been exploited while exposed; others will have been lucky.

Occasionally, there’s a quick takedown of the exposed information; sometimes it’s nearly impossible to find out who, exactly, is responsible. At that point, the only option left is to ping someone like Microsoft to take that final step and hope they can do something about it.

What’s the damage report?

Since 80 million US households appear in this database, considerably more individuals could have been impacted. Across thousands of entries, the researchers couldn’t find anyone listed under the age of 40.

The exposed data included a mixture of coded information and non-coded information. Non-coded items included street addresses, cities, states, counties, zip codes, latitude and longitude coordinates, ages, dates of birth, and first/last names along with middle initials. The data assigned a coded, numerical value contained information, such as marital status, income, gender, dwelling type, and homeowner status.

Decoding the numbers

In practice, what the coded and non-coded entries mean is that you could easily view someone’s name or address, but something like gender or title is instead assigned a numerical value. Some of the information tied to coded values may not be possible to figure out: for example, “Income [1]” or “Income [6]” may be too obscure to map to a salary range. However, if you see “Steve” and the gender assigned is “[1],” then it’s probable that 1 = male on all the records.

In this way, even where data is assigned a numerical code, you can piece together most of a person’s profile. If the salary code for people aged 70 and up is “10,” then 10 might mean “retired,” “on a pension plan,” or something similar.
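The inference described above is straightforward to automate. The sketch below is a minimal, hypothetical illustration (the field names and records are assumptions, not the database’s real schema): it correlates a coded field with a readable hint, here typically-male first names, and takes a majority vote per code value.

```python
from collections import Counter

# Hypothetical records mimicking the exposed layout: readable fields
# alongside numeric codes (field names are assumed, not the real schema).
records = [
    {"first_name": "Steve", "age": 72, "gender_code": 1, "income_code": 10},
    {"first_name": "Mary",  "age": 68, "gender_code": 2, "income_code": 10},
    {"first_name": "John",  "age": 45, "gender_code": 1, "income_code": 4},
]

# A readable hint to correlate against: a list of typically-male names.
male_names = {"Steve", "John"}

# Count how often each code value co-occurs with each guessed label.
votes = Counter()
for r in records:
    label = "male" if r["first_name"] in male_names else "female"
    votes[(r["gender_code"], label)] += 1

# Majority vote per code value yields a guessed decoding.
decoded = {}
for (code, label), n in votes.items():
    other = "female" if label == "male" else "male"
    if n >= votes.get((code, other), 0):
        decoded[code] = label

print(decoded)  # {1: 'male', 2: 'female'}
```

The same voting trick works for any coded field that correlates with a readable one, which is exactly why “anonymizing” fields with numeric codes offers so little real protection.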

In fact, there are a lot of code-assigned sections alongside readable data, so full street address + code for dwelling type + Google Maps = a quicker and easier way to assign home types to the people listed, then (say) target them with property-specific phishing attacks or other social engineering tactics. Given the upper end of the ages listed in this database, those people could well be more susceptible to these kinds of tricks.

What exactly is this database for?

The database was eventually taken offline by Microsoft, which has apparently notified the owner(s). Meanwhile, the researchers have asked the public to help identify exactly who the data belongs to.

They suspect it has some sort of financial services connection, such as insurance, mortgage lending, or perhaps healthcare. The age range shown in the data might have suggested a dating app for older generations, except that it makes no sense for such an app to focus on households rather than individuals. The latitude and longitude coordinates may point to some kind of mobile app, as you’d typically expect coordinates to be collected by portable apps rather than by something filled in on the desktop.

Time to play the waiting game

No matter the purpose of the database, the good news is that it’s currently offline. It also doesn’t appear to have been used maliciously—for now, anyway. There isn’t a huge amount anyone can do in this situation beyond advising people to be wary of the usual social engineering scams.

Ultimately, this database is large but also quite generic, with no way to say for sure exactly what it’s for. As a result, it’s a case of being on your guard and keeping some common sense handy at all times.

This isn’t something to worry about for the time being, and hopefully this tale begins and ends with “someone needs to secure their data better.”

The post Mysterious database exposed personal information of 80 million US households appeared first on Malwarebytes Labs.