After a hiatus of nearly four months, Emotet is back with an active spam distribution campaign. For a few weeks, there were signs that the botnet was setting its gears in motion again, as we observed command and control (C2) server activity. But this morning, the Trojan started pumping out spam, a clear indication that it's ready to jump back into action.
The malicious emails started in the wee hours of Monday morning, with templates spotted in German, Polish, and Italian. Our Threat Intelligence team has also captured phishing samples sent in English.
Victims are lured to open the attached document and enable the macro to kick-start the infection process.
The PowerShell command triggered by the macro will attempt to download Emotet from compromised sites, often running the WordPress CMS.
Once installed on the endpoint, Emotet attempts to spread laterally, in addition to stealing passwords from installed applications. Perhaps the biggest threat, though, is that Emotet serves as a delivery vector for more dangerous payloads, such as ransomware.
Compromised machines can lie dormant until operators decide to hand off the job to other criminal groups that will attempt to extort large sums of money from their victims. In the past, we've seen the infamous Ryuk ransomware being deployed that way.
While Emotet is typically focused on infecting organizations, Malwarebytes business and individual customers are already protected against this campaign, thanks to our signature-less anti-exploit technology. As always, we recommend users be cautious when opening emails with attachments, even if they appear to come from acquaintances.
As this campaign is not even a day old, we don’t yet know the impact on organizations and other users. We will continue to update this post as we learn more throughout the day. In the meantime, warn your coworkers, friends, and family to be wary of emails disguised as invoices or any other “phishy” instances.
AI superbrawl on the way? One researcher suggests people may soon have to make use of AI to turn the tide against those who use it maliciously. (Source: The Register) Interestingly, Labs posed this idea, too—back in June. (Source: Malwarebytes Labs blog)
Drone wars: After multiple incidents related to drones disrupting flights at international airports, it appears law enforcement has managed to prevent at least one such bout of aerial hijinks. (Source: Met Police)
Penetration testing is often conducted by security researchers to help organizations identify holes in their security and fix them before cybercriminals have the chance. While there is no malicious intent on the researcher's part, part of the job is to think and act as a cybercriminal would when hacking, or attempting to breach, an enterprise network.
Therefore, in this article, I will review Amazon AWS buckets as an avenue for successful penetration tests. The case study I’m using is from a reconnaissance engagement I conducted against a business. I will specifically focus on how I was able to use AWS buckets as an additional avenue to enrich my results and obtain more valuable data during this phase.
NOTE: For the safety of the company, I will not be using real names, domains, or files obtained. However, the concept will be clearly illustrated despite the lack of specifics.
The goal of this article is to present an alternative or an additional method for professionals conducting pen-tests against an organization. In addition, I hope that it may also serve as a warning for companies deciding to use AWS to host private data and a reminder to secure potentially leaky buckets.
What is an AWS bucket?
Amazon Simple Storage Service (S3) provides an individual or business the ability to store and access content from Amazon's cloud. This concept is not new. However, because businesses use AWS buckets not only to store and share files between employees but also to host Internet-facing services, we have seen a wealth of private data exposed publicly.
The types of data we have discovered range from server backups and backend web scripts to company documents and contracts. Files within S3 are organized into “buckets,” which are named logical containers accessible by a static URL.
A bucket is typically considered public if any user can list the contents of the bucket, and private if the bucket’s contents can only be listed or written by certain S3 users.
Checking if a bucket is public or private is easy. All buckets have a predictable and publicly accessible URL. By default this URL will be either of the following:
s3.amazonaws.com/[bucket_name]/ or [bucket_name].s3.amazonaws.com/
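Because that URL scheme is predictable, the check is easy to script. Below is a minimal Python sketch, standard library only, that builds both URL forms for a given bucket name and probes whether a listing is publicly reachable. Treating a 200 response as "listable" and any error as private or nonexistent is a simplifying assumption; real-world checks also distinguish 403 (bucket exists but is private) from 404 (no such bucket).

```python
import urllib.error
import urllib.request


def bucket_urls(bucket_name):
    """Return the two predictable URL forms for an S3 bucket."""
    return [
        f"https://s3.amazonaws.com/{bucket_name}/",
        f"https://{bucket_name}.s3.amazonaws.com/",
    ]


def is_listable(url, timeout=5):
    """Return True if the bucket returns a 200 listing (public).

    A 403 means the bucket exists but is private; a 404 means the
    name is unclaimed. Both are treated as "not listable" here.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False
```

Only probe buckets you are authorized to test as part of an engagement.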
Pen-test workflow: hacking phases
Let’s begin by talking about the first phase of a penetration test: reconnaissance. The purpose of this phase is to gather as much information about a target as possible in order to build an organized list of data that will be used in future hacking phases (scanning, enumeration, and gaining access).
In general, some of the data which pen-testers hope to obtain during this phase is as follows:
Names, email addresses, and phone numbers of employees (to be used for phishing)
Details of systems used by the business (domains, IPs, and other services to be enumerated in future phases)
Files containing usernames, passwords, leaked data, or other access-related items
Customer data (side channel compromise of a trusted third party can be just as valuable as the target themselves)
All of the items above are examples of things we can and have found while scouring AWS buckets. But first, before we get into the information discovered, let’s talk about finding the buckets themselves and the role that an S3 bucket search can play in a pen-test.
An easy first step in any recon phase is to enumerate the primary domain via brute force or other methods, hoping to find subdomains that may be hosting services within a company. This is pretty standard procedure, but a business does not always host all of its data or resources internally. Often there are unofficial IPs hosted offsite, serving a secondary role to the primary business, hosting developer resources, or even file storage.
While the company’s internal services, such as mail, websites, firewalls, security, and documentation, may be hosted within a subdomain, there are still several reasons why a company might use offsite or separate servers. For this reason, it is a good idea to expand your search beyond the primary domain, using Google hacks and keywords to look for related services or domains.
External resource example
One specific example related to this was a PACS server data leak from one of UCLA’s medical centers. While this server was technically operated by a UCLA medical center, it was not an official service of UCLA, so to speak.
Translation: this server was not part of the UCLA domain. It happened to be hosted independently by one of the medical center's employees; while more obscure, it was still related to the organization. This is an example of the sort of side channel opportunities available to criminals.
Finding leaky buckets
Moving forward, an Amazon S3 bucket is a prime example of one such “unrelated” service not directly tied to the business’s infrastructure. My main purpose for introducing this is to give pen-testers a new avenue in addition to Google hacks. Although a Google search on a company can lead you to its AWS bucket, it is more effective to search open buckets directly.
There are a number of tools that can be used to discover wide-open buckets. The one I’d like to highlight is the web application Grayhat Warfare. Of all the tools I use, it is the most user friendly and most easily accessible.
As you can see below, it is quite intuitive:
Let’s take a look at this application and see how a pen-tester might try to use it to discover a bucket owned by an organization.
There are a few ways in which pen testers can uncover unsecured data belonging to a target organization. One is by searching for filenames you might expect the organization to use. For example, knowing some of the services or products the enterprise produces, you might search those specific product names, or company_name.bak.
Additionally, having completed some other recon, incorporating usernames into the search can yield results. In general, this is the part of the process that requires creativity and thinking outside the box.
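That keyword brainstorming can be made repeatable by generating the combinations programmatically. The Python sketch below merges base terms (company or product names, usernames from earlier recon) with common backup-style suffixes into a deduplicated search list to feed into a bucket search tool. Every name here is an invented placeholder, not data from the real engagement.

```python
from itertools import product

# Hypothetical recon data gathered earlier in an engagement.
COMPANY_TERMS = ["acmecorp", "acme-player"]  # company/product names (assumed)
USERNAMES = ["jsmith", "dev-admin"]          # usernames from earlier recon (assumed)
SUFFIXES = ["", ".bak", ".zip", ".sql", "-backup"]


def candidate_terms(terms, suffixes):
    """Combine base terms with common file suffixes into unique search keywords."""
    return sorted({base + suffix for base, suffix in product(terms, suffixes)})


# Feed this list into a bucket search tool such as Grayhat Warfare.
keywords = candidate_terms(COMPANY_TERMS + USERNAMES, SUFFIXES)
```

As new names surface during recon, add them to the base terms and regenerate the list; the search is cheap, so iterating often costs nothing.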
Hacking with AWS case study
Now let’s dig into the case study to see these recon methods in action. In this specific case, the target was an entertainment company that produces content for the music industry. From some Google searching, I happened to come across the fact that they developed an Android app. The app name in this case had no relation to the actual company name; these are exactly the discoveries you need to make to expand your searches for leaky buckets.
Using the name of the app and searching it within Grayhat Warfare, I was lucky enough to find an AWS bucket containing a file with the name of the app. One important thing to note is that the bucket name and URL were also completely different and unrelated to the name of the company.
This unrelated naming scheme is often by design. Rather than creating an obvious name, business infrastructure architects often name servers according to themes. You might see planets, Greek gods, Star Wars characters, or favorite bands. This makes searching for services a bit more obscure for a hacker.
Once the app filename was found within a bucket, it was simply a matter of manually looking through the rest of the files to verify that the bucket did in fact belong to the target organization. This led me to find even more info to use for a deeper recon search.
One amazing find on this server was actually a zip file with the app’s source code. This contained database IPs, usernames, and passwords. This is something that may never have been discovered using traditional recon methods, since the IP happened to be an offsite DreamHost account, completely unconnected to any of the company’s resources.
OSINT standards plus AWS buckets = more data
The main point I wanted to illustrate from my test case is how hacking with AWS can be incorporated into the pen-test workflow as an iterative fingerprinting cycle. Using Google hacks, Shodan, and social networks is standard practice for open source intelligence (OSINT). We use these traditional methods to gather as much data as possible; once we have found as much as we can, we can blast that data against bucket search tools to retrieve deeper info.
From that point, pen-testers can restart the whole search process with this new data. Recon can often be recursive, one result leading to another, leading to another. However, incorporating AWS bucket searching into your pen-test workflow can provide data that may not have been obtained using the other methods.
If any readers have other hacking or search tools they have come across or alternative methods for recon, please feel free to mention them below in the comments.
Last week, the Federal Trade Commission (FTC) announced that it has required Google and YouTube to pay a settlement totaling $170 million after the video-sharing platform was found to be violating the Children’s Online Privacy Protection Act (COPPA). The complaint was filed by the FTC and the New York Attorney General, with the FTC set to receive $136 million of the penalty and New York $34 million.
According to the FTC’s press release, this penalty “is by far the largest amount the FTC has ever obtained in a COPPA case since Congress enacted the law in 1998.”
Note that the complaint doesn’t involve the YouTube Kids app, a YouTube service dedicated to showcasing child-directed content only. Although the app still displays ads, albeit to a limited degree, it doesn’t track child data for this purpose.
i-Dressup.com, a dress-up game website, agreed to settle charges in April after the FTC filed a complaint, alleging that the operators of the website failed to ask for parental consent when collecting data from children under 13.
Since its inception, YouTube has touted itself as a video-sharing platform for general audience content. It was created for adults and not intended for children under 13 years of age.
Through the years, however, YouTube has become a constant companion of young children. In fact, according to market research company Smarty Pants, YouTube has been recognized as the most beloved brand among US kids aged 6–12 for four straight years since 2016.
It’s also exceedingly difficult to defend the argument that “YouTube is not for children” when a sizable amount of child-directed content is already present—and continues to grow and rake in billions of views—on the platform.
YouTube’s business model is dependent on collecting personal information and persistent identifiers (i.e., cookies) from users for behavioral or personalized advertising. Child-directed channel owners who chose to monetize their content allowed YouTube to collect data of their target audience: children under 13, the age group YouTube said it wasn’t built for.
It’s easy to assume that YouTube may not have the means to know which data belongs to which age group, else they would have acted on it. However, according to the complaint [PDF], YouTube did know that they were collecting data from children.
More surprising, even, is the fact that Google used YouTube’s brand popularity among young kids as part of their marketing tactic, selling themselves to manufacturers and brands of child-centric products and services as “the new Saturday Morning Cartoons” among others.
Despite this knowledge, YouTube never attempted to notify parents about their data collection process, nor did they ask parents for consent in the data collection. In COPPA’s eyes, these are enormous red flags.
Good news: positive change is at hand
The settlement agreed upon between the FTC and Google/YouTube includes monetary relief—the $170 million payout in this case—and three injunctive reliefs, defined as acts or prohibitions the companies must complete as ordered by the court. As per the press release, the injunctions are as follows:
Google and YouTube must “develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA.”
Google and YouTube must “notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners.”
Google and YouTube are prohibited from violating the COPPA Rule, and the injunction “requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children.”
As content creators are also responsible for letting the platform know about the kind of content they’re producing and posting, the FTC has noted that creators who fail to inform YouTube that their content is aimed at children could face removal from the platform and other civil penalties.
Susan Wojcicki, CEO of YouTube, took to the company’s official blog to personally update readers, expanding on these changes and reminding users that the company has been actively making changes within the video platform since Q4 2017.
“We’ve been significantly investing in the policies, products and practices to help us do this,” wrote Wojcicki. “From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We’ve been taking a hard look at areas where we can do more to address this…”
Here is a list of expanded changes that YouTube will undergo in the coming months:
In four months, data from anyone viewing content directed at children will be treated as coming from a child regardless of the viewer’s actual age.
Personalized ads will no longer be served within child-directed content. (Note that this doesn’t mean that no ads will be shown.)
Some YouTube features, like comments and notifications, will be unavailable on such content.
YouTube will be using machine learning to find child-directed content.
YouTube will further promote their YouTube Kids app to parents by running a campaign on YouTube itself and creating a desktop version of the app.
YouTube will be providing support to family and kid content creators during the transition phase.
YouTube is launching a $100 million fund for creators to make content that is both original and thoughtful. The fund will be dispersed over the next three years.
General dissatisfaction with results
While some may see this as a historic win for the FTC and the New York Attorney General, others view it as another exercise in avoiding due punishment for another big company breaking the law.
Case in point: Commissioner Rohit Chopra, one of the two commissioners who voted against the settlement, pointed out in his statement [PDF] the same mistakes the Commission made in a similar Facebook case: “[There is] no individual accountability, insufficient remedies to address the company’s financial incentives, and a fine that still allows the company to profit from its lawbreaking.”
Chopra also noted inconsistencies in the way the FTC handles cases involving small companies versus large firms: the former are penalized excessively while the latter get off easy. James P. Steyer, founder and CEO of Common Sense Media, agrees with this point.
“The settlement is nothing more than a slap on the wrist for a company as large as Google, and does not enforce meaningful change to truly protect children’s data and privacy,” Steyer said in an official statement.
However, he also recognized that YouTube’s stated reforms are moving the dialogue forward. “YouTube’s commitment to enacting specific reforms on the platform is also a step in the right direction, but they must now put resources behind their statement. Kids and families must be a top priority in both Washington, DC, and in Silicon Valley.”
Commissioner Rebecca Slaughter, the other dissenting party, raised in her own statement [PDF] that the injunctions are incomplete, for they lack the orders and/or mechanisms—a “technological backstop”—that ensure content creators are telling the truth in properly designating channels of child-directed content.
Slaughter isn’t the only one to mention what’s lacking in the settlement. Speaking to Angelique Carson, editor for the International Association of Privacy Professionals (IAPP), on The Privacy Advisor Podcast, Linnette Attai, president of global compliance firm PlayWell and a COPPA expert, expressed her concerns.
“We’re not seeing the rigorous third-party auditing that we’ve seen, traditionally, in COPPA settlements. We’re not seeing requirements to delete data, which is something that you will see in very early COPPA settlements but seems to have fallen off as an option for the FTC in recent years,” she said. “It’s one thing to say, ‘You cannot use this data.’ It’s quite another to say, ‘You have to delete it,’ which ensures that you cannot accidentally use it.”
V for vigilance
Every person has data, and in this day and age, it’s passed around regularly, oftentimes nonchalantly, to those who may or may not appreciate its value. If users are unfazed about big and small companies crossing lines to monetize personal data, perhaps a stark reminder that cybercriminals are after your PII, too, will make you seriously consider how you approach your data privacy.
Organizations in the emergency services sector exist to help the public when situations get out of hand or are too much to handle, often because the problem requires special tools and the skills to use them. These organizations are set up to provide assistance at short notice. We are all familiar with the three main types of organizations that fall into this category:
Law enforcement
Emergency medical services
Fire departments
But there are other similar organizations that can be put in the same category, for example, bomb squads, SWAT teams, HAZMAT teams, and sea rescue teams. These and similar groups exist in both the public and the private sector.
One of the prerequisites for these types of first responders is that they react swiftly, accurately, and with coordinated effort. Besides regular drills, this requires a lot of automation and computerized equipment, which is what makes it all the worse when one of these organizations is hindered by malware.
Ransomware doesn’t care whether it’s locking up a system full of family pictures or one that is filled with police files. And some malware authors have shown that they make use of the urgency to get certain systems back online, and up the ante accordingly.
Police departments and sheriff’s offices alike store a lot of confidential information about victims and suspects. Information that could give threat actors a good angle for a phishing campaign or extortion. Another delicate matter on police records is evidence. Evidence could become inadmissible even if there is only a suspicion that there has been illegal access to the system it was stored on. So these systems should at all times be kept inaccessible from the Internet to ward off information stealers, ransomware, and remote access trojans (RATs).
A Texas police department learned this the hard way when it lost 1TB of critical CCTV data due to a ransomware attack. The chief of police decided not to pay the ransom even though they did not have adequate backups, which led to a total loss of all the data.
In 2017, ransomware infected 70 percent of storage devices that held recorded data from D.C. police surveillance cameras eight days before President Trump’s inauguration, forcing major citywide re-installation efforts.
Another law enforcement agency that found itself hit by a ransomware attack was the Lauderdale County Sheriff’s Department in Meridian, Mississippi, on May 28, 2018. It fell victim to a variant of Dharma/Crysis ransomware, and most of its systems were taken down by the attack. In Lauderdale County’s case, attackers exploited an old, forgotten password to deliver the ransomware.
Emergency medical services
When you are in urgent need of medical attention or need to be transported to a medical facility in a hurry, you count on emergency medical services to come to the rescue. What the paramedics need most in such cases is trustworthy lines of communication to provide and receive updates about the medical emergency or the traffic conditions. The communications equipment in question can be diverse and include phones, radios, computers, and dispatch systems.
What you don’t want is some unnamed malware to cripple your communications systems. This happened to the St John Ambulance service in New Zealand. Mobile data and paging services were worst affected by the problem, suggesting that some sort of bandwidth-hogging worm overloaded the system. Dispatch staff normally send information on jobs to the ambulance crew via on-board mobile data terminals. Because of the malware, they had to call ambulance stations or the mobile phones of crew members instead.
The same communications dependency is certainly true for fire departments, whether they are a public fire department, or a company fire brigade trained to deal with specific dangers. They need to know all the relevant information about the situation, and they want to know it before they get there so they can anticipate and plan their actions accordingly.
One small slip, however, and an entire fire department can fall victim to a malware attack, which can cripple internal communications and data storage or compromise sensitive information for both department members and everyday citizens.
In 2016, Honolulu Fire Department personnel inadvertently downloaded ransomware that infected about 20 of their computers, forcing the department to temporarily shut down all its administrative computers. The department’s emergency response was thankfully not affected because the computer-aided dispatch system and the computers in the firetrucks operate on a separate network.
Emergency services infrastructure
In some countries all the public emergency services use the same overhead infrastructure to communicate with each other and to receive calls. You really want these systems to be robust and redundant, but nevertheless sometimes they fail or get compromised.
Not attributed to malware but to a software bug, the Dutch emergency number—112, the Dutch equivalent of 911—was unreachable for hours. As it turned out, the backup system used the exact same software, including the bug, which rendered it useless in this scenario. The individual services responded quickly by providing the public with alternatives, but in retrospect, the service interruption was held responsible for two deaths.
In 2017, hackers managed to set off emergency sirens throughout the city of Dallas on a very early Saturday morning. Not only does the public lose trust in the system when false alarms occur, but the consequences of a false alarm coinciding with a real emergency could have been disastrous. The mayor used the hack as a reason to upgrade and better safeguard the city’s technology infrastructure.
What can we take away from the examples we’ve seen?
Backup systems not only need to be easily deployed, but also need to be truly independent.
Even when the budget is tight, these systems need to be prioritized.
Separated networks can save your bacon, especially if you can keep them detached from the world wide web.
Backup systems are not the only backups you need. Important files need to be backed up as well.
And when it concerns sensitive and important data like evidence or investigation records, extra care is needed.
Systems should always be up and running so they are available for queries, but only to those with the proper authority. Backup systems should be adequate and separate.
Apply the principle of least privilege, making sure that users, systems, and processes only have access to those resources that are necessary to perform their duties.
The systems need a form of guaranteed integrity to ensure that the data entered into a system has not been tampered with, and it should be possible to trace back any changes when needed.
A problem that most emergency services have in common is a limited budget and often the lack of a dedicated staff to handle IT security. That money is often spent on other necessary means—all understandable in a sector where human lives are regularly at stake. But recent events in the US have demonstrated all too well that emergency services need to be well orchestrated. There is no lack of dedication from the people doing these jobs, so they should be allowed to work with the best—and safest—equipment. Equipment that we can trust to be secured.
The times, they are a-changin’. Where users once felt free to browse the Internet anonymously, post about their innermost lives on social media, and download apps with frivolity, folks are playing things a little closer to the vest these days.
No wonder Internet users are on a hunt for certain tools that will give them added privacy—and not just security—while surfing the web, either at home, in the office, or on the go.
While some might go for Tor or a proxy server to address their need for privacy, many users today embrace virtual private networks, or VPNs.
Depending on who you ask, a VPN is any and all of these:
A tunnel that sits between your computing device and the Internet
A tool that helps you stay anonymous online, preventing government surveillance, spying, and excessive data collection by big companies
A tool that encrypts your connection and masks your true IP address with one belonging to your VPN provider
A piece of software or app that lets you access private resources (like company files on your work intranet) or sites that are usually blocked in your country or region
Not all VPNs are created equal, however, and this is true regardless of which platform you use. Of the hundreds of VPN apps already out there, a notable number are categorized as unsafe—especially those that are free.
In this post, we’ll take a closer look at free VPNs for mobile devices—a category many say has the highest number of unsafe apps.
But first, the basics.
How do VPNs work?
Rob Mardisalu of TheBestVPN illustrated a quick diagram of how VPNs work—and it’s pretty much as simple as it looks.
Normally, using a VPN requires the download and installation of an app or file that we call a VPN client. Installing and running the client creates an encrypted tunnel that connects the user’s computing device to the network.
Most VPN providers ask users to register with an email address and password, which would be their account credentials, and offer a method of authentication—either via SMS, email, or QR code scanning—to verify that the user is indeed who they say they are.
Once fully registered and set up, the user can now browse the public Internet as normal, but with enhanced security and privacy.
Let’s say the user conducts a search on their browser or directly visits their bank’s official website. The VPN client then encrypts the query or data the user enters. From there, the encrypted data goes to the user’s Internet Service Provider (ISP) and then to the VPN server. The server then connects to the public Internet, pointing the user to the query results or banking website.
Regardless of which data is sent, the destination website always sees the origin of the data as the VPN server and its location—and not the user’s own IP address and location. Neat, huh?
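As a toy illustration of that address swap, the Python sketch below models the tunnel as two hops: the client's packet enters the VPN, and the packet that leaves for the destination carries the VPN server's address instead of the client's. The IP addresses are hypothetical (RFC 5737 documentation ranges), and a real VPN of course encrypts the first hop rather than merely relabeling packets.

```python
from dataclasses import dataclass


@dataclass
class Packet:
    source_ip: str
    payload: str


def send_via_vpn(client_ip, vpn_ip, data):
    """Toy model of VPN routing: the client's packet travels the
    encrypted tunnel to the VPN server, which re-emits the request
    with its own address as the source."""
    tunneled = Packet(source_ip=client_ip, payload=data)  # client -> VPN hop
    # VPN -> destination hop: same payload, VPN server's address.
    return Packet(source_ip=vpn_ip, payload=tunneled.payload)


# The destination only ever sees the VPN server's address.
pkt = send_via_vpn("203.0.113.7", "198.51.100.1", "GET /balance")
```

The asymmetry is the whole point: the destination and any observer past the VPN server see only the server's IP, while the client's ISP sees only encrypted traffic to the server.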
What VPNs don’t do
However comforting using VPNs can be, realize that they can’t be all things privacy and security for all users. There are certain functions they cannot or will not complete—and this is not limited to the kind of VPN you use.
Here are some restrictions to be aware of. VPNs don’t:
Offer full anonymity. Keeping you anonymous should be inherent in all available VPNs on the market. However, achieving full anonymity online using VPNs is nearly impossible. There will always be traces of data from you that VPNs collect, even those that don’t keep logs—and by logs, we mean browsing history, IP address(es), timestamps, and bandwidth.
Connect you to the dark web. A VPN in and of itself won’t connect you to the dark web should you wish to explore it. An onion browser, like the Tor browser, can do this for you. And many are espousing the use of both technologies—with the VPN masking the Tor traffic, so your ISP won’t know that you’re using Tor—when surfing the web.
Give users full access to their service for free. Forever. Some truly legitimate VPNs offer their services for free for a limited time. And once the trial phase expires, users must decide on whether they would pay for this VPN or look for something else free.
Protect you from law enforcement when subpoenaed. VPN providers will not allow themselves to be dragged into court if law enforcement has reason to believe you are engaging in unlawful activities online. When VPN providers are summoned to provide evidence of their users’ activities, they have zero compelling reason not to comply.
Protect you from yourself. No anti-malware company worth its salt would recommend users visit any website they want, open every email attachment, or click all the links under the sun because their security product protects them. Being careful online and avoiding risky behaviors, even when using a security product, is still an important way to protect against malware infections and fraud attempts. Users should apply the same vigilance when using VPNs.
Who uses VPNs and why?
What started out as an exclusive product for businesses to ensure the security of files shared among colleagues from different locations has become one of the world’s go-to tools for personal privacy and anonymity.
Average Internet users now have access to more than 300 VPN brands on the market, and they can be used for various purposes.
According to the latest findings on VPN usage by market research company GlobalWebIndex, the top three reasons why Internet users around the world would use a VPN service are:
to access location-restricted entertainment content
to use social networks and/or news services (which may also have location restrictions)
to maintain anonymity while browsing the web
Mind you, these aren’t new. These reasons have consistently scored high in many VPN usage studies published before.
Users from emerging markets are the top users of VPNs worldwide, particularly Indonesia at 55 percent, India at 43 percent, the UAE at 38 percent, Thailand at 38 percent, Malaysia at 38 percent, Saudi Arabia at 37 percent, the Philippines at 37 percent, Turkey at 36 percent, South Africa at 36 percent, and Singapore at 33 percent.
The report also noted that among the 40 countries studied, motivational factors for using VPNs vary. Below is a summary table of this relationship:
Mobile VPN apps are most popular
A couple more interesting takeaways from the report: A majority of younger users are surfing the Internet with VPNs, especially on mobile devices. The details are as follows:
A vast majority of Internet users aged 16-24 (74 percent) and 25-34 (67 percent) use VPNs.
Users access the Internet using VPNs on mobile devices, which in this case includes smart phones (69 percent) and tablets (33 percent).
32 percent use VPNs on mobile devices nearly daily compared to 29 percent at this frequency on a PC or laptop.
With so many (mostly younger) users adopting both mobile and desktop VPNs to view paid content or beef up privacy, it’s no wonder that Android and iOS users often opt for free mobile VPN apps instead of paid products belonging to more established names.
But parsing through hundreds of brands is no easy feat. And the more you investigate, the more difficult it is to choose. For the average user, this is too much work when all they want to do is watch Black Mirror on Netflix. And that’s likely why so many unsafe apps make their way onto the market and are installed on users’ mobile devices.
“Free” doesn’t mean “risk-free”
When it comes to free stuff on the Internet, the majority of us know that we don’t really get something for nothing. Most of the time, we pay with our data and information. If you think this doesn’t apply to free mobile VPN apps, think again.
“There is a significant problem with free VPN apps in Google Play and Apple’s App Store,” says Simon Migliano, head of research at Top10VPN, in an email interview. He further explains: “[V]ery few of the VPN providers offer any transparency about their fitness to operate such a sensitive service. The privacy policies are largely junk, while 25 percent of apps suffer DNS leaks and expose your identity. The majority are riddled with ad trackers and are glorified adware at best, spyware at worst.”
Top10VPN also noted that several of the top 20 VPN apps on both Android and iOS have ties to China.
There have been other investigations into mobile VPN apps, both free and commercial. Thanks to them, we’ve seen improvements over the years, yet some of these concerns persist. User awareness also remains severely lacking, which has helped questionable free VPN apps earn high ratings, encouraging more downloads and possibly keeping them at the top of the rankings.
In a 2016 in-depth research report [PDF] published by the Commonwealth Scientific and Industrial Research Organization (CSIRO) along with the University of New South Wales and UC Berkeley, researchers revealed that some mobile VPN apps, both paid and free, leak user traffic (84 percent for IPv6 and 66 percent for DNS), request sensitive data from users (80+ percent), employ zero traffic encryption (18 percent), and, in more than a third of cases (38 percent), contain malware or malvertising.
Traffic leaking was a problem not exclusive to free VPN apps. Researchers from Queen Mary University of London and Sapienza University of Rome had found that even commercial VPN apps were guilty of the same problem. They also found that the DNS configurations of these VPN apps could be bypassed using DNS hijacking tactics. Details of their study can be viewed in this Semantic Scholar page.
Free VPNs behaving badly
Research findings are one thing, but organizations and individuals finding and sharing their experiences of the problems surrounding free VPNs makes all the technical stuff on paper become real. Here are examples of events where free VPNs were (or continue to be) under scrutiny and called out for their misbehavior.
The Hotspot Shield complaint. Mobile VPN app developer AnchorFree, Inc. was in the limelight a couple of years ago—and not for a good reason. The Center for Democracy & Technology (CDT), a digital rights advocacy group, had filed a complaint [PDF] with the FTC for “undisclosed and unclear data sharing and traffic redirection occurring in Hotspot Shield Free VPN that should be considered unfair and deceptive trade practices under Section 5 of the FTC Act.”
HolaVPN caught red-handed. HolaVPN is one of the most recognizable free mobile VPN apps. In 2015, a spammer using the pseudonym Bui began a spam attack against 8chan, and it later came out that he or she was able to do so with the help of Luminati, a known network of proxies and a sister company to HolaVPN. Lorenzo Franceschi-Bicchierai noted in his Motherboard piece that Luminati’s website boasted of having “millions” of exit nodes. Of course, these nodes were all free HolaVPN users.
In December 2018, AV company Trend Micro revealed that it had found evidence of the former KlipVip cybercrime gang (known to spread fake AV software, or rogueware) using Luminati to conduct what researchers believe is a massive-scale ad click fraud campaign.
Innet VPN and Secnet VPN malvertising. Last April, Lawrence Abrams of BleepingComputer alerted iPhone users of some mobile VPNs taking a page out of fake AV’s book in ad promotion: scare tactics. Users clicking a rogue ad on popular sites found themselves faced with pop-up messages claiming that their mobile device was either infected or they were being tracked.
Unfortunately, that was not the first time this happened—and may not be the last. Our own Jérôme Segura saw first-hand a similar campaign exactly a year before the Bleeping Computer report, but it was pushing users to download a VPN called MyMobileSecure.
VPNs are not inherently evil
In spite of inarguable evidence of the shady side of free mobile VPN apps, the fact is not all of them are bad. This is why it’s crucial for mobile users who are currently using or looking into using a free VPN service to conduct research on which brands they can trust with their data and privacy. No one wants an app that promises one thing but does the complete opposite.
When users insist on using a free VPN service, Migliano suggests they sign up for a service based on the freemium model, as these platforms don’t have advertising, which keeps privacy intact. He also offered helpful questions users should ask themselves when picking the VPN that best fits their needs.
Also watch out for VPN reviews. They can be disguised adverts.
Finally, users have the choice to go for a paid service, which is a business model a majority of well-established and legitimate mobile VPN services follow. Or they can create their own. As not everyone is savvy enough to do the latter, the former is the next logical choice. Migliano agrees.
“The best thing you can do is pay for a VPN,” he said. “It costs money to operate a VPN network and so if you aren’t paying directly, your browsing data is being monetized. This is clearly a cruel irony given that a VPN is intended to protect a user’s privacy.”
A new Chinese Deepfake app is under fire for privacy concerns related to the use of uploaded images. (Source: CNN)
Bucking the current trend of city councils and organizations paying crooks ransom to regain control of computers, the public at large doesn’t seem to be happy with this arrangement. (Source: Help Net Security)
The classic “booby trap something students greatly desire” makes a comeback as new school year opens. (Source: The Next Web)
Many organizations will spend significant sums of money on phishing training for employees. Taking the form of regular awareness training, or even simulated phishes to test employee awareness, this is a common practice at larger companies.
However, even after training, a consistent baseline of employees will still click a malicious link from an unknown sender. Today, we’ll look at a potential reason why that might be: corporate communications often look like phishes themselves, causing confusion between legitimate and illegitimate senders.
Corporate communications templates
Below is an email template found on a Microsoft TechNet blog, used as an example of how a sysadmin can communicate with users.
While well-meaning, and providing users with pretty good instructions, this template falls afoul of phishing design in a few ways.
The large “Action Required” in red with an exclamation point creates a false sense of urgency disproportionate to the information presented.
There is no way provided to authenticate the message as legitimate corporate communications.
The email presents all information at once on the same page, irrespective of relevance to an individual user.
The link for assistance is at the bottom and suggests a generic mailbox rather than referencing a person to contact.
So what’s the harm here? Surely a user can ignore some over-the-top design and take in the intended message? One problem is that, per Harvard Business Review, the average office worker receives 120 emails per day. When operating under consistent information overload, that worker is going to take cognitive shortcuts to reduce interactions with messages not relevant to them.
So training the employee to respond reflexively to “Action Required” can cue them to do the same with malicious emails. Including walls of texts in the body of the email reinforces scanning for a call to action (especially links to click), and a lack of message authentication or human assistance ensures that if there’s any confusion about safety, the employee will err on the side of not asking for help.
Essentially, well-meaning communications with these design flaws train an overloaded employee to exhibit bad behaviors—despite anti-phishing training—and discourage seeking help. It’s no wonder that, according to the FBI, losses from business email compromise (BEC) have increased by 1,300 percent since January 2015, and now total over $3 billion worldwide.
With this background in mind, what happens when the employee gets a message like this?
Of note is that both phishes are more accessible to a skimming reader than the Microsoft corporate notification, and the calls to action are less dramatic. The PayPal phish in particular has a passable logo and mimics the language of an actual account alert reasonably well.
A closer reader would spot incongruities right away. The first phish would be caught instantly. For the second, the sender domain does not belong to PayPal; if you copy the link and paste it into a text editor, it points to an infected WordPress site rather than PayPal; and the boxed numbers with instructions look odd. But an employee receiving 120 emails a day is not a close reader. The phishes are “good enough.”
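The domain check a careful reader performs by hand is mechanical enough to sketch in a few lines of Python. Everything here is illustrative: the helper name and the sample URLs are made up, and a real mail filter would do far more than compare hostnames.

```python
from urllib.parse import urlparse

def suspicious_links(claimed_domain, urls):
    """Flag links whose destination isn't the domain the email claims to be from."""
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        # Accept the claimed domain itself or any of its subdomains.
        if host != claimed_domain and not host.endswith("." + claimed_domain):
            flagged.append(url)
    return flagged

# A PayPal-branded email whose second link points at a compromised WordPress site
links = [
    "https://www.paypal.com/myaccount/",
    "https://compromised-blog.example/wp-content/uploads/login.php",
]
print(suspicious_links("paypal.com", links))
```

This is exactly the comparison that pasting a link into a text editor performs manually; the point is how little effort the check takes compared to how rarely an overloaded employee makes it.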
A safer alternative
So how do we do better? Let’s look at a notification email from Airbnb.
First and foremost, the notification is brief. The entire content of relevance to the user is communicated in a single sentence, made large and bold for readability upfront. What follows are details for the end user to authenticate the transaction, listed in an order of probable descending interest to the user.
Next is a clear path to obtain assistance, voiced in language suggestive of a person at the other end. Last is a brief explanation of why the user should consider the communication legitimate, with multiple use cases provided to set expectations.
The myth of the stupid user
Industry discussion of phishing and click-through rates centers largely on how awful and ignorant users are. Solutions proffered generally concern themselves with restricting email functionality, “effective” shame-and-blame punishments for clicking the malicious link, and repetitive phishing training that neither aligns with how users engage with email nor provides appropriate tools for responding to ambiguous emails, like the notification template above.
All of this is a waste of time and budget.
If an organization has a “stupid user” problem, a more effective start to address it would be looking at design cues in that user’s environment. How many emails are they getting a day, and of those, how many look functionally identical? How many aren’t really relevant or useful to their job?
When network defenders send out communications to the company, do they look or feel like phishes? If the user gets a sketchy email, who’s available to help them? Do they know who that person is, if anyone? Structuring employees’ email loads such that they follow the steps below will both “smarten up” an employee quickly and cost nothing. Employees should therefore:
Have a light enough burden to engage critically with messages
Get corporate comms tailored to their job requirements
Have an easy way to authenticate that trusted senders are who they say they are
Be able to get help with zero friction
So before your organization engages in more wailing and gnashing of teeth over the “stupid user” and the cost of training and prevention, think for a long while about how communication happens in your company, where the pain points are, and how you can optimize that workflow.
After all, wouldn’t you like to get less email, too?
As remote working has become standard practice, employees are working from anywhere and using any device they can to get the job done. That means repeated connections to unsecured public Wi-Fi networks—at a coffee shop or juice bar, for example—and higher risks for data leaks from lost, misplaced, or stolen devices.
Think about it.
Let’s say your remote employee uses his personal smart phone to access the company’s cloud services, where he can view, share, and make changes to confidential documents like financial spreadsheets, presentations, and marketing materials. Let’s say he also logs into company email on his device, and he downloads a few copies of important files directly onto his phone.
Now, imagine what happens if, by accident, he loses his device. Worse, imagine if he doesn’t use a passcode to unlock his phone, making his device a treasure trove of company data with no way to secure it.
Recent data shows these scenarios aren’t just hypotheticals—they’re real risks. According to a Ponemon Institute study, from 2016 through 2018, the average number of cyber incidents involving employee or contractor negligence has increased by 26 percent.
To better understand the challenges and best practices for businesses with remote workforces, Malwarebytes teamed up with IDG Connect to produce the white paper, “Lattes, lunch, and VPNs: securing remote workers the right way.” In the paper, we show how modern businesses require modern cybersecurity, and how modern cybersecurity means more than just implementing the latest tech. It also means implementing good governance.
Below are a few actionable tips from our report, detailing how companies should protect both employer-provided and personal devices, along with securing access to company networks and cloud servers.
If you want to dive deeper and learn about segmented networks, VPNs, security awareness trainings, and how to choose the right antivirus solution, you can read the full report here.
1. Provide what is necessary for an employee to succeed—both in devices and data access.
More devices means more points of access, and more points of access means more vulnerability. While it can be tempting to offer every new employee the perks of the latest smart phone—even if they work remotely—you should remember that not every employee needs the latest device to succeed in their job.
For example, if your customer support team routinely assists customers outside the country, they likely need devices with international calling plans. If your sales representatives are meeting clients out in the field, they likely need smart devices with GPS services and mapping apps. Your front desk staff, on the other hand, might not need smart devices at all.
To ensure that your company’s sensitive data is not inadvertently accessed by more devices than necessary, provide your employees with only the devices they need.
Also, in the same way that not every employee needs the latest device, not every employee needs wholesale access to your company’s data and cloud accounts, either.
Your marketing team probably doesn’t need blanket access to your financials, and the majority of your employees don’t need to rifle through your company’s legal briefs—assuming you’re not in any kind of legal predicament, that is.
Instead, evaluate which employees need to access what data through a “role-based access control” (RBAC) model. The most sensitive data should only be accessible on a need-to-know basis. If an employee has no use for that data, or for the platform it is shared across, then they don’t need the login credentials to access it.
Remember, the more devices you offer and the more access that employees are given, the easier it is for a third party or a rogue employee to inappropriately acquire data. Lower your risk of misplaced and stolen data by giving your employees only the tools and access they need.
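To make the RBAC idea concrete, here’s a minimal sketch in Python. The roles, resources, and permission table are hypothetical; a real deployment would use the access-control features of its identity provider or cloud platform rather than hand-rolled code.

```python
# Hypothetical role-to-resource permission table
ROLE_PERMISSIONS = {
    "finance":   {"financials", "invoices"},
    "marketing": {"campaign_assets", "brand_guidelines"},
    "legal":     {"legal_briefs", "contracts"},
}

def can_access(role, resource):
    """Need-to-know check: a role sees only the resources assigned to it."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("finance", "financials"))    # finance needs the books
print(can_access("marketing", "financials"))  # marketing doesn't
```

The useful property is the default: any role or resource not explicitly listed gets no access at all.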
2. Require passcodes and passwords on all company-provided devices.
Just like you use passcodes and passwords to protect your personal devices—your laptop, your smart phone, your tablet—you’ll want to require any employee that uses an employer-provided device to do the same.
Neglecting this simple security step produces an outsized vulnerability. If an unsecured device is lost or stolen, every confidential piece of information stored on that device, including human resources information, client details, presentations, and research, is now accessible by someone outside the company.
If your employees also use online platforms that keep them automatically logged in, then all of that information becomes vulnerable, too. Company emails, worktime Slack chats, documents created and shared on Dropbox, even employee benefits information, could all be wrongfully accessed.
To keep up with the multitude of workplace applications, software, and browser-based utilities, we recommend organizations use password managers with two-factor authentication (2FA). This not only saves employees from having to remember dozens of passwords, but also provides more secure access to company data.
3. Use single sign-on (SSO) and 2FA for company services.
Like we said above, the loss of a company device sometimes leaks more than just locally stored data; network and/or cloud-based data that can be accessed from the device is exposed, too.
To limit this vulnerability, implement an SSO solution for employees to access the variety of your available platforms.
Single sign-on offers two immediate benefits. One, your employees don’t need to remember a series of passwords for every application, from the company’s travel request service to its intranet homepage. Two, you can set up an SSO service to require a secondary form of authentication—often a text message sent to a separate mobile device with a unique code—when employees sign in.
By utilizing these two features, even if your employee has their company device stolen, the thief won’t be able to log into any important online accounts that store other sensitive company data.
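For the curious, the “unique code” behind most app-based second factors is a time-based one-time password (TOTP, RFC 6238), and it can be computed with nothing but the standard library. This is a sketch for illustration only; the secret below is the RFC’s published test value, not anything you’d use in production.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time code."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # → 94287082
```

Because the code depends on the current time and a shared secret, a thief who steals only the device password still can’t produce a valid second factor.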
Two of the most popular single sign-on providers for small and medium businesses are Okta and OneLogin.
4. Install remote wiping capabilities on company-provided devices.
So, your devices require passwords, and your company’s online resources have two-factor authentication enabled. Good.
But what happens if an employee goes turncoat? The above security measures help when a device is stolen or lost, but what happens when the threat is coming from inside, and they already have all the necessary credentials to plunder company files?
It might sound like an extreme case, but you don’t have to scroll far down the Google search results of “employee steals company data” to find how often this happens.
To limit this threat, you should install remote-wiping capabilities on your company-provided devices. This type of software often enables companies to not just wipe a device that is out of physical reach, but also to locate it and lock out the current user.
Phone manufacturer-provided options, like Find My iPhone on Apple devices and Find My Mobile on Samsung devices, let device owners locate a device, lock its screen, and erase all the data stored locally.
5. Implement best practices for a Bring Your Own Device (BYOD) policy.
When it comes to remote workers, implementing a Bring Your Own Device policy makes sense. Employees often prefer using mobile devices and laptops that they already know how to use, rather than having to learn a new device and perhaps a new operating system. Further, the hardware costs to your business are clearly lower.
But you should know the risks of having your employees accomplish their work solely on their personal devices.
Like we said above, if your employee loses a personal device that they use to store and access sensitive company data, then that data is at risk of theft and wrongful use. Also, when employees rely on their personal machines to connect to public, unsecured Wi-Fi networks, they could be vulnerable to man-in-the-middle attacks, in which unseen threat actors can peer into the traffic that is being sent and received by their machine.
Further, while the hardware costs for BYOD are lower, a company sometimes spends more time ensuring that employees’ personal devices can run required software, which might decrease the productivity of your IT department.
Finally, if a personal device is used by multiple people—which is not uncommon between romantic partners and family members—then a non-malicious third party could accidentally access, distribute, or delete sensitive company data.
To address these risks, you could consider implementing some of the following best practices for the personal devices that your employees use to do their jobs:
Require the encryption of all local data on personal devices.
Require a passcode on all personal devices.
Enable “Find My iPhone,” “Find My Mobile,” or similar features on personal devices.
Disallow jailbreaking of personal devices.
Create an approved device list for employees.
It’s up to you which practices you want to implement. You should find a balance between securing your employees and preserving the trust that comes with a BYOD policy.
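A checklist like the one above lends itself to a simple automated compliance gate, the kind of logic a mobile device management (MDM) product evaluates before letting a device touch company resources. This sketch is purely illustrative; the field names are made up, and real MDM platforms expose their own policy engines.

```python
# Hypothetical device attributes reported by an MDM agent
REQUIRED = {"encrypted", "has_passcode", "find_my_device", "not_jailbroken"}

def compliant(device):
    """A device passes only if every required attribute checks out."""
    return REQUIRED <= {attr for attr, ok in device.items() if ok}

phone = {
    "encrypted": True,
    "has_passcode": True,
    "find_my_device": True,
    "not_jailbroken": False,  # jailbroken device: fails the policy
}
print(compliant(phone))
```

The all-or-nothing shape matters: a device missing any single safeguard is treated as non-compliant rather than “mostly fine.”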
Securing your company’s remote workforce requires a multi-pronged approach that takes into account threat actors, human error, and simple forgetfulness. By using some of the methods above, we hope you can keep your business, your employees, and your data that much safer.
Malware was discovered in a Google Play listed PDF-maker app that had over 100 million downloads. (Source: Techspot)
Insurance companies are fueling a rise in ransomware attacks by telling their customers to take the easy way to solve their problems. (Source: Pro Publica)
Hackers are actively trying to steal passwords from two widely used VPNs using unfixed vulnerabilities. (Source: ArsTechnica)
A new variant of the Asruex Backdoor targets vulnerabilities that were discovered more than six years ago in Adobe Acrobat, Adobe Reader, and Microsoft Office software. (Source: DarkReading)
In a first-ever crime committed from space, a NASA astronaut has been accused of accessing mails and bank accounts of her estranged spouse while aboard the International Space Station (ISS). (Source: TechWorm)
Command and control (C2) servers for the Emotet botnet appear to have resumed activity and deliver binaries once more. (Source: BleepingComputer)
A security researcher has found a critical vulnerability in the blockchain-based voting system Russian officials plan to use next month for the 2019 Moscow City Duma election. (Source: ZDNet)
The French National Gendarmerie announced the successful takedown of the wide-spread RETADUP botnet, remotely disinfecting more than 850,000 computers worldwide. (Source: The Hacker News)
The developers behind TrickBot have modified the banking trojan to target customers of major mobile carriers, researchers have reported. (Source: SCMagazine)
A coin-mining malware infection previously only seen on Arm-powered IoT devices has made the jump to Intel systems. (Source: The Register)