What to do when you receive an extortion e-mail

In the last few weeks, there has been an upswing in people receiving threatening extortion e-mails demanding payment to avoid the release of sensitive information. Most of the time, these are what we call “sextortion” e-mails, as they claim that malware on your computer has captured embarrassing photos of you through the webcam, but there are other variants on the same theme.

These extortion e-mails are nothing new, but with the recent increase in frequency, many people are looking for guidance. If you have received such an e-mail message and want to know how you should respond, you’re in the right place. Read on!

Extortion claims

These e-mail messages are not all exactly the same, but they do have fairly common characteristics. Consider the following example:

Example extortion e-mail text

This is fairly representative of many examples. It starts out by telling you that the scammer knows one of your passwords, and the password really IS one of your passwords, which immediately ratchets up the fear and puts you in a mindset to believe that the rest of the message is also true. (Hint: it is not.)

Next, it tells you that the scammer knows other things about you, including photos of you doing something embarrassing, captured through malware on the computer. The message threatens to send these photos to people you know. Some variants may not involve this kind of “sextortion,” but the general pattern of doing something damaging with data stolen from the user is the same.

In order to prevent this, the scammer demands to be paid, usually in a cryptocurrency called Bitcoin. There’s usually a time limit for the payment, to really put the pressure on and encourage fast action rather than seeking help.

Are the extortion claims true?

With one exception, none of this is true. There is no malware involved. The scammer does not have any of the claimed information. If you don’t pay the demanded sum, nothing bad will happen. For the most part, these messages can simply be ignored.

However, the one part that is true is the password—which is the part that makes everything else seem more believable. The password did not, however, come from malware on the computer. Instead, it came from a third-party data breach.

What happens is that a site you have an account on gets breached, and someone is able to extract a bunch of e-mail addresses and passwords. How this happens is not particularly important for our purposes here, but the effect is that two pieces of your personal information may have been published to various “dark web” sites: your e-mail address and a password used with an account associated with that e-mail address.

This is very similar to someone writing your phone number on the wall in a bathroom stall: it becomes public knowledge, for anyone who knows where to look, and it can lead to a lot of harassment.

Once this information has become public knowledge, criminals can take these lists and send mass e-mail messages to everyone on them, including the password associated with each e-mail address. This is the real source of the seed of truth in these messages, not the fictitious malware the scammers want you to believe you’re infected with.

So I can ignore this, right?

Well, yes and no. Yes, the threat itself is an empty one, since there’s no malware. However, there’s a real danger under the surface: you have a password that has become public knowledge!

If the password provided is an old one that you are no longer using, then you’re golden. You’ve got no need to do anything further. However, for many people, the password is one that is still in active use, and that presents a problem. This particular scammer decided to use the password to scare you, but there are other criminals out there who might decide to use it for more nefarious purposes, like taking over your online life!

To prevent this from happening, there are a few steps you’ll need to take.

Step 1: Change your password

First and foremost, on any account using the password that was provided, change your password. While you’re at it, though, let’s make sure that it’s a good strong one. The best passwords are long, random ones… for example, “vdBdq8GoDh8ELGm$qRdgXVTq.” The longer the better.
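If you’re curious what generating a password like that looks like under the hood, here’s a minimal sketch in Python using the standard library’s `secrets` module (the character set and the 24-character default length are our own arbitrary choices, not a standard):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random password from letters, digits, and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!$%&*@#"
    # secrets.choice draws from a cryptographically secure random source,
    # unlike random.choice, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. "vdBdq8GoDh8ELGm$qRdgXVTq"
```

In practice, your password manager’s built-in generator does exactly this job for you.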

It’s also important to use a different password on every site. Password breaches will always happen, and if you reuse a password on multiple sites, a breach at one site can give an attacker access to your accounts across many others.

Okay, I hear you. No, I’m not expecting you to memorize ridiculous passwords for every site you have an account on. There’s a solution to that problem.

Step 2: Use a password manager

A password manager is a program designed to remember your passwords for you. Password managers can keep a list of not just your passwords, but also what site you’ve used them on, the username you use to log in to that site, any security questions you use on that site, etc.
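As a rough illustration, the record a password manager keeps for each account might look something like this (a toy sketch of the data structure only; a real password manager encrypts all of this at rest under your master password):

```python
from dataclasses import dataclass, field

@dataclass
class PasswordEntry:
    """One entry in a password manager's vault (illustrative only)."""
    site: str
    username: str
    password: str
    # Free-form notes, e.g. security questions with made-up answers.
    notes: dict = field(default_factory=dict)

entry = PasswordEntry(
    site="example.com",
    username="pat@example.com",
    password="vdBdq8GoDh8ELGm$qRdgXVTq",
    notes={"First car?": "Millennium Falcon"},
)
```

The `notes` field is where the made-up security-question answers discussed later would live.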

A password manager can be as simple as a notebook you keep in a drawer in your desk. Of course, that’s also something that can be read by anyone with access to your office, and it’s not something you can easily carry around with you.

Password managers more typically come in the form of software, which can encrypt your passwords with a single master password, help you share them between devices, and much more.

You may have a password manager right at your fingertips already, as some web browsers have them built in. Examples include iCloud Keychain in Safari, Google Password Manager in Chrome, and the Firefox Password Manager.

Safari’s password management settings

If you use a more obscure browser, don’t want to use the built-in password manager, or just need something more powerful, you can consider something like 1Password or LastPass.

Whatever fits your particular needs, use it. A password manager is the only way you can realistically have long, strong passwords that are different on every site. Your password manager’s “master” password becomes the only password you need to remember.

Creating a master password for your password manager follows the same simple rule as your regular passwords—the longer the better. Since you’ll be typing this password in regularly, it may be easier to use a passphrase, which is a string of words that has no direct meaning to you. Avoid birthdates and street addresses and lean into the chaos of your brain’s random word generator: something like “cantankerousbuffalopotteryhypothesis.”
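A passphrase like that can be generated rather than invented. Here’s a minimal sketch; the short word list is purely for illustration (a real generator would draw from thousands of words, such as the EFF diceware lists):

```python
import secrets

# Illustrative only -- a real word list should contain thousands of entries.
WORDS = [
    "cantankerous", "buffalo", "pottery", "hypothesis",
    "gazebo", "trombone", "velvet", "asteroid",
]

def generate_passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words into a single passphrase."""
    return "".join(secrets.choice(WORDS) for _ in range(n_words))

print(generate_passphrase())  # e.g. "velvetgazeboasteroidpottery"
```

The security comes from the size of the word list and the number of words, not from the words themselves being obscure.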

Whoa, hold on a minute! Don’t walk away yet. Having good passwords and a way to store them is only a small part of the battle. After all, a good password is no good as soon as the site gets breached by a hacker and spills all its passwords. Believe it or not, there’s something else beyond the password.

Step 3: Use two-factor authentication

Two-factor authentication (abbreviated 2FA) is a secondary piece of information, in addition to a password, that can be required for you to log in to a website. It is typically some kind of code—most commonly four or six digits—that you must enter during the login process.

The most common way to receive these codes is via text message on your phone. However, they can also be codes that change every 30 seconds, which are generated by a variety of different apps, such as Authy, Google Authenticator, or some more full-featured password managers. These are more secure than texted codes, but also less commonly supported, and codes sent to your phone via text message are better than nothing.

2FA token generated by Authy on an iPhone

Whatever type your accounts support, use it. It can take some time to set this up these days, when people often have a LOT of accounts, so just take it a few at a time until you’re done.

For help figuring out what kinds of 2FA a site supports, see the Two Factor Auth site. You can search it for the site you’re interested in, and it will tell you what types of 2FA that site supports (SMS and Software Token being the two types described above) and link you to the site’s documentation on how to set up 2FA.

For more information about 2FA, see Duo Security’s Two-Factor Authentication: The Basics.

Step 3a: What if there’s no 2FA?

Some sites don’t support 2FA, instead only supporting something like security questions… you know, “What’s the name of your first pet,” or “What street did you live on growing up,” and any number of other similar questions. Here’s the problem with these questions: they’re easy to guess, and the information may be public knowledge.

So, here’s what you do if security questions are all you have to secure a site: lie! Never, ever use true answers to security questions. Instead, make something up. For example, maybe say your first car was a “Millennium Falcon.” Or maybe you drove an “avocado toast.” Even better, say you drove a “dknO6RF%an!Fdke8.”

By now, I’m sure you’re not asking how you’re supposed to remember these ridiculous answers, because you know what the answer will be already: use your password manager. Most password managers support arbitrary notes, so add both the questions and the nonsensical answers to a note for that login in your password manager.

Wrapping up

If you skipped to the end without reading the details (we hope you did not), here’s the tl;dr: these messages are fake, there is no malware involved, and the only thing to be concerned about is the fact that one of your passwords is floating around in cyberspace.

Once you have followed all the instructions above to secure your online accounts, you’ll have nothing left to do, other than mark the message as junk and delete it (if you haven’t already).

Keep in mind that no antivirus software can prevent you from seeing these types of extortion messages. E-mail systems or clients that do junk mail (spam) filtering can help to catch some of these, but they cannot be relied on to catch all of them. These scammers are sneaky, and are good at evading junk mail filtering.

The fact that you keep receiving these extortion messages does not represent a security issue, and you do not need to be afraid of these thugs. They are only a threat to your wallet, and only if you fall for their tricks and send them money. Otherwise, they cannot do you any harm… so long as you’ve secured your accounts so they can’t use your leaked password against you.

The post What to do when you receive an extortion e-mail appeared first on Malwarebytes Labs.

Cybersecurity and the economy: when recession strikes

Cybercrime and the economy have always been intertwined, but with COVID-19 on the road to causing a seemingly inevitable global recession, many are asking what, exactly, will the impact be on cybercrime. Will criminals step up and increase malware production, ramp up phishing attacks, do whatever it takes to pull in some cash? Or will it cause a little downturn in malware making and other dubious dealings?

Cyber recession: setting the scene

Our key point of observation is 2009, during the last global recession. While searching for information, one reference that kept coming up was a paper put together by a team of researchers from around the world called Crime online: cybercrime and illegal innovation. Almost every article I found referenced it in some way, and it was front and centre in every writeup. It’s so pervasive that even articles written in the last 12 months tend to link to it when talking about the impact of recessions on professional computer criminals.

The Telegraph, Metro, OneIndia, and many more all focused on the impact the recession would have, as per the research paper. The only problem? Having read it, I found the paper mentions a recession three times, two of which are the same reused sentence stating that a global recession will likely increase the chances of people moving into cybercrime. For all the lasting impact references to this paper have had down the years, that’s essentially what all of the linkage is based on: someone saying “here comes the cybercrime recession, probably”.

The Past: Making predictions

Mostly, it’s a very solid and wide-ranging paper covering a large range of threat developments, from credit card fraud and phishing to malware authoring and “value chain analysis.” All very interesting, but outside of many claims that technology x or people getting better at y would result in probable increases in z, nothing really leapt out at me to say “recession is going to explode bad activity online, and this is why”. Is it possible people on the receiving end of the presumed press release saw the single line about recession and pinned their entire piece around it? Who knows, but there didn’t seem to be an awful lot to go on.

Putting the puzzle together

In fairness, it’s not just that one research paper taking up the entirety of 2009’s “here comes the recession hacker boom” content. It was up for discussion, and there’s no harm in considering the problem. A panel back in 2009 talked about how a recession creates “more cybercriminals” who then go on to do a lot more cybercriminaling. There’s a fair bit of assumption at work here: that a big slice of people hit by a recession will automatically turn to crime, and computer crime at that. If resources are tight and money is short, if people are so severely impacted by a recession that they need to turn to crime to survive, will they:

  1. Invest time, electricity, and stamina they may not have on crash course hacking, malware, phishing, digging around on forums for someone—anyone—to help them so they can maybe go off and rip someone off online with no guarantee any of it will work; or
  2. Go out and steal some food or break into physical objects such as cars?

Personally, I’d be in Camp B all the way. Camp A seems like vanishingly slim returns all round.

Wages down, crime up? Not so simple

When a recession hits, do criminals come creeping out of the woodwork? More to the point, do we end up with whole new waves of criminals? We have a few data points we can draw on for this. When major recessions and downturns have struck, crime rates can actually fall significantly. Apart from anything else, it’s quite tricky for career burglars to go about their business when economic factors are keeping people at home.

Throw a global pandemic into the mix which relies on as many people as possible staying indoors whether working or not, and it’s time to get a new criminal enterprise. The question is what, specifically, that criminal enterprise would involve. Computers or something else?

Driving the direction of technological attacks

While many folks seem to think cybercrime is the perfect place to go for replacement crime activities, the reality is it’s not quite that straightforward. In more normal times, the shifts inside online crime as a whole are an ebb and flow between different types of attack, as opposed to some sort of wholesale digital stampede to do something different.

For a while now, we’ve seen consumer detections decrease while their business counterparts go up due to the juicy stuff being locked away behind corporate firewalls. Now, with so many people working from home, we expect to see cybercriminals modify their approach somewhat and start going back to poking around home computers (or at least, work computers suddenly on a home network).

Here comes the massive caveat:

It’s worth mentioning that for every “crime goes down during a recession” piece you know of, you’ll always find a few others claiming the opposite. You want confusing? Have fun with the first page of search results in Google should you want to do some digging of your own:


Criminology and sociology aren’t my fields of expertise, and I don’t pretend they are. I’m just highlighting the potentially significant shifts in data analysis for anyone trying to figure out the cybercrime/recession link, because even the non-cybercrime data is hard to stack up one way or another, depending on which data is used and who is telling the story.

What about good old infection / attack numbers? Is it even possible to dust off a big book of figures from more than a decade ago?

Playing the numbers

The answer is “sort of”, and “very cautiously”. Cybercrime from last year tends to be somewhat old hat, never mind something from 5 or 10 years ago which often looks as though it’s landed here from another planet. Everything and anything could potentially be different, from infection types, to spreading techniques, to operating systems and security tools, even down to the way everybody from security vendors and governments tally up their figures.

Having said that, there are some interesting snippets of information buried in the pile. The Great Recession hit in 2009, after the build-up of the 2007–08 financial crisis. A UK government-hosted cybercrime report from 2013 notes that many aspects of internet fraud dipped around the time of the year-long recession, with higher tallies on either side of it, depending on attack type.

“Internet enabled card-not-present fraud” (catchy!) recorded around £131 million in losses in 2010, down from a peak of about £181 million in 2008, though this is a partial estimate. Online banking fraud hit a peak of £59.7 million in the year of the recession before collapsing to £39.6 million by 2012. Even so, Financial Fraud Action reported “just” 50,000 banking phishes in 2009 and 256,000 by 2012.

The malware explosion of 2012 onwards

Numbers are somewhat tricky to come by, but not impossible. Although this AV Test chart for overall malware development begins with 2011, you can see the full chart in this 2015/16 PDF document, which ranges from 1.7 million samples in 2005 all the way up to some 578 million in 2016(!). From 2007 onwards, the figure increases year on year by anything between 10 and 20 million, with nothing unusual about 2009 compared to the others. In fact, it isn’t until 2012/13 that the numbers begin to explode into the stratosphere. The one thing I mainly remember about 2009 in terms of security was the prevalence of worms: Sality, Conficker, and others.

In terms of *new* malware created per year, another AV Test report (2017/18) is similarly illuminating. Once again, 2009 isn’t particularly notable, whereas 2012 seems to be the point where things kick into high gear, remaining that way until 2016, when things take a small dip.

Elsewhere, though, different types of fraud received a boost. Internet fraud losses were up to the tune of 33 percent in 2008, though your mileage may vary on taking the final year of the financial crisis and tying it specifically to 2009, the period generally accepted as the recession itself. However you stack it up, it’s fair to say some types of crime went up and some went down, as expected—or at least, nothing exploded the way you’d think it might.

Present: The cybers will get us

If we wind ourselves forward to the past few years, we see talk of cybercrime specifically being a potential cause for a possible recession. In 2018, the fear of a massive attack on banking systems worldwide was touted as the way we’d all be dragged into recession town, population: us. The way this was supposed to happen is as follows:

  1. Rogue nation state or someone with equivalent resources somehow causes a massive “cashout strike”, where a huge wave of fraudulent withdrawals happens simultaneously, on such a scale that the banks all fall over. Yes, this is quite speculative.
  2. A script kiddy does…something…malicious and everything breaks. This is even more speculative.

That’s, uh, pretty much it. The article itself mentions that the banks would probably return to normal once functionality is restored, and if you’re undercutting your own “this is bad” point with “actually not really” then in all fairness it’s probably not how civilization is brought to its knees.

Elsewhere, we have another prediction of cyber related recession antics from 2019. Once again, the trigger is going to be some sort of undefined bank exploit / attack where the financial sector comes crashing down around our ears. The fascinating part is that the article begins by stating that a recession is definitely going to happen “within 2 years”. Well, they were correct – but not for the reasons stated. As it turns out, the cybers getting us might have been a bit more preferable to what came along in 2020…

(Potential) future: 2020 and beyond

As we’ve seen so far, computer criminals deciding to shuffle the deck and throw it out the window is primarily based on what-if scenarios ranging from unlikely and incredibly vague to unlikely and a bit less vague. Dusting off the crystal ball is an interesting exercise, but the reality of the situation is that the current financial meltdown came hand in hand with a virus of the non-digital kind.

Right now, we can’t move for conflicting reports about the pandemic itself. On the one hand, you have ransomware authors claiming they won’t target hospitals during the pandemic. This isn’t entirely altruistic; they must know hammering health services will attract unwanted legal attention in the fallout. Having said that: here’s a bunch of health services under fire from hack attacks during the pandemic. As before, some types go down, some go up. It isn’t uniform, and it’s very difficult to make sense of so much conflicting data.

Elsewhere, we have organizations reporting “five-fold increases” in cyber-attacks. By the same token, we have entities such as Microsoft and NCSC claiming the overall levels of cyber-crime aren’t going up. Criminals don’t seem to be making more money off the back of COVID-19 either.

That’s all well and good for scammers riding the coat-tails of the pandemic in the here and now, and numbers could change dramatically as time goes on. How about any future-based, lasting recession?

My entirely unscientific guess – and that’s all we can do, guess – is that even accounting for any new recession, cybercrime will just keep on keeping on and expand or contract at its own pace if it follows the same general pattern we saw in 2009. We’re in an unprecedented situation for technology, and may need to wait till the smoke clears to figure out what we do next. Believe me when I say I’m as fascinated as you are to see where it ends up.

Let’s just hope it’s a little bit more preferable to what we have right now.


VPNs are mainstream, which is good news

Virtual private networks (VPNs) have been growing in popularity for the last three years, a notable trend revealed in a collaborative report [PDF] by Top10VPN and GlobalWebIndex. This year is no different.

With a majority of the world’s internet users in isolation due to the COVID-19 global pandemic, an increase in VPN usage is likely and expected, especially with so many people moving their regular work from offices to their homes. VPNs are at their best in a time like this, when employees who cannot be physically on office premises need to securely connect and access sensitive files, local apps, and other internal resources to do their jobs.

A jump in work-from-home employees isn’t the only reason why VPNs are in such high demand. If anything, their steady growth was suddenly accelerated by the effects of the current pandemic, producing a historic spike in usage as internet users are thrust into a “new normal” of living life closer to family and away from colleagues, extended family, friends, and strangers.

However, there are other factors at play when it comes to motivations for using VPNs. The report, entitled “The Global VPN Usage Report 2020”, sheds light on these and more. Let’s take a look.

Current VPN usage trend

Why use VPNs?

More than 30 percent of internet users now use VPNs, with the heaviest users in the Asia and Middle East & Africa regions. Specifically, Indonesia and India—at 61 percent and 45 percent, respectively—have the biggest numbers of VPN users compared to other countries. As you may recall, the Indonesian government has made attempts to filter the content its citizens see online, especially on social media platforms like Facebook, Twitter, and Reddit. The use of certain communication channels, such as WhatsApp, was also restricted.

Both the Middle East & Africa (MEA) and the Asia Pacific (APAC) regions are heavy users of VPN. (Courtesy of Top10VPN and GlobalWebIndex)

It’s fair to say that some VPN growth actually stems from attempts to enact censorship over a population. Note that while VPN usage is high in areas where government repression is heaviest, these are also countries where the use of a VPN is legal.

Perhaps surprisingly, democratic countries like Australia (69 percent) and the Netherlands (76 percent) have also seen notable market growth over the three-year period.

“In 2017, the Netherlands introduced a law that gave the intelligence services the right to wiretap online communications around suspects on a large scale and store the data for a period of 3 years,” explains Pieter Arntz, malware intelligence researcher at Malwarebytes, regarding this trend. “For that reason, the law was called the ‘Sleepwet’ (or dragnet law). Amnesty International and local privacy advocates objected to the scale and the long retention period. Since its introduction, we have seen a big rise in the use of VPNs in the Netherlands.”

A data retention law that came into effect that year in Australia is the likely trigger for citizens there to start using VPNs.

The report also outlines other reasons why people use VPNs.

The paradigm has shifted. VPN users have typically claimed they want to access entertainment content they cannot normally reach—a reason that currently ranks only 6th. (Courtesy of Top10VPN and GlobalWebIndex)

In some countries, government surveillance isn’t a massive concern. What makes citizens there opt for VPNs is hiding their browsing activities from potential snoopers, which might be their ISP, advertisers, or threat actors.

Who uses VPNs?

For every 10 internet users, 3 use VPNs, according to the report.

Below is a global profile of who uses VPNs based on demographic data collected for this study. A VPN user is typically:

  • Male (36 percent, compared to 26 percent female)
  • Young (an average of 37 percent across Gen Y and Gen Z users, compared to only 21 percent across Gen X and Baby Boomers) *
  • More educated (an average of 37 percent across college/university students and post-graduates, compared to users whose schooling ended at age 18 or below)
  • Mobile users (64 percent, compared to 62 percent on PC/laptop)

*Older generations are notably catching up, though.

Heavy users in the APAC and MEA regions are young users who are “more urban and more affluent, relative to the rest of the population”. They are also more comfortable with digital tools.

What’s in a user’s VPN wish list?

Most users (72 percent) in the US and UK use free VPNs, compared to those who opt to pay (36 percent). For payers, the most common reason to pay is to avoid having their information shared with third parties (54 percent).

When looking for a VPN, users prefer one with a reliable connection (54 percent), ease of use (54 percent), speed (54 percent), clear privacy/logging policies (43 percent), and a reasonable price (42 percent).

What attitudes or behaviors do VPN users have?

VPN users are more likely to be consistent with how they protect their online privacy than someone who doesn’t use a VPN. This means they use other measures like deleting browser cookies and using browsers that promote private browsing.

The report also found that internet users are at least aware that protecting their privacy online is important, but many don’t know how to do so. Even those deemed privacy-conscious are mostly not using VPNs.

When it comes to frequency of use, users in the US and UK tend to use VPNs every day for their daily browsing activities, not just for more private browsing. Younger users in these countries also say they see VPNs primarily as a privacy tool.

The road to safer surfing

It’s always interesting to take note of trends, motivations, and even buying behavior. However, other points in the report merit highlighting. For one, many users associate VPNs with the word “secure”, although this isn’t always the case. This is particularly true on mobile devices.

When it comes to finding “the one” VPN for you, it is no longer enough to just take other people’s word for it. It is more crucial than ever for users to go hands-on and experience the products themselves. It is also important to do a little investigative work on the company behind the software or service you are eyeing. And when you do, please remember: ask the right questions.

Good luck!


Threat actors release Troldesh decryption keys

A GitHub user claiming to represent the authors of the Troldesh ransomware, who call themselves the “Shade team”, published this statement last Sunday:

“We are the team which created a trojan-encryptor mostly known as Shade, Troldesh or Encoder.858. In fact, we stopped its distribution in the end of 2019. Now we made a decision to put the last point in this story and to publish all the decryption keys we have (over 750 thousands at all). We are also publishing our decryption soft; we also hope that, having the keys, antivirus companies will issue their own more user-friendly decryption tools. All other data related to our activity (including the source codes of the trojan) was irrevocably destroyed. We apologize to all the victims of the trojan and hope that the keys we published will help them to recover their data.”

Are these the real Troldesh decryption keys?

Yes. Since the statement and the keys were published, our friends at Kaspersky have confirmed the validity of the keys and are working on a decryption tool. That tool will be added to the No More Ransom project. The “No More Ransom” website is an initiative by the National High Tech Crime Unit of the Dutch police, Europol’s European Cybercrime Centre, Kaspersky, and McAfee, with the goal of helping victims of ransomware retrieve their encrypted data without having to pay the criminals.

In the past, a few decryption tools for some of the Troldesh variants have already been published on the “No More Ransom” website. We will update this post when the Kaspersky decryptor is released, and we would like to warn against following the instructions on GitHub unless you are a very skilled user. The few extra days of waiting shouldn’t hurt that much, and a failed attempt may render the files completely useless.

When is it useful to use the Troldesh decryption tool?

Before you go off and run this expected tool on your victimized computer as soon as it comes out, check if your encrypted files have one of these extensions:

  • xtbl
  • ytbl
  • breaking_bad
  • heisenberg
  • better_call_saul
  • los_pollos
  • da_vinci_code
  • magic_software_syndicate
  • windows10
  • windows8
  • no_more_ransom
  • tyson
  • crypted000007
  • crypted000078
  • rsa3072
  • decrypt_it
  • dexter
  • miami_california

If the file extensions on your affected system(s) do not match one on the list above, then your files are outside the scope of this decryption tool. If you do find a match, you should wait for the decryption tool to be published.
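If you’d rather not eyeball every file, the check above can be sketched in a few lines of Python (the extension list is copied from this post; the function name is our own invention):

```python
from pathlib import Path

# File extensions used by known Troldesh/Shade variants (from the list above).
TROLDESH_EXTENSIONS = {
    "xtbl", "ytbl", "breaking_bad", "heisenberg", "better_call_saul",
    "los_pollos", "da_vinci_code", "magic_software_syndicate", "windows10",
    "windows8", "no_more_ransom", "tyson", "crypted000007", "crypted000078",
    "rsa3072", "decrypt_it", "dexter", "miami_california",
}

def is_troldesh_encrypted(filename: str) -> bool:
    """Return True if the file's extension matches a known Troldesh variant."""
    return Path(filename).suffix.lstrip(".").lower() in TROLDESH_EXTENSIONS

print(is_troldesh_encrypted("report.docx.xtbl"))  # prints "True"
print(is_troldesh_encrypted("report.docx"))       # prints "False"
```

Troldesh appends its extension after the original one, so `report.docx` becomes `report.docx.xtbl`, which is why only the final suffix is checked.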

Why would this gang publish the Troldesh decryption keys?

The reason for all this is unknown and subject to speculation, but we can imagine a few different explanations, ranging from not very likely to credible:

  • Maybe their conscience caught up with them. After all, they do apologize to the victims. But those are only the victims who didn’t pay, or who were unable to recover their files despite paying the ransom.
  • The Shade team may suspect that someone has breached their key vault and they were forced or decided on their own accord to publish the keys for that reason. But we have seen no claims to support that possibility.
  • The profitability of the ransomware had reached its limit. Ransom.Troldesh has been around since 2014, and we saw a steep detection spike once the threat actors ventured outside of Russian targets in February 2019. After that initial spike, the number of detections gradually faded, though the ransomware was still active and generating money.
Number of Malwarebytes detections of Ransom.Troldesh from July 2018 to April 2020
  • The development of this ransomware has reached its technical limit, and the team will focus on a new software project. The team stated that it stopped distribution at the end of 2019, but did not let on what it is currently working on.

What we know

All we know for sure is that the keys have been verified and that a decryption tool is in the works. Everything else is speculation based on a statement made on GitHub by an account named “shade-team,” which joined GitHub on April 25th, just prior to the statement.

Victims can keep their eyes peeled for the release of the decryption tool. We’ll keep you posted.

Stay safe!

The post Threat actors release Troldesh decryption keys appeared first on Malwarebytes Labs.

Switching from a “Just in Time” delivery system should include planning ahead

As it becomes clear that some things will never again be the same after the global coronavirus pandemic, it is time to prepare for the future. The cybersecurity implications of upcoming changes will be most noticeable in organizations that rely on security models like the software defined perimeter.

The software defined perimeter is a model closely related to the zero trust framework, in which users must authenticate themselves first before accessing any company-sensitive documents or on-site information. Connectivity in the software defined perimeter is based on the premise that each device and identity must be verified before being granted access to the network.

Below, we explore why unexpected demand spikes may force organizations to reconsider their “Just in Time” delivery networks. But remember, a switch from one system brings questions about any new one.

Just in time delivery

As an example of the changes we can expect, let’s assume that after the coronavirus pandemic, some organizations will transition away from the Just In Time (JIT) delivery system they used when their supply lines began diminishing.

Just In Time delivery systems provide goods as orders come in, allowing for a lean, at-need production process with little to no surplus. But as we’ve recently seen, these types of systems are vulnerable to sudden peaks in demand, and depleted supply chains have already hit several industries, with the most poignant victim being healthcare. Hospitals, clinics, and medical centers around the world have quickly run out of masks, hand sanitizer, and ventilators in the months since COVID-19 struck.

Many stores, both brick-and-mortar and web shops, have already faced the same problem. Soon after China applied its regional quarantine, global supply chains took a hit, with some businesses impacted sooner than others. Whether your goods arrive by container ship or by air freight makes a big difference in how soon your supply line could dry up.

Why we need a constant stream of goods

To western economies, a continuous flow of goods and components is of the utmost importance. We regard transport and logistics as vital infrastructure for compelling reasons. Many of our factories depend on components made on the other side of the globe, and consumers recently learned just how many of their daily products originate from Asia. It’s not just electronics, toys, and clothing being made elsewhere, but also a lot of car parts, tools, and condoms.

One way to solve this problem before the next lockdown (which is a possibility, depending on how local governments decide to “open up” their economies) is to decentralize the origin of products that we can’t afford to miss. But by market standards, goods are often produced wherever labor is cheapest, and spreading production would increase prices. In some cases, consumers might be willing to pay a higher price for locally produced goods. In other cases, trade restrictions could drive up the price of goods produced abroad. In both cases, the supply lines would get shorter and become more resilient to interruption.

Just in Time inventory management saves money by minimizing the amount of storage room needed and by limiting the goods that go to waste because they pass their expiration date. What you need to realize is that you are not solving this problem; you are moving it to your logistics partner, who may be better equipped to handle it, as they probably do the same for many other customers. And in turn, they rely on other shipping and production companies to keep their stocks at a level that allows them to satisfy their customers’ needs.

Now that organizations have learned that a broken link in the supply chain can have drastic results for those at the end of the line, the question is whether this system can be used for every type of good, or whether we need to prioritize between essential goods and those we can afford to miss for a while.

Different software

Switching to another inventory system requires another type of software. Where JIT inventory management may be as simple as sending out an order to the logistics partner—whether it’s yours or your supplier’s is not really relevant—keeping your own inventory requires a different approach. Countless goods have expiration dates, and not just food and drugs. Some other products also lose their usefulness over time. Others may even lose their value, or the cost to produce them may drop rapidly compared to other products.

Different software comes with a bunch of questions, mainly related to security:

  • Who needs access?
  • What will be the permissions of the software itself?
  • How are we going to manage (remote) accessibility?
  • Do we anticipate any compliance issues?
  • How did the software perform during security testing?
  • What will be the procedure during transition?
  • How will this influence my software defined perimeter?

Most of the time, simple stock-keeping software should be less complicated than Just-In-Time inventory management, so it may be a good time to rethink some of the settings you chose while you were still using JIT. Even when you end up using a mix of both systems (as many organizations do), a time of change is typically a good moment to reconsider choices made in the past. Nobody may have reviewed them because they simply worked, but that doesn’t necessarily mean they were the optimal choices.

Most of the questions above speak for themselves but will need to be answered on a case by case basis.


Recommended reading: Explained: the strengths and weaknesses of the Zero Trust model


Software defined perimeter

As you may have expected, the software defined perimeter is a security model which is often used in combination with cloud-based software or when remote access to on-premise applications is needed. The software defined perimeter finds its base in the Zero Trust model and divides network access into small segments by establishing direct connections between users and the resources they access.

Logic dictates that when you switch from JIT to a more local inventory this will impact the software defined perimeter. In the JIT system you can expect outbound connections to be established that control the flow of needed goods into the organization. In a system based on local storage, you may see more requests from remote workers to check up on the state of the inventory.

Even if you think this type of change will not affect your organization, there are many other changes that might be caused or accelerated by this crisis. So, it might be beneficial to try and plan ahead. A prepared organization doesn’t get caught by surprise.

Stay safe!

The post Switching from a “Just in Time” delivery system should include planning ahead appeared first on Malwarebytes Labs.

Cloud data protection: how to secure what you store in the cloud

The cloud has become the standard for data storage. Just a few years ago, individuals and businesses pondered whether or not they should move to the cloud. This is now a question of the past. Today, the question isn’t whether to adopt cloud storage but rather how.

Despite its rapid pace of adoption, there are some lingering concerns around cloud storage. Perhaps the most persistent issue is the matter of cloud data security. With as much critical data as there is stored on the cloud, and with a “nebulous” grasp on exactly how it’s stored and who has access, how can people be sure it’s safe?

Growing cloud usage

Cloud usage has exploded in recent years. Five years ago, global cloud traffic was at 3,851 exabytes, a number which has since skyrocketed to more than 16,000 exabytes. As the functionality and connectivity of the Internet grows, cloud traffic will likely increase with it.

People store a vast amount of information on the cloud. It’s not just businesses hosting IT operations or client data on these platforms anymore. Individuals use services like OneDrive, Google Drive, Dropbox, and iCloud to store everything from tax documents to family photos.

With all this data so easily accessible on the cloud, privacy and data protection become more prevalent concerns. Where exactly is the data going and who can see it? If someone can access all of their documents, pictures and contacts instantly from their phone, can hackers just as easily obtain this information? There are more than 1 billion cloud users today who, if they don’t already know, should be asking themselves these questions and learning how to keep their cloud data private and secure.

Securing cloud data

Cloud storage may seem like a security threat at first glance, but it can offer superior security over other methods for businesses. So, what about individuals? By taking the right steps towards careful cloud usage, people can be sure their data is safe.

Keep local backups

The first step in cloud data protection is locally backing up data. Storing things on the cloud offers greater convenience and utility, making it an ideal primary option, but it’s essential to back up important files. Having backups on a local storage device like a flash drive or server ensures files are safe in the event of a breach.
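As an illustration of that first step, a local backup can be as simple as archiving the folder you sync to the cloud onto an external drive. This is a minimal sketch using Python’s standard library; the source and destination paths are hypothetical:

```python
import shutil
from datetime import date
from pathlib import Path

def backup_folder(source: str, dest_dir: str) -> Path:
    """Create a dated zip archive of `source` inside `dest_dir`."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive_base = dest / f"backup-{date.today().isoformat()}"
    # make_archive appends ".zip" and returns the full path it created.
    return Path(shutil.make_archive(str(archive_base), "zip", source))
```

Run on a schedule (or simply before large cloud changes), this keeps a dated copy you control, independent of the cloud provider.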

Use the cloud judiciously

Users should be mindful of what kinds of data they store on the cloud. As secure as modern cloud storage is, there’s no such thing as being too careful. Most files are fine to keep anywhere, but sensitive information like bank details or Social Security numbers is best left offline.

Use encryption

Encryption is one of the most helpful methods of securing any digitally stored data. By encrypting files before uploading them to the cloud, users can ensure that the files are safe even from their cloud provider. Some providers offer varying levels of encryption services, but third-party software provides another layer of protection.

Read the terms of service

Most people skip over the terms of service, but this can be a security risk. If someone agrees to terms they didn’t read, they could legally give their cloud service provider more rights over their data than they realize. It can seem like a tedious task, but reading user agreements highlights what a company can and can’t do with data on their platforms.

Use good password hygiene

One of the simplest ways to bolster cloud data security is by using a strong password. Hackers can crack 90 percent of passwords in a matter of seconds because the vast majority of people prefer easy-to-remember passwords over strong ones, and a disappointing number of people choose passwords like “123456” or “password” to protect their online info.

The advice here is simple: Create a unique, long password that includes special characters, numbers, and letters. On top of that, change your password every few months to better improve your security. Do not share your password via email or text, and do not use easily identifiable information in your password, like your birthdate or address.
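The advice above doesn’t have to be followed by hand. Python’s standard `secrets` module, for example, can generate a password of exactly this kind; the 16-character default below is an illustrative choice, not a mandated minimum:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and symbols,
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Pairing a generator like this with a password manager removes the need to remember each unique password at all.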

Multi-factor authentication further secures the login process. Most cloud providers should have the option to turn on two-step verification so that users need more than just a password to access their data. This function ensures that even if a hacker cracks the password, they still can’t get into the server.
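To show why that second factor adds so much, the time-based one-time passwords most authenticator apps generate come from a short, open algorithm (RFC 6238). This standard-library sketch is illustrative, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, when=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant).

    `secret_b32` is the base32 secret the provider shows when you enable
    two-step verification; the code changes every `step` seconds.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((when if when is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a shared secret and the current time, a stolen password alone is not enough to log in.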

Protect yourself from cyberthreats

Antivirus programs are an essential part of all computer-based functions, including cloud storage. Some forms of malware like keyloggers can give hackers entry into protected systems without users realizing it. By using a cloud provider with built-in antivirus software, third-party antivirus software or both, users can ensure they’re safe from these threats.

Common security mistakes

Quite often, the most significant threat to cloud data protection is improper use. In the corporate sphere, more than 40 percent of data breaches are the result of employee errors. No matter how many safety features a system has, user mistakes can always jeopardize security.

One of the most common cloud security mistakes is poor password handling. People use weak or repeated passwords, don’t change them or even list passwords on unsecured online documents, putting their information at risk. Users can avoid this by using strong passwords and changing them periodically.

Data breaches are not as substantial a problem if there is no sensitive data at risk. To prevent essential or private information from leaking or being stolen, the most secure practice is to store it somewhere other than the cloud. People should use cloud storage for things they need to access frequently, but not for things like credit card numbers.

Finally, many people also fall victim to phishing or pharming scams. Users can easily avoid these by never clicking suspicious links or giving out personal information to an unknown source.

With robust security measures and a healthy dose of general internet safety guidelines, cloud storage can be as secure as any other option on the market.

The post Cloud data protection: how to secure what you store in the cloud appeared first on Malwarebytes Labs.

iOS Mail bug allows remote zero-click attacks

On Monday, ZecOps released a report about a couple of concerning vulnerabilities in the Mail app in iOS. These vulnerabilities would allow an attacker to execute arbitrary code in the Mail app or in the maild process that assists the Mail app behind the scenes. Most concerning, though, is the fact that even the most current version of iOS, 13.4.1, is vulnerable.

The way the attack works is that the threat actor sends an email message designed to cause a buffer overflow in Mail (or maild). A buffer overflow is a bug in code that allows an attack to happen if the threat actor is able to fill a block of memory beyond its capacity. Essentially, the attacker writes garbage data that fills up the memory, then writes code that overwrites existing code in adjoining memory, which later gets executed by the vulnerable process.
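Because Python (unlike C) bounds-checks memory for you, the sketch below can only model the logic: the length check that a C-style parser must perform before copying untrusted input into a fixed-size buffer. Omitting that check is exactly the class of bug described above:

```python
def copy_into_buffer(data: bytes, buf_size: int = 64) -> bytearray:
    """Copy untrusted input into a fixed-size buffer, rejecting
    oversized input.

    In a memory-unsafe language, skipping the length check would let
    an overlong message spill past the buffer and overwrite adjacent
    memory, which is how an attacker's code ends up being executed.
    """
    if len(data) > buf_size:
        raise ValueError("input exceeds buffer capacity")
    buf = bytearray(buf_size)  # fixed-size, zero-filled buffer
    buf[:len(data)] = data
    return buf
```

A malicious e-mail triggering the Mail bug is, conceptually, the oversized input reaching a copy that lacks this guard.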

The bad news

The vulnerabilities disclosed by ZecOps would allow an attacker to use such a buffer overflow to attack an iOS device remotely, on devices running iOS 6 through iOS 13.4.1. (ZecOps writes that it may work on even older versions of iOS, but they did not test that.)

On iOS 12, the attack requires nothing more than viewing a malicious email message in the Mail app. It does not require tapping a link or any other content within the message. On iOS 13, the situation is worse, as the attack can be carried out against the maild process in the background, without requiring any user interaction (i.e., it is a “zero-click” vulnerability).

In the case of infection on iOS 13, there would be no significant sign of infection, other than temporary slowness of the Mail app. In some cases, evidence of a failed attack may be present in the form of messages that have no content and cannot be displayed.

The messages—shown in the image above from the ZecOps blog—may be visible for a limited time. Once an attack is successful, the attacker would presumably use access to the Mail app to delete these messages, so the user may never see them.

The good news

I know how this sounds. This is an attack that can be carried out by any threat actor who has your email address, on the latest version of iOS, and the infection happens in the background without requiring action from the user. How is there good news here?!

Fortunately, there is. The vulnerabilities revealed by ZecOps only allow an attack of the Mail app itself. Using those vulnerabilities, an attacker would be able to capture your email messages, as well as modify and delete messages. Presumably the attacker would also be able to conduct other normal Mail operations, such as sending messages from your email address, although this was not mentioned. While this isn’t exactly comforting, it falls far short of compromising the entire device.

In order to achieve a full device compromise, the attacker would need to have another vulnerability. This means that if you have version 13.4.1, it would require a publicly unknown vulnerability, which would for the most part restrict such an attack to a nation-state-level adversary.

In other words, someone would have to be willing to risk burning a zero-day vulnerability, worth potentially a million dollars or more, to infect your phone. This means that you’re unlikely to be infected unless some hostile government or other powerful group is interested in spying on you.

If you are, for example, a human rights advocate working against a repressive regime, or a member of an oppressed minority in such a country, you may be a target. Similarly, if you are a journalist covering such news, you may be a target. You could also be at risk if you are an important business person, such as a CEO or CFO at a major corporation, or hold an important role in the government. The average person will not be at significant risk from this kind of attack.

Why disclose now?

It is common practice as part of “responsible disclosure” to avoid public mention of a major vulnerability until after it has been fixed, or until sufficient time has passed that it is believed the software or hardware vendor does not intend to fix the vulnerability in a timely fashion. Release of this kind of information before a fix is available can lead to increased danger to users, as hackers who learn that a vulnerability exists can find it for themselves.

Of course, this must be balanced against the risk of existing attacks that are going undetected. Disclosure can help people who are under active attack to discover the problem, and can help people who are not yet under attack learn how to prevent an attack.

With this in mind, ZecOps mentioned three reasons why they chose to disclose now:

  1. Since the disclosed vulnerabilities can’t be used to compromise the entire device without additional vulnerabilities, the risk of disclosure is lower.
  2. Apple has released a beta of iOS 13.4.5, which addresses the issue. Although a fix in beta is not exactly the same as a fix in a public release, the changes in the beta could be analyzed by an attacker, which would lead to discovery of the vulnerabilities. Essentially, the vulnerabilities have been disclosed to malicious hackers already, but the public was unaware.
  3. At least six organizations were under active attack using these vulnerabilities. (The organizations were not named.)

What you should do

First, don’t panic. As mentioned, this is not a widespread attack against everyone using an iPhone. There have been other zero-click vulnerabilities used to push malware onto iPhones in the past, yet none have ever been widespread. This is because the more widespread such an attack becomes, the more likely it is to be spotted, and subsequently fixed by Apple.

To protect their investment in million-dollar iOS zero-day vulnerabilities, powerful organizations use those vulnerabilities sparingly, only against targeted individuals or groups. Thus, unless you’re someone who might be targeted by a hostile nation or other powerful organization, you’re not likely to be in danger.

However, the risk does increase following disclosure, as malicious hackers can discover and use the vulnerability to attack Mail, at least. So you shouldn’t ignore the risk, either.

As much as I’d like to say, “Install Malwarebytes, run a scan, and remove the malware,” I can’t. Unlike macOS, installing antivirus software isn’t possible on iOS, due to Apple restrictions. So there is no software that can scan an iPhone or iPad for malware.

This, plus the lack of noticeable symptoms, means that it will be difficult to determine whether you’ve been affected. As always with iOS, if you have reason to believe you’ve been infected, your only option is to reset your device to factory state and set it up again from scratch as if it were a new device.

As for precautions to avoid infection, there are a couple of things you can do. One would be to install the iOS 13.4.5 beta, which contains a fix for the bug. This is not something that’s easy to do, however, as you need an Apple developer account to download the beta. Plus, using a beta version of iOS, which may have bugs, isn’t recommended for all users.

The other possible security measure would be to disable Mail until the next version of iOS is released publicly. To do so, open the Settings app and scroll down to Passwords & Accounts. Tap that, then look at the list of accounts.

You may have multiple accounts, as shown above, or only one. For any accounts that say “Mail” underneath, that means that you’re using Mail to download mail for that account. Tap on each account, and on the next screen, look for the Mail toggle.

The image above shows that Mail is enabled. Toggle the switch to off. Do this for each of your accounts, and do not switch Mail back on again until you’ve updated to a version of iOS newer than 13.4.1.

Stay safe, everyone!

The post iOS Mail bug allows remote zero-click attacks appeared first on Malwarebytes Labs.

The passwordless present: Will biometrics replace passwords forever?

When it comes to securing your sensitive, personally identifiable information against criminals who can engineer countless ways to snatch it from under your nose, experts have long recommended the use of strong, complex passwords. Using long passphrases with combinations of numbers, letters, and symbols that cannot be easily guessed has been the de facto security guidance for more than 20 years. But does it stand up to scrutiny?

A short and easy-to-remember password is typically preferred by users because of convenience, especially since they average more than 27 different online accounts for which credentials are necessary. However, such a password has low entropy, making it easy to guess or brute force by hackers.
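That notion of entropy can be made concrete: a password of length L drawn uniformly from an alphabet of N symbols carries about L × log2(N) bits. A quick sketch of the arithmetic:

```python
import math
import string

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a password of `length` symbols drawn
    uniformly at random from an alphabet of `alphabet_size` characters."""
    return length * math.log2(alphabet_size)

# An 8-character lowercase-only password vs. a 16-character mix of
# letters, digits, and symbols (26 vs. 94 possible characters):
weak = password_entropy_bits(8, len(string.ascii_lowercase))    # ~37.6 bits
strong = password_entropy_bits(
    16, len(string.ascii_letters + string.digits + string.punctuation)
)                                                               # ~104.9 bits
```

Note this only holds for randomly generated passwords; a human-chosen phrase from the same alphabet has far less effective entropy, which is exactly the problem described above.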

If we factor in the consistent use of a single low-entropy password across all online accounts, despite repeated warnings, then we have a crisis on our hands—especially because remembering 27 unique, complex passwords, PIN codes, and answers to security questions is likely overwhelming for most users.

Instead of faulty and forgettable passwords, tech developers are now pushing to replace them with something that all human beings have: ourselves.

Bits of ourselves, to be exact. Dear reader, let’s talk biometrics.

Biometrics then and now

Biometrics—or the use of our unique physiological traits to identify and/or verify our identities—has been around for much longer than our computing devices. Handprints, which are found in caves that are thousands of years old, are considered one of the earliest forms of physiological biometric modality. Portuguese historian and explorer João de Barros recorded in his writings that 14th century Chinese merchants used their fingerprints to finalize transaction deals, and that Chinese parents used fingerprints and footprints to differentiate their children from one another.

Hands down, human beings are the best biometric readers—it’s innate in all of us. Studying someone’s facial features, height, weight, or notable body markings, for example, is one of the most basic and earliest means of identifying unfamiliar individuals without knowing or asking for their name. Recognizing familiar faces among a sea of strangers is a form of biometrics, as is meeting new people or determining which person out of a lineup committed a certain crime.

As the population boomed, the process of telling one human being from another became much more challenging. Listing facial features and body markings was no longer enough to accurately track individual identities at the macro level. Therefore, we developed sciences (anthropometry, from which biometrics stems), systems (the Henry Classification System), and technologies to aid us in this nascent pursuit. Biometrics didn’t really become “a thing” until the 1960s—the same era in which computer systems emerged.

Today, many biometric modalities are in place for identification, classification, education, and, yes, data protection. These include fingerprints, voice recognition, iris scanning, and facial recognition. Many of us are familiar with these modalities and use them to access our data and devices every day. 

Are they the answer to the password problem? Let’s look at some of these biometrics modalities, where they are normally used, how widely adopted and accepted they are, and some of the security and privacy concerns surrounding them.

Fingerprint scanning/recognition

Fingerprint scanning is perhaps the most common, widely used, and accepted form of biometric modality. Historically, fingerprints—and in some cases, full handprints—were used as a means to denote ownership (as we’ve seen in cave paintings) and to prevent impersonation and the repudiation of contracts (as Sir William Herschel did when he was part of the Indian Civil Service in the 1850s).

Fingerprint and handprint samples taken by William Herschel as part of “The Beginnings of Finger-printing”

Initially, only those in law enforcement could collect and use fingerprints to identify or verify individuals. Today, billions of people around the world are carrying a fingerprint scanner as part of their smartphone devices or smart payment cards.

While fingerprint scanning is convenient, easy to use, and fairly accurate (with the exception of the elderly, as skin elasticity decreases with age), it can be circumvented—and white hat hackers have proven this time and time again.

When Apple first introduced TouchID, its then-flagship feature on the 2013 iPhone 5S, the Chaos Computer Club (CCC) from Germany bypassed it a day after its reveal. A similar incident happened in 2019, when Samsung debuted the Galaxy S10. Security researchers from Tencent even demonstrated that any fingerprint-locked smartphone can be hacked, whether they’re using capacitive, optical, or ultrasonic technologies.

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics,” said Frank Rieger, spokesperson of the CCC, after the group defeated the TouchID. “It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token.”

Voice recognition

Otherwise known as speaker recognition or speech recognition, voice recognition is a biometric modality that, at base level, recognizes sound. However, in recognizing sound, this modality must also measure complex physiological components—the physical size, shape, and health of a person’s vocal cords, lips, teeth, tongue, and mouth cavity. In addition, voice recognition tracks behavioral components—the accent, pitch, tone, talking pace, and emotional state of the speaker, to name a few.

There are two variants of voice recognition: speaker dependent and speaker independent.

Voice recognition is used today in computer operating systems, as well as in mobile and IoT devices for command and search functionality: Siri, Alexa, and other digital assistants fit this profile. There are also software programs and apps, such as translation and transcription services, reading assistance, and educational programs designed with voice recognition, too.

Speaker dependent voice recognition requires training on a user’s voice: it needs to become accustomed to the user’s accent and tone before it can recognize what was said. This is the type used to identify and verify user identities. Banks, tax offices, and other services have bought into the notion of using voice for customers to access their sensitive financial data. The caveat here is that only one person can use such a system at a time.

Speaker independent voice recognition, on the other hand, doesn’t need training and recognizes input from multiple users. Instead, it is programmed to recognize and act on certain words and phrases. Examples of speaker independent voice recognition technology are the aforementioned virtual assistants, such as Windows’ Cortana, and automated telephone interfaces.

But voice recognition has its downsides, too. While it has improved in accuracy by leaps and bounds over the last 10 years, there are still some issues to solve, especially for women and people of color. Like fingerprint scanning, voice recognition is also susceptible to spoofing. Alternatively, it’s easy to taint the quality of a voice recording with a poor microphone or background noise that may be difficult to avoid.

To prove that using voice to authenticate for account access is an insufficient method, researchers from Salesforce broke voice authentication at Black Hat 2018 using machine learning and voice synthesis, a technology that can create life-like human voices. They also found that the synthesized voice’s quality only needed to be good enough to do the trick.

“In our case, we only focused on using text-to-speech to bypass voice authentication. So, we really do not care about the quality of our audio,” said John Seymour, one of the researchers. “It could sound like garbage to a human as long as it bypasses the speech APIs.”

All this, and we haven’t even talked about voice deepfakes yet. Imagine fraudsters having the ability to pose as anyone they want using artificial intelligence and a five-second recording of the target’s voice. As applicable as voice recognition is as a technology, it’s perhaps the weakest form of biometric identity verification.

Iris scanning or iris recognition

Advocates of iris scanning claim that iris images are quicker and more reliable than fingerprint scanning as a means of identification, as irises are less likely to be altered or obscured than fingerprints.

Sample iris pattern image. The bit stream (top left) was extracted based on this particular eye’s lines and colors. This is then used to compare with other patterns in a database.

Iris scanning is usually conducted with an invisible infrared light that passes over the iris wherein unique patterns and colors are read, analyzed, and digitized for comparison to a database of stored iris templates either for identification or verification.
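The comparison step described above is commonly done with a normalized Hamming distance: the fraction of bits that differ between two iris codes, with small distances indicating the same eye. This sketch is illustrative only; the 0.32 threshold is in the range reported for Daugman-style systems but is an assumption here, not a vendor specification:

```python
def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of differing bits between two equal-length bit strings."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

def same_eye(code_a: str, code_b: str, threshold: float = 0.32) -> bool:
    """Declare a match when the normalized distance falls below the
    threshold; a real deployment would tune this against its own
    enrollment database."""
    return hamming_distance(code_a, code_b) < threshold
```

Two scans of the same iris never match bit-for-bit (lighting, occlusion by eyelids, and rotation all perturb the code), which is why matching is a distance comparison rather than an equality check.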

Unlike fingerprint scanning, which requires a finger to be pressed against a reader, iris scanning can be done both within close range and from afar, as well as standing still and on-the-move. These capabilities raise significant privacy concerns, as individuals and groups of people can be surreptitiously scanned and captured without their knowledge or consent.

There’s an element of security concern with iris scanning as well: Third parties normally store these templates, and we have little insight into how iris templates, or any biometric templates, are stored, secured, and shared. Furthermore, scanning the irises of children under 4 years old generally produces lower-quality scans than those of adults.

Iris scanners, especially those that market themselves as airtight or unhackable, haven’t escaped cybercriminals’ radar. In fact, such claims often fuel their motivation to prove the technology wrong. In 2019, eyeDisk, the purported “unhackable USB flash drive,” was hacked by white hat hackers at PenTest Partners. After making a splash breaking Apple’s TouchID in 2013, the Chaos Computer Club (CCC) hacked Samsung’s “ultra secure” iris scanner for the Galaxy S8 four years later.

“The security risk to the user from iris recognition is even bigger than with fingerprints as we expose our irises a lot,” said Dirk Engling, a CCC spokesperson. “Under some circumstances, a high-resolution picture from the Internet is sufficient to capture an iris.”

Facial recognition

This biometric modality has been all the rage over the last five years. Facial recognition systems analyze images or video of the human face by mapping its features and comparing them against a database of known faces. Facial recognition can be used to grant access to accounts and devices that are typically locked by other means, such as a PIN, password, or other form of biometric. It can be used to tag photos on social media or optimize image search results. And it’s often used in surveillance, whether to prevent retail crime or help police officers identify criminals.
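Under the hood, "comparing against a database of known faces" usually means mapping each face to a numeric embedding vector and measuring similarity between vectors. The sketch below is a simplified illustration: the 0.8 cosine-similarity threshold is made up, and the toy two-dimensional embeddings stand in for the hundreds of dimensions real systems use.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    """Return the best-matching identity, or None if no stored face is close enough."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold is the whole ballgame: set it too low and strangers are matched (the misidentification problem the studies above describe), too high and legitimate users are locked out.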

As with iris scanners, a concern of security and privacy advocates is the ability of facial recognition technology to be used in combination with public (or hidden) cameras that don’t require knowledge or consent from users. Combine this with lack of federal regulation, and you once again have an example of technology that has raced far ahead of our ability to define its ethical use. Accuracy is another point of contention, and multiple studies have backed up its imprecision, especially when identifying people of color.

Private corporations such as Apple, Google, and Facebook have developed facial recognition technology for identification and authentication purposes, while governments and law enforcement implement it in surveillance programs. However, citizens, the targets of this technology, have both tentatively embraced facial recognition as a password replacement and rallied against its Big Brother application via government monitoring.

When talking about the use of facial recognition technology for government surveillance, China is perhaps the top country that comes to mind. To date, China has at least 170 million CCTV cameras, and this number is expected to nearly triple by 2021.

With this biometric modality being used at universities, shopping malls, and even public toilets (to prevent people from taking too many tissues), surveys show Chinese citizens are wary of the data being collected. Meanwhile, the facial recognition industry in China has been the target of US sanctions for violations of human rights.

China is one of the top five countries named in the “State Enemies of the Internet” list, which was published by Reporters Without Borders in 2013.

“AI and facial recognition technology are only growing and they can be powerful and helpful tools when used correctly, but can also cause harm with privacy and security issues,” wrote Nicole Martin in Forbes. “Lawmakers will have to balance this and determine when and how facial technology will be utilized and monitor the use, or in some cases abuse, of the technology.”

Behavioral biometrics

Otherwise known as behaviometrics, this modality involves the reading of measurable behavioral patterns for the purpose of recognizing or verifying a person’s identity. Unlike other biometrics mentioned in this article, which are measured in a quick, one-time scan (static biometrics), behavioral biometrics is built around continuous monitoring and verification of traits and micro-habits.

Gait recognition, or gait analysis, is a popular example of behavioral biometrics.

This could mean, for example, that from the time you open your banking app to the time you have finished using it, your identity has been checked and re-checked multiple times, ensuring your bank that you still are who you claim you are for the entire time. The bonus? The process is frictionless, so users don’t realize the analysis is happening in the background.
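Keystroke dynamics gives a simple picture of how that continuous re-checking might work: each burst of typing is compared against an enrolled rhythm profile. The sketch below uses hypothetical inter-key intervals in milliseconds and an illustrative tolerance; production systems model many more signals (dwell time, pressure, swipe patterns) statistically.

```python
def mean_abs_deviation(sample, profile):
    """Average absolute difference (ms) between observed and enrolled inter-key intervals."""
    return sum(abs(s - p) for s, p in zip(sample, profile)) / len(profile)

def still_same_user(sample_intervals, enrolled_intervals, tolerance_ms=40):
    # Called on every burst of typing, not just once at login,
    # which is what makes the verification "continuous."
    return mean_abs_deviation(sample_intervals, enrolled_intervals) <= tolerance_ms
```

A session that drifts outside the tolerance can then trigger a step-up challenge (a password or one-time code) rather than silently staying logged in.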

Private institutions have taken notice of behavioral biometrics—and the technology and systems behind this modality—because it offers a multitude of benefits. It can be tailored according to an organization’s needs. It’s efficient and can produce results in real time. And it’s secure, since biometric data of this kind is difficult to steal or replicate. The data retrieved from users is also highly accurate.

Like any other biometric modality, behavioral biometrics raises privacy concerns. However, the data collected by a behavioral biometric application is data that device and network operators already gather, and its handling is already recognized by standard privacy laws. Another plus for privacy advocates: Behavioral data is not classified as personally identifiable, although regulation is being considered so that users are not targeted by advertisers.

While voice recognition (which we mentioned above), keystroke dynamics, and signature analysis all fall under the umbrella of behavioral biometrics, take note that organizations employing a behavioral biometric scheme typically do not use these modalities alone.

Biometrics vs. passwords

At face value, any of the biometric modalities available today might appear to be superior to passwords. After all, one could argue that it’s easy for numeric and alphanumeric passwords to be stolen or hacked. Just look at the number of corporate breaches and millions of affected users bombarded by scams, phishing campaigns, and identity theft. Meanwhile, theft of biometric data has not yet happened at this scale (to our knowledge).

While this argument may have some merit, remember that when a password is compromised, it can be easily replaced with another password, ideally one with higher entropy. However, if biometric data is stolen, it’s impossible for a person to change it. This is, perhaps, the top argument against using biometrics.
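The entropy point is easy to quantify: for a password generated uniformly at random, entropy is the length multiplied by the base-2 log of the character-set size. A quick illustration follows; note the formula does not apply to human-chosen passwords, which are far less random than their character set suggests.

```python
import math

def password_entropy_bits(length: int, charset_size: int) -> float:
    """Entropy in bits of a uniformly random password: length * log2(charset size)."""
    return length * math.log2(charset_size)

# An 8-character all-lowercase password vs. a 16-character password
# drawn from all 94 printable ASCII symbols:
weak = password_entropy_bits(8, 26)     # ~37.6 bits
strong = password_entropy_bits(16, 94)  # ~104.9 bits
```

Each extra bit doubles the attacker's average guessing work, which is why a replacement password can be made strictly stronger than the one it replaces, something no biometric can offer.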

Because a number of our physiological traits can be publicly observed, recorded, scanned from afar, or readily taken as we leave them everywhere (fingerprints), it is argued that consumer-grade biometrics—without another form of authentication—are no more secure than passwords.

Not only that, but the likelihood of cybercriminals using such data to steal someone’s identity or commit fraud will increase significantly over time. Biometric data may not (yet) open new bank accounts in your name, but it can be abused to gain access to devices and establishments that have a record of your biometrics. Thanks to the new “couch-to-plane” schemes several airports are beginning to adopt, stolen biometrics could put a fraudster on a plane to any destination they wish.

What about DNA as passwords?

Using one’s DNA as a password is far from far-fetched, although the idea is not widely known or used in practice. In a recent paper, authors Madhusudhan R and Shashidhara R proposed a DNA-based authentication scheme for mobile environments using a Hyper Elliptic Curve Cryptosystem (HECC), allowing for greater security when exchanging information over a radio link. This is not only practical but can also be implemented on resource-constrained mobile devices, the authors say.

This may sound good on paper, but as the idea is still purely theoretical, privacy-conscious users will likely need a lot more convincing before considering using their own DNA for verification purposes. While DNA may seem like a cool and complicated way to secure our sensitive information, much like our fingerprints, we leave DNA behind all the time. And, just as we can’t change our fingerprints, our DNA is permanent. Once stolen, we can never use it for verification again.

Furthermore, the once promising idea of handing over your DNA to be stored in a giant database in exchange for learning your family’s long-forgotten secrets seems to have lost its charm. This is due to increased awareness among users of the privacy concerns surrounding commercial DNA testing, including how the companies behind them have been known to hand over data to pharmaceutical companies, marketers, and law enforcement. Not to mention, studies have shown that such test results are inaccurate about 40 percent of the time.

With so many concerns, perhaps it’s best to leave behind the notion of using DNA as your proverbial keys to the kingdom and instead focus on improving how you create, use, and store passwords.

Passwords (for now) are here to stay

As we have seen, biometrics isn’t the be-all and end-all most of us expected. However, this doesn’t mean biometrics cannot be used to secure what you hold dear. When we do use them, they should be part of a multi-factor authentication scheme, not a password replacement.

What does that look like in practice? For top-level security that solves the problem of having to remember so many complex passwords, store your account credentials in a password manager. Create a long, complex passphrase as the master password. Then, use multi-factor authentication to protect access to the password manager. This might involve sending a passcode to a second device or email address to be entered into the password manager. Or, if you’re an organization willing to invest in biometrics, use a modality such as voice recognition to speak an authentication phrase.
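The "passcode to a second device" step is commonly implemented as a time-based one-time password (TOTP). Below is a minimal sketch of the RFC 6238 algorithm using only the Python standard library; real authenticator apps add secret provisioning (QR codes) and clock-drift tolerance on top of this.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, interval=30, digits=6, now=None):
    """Time-based one-time passcode per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if now is None else now) // interval)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both the server and the authenticator app derive the same short-lived code from a shared secret and the current 30-second window, so no secret crosses the network at login time.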

So, are biometrics here to stay? Definitely. But so are passwords.

The post The passwordless present: Will biometrics replace passwords forever? appeared first on Malwarebytes Labs.

A week in security (April 13 – 19)

Last week on Malwarebytes Labs, we looked at how to avoid Zoom bombing, weighed the risks of surveillance versus pandemics, and dug into a spot of WiFi credential theft.

Other cybersecurity news:

  • Malware creeps back into the home: With a pandemic forcing much of the workforce into remote positions, it’s worth noting that a study found malware on 45 percent of home office networks. (Source: TechTarget)
  • Free shopping scam: Coronavirus fraudsters attempt to cash in on people’s fears with fake free offers at Tesco. (Source: Lincolnshire Live)
  • Browser danger: Researchers tackle a fake browser extension campaign that targets users of Ledger and other plugins. (Source: MyCrypto/PhishFort)
  • Phishing for cash: Research shows how phish kit selling is a profitable business. (Source: Help Net Security)
  • Big problem, big bucks: The FTC thinks Americans have lost out to the tune of 13 million dollars thanks to coronavirus scams. (Source: The Register)
  • Facebook tackles bots: A walled off simulation has been created to dig deep into the world of scams and trolls. (Source: The Verge)
  • Apple of my eye: Apple remains the top brand for phishing scammers to target. (Source: CISO Mag)
  • Fake Valorant beta keys: Reports have surfaced of fake tools promising access to upcoming game Valorant’s beta, with horribly predictable results. (Source: CyberScoop)

Stay safe, everyone!

The post A week in security (April 13 – 19) appeared first on Malwarebytes Labs.

Discord users tempted by bots offering “free Nitro games”

The last few weeks have seen multiple instances of problematic bots appearing in Discord channels. They bring tidings of gifts, but the reality is quite a bit different. Given so many more young kids and teens are at home during the current global lockdown, they may well see this scam bouncing around their chat channels. Worried parents may want to point them in this direction to learn about the warning signs.

What is Discord?

Sorry, teens who’ve been pointed in this direction: You can skip this part. For anyone else who needs it, Discord is a mostly gaming-themed communication platform incorporating text, voice, and video. It’s not to be mixed up with Twitch, which is more geared toward live gaming streams, e-sports competitions, and older recordings of big events.

DIY bots: part of the ecosystem

One of the most interesting features of Discord is that anyone can make their own channel bot. Simply bolt one together, keep the authorization token safe, and invite it into your channel. If you run into a bot you like the look of in someone else’s channel, you can usually invite it into your own (or somewhere else), but you’ll need “manage server” permissions on your account.

You have to do a little due diligence, as things can go wrong if you don’t keep your bot and account locked down. Additionally, the very openness available to build your own bot means people can pretty much make what they like. It’s up to you as a responsible Discord user to keep that in mind before inviting all and sundry into the channel. Not all bots have the best of intentions, as we’re about to find out.

Discord in bot land


If you’re minding your business in Discord, you could be sent a direct message similar to the one above. It looks official, calls itself “Twitch,” and goes on to say the following:

Exclusive partnership

We are super happy to announce that Discord has partnered with Twitch to show some love to our super great users! From April 05, 2020 until April 15, 2020 all our users will have access to Nitro Games

You have to invite me to your servers

If there’s one thing people can appreciate in the middle of a global pandemic, it’s freebies. Clicking the blue text will pop open an invite notification:



Add bot to: [server selection goes here]

This requires you to have manage server permissions in this server.

It then goes on to give some stats about whatever bot you’re trying to invite. The one above has been active since April 13, 2019, and is used across 1,000 servers, so it has a fair bit of visibility. As per the above notification, “This application cannot read your messages or send messages as you.”

Sounds good, right? Except there are some holes in the free Nitro games story.

Nitro is a real premium service from Discord that offers a variety of tools and functions for users. The problem is that the games offered through Nitro were shut down last October due to lack of use. What, exactly, is being invited into servers then?

Spam as a service

Multiple Discord users have reported these bots in the last few days, mostly in relation to spam, nude pic channels, and the occasional potentially dubious download sitting on free file hosting websites. A few folks have mentioned phishing, though we’ve seen no direct links to actual phishes taking place at time of writing.

Another Discord user mentioned that, if given access, the bot will (among other things) ban everyone from the server and delete all channels. But considering the aim of the game here is to spam links and draw additional people in, that would be counterproductive to the main goal of increasing traffic in specific servers.

Examples: Gaming spam

Here’s one server offered up as a link from one of the bots as reported by a user on Twitter:


This claims to be an accounts center for the soon-to-be-smash-hit game Valorant, currently in closed beta. The server owner explains they’d rather give accounts away than sell them to grow their channel, which is consistent with the bots we’ve seen spreading links rather than destroying channels. While they object to “botted invites,” claiming they’ll ban anyone shown to be inviting via bots, they’re also happy to suggest spamming links to grow their channel numbers.


It’s probably a good idea that they’re not selling accounts, because Riot takes a dim view of selling; having said that, promoting giveaway Discords doesn’t seem too popular either.

Examples: Discord goes XXX

Before we can stop and ponder our Valorant account invite frenzy, a new private message has arrived from a second bot. It looks the same as the last bogus Nitro invite, but with a specific addition:

You’ve been invited to join a server: JOIN = FREE DISCORD NITRO AND NUDES

Nudes? Well, that’s a twist.


This is a particularly busy location, with no fewer than 15,522 members and roughly 3,000 people online. The setup is quite locked down: There’s no content available unless you work for it, by virtue of sending invites to as many people as possible.


The Read Me essentially says little beyond “Invite people to get nudes.”


Elsewhere it promotes a “nudes” Twitter profile, with the promise of videos for retweets. The account, in keeping with the general sense of lockdown, has no nudity on it.


As you can guess, these bots are persistent. Simply lingering in a server can result in a procession of invites to your account.


We were sent to a variety of locations during testing, including some that could have been about films and television, pornography, or both. In most cases it was hard to say, as almost every place we landed locks its content down.

This makes sense for the people running these channels: If everyone was open from the get-go, there’d be no desire from the people visiting to go spamming links in the dash to get some freebies.

Bots on parade

We didn’t see a single place linked from any of these bots that mentioned free Discord Nitro—it’s abandoned entirely upon entry. Visitors probably have no reason to question otherwise, and so will go off to do their free promotional duties. Again, while it’s entirely possible bots out there are wiping out people’s communities, during testing all we saw in relation to the supposed Nitro spam bots was a method for channel promotion.

If you have server permissions, you should think carefully about which bots you allow into your server. There are no free games, but there is a whole lot of spam on the horizon if you’re not paying attention.

The post Discord users tempted by bots offering “free Nitro games” appeared first on Malwarebytes Labs.