Malware was discovered in a Google Play listed PDF-maker app that had over 100 million downloads. (Source: Techspot)
Insurance companies are fueling a rise in ransomware attacks by steering customers toward paying the ransom as the easiest way to make the problem go away. (Source: ProPublica)
Hackers are actively trying to steal passwords from two widely used VPNs using unfixed vulnerabilities. (Source: ArsTechnica)
A new variant of the Asruex Backdoor targets vulnerabilities that were discovered more than six years ago in Adobe Acrobat, Adobe Reader, and Microsoft Office software. (Source: DarkReading)
In a first-ever crime committed from space, a NASA astronaut has been accused of accessing the email and bank accounts of her estranged spouse while aboard the International Space Station (ISS). (Source: TechWorm)
Command and control (C2) servers for the Emotet botnet appear to have resumed activity and are delivering binaries once more. (Source: BleepingComputer)
A security researcher has found a critical vulnerability in the blockchain-based voting system Russian officials plan to use next month for the 2019 Moscow City Duma election. (Source: ZDNet)
The French National Gendarmerie announced the successful takedown of the wide-spread RETADUP botnet, remotely disinfecting more than 850,000 computers worldwide. (Source: The Hacker News)
The developers behind TrickBot have modified the banking trojan to target customers of major mobile carriers, researchers have reported. (Source: SCMagazine)
A coin-mining malware infection previously only seen on Arm-powered IoT devices has made the jump to Intel systems. (Source: The Register)
Researchers from Dell Secureworks saw a new feature in TrickBot that allows it to tamper with the web sessions of users on certain mobile carriers. According to a blog post they published early last week, TrickBot can do this by “intercepting network traffic before it is rendered by a victim’s browser.”
Before it took yet another step up its evolutionary ladder, TrickBot already had an impressive repertoire of features: a dynamic webinject it uses against financial institution websites; a worm module; a persistence technique using Windows’ Scheduled Tasks; the ability to steal data from Microsoft Outlook, cookies, and browsing history; the means to target point-of-sale (PoS) systems; and the capability to spread via spam messages and to move laterally within an affected network via the EternalBlue, EternalRomance, or EternalChampion exploits.
More recently, the same webinject feature has been turned against the top three US-based mobile carriers: Verizon Wireless, T-Mobile, and Sprint. Support for attacks against users of these carriers was added to TrickBot on August 5, August 12, and August 19, according to Dell Secureworks.
How does the attack work?
Dell Secureworks researchers were able to capture proof of the changes TrickBot makes to the original pages of mobile carrier sites.
Above is a side-by-side comparison of Verizon Wireless’s sign-in page before (image on the right) and after (image on the left) TrickBot tampered with it. Aside from some missing text, note the newly added fields, specifically those asking for PINs.
In the case of Sprint, the change is more subtle and quite seamless: an additional PIN form is displayed after users successfully sign in with their username and password.
The sudden targeting of mobile phone PINs suggests that the threat actors behind TrickBot are taking an interest in fraud tactics such as port-out fraud and SIM swapping, according to the researchers.
Port-out fraud happens when threat actors call their target’s mobile carrier and request that the target’s number be switched, or ported over, to a new network provider. SIM swapping, or SIM hijacking, works in similar fashion, but instead of changing providers, the threat actor requests a new SIM card from the carrier that they can put in their own device.
Either tactic causes all calls, MMS, and SMS messages meant for the victim to be sent to the threat actor instead. And if the target uses text-based two-factor authentication (2FA) on their online accounts, the threat actor can easily intercept the company-generated codes and gain access to those accounts. This results in account takeover (ATO) fraud.
Such a scam is typically carried out when threat actors have already gotten hold of their target’s credentials and wish to circumvent 2FA.
How to protect yourself from TrickBot?
So as not to reinvent the wheel, we implore you, dear Reader, to go back and check our post entitled TrickBot takes over as top business threat wherein we outlined remediation steps that businesses (and consumers alike) can follow. This post also has a section on preventative measures—ways one can lessen the likelihood of TrickBot infection in endpoints—starting with regular employee education and awareness campaigns on the latest tactics and trends about the threat landscape.
Note that Malwarebytes automatically detects and removes TrickBot without user intervention.
I think I may have fallen victim to this. What now?
Go ahead and change the passwords of all online accounts you have tied to your phone number.
You might also want to consider using stronger authentication methods, such as time-based one-time password (TOTP) 2FA—Authy and Google Authenticator come to mind—for accounts that hold extremely sensitive information about you, your loved ones and friends, and your business or employees.
Some of the most common web threats we track have a social engineering component. Perhaps the more popular ones are those encountered via malvertising, or hacked websites that push fraudulent updates.
We recently identified a website compromise with a scheme we had not seen before; it’s part of a campaign using a social engineering toolkit that has drawn over 100,000 visits in the past few weeks.
The toolkit, which we dub Domen, is built around a detailed client-side script that acts as a framework for different fake update templates, customized for both desktop and mobile users in up to 30 languages.
Loaded as an iframe from compromised websites (most of them running WordPress) and displayed on top of the page as an additional layer, it entices victims to install so-called updates that instead download the NetSupport remote administration tool. In this blog we describe its tactics, techniques, and procedures (TTPs), which remind us of some past and current social engineering campaigns.
Fake Flash Player update
The premise looks typical of many other social engineering toolkit templates we’ve come across before. Here, users are tricked into downloading and running a Flash Player update:
Note that the domain wheelslist[.]net belongs to a legitimate website that has been hacked and where an iframe from chrom-update[.]online is placed as a layer above the normal page:
Clicking the UPDATE or LATER button downloads a file called ‘download.hta’, indexed on Atlassian’s Bitbucket platform and hosted on an Amazon server (bbuseruploads.s3.amazonaws.com):
Upon execution, that HTA script will run PowerShell and connect to xyxyxyxyxy[.]xyz in order to retrieve a malware payload.
That payload is a package that contains the NetSupport RAT:
Link with “FakeUpdates” aka SocGholish
In late 2018, we documented a malicious redirection campaign that we dubbed FakeUpdates, also known as SocGholish based on a ruleset from EmergingThreats. It leverages compromised websites and performs some of the most creative fingerprinting checks we’ve seen, before delivering its payload (NetSupport RAT).
We recently noticed a tweet that reported SocGholish on the compromised site fistfuloftalent[.]com, although the linked sandbox report shows the same template we described earlier, which is different from the SocGholish one:
The sandbox flags SocGholish because the compromised site contains artifacts related to it and does, in some circumstances, actually redirect to it:
Although the templates for SocGholish and the new campaign are different, they both:
can occasionally be found on the same compromised host
abuse or abused a cloud hosting platform (Bitbucket, Dropbox)
Side note: A publicly saved VirusTotal graph (saved screenshot here) shows that the threat actors also used Dropbox at some point to host the NetSupport RAT. They double-compressed the file, first as a .zip and then as a .rar.
The similarities with SocGholish could simply be due to the threat actor taking inspiration from what has been done before. However, the fact that both templates deliver the same RAT is noteworthy.
Link with EITest
At about the same time as we were reviewing this new redirection chain, we saw another one, identified by @tkanalyst and tagged as FontPack, that is reminiscent of the HoeflerText social engineering toolkit reported by Proofpoint in early 2017.
A closer look at the template.js file confirms they are practically identical except for a different payload URL and some unique identifiers:
Domen social engineering kit
The template.js file is a beautiful piece of work that goes beyond fake fonts or Flash Player themes. While we initially detected this redirection snippet under the FontPack label, we decided to call this social engineering framework Domen, based on a string found within the code.
One particular variable, called banner, sets the type of social engineering theme: var banner = '2'; // 1 - Browser Update | 2 - Font | 3 - Flash
We already documented the Flash Player theme, while the Font theme (a HoeflerText copycat) and some of its variations (Chrome, Firefox) were also observed. Here’s the third one, a browser update:
There is also a template for mobile devices (which again is translated into 30 languages) that instructs users how to download and run a (presumably malicious) APK:
Scope and stats
The scope of this campaign remains unclear but it has been fairly active in the past few weeks. Every time a user visits a compromised site that has been injected with the Domen toolkit, communication takes place with a remote server hosted at asasasqwqq[.]xyz:
The page will create a GET request that returns a number:
If we trust those numbers (a subsequent visit increments it by 1), it means this particular campaign has received over 100,000 views in the past few weeks.
Over time, we have seen a number of different social engineering schemes. For the most part, they are served dynamically based on a user’s geolocation and browser/operating system type. This is common, for example, with tech support scam pages (browlocks) where the server will return the appropriate template for each victim.
What makes the Domen toolkit unique is that it offers the same fingerprinting (browser, language) and choice of templates thanks to a client-side (template.js) script which can be tweaked by each threat actor. Additionally, the breadth of possible customizations is quite impressive since it covers a range of browsers, desktop, and mobile in about 30 different languages.
Malwarebytes users were already protected against this campaign thanks to our anti-exploit protection that thwarts the .hta attack before it can even retrieve its payload.
Note: We shared a traffic capture with the folks at EmergingThreats who created a new set of rules for it.
A post by Ian Beer of Google Project Zero released late yesterday evening sent the security community reeling. According to Beer, a small set of websites had been hacked in February and were being used to attack iPhones, infecting them with malware. These sites, which see thousands of visitors per day, were used to distribute iOS malware over a two-year period.
History of iOS infections
Historically, iOS has never been completely free of malware, but infections have mostly been limited to one of two scenarios: Either you jailbroke your device, hacking it to remove the security restrictions and installing something malicious as a result, or you were the target of a nation-state adversary. A classic example of the latter was the case of Ahmed Mansoor, who was targeted with a text message in an attempt to infect his phone with NSO Group’s Pegasus malware, delivered via the exploit chain now referred to as Trident.
The difficulty with infecting an iPhone is that it requires some kind of zero-day vulnerability (i.e., one unknown to Apple and the security community), and these vulnerabilities can be worth $1 million or more on the open market. Companies like Zerodium will purchase them, but widespread use of such vulnerabilities “burns” them, making it more likely that Apple will learn of their existence and apply fixes.
This is exactly what happened in the Trident case—a clumsy text message to an already-wary journalist resulted in three separate million-dollar vulnerabilities being discovered and patched.
Thus, iPhone malware infections were always seen as problems that didn’t affect average people. After all, who would burn $1 million or more to infect individuals, unless the gain was greater than the potential cost? There was never any guarantee, of course, and Beer’s findings have upended that conventional wisdom.
Mechanism of infection
According to Beer, the websites in question “were being used in indiscriminate watering hole attacks against their visitors,” using 14 different vulnerabilities in iOS that were combined into five different attack chains.
An attack chain is a series of two or more vulnerabilities that can be used together to achieve a particular goal, typically infection of the target system. In such cases, one of the vulnerabilities alone is not sufficient to achieve the goal, but combining two or more makes it possible.
Among the vulnerabilities used, only two were mentioned as still being zero-days at the time of discovery (CVE-2019-7286 and CVE-2019-7287). These were fixed by Apple in the iOS 12.1.4 release on February 7. The remaining 12 were not zero-days at the time, meaning they were already known, and they had already been patched by Apple. The various attack chains were capable of infecting devices running iOS 10 up through iOS 12.1.3.
For the technically-minded, Beer has included excellent, highly-detailed descriptions of each attack chain. However, what’s important is that each of these attack chains was designed to drop the same implant on the device, and it is that implant (the iPhone malware) that we will focus on here.
iPhone malware/implant behavior
The iPhone malware implant, which has not been given a name, is able to escape the iOS sandbox and run as root, which basically means it has bypassed the security mechanisms of iOS and has the highest level of privileges.
The implant communicates with a command and control (C&C) server on a hard-coded IP address over plain, unencrypted HTTP. In addition to uploading data to the server, it can also receive a number of commands from the server.
systemmail : upload email from the default Mail.app
device : upload device identifiers (IMEI, phone number, serial number, etc.)
locate : upload location from CoreLocation
contact : upload contacts database
callhistory : upload phone call history
message : upload iMessage/SMS messages
notes : upload notes made in Notes.app
applist : upload a list of installed non-Apple apps
keychain : upload passwords and certificates stored in the keychain
recordings : upload voice memos made using the built-in Voice Memos app
msgattach : upload SMS and iMessage attachments
priorapps : upload app-container directories from a hardcoded list of third-party apps, if installed (appPriorLists)
photo : upload photos from the camera roll
allapp : upload container directories of all apps
app : upload container directories of particular apps by bundle ID
dl : unimplemented
shot : unimplemented
live : unimplemented
This list of commands reveals a frightening list of capabilities. Among other things, the iPhone malware is capable of stealing all keychains, photos, SMS messages, email messages, contacts, notes, and recordings. It can also retrieve the full call history, and is capable of doing real-time monitoring of the device location.
It also includes the capability to obtain the unencrypted chat transcripts from a number of major end-to-end encrypted messaging clients, including Messages, WhatsApp, and Telegram. Let that sink in for a minute. If you’re infected, all your encrypted messages are not only collected by the attacker, they’re also transferred in cleartext across the Internet.
What do I do now?
The bad news is that we don’t yet know which websites were affected, so it’s impossible to know who may have been infected with this mysterious iPhone malware. That is causing a substantial amount of fear among those aware of the problem.
Fortunately, there’s no need to panic at this point. These vulnerabilities have been patched for quite some time now. Also, the implant is actually incapable of remaining persistent after a reboot. This means that any time an infected iPhone is restarted—such as when an iOS update is installed, for example—the implant is removed. (Of course, a vulnerable device could always be re-infected by visiting an affected site.)
Because of this, any device running iOS 12.1.4 is not only immune to these particular attacks, but it can’t be infected anymore either, due to the reboot when installing 12.1.4 (or later). It’s unlikely that anyone is still infected at this time, unless they never update or restart their phone. If you’re concerned you may be infected, just install the latest iOS update, which will also reboot the phone and remove the malware, if present.
If you do have a phone that you suspect could be infected, it sounds like there is an easy test to see if it is, but you would have to do so before rebooting, as the malware needs to be active. (Disclaimer: without having a copy of the implant, I have not been able to verify this personally, or find anyone else who was able to do so.)
First, connect the affected device to a Mac via a Lightning (or, in the case of an iPad Pro, USB-C) cable.
Next, open the Console app on the Mac, which is found in the Utilities folder in the Applications folder.
In the Console, locate the phone in the Devices list and select it.
At this point, you’ll see log messages from the iOS device start scrolling past in the right-hand pane. The Console will not show you past messages, but if you monitor for 60 seconds or so, an infected iOS device should generate messages containing certain phrases, such as “uploadDevice,” “postFile success,” and “timer trig.” A full list of possible strings to look for can be found in Beer’s teardown of the implant; anywhere the code shows an NSLog command, that represents a message that will be echoed into the log.
Who was affected?
At this point, it is impossible to know who is responsible, or who was infected, without more information, such as the names of the sites that were compromised.
Beer’s article begins by saying that the malware did not target specific people:
The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.
There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant.
Ian Beer, https://googleprojectzero.blogspot.com/2019/08/a-very-deep-dive-into-ios-exploit.html
This leaves things a bit open to interpretation. It certainly seems that the malware did not target individuals. However, that does not necessarily mean that it wasn’t targeted.
As an example, in the recent watering hole attacks on Coinbase and other cryptocurrency companies, which used a Firefox zero-day, many people visited the page containing the exploit. However, only a handful were selected by the malicious scripts to be infected.
Clearly, that kind of targeting did not happen in this case. However, that doesn’t necessarily mean that the attack wasn’t targeted at a particular group of people who would be likely to visit the hacked sites. In fact, that is the typical modus operandi for a watering hole attack: a site likely to be visited by the target group is compromised and used to spread malware. Such a thing happened in 2013, when attackers compromised a developer website with a Java-based exploit, infecting developers at many major companies, including Apple, with the OSX.Pintsized malware.
Near the end of his article, Beer writes:
The reality remains that security protections will never eliminate the risk of attack if you’re being targeted. To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group.
Ian Beer, https://googleprojectzero.blogspot.com/2019/08/a-very-deep-dive-into-ios-exploit.html
Some have interpreted this as a hint that people within a certain region or ethnic group were targeted by this attack, but my read is that this is simply general advice about what it means for someone to be “targeted,” in the greater context of the discussion of targeted vs. untargeted malware.
I do not intend to imply that China is the culprit, as that cannot be known with the currently available information. This is merely an example to point out that this could be a similar incident, perpetrated by a country that is using similar techniques. Then again, it also may not be. Time will hopefully tell.
Although the threat from this particular incident has passed, this was an eye-opening revelation. Ultimately, nothing has actually changed. This kind of attack was always a possibility; it just hadn’t happened yet. Now that it has, people will not look at the iPhone in quite the same way.
I still think the iPhone is the most secure phone on the planet (not counting obscure or classified devices that are only secure because few people actually have them). However, there are always vulnerabilities, and it’s entirely possible this kind of attack could be going on right now, somewhere else, against the current version of iOS.
It’s also worth pointing out that most of these vulnerabilities were not actually zero-days at the time of their discovery. Many people never update their devices, and as a result, zero-days aren’t always necessary. Updating to the latest version of iOS is always recommended, and would have protected against all but one of the attack chains up to February 7, and all of them after.
The extremely closed nature of iOS means that when malware like this crops up, there’s no way for people to tell if their devices are infected or not. If this malware were capable of remaining persistent, and if it didn’t leak strings into the logs, it would be most difficult to identify whether an iPhone was infected, and this would lead to dangerous situations.
Although Apple doesn’t allow antivirus software on iOS, there does need to be some means for users to check their devices for known threats. Perhaps something involving unlocked devices connected by wire to trusted machines? If such a thing were possible, this attack probably wouldn’t have gone undetected for two years.
The Heartbleed vulnerability was introduced into the OpenSSL crypto library in 2012. It was discovered and fixed in 2014, yet today—five years later—there are still unpatched systems.
This article will provide IT teams with the information they need to decide whether or not to apply the Heartbleed fix. However, we caution: choosing not to patch could leave your users’ data exposed to future attacks.
What is the Heartbleed vulnerability?
Heartbleed is a code flaw in the OpenSSL cryptography library. This is what it looks like:
memcpy(bp, pl, payload);
In 2014, a vulnerability was found in OpenSSL, a popular cryptography library that provides developers with tools and resources for implementing the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols.
Websites, emails, instant messaging (IM) applications, and virtual private networks (VPNs) rely on SSL and TLS protocols for security and privacy of communication over the Internet. Applications with OpenSSL components were exposed to the Heartbleed vulnerability. At the time of discovery, that was 17 percent of all SSL servers.
Upon discovery, the vulnerability was given the official identifier CVE-2014-0160, but it’s more commonly known by the name Heartbleed. That name was coined by an engineer from Codenomicon, one of the parties that discovered the vulnerability.
The name Heartbleed derives from the source of the vulnerability: a buggy implementation of the RFC 6520 Heartbeat extension for the SSL and TLS protocols in OpenSSL.
Heartbleed vulnerability behavior
The Heartbleed vulnerability weakens the security of the most common Internet communication protocols (SSL and TLS). Servers affected by Heartbleed allow potential attackers to read their memory, which means encryption keys could be found by savvy cybercriminals.
With the encryption keys exposed, threat actors could gain access to the credentials—such as names and passwords—required to hack into systems. From within the system, depending on the authorization level of the stolen credentials, threat actors can initiate more attacks, eavesdrop on communications, impersonate users, and steal data.
The Heartbleed vulnerability damages the security of communication between SSL and TLS servers and clients because it weakens the Heartbeat extension.
The Heartbeat extension keeps SSL and TLS connections alive: it allows a computer on one end of the communication to send a Heartbeat Request message and confirm that the other end is still responsive.
Each message contains a payload (a text string carrying the transmitted information) and a number representing the length of that payload, usually as a 16-bit integer. Before echoing the payload back, the Heartbeat extension is supposed to do a bounds check that validates the request and returns exactly as much data as the payload actually contains.
The flaw in the OpenSSL Heartbeat extension broke this validation. Instead of performing a bounds check, the extension read from a memory buffer without validating the claimed payload length. Threat actors could send a request with an inflated length field and receive up to 64 kilobytes of whatever information was adjacent in the memory buffer.
Memory buffers are temporary memory storage locations, created for the purpose of storing data in transit. They may contain batches of data types, which represent different stores of information. Essentially, a memory buffer keeps information before it’s sent to its designated location.
A memory buffer doesn’t organize data—it stores it in batches. One memory buffer may contain sensitive and financial information, as well as credentials, cookies, website pages and images, digital assets, and any data in transit. When threat actors exploit the Heartbleed vulnerability, they trick the Heartbeat extension into providing them with all of the information available within the memory buffer.
The Heartbleed fix
Bodo Moeller and Adam Langley of Google created the fix for Heartbleed. They wrote code that tells the Heartbeat extension to ignore any Heartbeat Request message that claims more data than the payload actually contains.
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0; /* silently discard per RFC 6520 sec. 4 */
How the Heartbleed vulnerability shaped OpenSSL as we know it
The discovery of the Heartbleed vulnerability created worldwide panic. Once the fixes were applied, attention turned to the causes of the incident. Close scrutiny of OpenSSL revealed that this widely popular library was maintained by just two people, on a shockingly low budget.
This finding spurred two positive initiatives that changed the open-source landscape:
Organizations realized the importance of supporting open-source projects. There’s only so much two people can do with their personal savings. Organizations, on the other hand, can provide the resources needed to maintain the security of open-source projects.
To help finance important open-source projects, the Linux Foundation started the Core Infrastructure Initiative (CII). The CII chooses the most critical open-source projects, deemed essential to the vitality of the Internet and other information systems. It receives donations from large organizations and offers them to open-source initiatives in the form of programs and grants.
As with any change-leading crisis, the Heartbleed vulnerability also carried a negative side-effect: the rise of vulnerability brands. The Heartbleed vulnerability was discovered at the same time by two entities—Google and Codenomicon.
Google chose to disclose the vulnerability privately, sharing the information only with OpenSSL contributors. Codenomicon, on the other hand, chose to spread the news to the public. They named the vulnerability, created a logo and a website, and approached the announcement like a well-funded marketing event.
In the following years, many of the disclosed vulnerabilities were given an almost celebrity-like treatment, with PR agencies building them up into brands, and marketing agencies deploying branded names, logos, and websites. While this can certainly help warn the public against zero-day vulnerabilities, it can also create massive confusion.
Nowadays, security experts and software developers are dealing with vulnerabilities in the thousands. To properly protect their systems, they need to prioritize vulnerabilities. That means deciding which vulnerability requires patching now, and which could be postponed. Sometimes, branded vulnerabilities are marketed as critical when they aren’t.
When that happens, not all affected parties have the time, skills, and resources to determine the true importance of a vulnerability. Instead of turning vulnerabilities into buzzwords, professionals could better serve the public by creating fixes.
Today, five years after the disclosure of the Heartbleed vulnerability, it still exists in many servers and systems. Current versions of OpenSSL, of course, were fixed. However, systems that didn’t (or couldn’t) upgrade to the patched version of OpenSSL are still affected by the vulnerability and open to attack.
For threat actors, a system still exposed to Heartbleed is a prize, and scanning for one is easily automated. Once a threat actor finds a vulnerable system, the vulnerability is relatively simple to exploit, giving them access to information and/or credentials that could be used to launch further attacks.
To patch or not to patch
The Heartbleed vulnerability is a security bug that was introduced into OpenSSL due to human error. Due to the popularity of OpenSSL, many applications were impacted, and threat actors were able to obtain a huge amount of data.
Following the discovery of the vulnerability, Google employees found a solution and provided OpenSSL contributors with the code that fixed the issue. OpenSSL users were then instructed to upgrade to the latest OpenSSL version.
Today, however, the Heartbleed vulnerability can still be found in applications, systems, and devices, even though fixing it is a matter of upgrading the OpenSSL version rather than editing the codebase. If you are concerned that you may be affected, test your system for the Heartbleed vulnerability, then patch to eliminate the risk, or mitigate if the device cannot support patching.
Any server or cloud platform should be relatively easy to patch. However, IoT devices may require more advanced mitigation techniques, because they are sometimes unable to be patched. At this point, we recommend speaking with your sysadmin to determine how to mitigate the issue.
Security leaders in institutions of higher education face unique challenges, as they are charged with keeping data and the network secure, while also allowing for a culture of openness, sharing, and communication—all cornerstones of the academic community. And depending on the college or university, concerns such as tight budgets and staffing shortages can also make running a successful security program difficult. So how do CISOs get their boards to invest in higher education cybersecurity?
In the second part of our series of posts about CISO communication, we look at the considerations and skills required for presenting to the board on higher education cybersecurity, including which tactics will increase their understanding and financial support.
This month, I asked David Escalante, Director of Computer Policy & Security at Boston College and a veteran information security leader, for his perspective on what it takes to advocate for security in this environment.
What unique challenges do CISOs/security managers working in higher education have that differ from their peers in the public sector?
Many large universities are best thought of as small cities. Most organizations can focus on a few products, or a range of products, in their given industry space. Because of the diversity of things a university does, the variety of software and hardware required to run everything is huge. This, in turn, means that security teams are stretched thin across all those systems, rather than being able to focus on a smaller number of critical systems.
University environments have a culture of openness, and that can conflict culturally with a least privilege or zero trust security model.
Without getting into detail, risk trade-offs in higher education aren’t as well understood as in many other sectors. And because of the diverse systems alluded to above, balancing those trade-offs is complex.
What do education CISOs need to keep in mind when they communicate with either the board or other governing bodies in their organization?
Boards in education, in non-profits, and for state entities don’t tend to have the same makeup as public company boards do. For a non-profit example, think of the opera company whose board members are the big donors. As a result of this, we’ve noted that the “standard” templates for cybersecurity communication with the board tend not to strike the right notes, since they’re pitched for a public company board made up largely of senior corporate officers. So don’t just go “grab a template.”
The trend we’ve seen, advice-wise, of “tell the board stories” seems to resonate better than, say, a color-coded risk register. The scope of the systems running at a big university that need to be secured, plus the board’s limited detailed knowledge, makes substantive conversations about specific security approaches difficult. It’s better to highlight things both good and bad than to try to be comprehensive.
It’s very hard to balance being technical or not. Use a mix. On the one hand, board members have probably read about ransomware bringing organizations to their knees, and may even have read up on ransomware to prep for the board meeting, and will expect some technical material on the subject. On the other hand, almost all board members will not be technical, so overdoing the technical component will lose them.
Don’t directly contradict your own management chain—if you’ve asked for more staff and haven’t gotten it, don’t ask the board for it.
What other advice would you give higher ed CISOs when it comes to communication?
On the non-board management side, if you aren’t already, it’s time to emphasize that security is everyone’s responsibility. The days when you could “set and forget” antivirus and be secure are long gone.
Now social engineering and credential theft are rampant, and management is consuming information on personal mobile devices. Non-IT management needs to be clear that securing campuses is a team effort, not just an IT one.
At BC, we have been having the CIO, rather than the security team, communicate personally with senior management a couple of times a year about specific cyberattacks we’ve seen, to emphasize that they need to be vigilant partners and not assume that IT will catch all threats in advance.
Clickjacking has been around for a long time, working hand-in-hand with the unwitting person doing the clicking to send them to parts unknown—often at the expense of site owners. Scammers achieve this by hiding the page object the victim thinks they’re clicking on under a layer (or layers) of obfuscation. Invisible page elements like buttons, translucent boxes, invisible frames, and more are some of the ways this attack can take place.
Despite being an old tool, clickjacking is becoming a worsening problem on the web. Let’s explore how clickjacking works, recent research on clickjacking, including the results of a study examining the top 250,000 Alexa-ranked websites, and other ways in which researchers and site owners are trying to better protect users from this type of attack.
Laying the groundwork
There are many targets of clickjacking: cursors, cookies, files, even your social media likes. Traditionally, an awful lot of clickjacking relates to adverts and fraudulent money making. In the early days of online ad programs, certain keywords that brought a big cash return for clicks became popular targets for scammers. Where they couldn’t get people to unintentionally click on an ad, they’d try to automate the process instead.
This is not to say clickjacking techniques are stagnant; here’s a good example of how these attacks are tough to deal with.
Clickjacking: back in fashion
There’s a lot of clickjack-related activity taking place at the moment, so researchers are publishing their works and helping others take steps to secure browsers.
Researchers from a wide variety of locations and organisations pooled resources and came up with something called “Observer,” a customised version of Chromium, the open-source browser. With it, they can essentially see under the hood of web activity and tell at a glance the point of origin of URLs from every link. Observer watches for three click-interception behaviours:
Modifying existing links on a page
Creating new links on a page
Registering event handlers to HTML elements to “hook” a click
All such events are identified and tagged with a unique ID for whichever script kicked the process into life, alongside logging page navigation to accurately record where an intercepted click is trying to direct the victim.
Observer logs two states of each webpage tested: the page fully rendered up to a 45-second time limit, and then interaction data, where they essentially see what a site does when used. It also checks if user clicks update the original elements in any way.
Some of the specific techniques Observer looks for:
Visual deception tricks from third parties, whether considered to be malicious or accidental. This is broken down further into page elements which look as though they’re from the site, but are simply mimicking the content. A bogus navigation bar on a homepage is a good example of this. They also dig into the incredibly common technique of transparent overlays, a perennial favourite of clickjackers the world over.
Hyperlink interception. Third-party scripts can overwrite the href attribute of an original website link and perform a clickjack. Observer detects this, as well as keeping an eye out for dubious third-party scripts performing this action on legitimate third-party links located on the website. Observer also checks for another common trick: large clickable elements on a page, where any interaction with the enclosed elements is entirely under the third party’s control.
Event handler interception. Everything you do on a device is an event. Event handlers are routines which exist to deal with those events. As you can imagine, this is a great inroad for scammers to perform some clickjacking shenanigans. Observer looks for specific API calls and a few other things to determine if clickjacking is taking place. As with the large clickable element trick up above, it checks for large elements from third parties.
Observer crawled the Alexa top 250,000 websites in May 2018, ending up with valid data from 91.45 percent of the sites checked, after accounting for timeouts and similar errors. From those 228,614 websites, they collected 2,065,977 unique third-party navigation URLs corresponding to 427,659 unique domains, with each site averaging 9.04 third-party navigation URLs pointing to 1.87 domains.
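The reported averages follow directly from the raw counts, as a quick arithmetic check shows:

```python
# Figures reported for the Observer crawl (Alexa top 250,000, May 2018).
sites_crawled = 250_000
sites_valid = 228_614
nav_urls = 2_065_977
nav_domains = 427_659

valid_pct = round(100 * sites_valid / sites_crawled, 2)  # share of sites with usable data
urls_per_site = round(nav_urls / sites_valid, 2)         # third-party navigation URLs per site
domains_per_site = round(nav_domains / sites_valid, 2)   # distinct destination domains per site

print(valid_pct, urls_per_site, domains_per_site)  # 91.45 9.04 1.87
```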
Checking for the three main types of attack listed above, they found no fewer than 437 third-party scripts intercepting user clicks on 613 websites. Collectively, those sites receive about 43 million visitors daily. Additionally, a good slice of the sites were deliberately working with dubious scripts to monetize the stolen clicks: 36 percent of interception URLs were related to online advertising.
The full paper is a fascinating read, and well worth digging through [PDF].
Plans for the future
Researchers point out that there’s room for improvement with their analysis—this is more of a “getting to know you” affair than a total deep dive. For example, with so many sites to look at, they only look at the main page for analysis. If there were nasties lurking on subpages, they wouldn’t have seen it.
They also point out that their scripted website interaction quite likely isn’t how actual flesh-and-blood people would use the websites. All the same, this is a phenomenal piece of work and a great building block for further studies.
What else is happening in clickjacking?
Outside of conference talks and research papers, there’s also word that a three-year-old suggestion for combating iFrame clickjacking has been revived and expanded for Chrome. Elsewhere, Facebook is suing app developers for click injection fraud.
As you can see from a casual check of Google/Yahoo news, clickjacking isn’t a topic perhaps covered as often as it should be. Nevertheless, it’s still a huge problem, generates massive profits for people up to no good, and deserves to hog some of the spotlight occasionally.
How can I avoid clickjacking?
This is an interesting one to ponder, as this isn’t just an end-user thing. Website owners need to do their bit, too, to ensure visitors are safe and sound on their travels. As for the people sitting behind their keyboards, the advice is pretty similar to other security precautions.
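For site owners, the standard defense against being framed and clickjacked is to send anti-framing response headers: either a `Content-Security-Policy` with a `frame-ancestors` directive, or the older `X-Frame-Options` header. The sketch below is a hypothetical audit helper that classifies a response’s headers; the function name and return labels are our own, not part of any standard API.

```python
def framing_protection(headers: dict) -> str:
    """Classify a response's anti-framing posture from its headers.

    Hypothetical helper: checks for CSP frame-ancestors (the modern
    mechanism) first, then the legacy X-Frame-Options header.
    """
    h = {k.lower(): v for k, v in headers.items()}
    csp = h.get("content-security-policy", "")
    if "frame-ancestors" in csp.lower():
        return "csp-frame-ancestors"
    xfo = h.get("x-frame-options", "").strip().upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return "x-frame-options"
    return "unprotected"

print(framing_protection({"Content-Security-Policy": "frame-ancestors 'self'"}))
```

Note that these headers stop a page from being embedded in a hostile frame; they do not help against malicious third-party scripts the site itself includes, which is the interception problem Observer measures.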
Those plugins could easily break the functionality of certain websites, though, and that’s before we stop to consider that many sites won’t even give you access if ads are blocked entirely. It comes down, as it often does, to ad networks waging war on your desktop. How well you fare against the potential risks of clickjacking could well depend on where exactly you plant your flag with regard to advertiser access to your system.
Whatever you choose, we wish you safe surfing and a distinct lack of clickjacking. With any luck, we’ll see more research and solutions proposed to combat this problem in the near future.
Dutch police departments and consumer organizations issued warnings about the use of the Nextdoor neighborhood app after people received letters (yes, as in snail mail) pretending to come from someone in their neighborhood, which the alleged senders never actually sent. So, everyone figured there must be some kind of scam going on and decided to warn the public.
Nextdoor is an app that you can use to stay informed about what’s going on in your neighborhood. It can be used to find last-minute babysitters, share safety tips, or simply communicate with neighbors. The app ties people together based on their location, so in this way, it is different from many other apps where people can form their own groups.
We talked to a woman whom we’ll refer to as W.H., as she wishes to remain anonymous. Letters in her neighborhood were delivered with her as the sender. The letters were asking the receivers to install the app and join the community. W.H. did not send those letters, but she was a user of the Nextdoor app. And she remembered receiving an email from Nextdoor asking whether she would like to invite the people in her neighborhood.
Invite your neighbors to help grow your Nextdoor neighborhood. This are [sic] 100 extra invitations to send to your neighbors!
Click the button below and we will automatically and completely free of charge send 100 personalized invitations to your closest neighbors by mail.
The invitation will have your name and street on them and contain information about Nextdoor.
Michel on behalf of Nextdoor Netherlands”
W.H. clicked the button expecting to get the option to select a number of her neighbors that she wanted to invite, but all she got was a notice that the link had expired. She didn’t think about it again until one of her neighbors showed her the letter they received and informed her about the warnings that had already started to circulate by then.
This is an example of the letter that was sent out in her name.
Our neighborhood uses the free and invitation-only neighborhood app Nextdoor. It is our hope that you will join as well. In this neighborhood app we share local tips and recommendations……
It is 100% free and invitation only – for our neighbors only.
Download the Nextdoor app ….. and enter this invitation code to sign up for our neighborhood.
(this code expires in 7 days)
Your neighbor from [redacted]”
In a blog where Nextdoor explains (in Dutch) how this invitation model came to be, they point out that when you first register with the app, it also asks for your permission to send out invitations to your neighbors. This may indicate that there are members who didn’t even get the email W.H. received to ask whether she wanted to invite 100 extra neighbors. So to these users, a query from their neighbors about the letter may come as an even bigger surprise.
To be fair, we should expect an amount of targeted advertising when we sign up for free apps like these. It is important to remember that when it comes to free apps, there is a good possibility that you and your personal data are the commodities.
Not a scam, but…
Neighborhood apps are becoming more popular because people want to be more involved with their communities, and because they provide a feeling of enhanced security.
Although the method used by Nextdoor to reach new customers is questionable, we can’t deny that they did inform their customers and ask for their permission. However, sending out snail mail messages in someone’s name is a bit unorthodox, and therefore should have been communicated much more clearly. The method has backfired for Nextdoor due to negative media attention, and may have scared away more customers than it gained.
From the reaction on their own blog, where they explain the how and why behind this method, we learned that Nextdoor intends to keep mailing out letters on its users’ behalf, which is another reason we felt we should raise awareness about this matter.
Like many other apps of this kind, Nextdoor gathers information about their customers and uses it for targeted marketing. Given the type of data—community information, locations, names—this is extremely valuable for marketing purposes, but could also be a security issue.
Sharing your information with people who live in your neighborhood but whom you don’t really know very well can have its drawbacks, too. We advise against asking everyone to keep an eye out while you share your vacation plans: you may also be informing the resident burglar.
Back in May, we classified what we believed was just another generic Android/Trojan.Dropper, and moved on. We didn’t give this particular mobile malware much thought until months later, when we started noticing it had climbed onto our top 10 list of most detected mobile malware.
We feel a piece of mobile malware with such a high number of detections warrants a proper name and classification. Therefore, we now call it Android/Trojan.Dropper.xHelper. Furthermore, this prominent piece of malware deserves a closer look. Let’s discuss the finer points of this not-so-helpful xHelper.
Package name stealer
The first noticeable characteristic of xHelper is the use of stolen package names. It isn’t unusual for mobile malware to use the same package name as other legitimate apps. After all, the definition of a Trojan as it relates to mobile malware is pretending to be a legitimate app. However, the package names this Trojan has chosen are unusual.
To demonstrate, xHelper uses package names starting with “com.muf.” This package name is associated with a number of puzzle games found on Google Play, including a puzzle called New2048HD with the package name com.mufc.fireuvw. This simple game only had a little more than 10 installs at the time of this writing. Why this mobile malware is ripping off package names from such low-profile Android apps is a puzzle in itself. In contrast, most mobile Trojans rip off highly-popular package names.
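A sketch of how a scanner might use this observation: flag any installed app carrying the spoofed package-name prefix whose label doesn’t match the legitimate app known to use it. The prefix and the single known-good entry come from the text above; the helper itself and its names are hypothetical, not part of any real detection engine.

```python
# Prefix associated with xHelper samples, per the analysis above.
SUSPECT_PREFIX = "com.mufc."

# The one legitimate app we know uses this prefix: the New2048HD puzzle game.
KNOWN_GOOD = {"com.mufc.fireuvw": "New2048HD"}

def is_suspect(package: str, label: str) -> bool:
    """Hypothetical check: True if an app spoofs the com.mufc. namespace."""
    if not package.startswith(SUSPECT_PREFIX):
        return False
    # An app reusing a known-good package name under a different label
    # (or using an unknown com.mufc. name) is treated as suspect.
    return KNOWN_GOOD.get(package) != label
```

Real detection is, of course, based on the code inside the APK rather than names alone; a package name is trivially spoofed in either direction.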
Full-stealth vs semi-stealth
xHelper comes in two variants: full-stealth and semi-stealth. The semi-stealth version is a bit more intriguing, so we’ll start with this one. On install, the behavior is as follows:
Creates an icon in notifications titled “xhelper”
Does not create an app icon or a shortcut icon
After a couple of minutes, starts adding more icons to notifications: [GameCenter] Free Game
Press on either of these notifications, and it directs you to a website that allows you to play games directly via browser.
These websites seem harmless, but surely the malware authors are collecting pay-for-click profit on each redirect.
The full-stealth version also avoids creating an app icon or shortcut icon, but it hides almost all traces of its existence otherwise. There are no icons created in notifications. The only evidence of its presence is a simple xhelper listing in the app info section.
Digging deeper into the dropper
Mobile Trojan droppers typically contain an APK within the original app that is dropped, or installed, onto the mobile device. The most common place these additional APKs are stored is within the Assets Directory.
In this case, xHelper is not using an APK file stored in the Assets Directory. Instead, it’s a file that claims to be a JAR file, usually with the filename of xhelperdata.jar or firehelper.jar. However, try to open this JAR file in a Java decompiler/viewer, and you will receive an error.
The error has two causes. First, it’s actually a DEX (Dalvik Executable) file, a binary format holding compiled Android code that is not human-readable; you would need to convert the DEX file to a JAR to read it. Second, the file is encrypted.
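File formats of this kind are easy to tell apart by their leading bytes: DEX files begin with the magic `dex\n` followed by a version such as `035\0`, while JAR files are ZIP archives and begin with `PK`. An encrypted payload like xHelper’s matches neither, which is exactly why a Java decompiler chokes on it. A minimal sketch:

```python
# DEX files start with the magic bytes "dex\n" followed by a version, e.g. "035\0".
DEX_MAGIC = b"dex\n"
# JAR files are ZIP archives, which begin with "PK".
ZIP_MAGIC = b"PK"

def looks_like_dex(data: bytes) -> bool:
    """True if the payload carries a DEX header."""
    return data[:4] == DEX_MAGIC

def looks_like_jar(data: bytes) -> bool:
    """True if the payload carries a ZIP (JAR) header."""
    return data[:2] == ZIP_MAGIC
```

Run against xHelper’s "JAR", both checks fail until the file is decrypted, after which the DEX magic appears.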
Considering that the hidden file is encrypted, we assume the first step xHelper takes upon opening is decryption. After decryption, it uses an Android tool known as dex2oat, which takes an APK file and generates compilation artifact files that the runtime loads. In other words, it loads and runs the hidden DEX file on the mobile device. This is a clever workaround to simply installing another APK, and it obfuscates the malware’s true intentions.
What’s in a DEX
Every variant of xHelper uses this same method of disguising an encrypted file and loading it at runtime onto a mobile device. In order to further analyze xHelper, we needed to grab a decrypted version of the file caught during runtime. In this case, we were able to do so by running xHelper on a mobile device. Once it finished loading, it was easy to export the DEX file from storage.
However, even the decrypted version is an obfuscated tough nut to crack. In addition, each variant has slightly different code, making it difficult to pinpoint exactly what is the objective of the mobile malware.
Nevertheless, it’s my belief that its main function is to allow remote commands to be sent to the mobile device, aligning with its behavior of hiding in the background like a backdoor. Regardless of its true intentions, the clever attempt to obfuscate its dropper behavior is enough to classify this as a nasty threat.
High probability of infection
With xHelper being on our top 10 most detected list, there is a good chance Android users might come across it. Since we added the detection in mid-May 2019, it has been removed from nearly 33,000 mobile devices running Malwarebytes for Android. That number continues to rise by the hundreds daily.
The big question: What is the source of infection that is making this Trojan so prominent? Obviously this volume of traffic wouldn’t come from carelessly installing third-party apps alone. Further analysis shows that xHelper is being hosted on IP addresses in the United States: one in New York City, New York, and another in Dallas, Texas. Therefore, it’s safe to say this is an attack targeting the United States.
We can, for the most part, conclude that this mobile infection is being spread by web redirects, perhaps via the game websites mentioned above, which are hosted in the US as well, or other shady websites.
If confirmed to be true, our theory highlights the need to be cautious about the mobile websites you visit. Also, if your web browser redirects you to another site, be extra cautious about clicking anything. In most cases, simply backing out of the website using Android’s back key will keep you safe.
After turning away two vulnerability reports brought about by the same independent security researcher, Valve Corporation, the company behind the Steam video gaming platform admitted its mistake and updated its policies. (Source: Ars Technica)