QxSearch hijacker fakes failed installs

Recently, one of the more dominant search hijacker families on our radar has started to display some curious behavior. The family in question is delivered by various Chrome extensions and classified as PUP.Optional.QxSearch because of its description in listings of installed extensions, which tells us that “QxSearch configures your default search settings.”

QxSearch Tres extension

This branch of the search hijacker family is a clear descendant of SearchPrivacyPlus, which is referenced in our removal guide for a Chrome extension called SD App. The Chrome Web Store entries and websites that promote both QxSearch and SearchPrivacyPlus are almost identical. What’s different is that QxSearch tells users that the installation failed or that an extra step is required.

QxSearch tells you to try again?

However, despite the message asking users to try again, the extension has already been installed. Curious.

How can we recognize QxSearch extensions?

QxSearch can be found in more than one Chrome extension in the Web Store. We can recognize them by spotting the QxSearch description, which also shows up in the overview section of the store.

QxSearch mentioned in the webstore overview
QxSearch configures your default search settings
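
Since that description string is the common marker, it can double as a local hunting rule. Below is a minimal sketch, assuming a default Chrome profile location on Windows and a plain-text (non-localized) description field, that flags installed extensions carrying it:

```python
import json
from pathlib import Path

# Default location of Chrome extensions on Windows; adjust for your profile/OS.
EXT_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"
MARKER = "QxSearch configures your default search settings"

for manifest in EXT_DIR.glob("*/*/manifest.json"):
    try:
        data = json.loads(manifest.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError):
        continue
    # Descriptions may hide behind a __MSG_...__ locale key; we only
    # check the plain-text case here.
    if MARKER.lower() in data.get("description", "").lower():
        print("Possible QxSearch extension:", manifest.parent)
```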

At the moment, these extensions are installed from the Web Store after a redirect from sites that are served up by ad-rotators. The sites all look similar, showing a prompt that tells users, “Flash SD App required to proceed” and a button marked “Browse Safely” that leads to the extension in the Web Store.

Typical QxSearch promoting website

In the Web Store, another common denominator so far has been the “Offered by: AP” subhead.

offered by AP

During the installation, the “Permissions prompt” will show that the extension reads and changes your data on a number of websites:

number of websites

Clicking the “Show Details” link reveals that the sites the extension wants to read and change belong to some of the most commonly-used search engines, including Google, Bing, and Yahoo.

show details
bing.com, booking.com, google.com, yahoo.com, and appsearch.xyz are the domains targeted by this search hijacker.

The hijacker intercepts searches performed on these domains and redirects the user to a domain of their own, showing the search results while adding some sponsored results at the top.

We are not sure whether the failed install notification is by design or just sloppy programming, but the fact that the “error” hasn’t been corrected after a few weeks leads us to believe it might be on purpose.

Looking at the installation process, it appears the failure occurs when the extension is due to add an icon to the browser’s menu bar. As a result, these hijackers do not display an icon—a handy way to make them more difficult to remove.

Protection against QxSearch

Malwarebytes removes these extensions and blocks the known sites that promote them.

Malwarebytes blocks chissk.icu

It is useless to blacklist the extensions, because a new one is pushed out at least once every day. So instead, we’ll show you some typical traits they have in common so you can recognize them—and avoid them.

IOCs of QxSearch

Search result domains:

  • qxsearch.com
  • bigsrch.xyz

Landing pages:

  • chissk.icu
  • wajm.icu
  • xv11.xyz
  • … /chrome/new/2/?v=500#sdapp93

Similar but not the same

Another family of hijackers displays somewhat similar behavior by showing an “installation failed” notification.

installation interrupted

In this case, however, the “interrupted installation” refers to the installation of a second extension that the first one tried to trigger. In this family, the first extension is a search hijacker and the second is a “newtab” hijacker. The search hijackers in this family are detected as PUP.Optional.Safely, and the newtab hijacker is called Media New Tab.

Media New Tab extension

Why would search hijackers do this?

Search hijackers don’t generate large amounts of cash for threat actors the way ransomware or banking Trojans do. So, the publishers are always looking for ways to get installed on large numbers of systems and to stay installed for as long as possible.

This “installation failed” tactic could have been invented to make users think nothing was installed, so there would be no reason to check for or suspect suspicious behavior. It does not explain, however, why they opted to redirect to their own domain rather than simply adding the sponsored results as we have seen in the past.

So, it remains a bit of a mystery and reason enough to keep an eye on this family.

Search hijackers in general

Search hijackers come in different flavors. Basically, they can be divided into three main categories if you look at their methodology:

  • The hijacker redirects victims to the best paying search engine.
  • The hijacker redirects victims to their own site and shows additional sponsored ads.
  • The hijacker redirects victims to a popular search engine after inserting or replacing sponsored ads.

By far the most common vehicle is the browser extension, whether it’s called an extension, add-on, or browser helper object. But you will see different approaches here as well:

  • The extension lets the hijacker take over as the default search engine.
  • The extension takes over as “newtab” and shows a search field in that tab.
  • The extension takes permission to read and change your data on websites. It uses these permissions to alter the outcome of the victim’s searches.

In all of these cases, it helps the hijacker to stay hidden in plain sight, as the user might not notice that their search results are “off.” That seems to be exactly what this branch of the QxSearch family is doing.

A short lesson

The lesson we can take away from these search hijackers is that a notification claiming an install has failed is not reason enough to assume that nothing was installed. Stay vigilant so that, even if the culprit isn’t readily visible, you’ll know what to do.

Stay safe everyone!

The post QxSearch hijacker fakes failed installs appeared first on Malwarebytes Labs.

The Hidden Bee infection chain, part 1: the stegano pack

About a year ago, we described the Hidden Bee miner delivered by the Underminer Exploit Kit.

Hidden Bee has a complex and multi-layered internal structure that is unusual among cybercrime toolkits, making it an interesting phenomenon on the threat landscape. That’s why we’re dedicating a series of posts to exploring particular elements and updates made during one year of its evolution.

Recently, we decided to revisit this interesting miner, describing its loader that starts the infection from a single malicious executable. This post will present an alternative loader that is deployed when the infection starts from the Underminer Exploit Kit. It is analogous to the loader we described in the following posts from 2018: [1] and [2].

The dropped payloads: an overview

The first time we spotted Hidden Bee, it started the infection from a Flash exploit. It downloaded and injected two elements with WASM extensions that in reality were executable modules in a custom format. We described them in detail here.

The files with WASM extensions, observed a year ago

Those elements were the initial loaders, responsible for initiating the infection chain that at the end installed the miner.

Nowadays, those elements have changed. If we take a look at the elements dropped by the same EK today, we will no longer find those WASM extensions. Instead, we encounter various multimedia files: a WAV (alternatively two WAVs), a JPEG, and a PNG.

The elements downloaded nowadays: WAV, JPG, PNG

The WAV files are downloaded by iexplore.exe, the browser where the exploit is run. In contrast, the images are downloaded at later stages of infection. For example, the JPG is always downloaded from the dllhost.exe process. The PNG is often downloaded from yet another process.

In some runs, we observed the PNG to be downloaded instead of the JPG:

Alternative: PNG being downloaded after WAV

We will start our journey of Hidden Bee analysis by looking at these files. Then, we will move on to the code responsible for processing them in order to reveal their hidden purpose.

The roadmap of the full described package:

Diagram showing the transitions between the elements

The downloaded WAV

The WAV file sounds like grey noise, and we suspect that it is meant to hide some binary belonging to the malware.

An oscillogram of the WAV file

The data is unreadable, probably encrypted or obfuscated:

We also found a repeating pattern inside, which looks like encrypted padding. The size of the chunk is 8 bytes.

The repeating pattern inside the file: 8 bytes long

This time, using the repeating pattern as an XOR key didn’t help in getting a readable result, so probably some more complex block cipher was used.
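
For reference, that quick test looks like the sketch below; we assume the repeating 8-byte chunk from the padding can be tried as a rolling XOR key over the WAV data (here the output stays unreadable, supporting the block cipher theory):

```python
from itertools import cycle

def xor_with_key(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

wav_data = open("sample.wav", "rb").read()[0x2C:]  # skip the 44-byte WAV header
candidate_key = wav_data[-8:]  # the repeating 8-byte pattern from the padding
print(xor_with_key(wav_data, candidate_key)[:64].hex())  # still unreadable
```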

The JPG

Below is a sample JPG, downloaded from the URL in the format: /views/[unique_string].jpg

In contrast to the WAV content, the JPG always looks like a valid image. (Interestingly, all the JPGs we observed have a consistent theme of manga-styled girls.) However, if we take a closer look at the image, we can see that some data is appended at the end.

Let’s analyze the JPG and try to extract the payload.

First, I opened the image in a hex editor (e.g. HxD). The size of the full image is 156,005 bytes. The last 118,762 bytes belong to the malware. So, we need to remove the first 37,243 bytes (156,005-118,762=37,243) in order to get the payload.

The appended part of the JPG

The payload does not look like valid code, so it is probably obfuscated. Let’s try the easiest option first and see if there are any candidates for the XOR key. We can see that the payload has padding at the end:

Let’s try to apply the repeating character (in the given example it is 0xE5) as an XOR key. This is the result (1953032199142ea8c5872107da8f2297):

Repeating the experiment on various payloads, we can see that the result always starts with the keyword !rcx. As we know from analyzing other elements of Hidden Bee, the authors of this malware decided to use various custom formats named after 64-bit Intel registers. We also encountered packages starting with !rbx and !rsi at different layers. So, this is the first element in the chain that uses this convention.
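
The whole extraction can be scripted in a few lines. Here is a minimal sketch for the sample above; the carve offset is specific to this file, and we assume the last byte of the appended data always doubles as the XOR key:

```python
data = open("sample.jpg", "rb").read()
payload = data[37243:]  # 156,005 - 118,762 = 37,243 bytes of real image to skip
key = payload[-1]       # the repeated padding byte, 0xE5 in this sample
decoded = bytes(b ^ key for b in payload)
assert decoded.startswith(b"!rcx"), "unexpected magic"
open("payload.rcx", "wb").write(decoded)
```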

When we load the !rcx module into IDA, we can confirm that it contains valid code. A more detailed explanation of the !rcx format will be given later in this article.

The PNG

Let’s have a look at a sample PNG, downloaded as “captcha.png” (URL format: /images/captcha.png?mod=attachment&u=[unique_id]):

Although it is a PNG in a valid format, it looks like noise. It probably represents bytes of some encrypted data. An attempt at converting the PNG to raw bytes didn’t give any readable results. We need to analyze the code in order to discover what it hides.

Code analysis: the initial SWF file

The initial SWF file is embedded on the website and responsible for serving the exploit. If we look inside it, we will not find anything malicious at first. However, among the binary data we can find another suspicious WAV as an audio asset:

The beginning of the file:

This SWF file also contains a decoder for it:

The function “decode” takes four parameters. The first is the byte array containing the WAV asset: that is the content to be decoded. The second is an MD5 (the “setup” function is an MD5 implementation) of the concatenated AppId and AppToken: that is probably the encryption key. The third is a salt (probably the initialization vector of the crypto).

The salt is fetched from the HTML page, where the Flash component is embedded:

Alternative case: two WAV files

Sometimes, rather than embedding the WAV containing the Flash exploit, authors use another model of delivering it. They store the URL to the WAV, and then they retrieve the file.

In the example below, we can see how this model is applied to Hidden Bee. The salt, along with the WAV URL, are both stored in the JavaScript embedded in the HTML:

The Flash file first loads it, and then decodes it as the next step:

Looking at the traffic capture, we can see that in this case, not one, but two WAV files are downloaded:

A case when two WAV files were downloaded (and none embedded in the Flash)

The algorithm used to encrypt the content of the first WAV may vary, and is sometimes supplied as one of the parameters. After the content is fetched, the data from the WAV files is decoded using one of the available algorithms:

We can see that the expected content is a Flash file that is then loaded:

The “decode” function

The function “decode” is imported from the package “com.google”:

The full decompiled code is available here.

When we look inside, we see that the code is slightly obfuscated:

Looking at the decompiled code, we see some interesting constants. For example, -889275714 in hex is 0xCAFEBABE. As we found during analysis of other Hidden Bee elements, this DWORD was used by the same authors before as a magic number identifying one of the custom formats.
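
A quick sanity check of that constant, interpreting the decompiler’s signed 32-bit decimal as an unsigned DWORD:

```python
# The decompiler shows signed 32-bit integers; mask to see the DWORD value.
print(hex(-889275714 & 0xFFFFFFFF))  # -> 0xcafebabe
```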

Internally, there are references to a function from another module: E_ENCRYPT_process_bytes(). Inside this function, we see calls suggesting that the Rabbit Cipher has been used:

Rabbit uses a 128-bit key (the same length as the MD5 hash that was mentioned before) and a 64-bit initialization vector. (In different runs, a different encryption algorithm may be selected.)
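
Putting those observations together, the key material can be reproduced as below. This is a sketch only: we assume the AppId and AppToken are concatenated as plain strings before hashing, and the example values are hypothetical:

```python
import hashlib

def rabbit_key(app_id: str, app_token: str) -> bytes:
    """128-bit key: MD5 of AppId + AppToken, per the decompiled 'decode' call."""
    return hashlib.md5((app_id + app_token).encode()).digest()  # 16 bytes

# The salt fetched from the embedding HTML page would then serve as the
# 64-bit initialization vector for the Rabbit cipher.
key = rabbit_key("exampleAppId", "exampleAppToken")  # hypothetical values
print(key.hex())
```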

After the decoding process is complete, the revealed content is loaded:

The first WAV: a Flash exploit

The decoded WAV contains a package with two elements embedded: a Flash file (movies.swf) and the configuration file (config.cfg). The decrypted data starts from the magic DWORD 0xCAFEBABE, which we noticed in the code of the previous SWF.

The Flash file (movies.swf) contains an embedded exploit. In the analyzed case, the exploit used is CVE-2015-5122; however, a different exploit may be used on a different machine:

The payload (shellcode) is stored in the form of an array (binary version available here: 9aec11ff93b9df14f060f78fbb1b47a2):

The configuration file (config.cfg) contains the URL to another WAV file.

The payload is padded with NOP (0x90) bytes, and the parameters, including the configuration, are filled there before the payload runs.

The fragment of the code feeding the configuration into the payload

The shellcode: downloading the second WAV

The second WAV, in contrast to the first one, is always downloaded and never embedded. It is retrieved by the “PayloadWin32” shellcode (9aec11ff93b9df14f060f78fbb1b47a2), deployed after the successful exploitation.

Looking inside this shellcode, we find the function that is responsible for downloading and decrypting another WAV. The shellcode uses parameters that were filled by the previous layer. This buffer contains the URL that will be queried and the key that will be used for decryption of the payload. It loads functions from wininet.dll using their checksums. After the initialization steps, it queries the supplied URL. The expected result is a buffer with a header typical for WAV files.

As we already suspected, the data of the WAV (starting from the offset 0x2C) contains the encrypted content. Indeed, blocks that are 8 bytes long are decrypted in a loop:

After the decryption is complete, the next module will be revealed. It is interesting to take a look at the expected header of the payload to learn which format is used for the output element. This time, the decoded data is supposed to start with the following magic numbers: 0x01, 0x04, …, 0x10.

The second WAV: an executable in proprietary format

The illustration below shows how the data of the WAV looks after being decrypted (9b37c9ec19a53007d450b9b9c8febbe2):

This is an executable component that is loaded into Internet Explorer. After it decodes the imports, it starts to look much more familiar:

We can see that it follows a structure analogous to the one described in last year’s article.

This module is first executed within Internet Explorer. Then, it creates another process (dllhost.exe) in a suspended state:

It injects its original copy there (769a05f0eddd6ef2ebdd13618b244758):

Then it redirects execution to its loading function. Below, we can see the Entry Point of the implanted module within dllhost.exe.

A detailed analysis of the execution flow of this module and its format will be given later in the article.

At this point, it is important to note that dllhost.exe is the process that further downloads the aforementioned images.

The modules with the custom format

The module with the custom format is analogous to the one described before. However, we can see that it has significantly evolved.

There are changes in the header, as well as improvements in the implementation.

Changes in the custom format

The new header is similar to the previous one. The few details that have changed are: the magic number at the beginning (from 0x10000301 to 0x10000401), and the format in which the DLLs are stored (the length of a DLL name has been added). That’s why we will refer to this format as “0x10000401 format.”

Another change is that the names of the DLLs are now obfuscated by a simple XOR with a 1-byte key. They are deobfuscated just before being loaded.

Summing up, we can visualize the new format in the following way:

Obfuscation used

This time, the authors decided to obfuscate all the strings used inside the module. All the strings are now decoded just before use.

Example: decoding a string before use

The decoding algorithm is simple, based on XOR:

The string-decoding algorithm

Inside the images downloader

Let’s look inside the first module in the 0x10000401 format that we encountered. This module is an initial stage, and its role is to download and unpack the other components. One such component is in a CAB format (that’s why we can see the Cabinet.dll among the imported DLLs).

The role of this module is similar to the first “WASM” mentioned in our post a year ago. However, the current version is not only better protected, but also comes with some improvements. This time, the downloaded content is hidden in images. So, analyzing this element can help us understand how the steganography works.

First, we can see that the URLs are retrieved from their Base64 form:

This string decodes to a list containing URLs of the PNG and JPG files that are going to be downloaded. For each sample, this set is unique. None of the URLs can be reused: the server gives a response only once. An example of a URL set:

http://38.75.137.9:9088/pubs/wiki.php?id=937a4eadd6f5a94b3738a58dcc79ca13
http://38.75.137.9:9088/images/captcha.png?mod=attachment&u=357e27e8af72925144ec1db2421d0cc5
http://38.75.137.9:9088/views/q5ul78uv4b4q8bg8d95canrsns.jpg

So, we can confirm that this module is the one responsible for downloading and processing the observed images. Indeed, inside we can find the functions responsible for their decoding.
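
For illustration, recovering the URL list is a one-call job once the Base64 blob is located. The sketch below simply round-trips the URLs observed in this sample, since the real encoded blob differs per sample:

```python
import base64

# Illustrative only: round-trip the URLs observed in this sample.
urls = (
    "http://38.75.137.9:9088/pubs/wiki.php?id=937a4eadd6f5a94b3738a58dcc79ca13\n"
    "http://38.75.137.9:9088/images/captcha.png?mod=attachment&u=357e27e8af72925144ec1db2421d0cc5\n"
    "http://38.75.137.9:9088/views/q5ul78uv4b4q8bg8d95canrsns.jpg"
)
blob = base64.b64encode(urls.encode("ascii"))        # what the module stores
print(base64.b64decode(blob).decode().splitlines())  # what the downloader recovers
```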

Decoding the JPG

After the payload is retrieved, the JPG header is validated.

Then, the payload is decoded by simply XORing it with its last byte. The decoded content is expected to start with the !rcx magic ID.

After decoding the content, the integrity of the !rcx module is validated with the help of a SHA256 hash. The valid hash is stored in the module’s header and compared with the calculated hash of the file content.

If the validation passes, the shellcode stored in the !rcx module is loaded. More details about the execution flow will be given later.
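
A sketch of that integrity check is below. The field offsets here are hypothetical, used only to illustrate the logic; the real layout follows the !rcx header shown next:

```python
import hashlib

def validate_rcx(module: bytes) -> bool:
    """Illustrative check: compare the SHA256 stored in the header with a
    hash computed over the content. Offsets are hypothetical."""
    assert module.startswith(b"!rcx")
    stored_hash = module[8:40]  # hypothetical: 32-byte SHA256 field
    content = module[40:]       # hypothetical: region covered by the hash
    return hashlib.sha256(content).digest() == stored_hash
```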

The !rcx package has a simple header:

Decoding the PNG

Retrieving the content from the PNG is more complex.

“captcha.png” – the encrypted CAB file

First, after downloading, the PNG header is checked:

The function decoding the PNG has the following flow:

It converts the PNG into byte content and decrypts it with the help of the ARIA cipher. The result should be a file in CAB format. The unpacked CAB is supposed to contain a module “bin/i386/core.sdb” that has also occurred in our previous encounters with Hidden Bee.

The authors are careful not to reuse URLs or encryption keys. That’s why the ARIA key is different for every unique payload. It is stored just after the end of the 0x10000401 module:

Key format: WORD key length; BYTE key_bytes[];
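
Parsing that structure is a two-step read. A minimal sketch, assuming the WORD is little-endian:

```python
import struct

def read_aria_key(blob: bytes, module_size: int) -> bytes:
    """Read the ARIA key stored right after the 0x10000401 module:
    a WORD key length followed by the key bytes (little-endian assumed)."""
    (key_len,) = struct.unpack_from("<H", blob, module_size)
    return blob[module_size + 2 : module_size + 2 + key_len]
```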

During the module’s loading, the key is rewritten into another memory area, from which it is used to decrypt the downloaded module.

The CAB file retrieved from the PNG is available here: 001bdc26b2845dcf839f67a8760c6839

It contains core.sdb (d1a2fdc79c154b120a0e52c46a73478d), another module in Hidden Bee’s custom format.

Inside core.sdb

This module (retrieved from the PNG) is a second downloader component in the 0x10000401 format. This time, it uses a custom TCP-based protocol, referenced by the authors as SLTP. (This protocol was also used by the analogous component seen one year ago.) The embedded links:

sltp://dns.howtocom.site:1108/minimal.bin?id=998
sltp://bbs.favcom.space:1108/setup.bin?id=999

Execution flow

  1. Checks for blacklisted processes. If any are detected, exits.
  2. Removes the functions DbgBreakPoint and DbgUserBreakPoint by overwriting their beginnings with a RET instruction.
  3. Checks if the malware is already installed. If yes, exits.
  4. Creates an installation mutex {71BB7F1C-D700-4487-B9C6-6DD9863DFE91}-ins.
  5. If the module was run with the flag==1:
    1. Connects to the first address: sltp://dns.howtocom.site:1108/minimal.bin?id=998
    2. Sets an environment variable INSTALL_SOURCE to the value given as an argument.
    3. Runs the downloaded next stage module.
  6. If the module was run with the flag!=1:
    1. Performs checks against VM. If detected, exits.
    2. Connects to the second address: sltp://bbs.favcom.space:1108/setup.bin?id=999. This time, appends the victim’s fingerprint to the URL. Format: <URL>&sid=<INSTALL_SID>&sz=<unique machine ID: 16 bytes hex>&os=<Windows version number>&ar=<architecture> (see the sketch after this list)
    3. Runs the downloaded next stage module.
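
A minimal sketch of how such a fingerprinted URL could be assembled from the format above; every value here is a placeholder for illustration:

```python
def fingerprint_url(base: str, sid: str, machine_id: bytes,
                    os_ver: str, arch: str) -> str:
    """Build the victim-fingerprint query string described in step 6.2.
    All values are placeholders; the real ones come from the infected host."""
    return f"{base}&sid={sid}&sz={machine_id.hex()}&os={os_ver}&ar={arch}"

print(fingerprint_url(
    "sltp://bbs.favcom.space:1108/setup.bin?id=999",
    sid="0",               # INSTALL_SID
    machine_id=bytes(16),  # unique machine ID: 16 bytes, hex-encoded
    os_ver="6.1",          # Windows version number
    arch="x86",            # architecture
))
```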

Defensive checks

At this stage, many anti-analysis checks are deployed. First, there are checks to detect if any of the blacklisted processes are running. The enumeration of the processes is implemented using a low-level function: NtQuerySystemInformation with a parameter 5 (SystemProcessInformation).

The blacklist contains popular debuggers and sniffers:

“devenv.exe” , “wireshark.exe”, “vmacthlp.exe”, “procmon.exe”, “ollydbg.exe”, “idag.exe”, “ImmunityDebugger.exe”, “windbg.exe”
“EHSniffer.exe”, “iris.exe”, “procexp.exe”, “filemon.exe”, “fiddler.exe”

The names of the processes are obfuscated, so they are not visible on the strings list. If any of those processes are detected, the execution of the module terminates.
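
The effect of this check can be approximated in a few lines. The sketch below uses psutil purely for readability; the malware itself enumerates processes with the raw NtQuerySystemInformation call rather than any high-level API:

```python
import sys
import psutil  # illustration only; the malware calls NtQuerySystemInformation(5)

BLACKLIST = {
    "devenv.exe", "wireshark.exe", "vmacthlp.exe", "procmon.exe",
    "ollydbg.exe", "idag.exe", "immunitydebugger.exe", "windbg.exe",
    "ehsniffer.exe", "iris.exe", "procexp.exe", "filemon.exe", "fiddler.exe",
}

def analysis_tools_running() -> bool:
    """Compare every running process name against the blacklist."""
    return any(p.info["name"] and p.info["name"].lower() in BLACKLIST
               for p in psutil.process_iter(["name"]))

if analysis_tools_running():
    sys.exit(0)  # the module terminates silently, as described above
```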

Another function deploys a set of anti-VM checks, which include:

CPUID with EAX=0x40000000 (a check for the hypervisor brand):

The VMware I/O port (more details [here]):

VPCEXT instruction (more details [here])

Checking the list of common VM vendors:

Checking the BIOS versions typical for virtual environments:

Detection of any of the features suggesting a VM results in termination of the component.

Downloading new modules

The next elements of Hidden Bee are downloaded over the custom “SLTP” protocol.

The raw TCP socket created to communicate using the SLTP protocol:

The communication is encrypted. We can see that the expected output is a shellcode that is loaded and executed:

The way in which it is loaded reminds us of the elements we described recently in “Hidden Bee: Let’s go down the rabbit hole.” The current module loads a list of functions that will be passed to the next module. It is a minimalistic, custom version of an Import Table. It also passes the memory with the downloaded filesystem, to be used for further loading of components.

The !rcx package

This element retrieves the custom filesystem used by this malware. As we know from previous analysis, Hidden Bee uses its own, custom filesystems that are mounted in the memory of the malware and passed to its components. This filesystem is important for the execution flow because it contains many other components that are supposed to be installed on the attacked system in order to continue the infection.

As mentioned before, unpacking the JPG gave us an !rcx package. After this package is downloaded, and its SHA256 checksum is validated, it is repackaged. First, at the end of the !rcx package, the list of URLs (JPG, PNG) from the previous module is copied. Then, the ARIA key is copied. The size of the module and its SHA256 hash are updated. Then, the execution is redirected to the first stage shellcode fetched from the !rcx.

This shellcode is the one we saw first, after decoding the !rcx package from the JPG. Yet, looking at this part, we do not see anything malicious. The more important elements are well protected and revealed only at the next execution stages.

The shellcode from the !rcx package is executed in two stages. The first one unpacks and prepares the second. First, it loads its own imports using hardcoded names of libraries.

The checksums of the functions that are going to be used are stored in the module and compared against checksums calculated from the exported names:

The checksum calculation algorithm

It uses the functions from kernel32.dll: GetProcessHeap, VirtualAlloc, VirtualFree, and from ntdll.dll: RtlAllocateHeap, RtlFreeHeap, NtQueryInformationProcess.
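
We haven’t reproduced the exact hash routine here (it is the one pictured above), so the sketch below substitutes a generic ROR13-style checksum just to illustrate the resolve-by-checksum pattern:

```python
def ror32(value: int, count: int) -> int:
    """Rotate a 32-bit value right by count bits."""
    return ((value >> count) | (value << (32 - count))) & 0xFFFFFFFF

def name_checksum(name: str) -> int:
    """Generic ROR13-style hash over an export name; the real algorithm differs."""
    h = 0
    for ch in name.encode("ascii"):
        h = (ror32(h, 13) + ch) & 0xFFFFFFFF
    return h

def resolve(wanted: set[int], exports: dict[str, int]) -> dict[int, int]:
    """Walk export names, keeping addresses whose checksum is on the wanted list."""
    return {name_checksum(n): addr for n, addr in exports.items()
            if name_checksum(n) in wanted}
```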

The repackaged !rcx module is supposed to be supplied as one of the arguments at the Entry Point of the first shellcode. This matters because the second stage shellcode will be unpacked from the supplied !rcx package.

Checking the !rcx magic (first stage shellcode)

A new memory area is allocated, and the second stage shellcode is unpacked there.

Decoding and calling next module

Inside the second shellcode, we see strings referencing further components of the Hidden Bee malware:

/bin/i386/preload
/bin/i386/coredll.bin

The role of the second stage is unpacking another part from the !rcx: an !rdx package.

Checking the !rdx magic (second stage shellcode)

From our previous experience, we know that the !rdx package is a custom filesystem containing modules. Indeed, after the decryption is complete, the custom filesystem is revealed:

So the part that was hidden in the JPG is, in reality, a package that decrypts the custom filesystem and deploys the next stage modules: /bin/i386/preload and /bin/i386/coredll.bin. This filesystem has even more elements that are loaded at later stages of the infection. Their full functionality will be described in the next article in our series.

Even more hidden

From the beginning, Hidden Bee malware has been well designed and innovative. Looking at one year of its evolution, we can be sure that the authors are serious about making it even more stealthy—and they don’t stop improving it.

Although the initial dropper uses components analogous to ones observed in the past, revealing their encrypted content now takes many more steps and much more patience. The additional difficulty in the analysis is introduced by the fact that the URLs and encryption keys are never reused, and work only for a single session.

The team behind this malware is skilled and determined. We expect that the Hidden Bee malware won’t be going extinct anytime soon.

The post The Hidden Bee infection chain, part 1: the stegano pack appeared first on Malwarebytes Labs.

Trojans, ransomware dominate 2018–2019 education threat landscape

Heading into the new school year, we know educational institutions have a lot to worry about. Teacher assignments. Syllabus development. Gathering supplies. Readying classrooms.

But one issue should be worrying school administrators and boards of education more than most: securing their networks against cybercrime.

In the 2018–2019 school year, education was the top target for Trojan malware, the number one most-detected (and therefore most pervasive) threat category for all businesses in 2018 and early 2019. Adware and ransomware were also particularly drawn to the education sector last year, finding it their first and second-most desired target among industries, respectively.

To better analyze these threats, we pulled telemetry on educational institutions from our business products, as well as from IP ranges connecting from .edu domains to our consumer products. What we found was that from January to June 2019, adware, Trojans, and backdoors were the three most common threats for schools. In fact, 43 percent of all education detections were adware, while 25 percent were Trojans. Another 3 percent were backdoors.

So what does this tell us to expect for the 2019–2020 school year? For one, educational institutions must brace themselves for a continuing onslaught of cyberattacks, as the elements that made them attractive to criminals have not changed. However, more importantly, by examining trends in cybercrime and considering solutions to weaknesses that made them susceptible to attack, schools may be able to expel troublesome threat actors from their networks for good.

Why education?

Surely there are more profitable targets for cybercriminals than education. Technology and finance have exponentially bigger budgets that could be tapped into via large ransom demands. Healthcare operations and data are critical to patient care—loss of either could result in lost lives.

But cybercriminals are opportunistic: If they see an easy target ripe with valuable data, they are going to take advantage. Why spend the money and time developing custom code for sophisticated attack vectors when they can practically walk through an open door onto school networks?

There are several key factors that combine to make schools easy targets. The first is that most institutions belonging to the education sector—especially those in public education—struggle with funding. Therefore, the majority of their budget is directed to core curriculum and not so much to security. Hiring IT and security staff, training on best practices, and purchasing robust security tools and programs are often an afterthought.

The second is that the technological infrastructure of educational institutions is typically outdated and easily penetrated by cybercriminals. Legacy hardware and operating systems that are no longer supported with patches. Custom school software and learning management systems (LMSes) that are long overdue for updates. Wi-Fi routers that are operating on default passwords. Each of these makes schools even more vulnerable to attack.

Adding insult to injury, school networks are at risk because students and staff connect from personal devices (that they may have jailbroken) both on-premises and at home. With a rotating roster of new students and sometimes personnel each year, there’s a larger and more open attack surface for criminals to infiltrate. In fact, we found that devices plugging into the school network (vs. school-owned devices) represented 1 in 3 compromises detected in H1 2019.

To complicate matters, students themselves often hack school software out of sheer boredom or run DDoS attacks so they can shut down the Internet and disrupt the school day. Each infiltration only widens the defense perimeter, making it nearly impossible for those in education to protect their students and themselves from the cyberattacks that are sure to come.

And with such easy access, what, exactly, are criminals after? In a word: data. Schools collect and store valuable, sensitive data on their children and staff members, from allergies and learning disorders to grades and social security numbers. This information is highly sought-after by threat actors, who can use it to hold schools for ransom or to sell for high profit margins on the black market (data belonging to children typically garners a higher price).

School threats: a closer look

Adware represented the largest percentage of detections on school devices in H1 2019. Many of the families detected, such as SearchEncrypt, Spigot, and IronCore, advertise themselves as privacy-focused search engines, Minecraft plugins, or other legitimate teaching tools. Instead, they bombard users with pop-up ads, toolbars, and website redirects. While not as harmful as Trojans or ransomware, adware weakens an already feeble defense system.

Next up are Trojans, which took up one quarter of the threat detections on school endpoints in H1 2019. In 2018, Trojans were the talk of the town, and detections of this threat in organizations increased by 132 percent that year.

While still quite active in the first half of 2019, we saw Trojan detections decrease a bit over the summer, giving way to a landslide of ransomware attacks. In fact, ransomware attacks against organizations increased a shocking 365 percent from Q2 2018 to Q2 2019. Whether this is an indication of a switch in tactics as we head into the fall or a brief summer vacation from Trojans remains to be seen.

The top two families of Trojans in education are the same two that have been causing headaches for organizations worldwide: Emotet and TrickBot. Emotet leads Trojan detections in every industry, but has grown at an accelerated pace in education. In H1 2019, Emotet was the fifth-most predominant threat identified in schools, moving up from 11th position in 2018. Meanwhile, TrickBot, Emotet’s bullying cousin, represents the single largest detection type in education among Trojans, pulling in nearly 6 percent of all identified compromises.

Emotet and TrickBot often work together in blended attacks on organizations, with Emotet functioning as a downloader and spam module, while TrickBot infiltrates the network and spreads laterally using stolen NSA exploits. Sometimes the buck stops there. Other times, TrickBot has one more trick up its sleeve: Ryuk ransomware.

Fortunately for schools but unfortunately for our studies, Malwarebytes stops these Emotet-drops-TrickBot-drops-Ryuk attacks much earlier in the chain, typically blocking Emotet or TrickBot with its real-time protection engine or anti-exploit technology. The attack never progresses to the Ryuk stage, but our guess is that many more of these potential breaches would have been troublesome ransomware infections for schools if they hadn’t had the proper security solutions in place.

Class of 2020 threats

The class of 2020 may have a whole lot of threats to contend with, as some districts are already grappling with back-to-school attacks, according to The New York Times. Trojans such as Emotet and TrickBot had wildly successful runs last year—expect them, or other multi-purpose malware like them, to make a comeback.

In addition, ransomware has already made waves for one school district in Houston County, Alabama, which delayed its return to classes by 12 days because of an attack. Whether it’s delivered via Trojan/blended attack or on its own, ransomware and other sophisticated threats can bring lessons to a halt if not dealt with swiftly.

In 2019, Malwarebytes assisted the East Irondequoit Central School District in New York during a critical Emotet outbreak that a legacy endpoint security provider failed to stop. Emotet ran rampant across the district’s endpoint environment, infecting 1,400 devices and impacting network operations. Thankfully Malwarebytes was able to isolate, remediate, and recover all infected endpoints in 20 days without completely disrupting the network for students or staff.

If school IT teams do their research, pitch smart security solutions to their boards for funding, and help students and staff adopt best practices for online hygiene, they can help make sure our educational institutions remain functional, safe places for students to learn.

The post Trojans, ransomware dominate 2018–2019 education threat landscape appeared first on Malwarebytes Labs.

Data and device security for domestic abuse survivors

For more than a month, Malwarebytes has worked with advocacy groups, law enforcement, and cybersecurity researchers to deliver helpful information in fighting stalkerware—the disturbing cyber threat that enables domestic abusers to spy on their partners’ digital and physical lives.

While we’ve ramped up our detections, written a safety guide for those who might have stalkerware on their devices, and analyzed the technical symptoms of stalkerware, we wanted to take a few steps back.

Many individuals need help before stalkerware even strikes. They need help with the basics, like how to protect a device from an abuser, how to secure sensitive data on a device, and how to keep their private conversations private.

At the end of July, we presented on data and device security to more than 150 audience members at the National Network to End Domestic Violence’s Technology Summit. Malwarebytes broke down several simple, actionable device guidelines for domestic abuse survivors.

Let’s take a look.

Device security

Similar to our guide for domestic abuse survivors who suspected stalkerware was implanted on their devices, the following safety tips are options—not every tip can be used by every survivor. The survivor who just escaped their home has different options available to them than the survivor who still lives with their partner, or the survivor being supported at a domestic abuse shelter’s safe house.

Taken together, we hope these guidelines will give individuals the information they need to stay safe in the ways that help them most.

Create and use a device passcode

A device passcode is the first line of defense in device security. With it, you can prevent unwanted third parties from rummaging through your apps, reading your notes, viewing your messages and emails, and looking through your search history. It’s simple, it’s effective, and it’s available on nearly every single modern smart device available today.

When creating a passcode, create a string of at least six digits that has no immediate connection to you. That way, another person can’t guess your passcode by inputting a few important numbers, like your birthdate, your zip code, or select digits from your phone number.

If you own a newer iPhone or Android device, you can also choose to lock your device by using your biometric information, like a scan of your face or thumbprint.

Finally, while it may seem like an annoyance, you should set your device to require a passcode for every attempted unlock. Some devices let their users keep a device unlocked if the passcode was entered within the past 10 minutes, or even an hour. Don’t do that. It leaves your device unnecessarily vulnerable to someone simply picking it up and accessing the important data inside.

Install an antivirus

Now that more cybersecurity companies are taking stalkerware seriously, you have a few options in best protecting yourself and your device.

The free-to-download version of Malwarebytes, available for iOS and Android devices, as well as Mac and PC computers, both detects and removes thousands of malware strains that we’ve identified as stalkerware. Even if you don’t recognize the names of popular stalkerware products, we’re still helping you find and remove them.

The premium version of Malwarebytes, which has a paid subscription, runs a 24-hour defense shield on your device, preventing stalkerware from being implanted on your device.

Practice good link hygiene

Stalkerware typically gets delivered onto a device when someone clicks on a link that has been sent through an email or text. Because of this easy installation process, you should be careful with the links you click.

Do not open any email attachments from unknown senders, and do not click on links sent in text messages from phone numbers you do not recognize.

Check your notification settings

While helpful, the notifications that pop up on your device to tell you about the latest news alert or your most recent email carry a security risk. Depending on your phone settings, these notifications could reveal—even on your phone’s lock screen—the subject line of an email, the sender, and even the first few words of a text message.

To protect yourself, you can navigate to your device’s system settings and find the specific settings for notifications, blocking them from revealing any details on your lock screen.

On iPhones, you have the option of hiding all notifications, or choosing how notifications are shown: on the Lock Screen, on a pull-down menu, or in banners across the top of your device’s screen.

On Android devices, depending on the model, you can again go into your system settings, find the notification settings, and choose whether your notifications will “Show Content” or “Hide Content.”

Update your software

This last step for securing your device has two huge upsides: It’s easy to remember, and it’s extremely useful. Updating your software is the simplest, most efficient way to protect your device from known security vulnerabilities. The longer your software goes without an update, the longer it stays open to cyber threats.

Don’t take the risk.

Securing your sensitive data

Next, we’re going to explain the many ways to secure the data that lives on your device. Maybe you’re thinking of protecting photos that you will eventually send to law enforcement as evidence of abuse. Maybe you want to keep your private conversations private. Or maybe you’re trying to protect the online accounts that you access on your device.

Here are a few ways to protect yourself.

Use a secure messaging app

Secure messaging apps do just that—they keep messages secure. By implementing a feature called end-to-end encryption, these apps prevent third parties from eavesdropping on your conversations. For many of these apps, even the companies that develop them cannot access users’ messages, because those companies do not have the keys necessary to decrypt that data.

There are several options today for both iOS and Android devices, and you may have heard their names—Signal, WhatsApp, iMessage, Wire, and more.

Because there are several options, the important thing to remember is that there is no one, perfect secure messaging app—there is only the right secure messaging app for you. When choosing an app, think about what you need. The domestic abuse survivor who still lives with their partner might need to have their messages erased if their partner somehow finds a way into their device. The domestic abuse survivor with a new device might need to hide their phone number from new contacts.

Here is a brief rundown of some popular secure messaging apps that offer end-to-end encryption:

  • iMessage
    • Available only on iOS devices
    • Requires a text messaging plan through your phone provider
    • Apple has fought requests to reveal iMessages
  • Signal
    • “Ephemeral” messages that disappear entirely after a user’s chosen time limit
    • Secondary lock screen to enter the application
    • Users can “name” a conversation to obscure contact details. For example, a message thread with “Jane Doe” can be renamed “Mom”
  • WhatsApp
    • Users can manually clear chats
    • Secondary lock screen based on an iPhone’s Face ID credentials
  • Wire
    • Easiest option for users who want to hide their phone numbers
    • Relies on Wire account names to talk to other users

Securing data and online accounts—methods and challenges

There are several methods for protecting the data stored on your device, and each method comes with its own challenges. Below, we look at what you should know when using these methods.

Encrypting your device’s data

Encrypting the data stored on your device means that, if your device is stolen or lost, most anyone who gets their hands on it cannot read the data in any legible form without knowing your passcode. Even if a third party tries to copy the entire contents of your device onto a laptop or desktop computer, your data will still be encrypted and unreadable without your passcode.

Your photos, videos, notes, screenshots, and audio recordings will all be protected this way, so long as a third party does not have some very high-tech, currently-questionable forensic devices only sold to law enforcement.

On iPhones, your data is encrypted by default, and on Android devices, users can go into their settings and choose to encrypt their data.

The one caveat to remember here is that, if you choose this method to protect sensitive data and you lose your device, you also lose that data.

Using a secure folder app

On the Apple App store and the Google Play Store, several apps let users choose specific pieces of data that they can encrypt and protect behind a separate passcode that is required to access the app itself.

On the Google Play Store, the app Secure Folder (easy enough name to remember, right?) lets users hide the Secure Folder app itself from appearing in a device’s app menu. This could provide a level of clandestine coverage to domestic abuse survivors who may not have the agency to have a secret, unshared device passcode.

But, if you use a secure folder app that does not have a cloud backup option, the same wrinkle applies—if you lose your device, you lose your data.

Uploading data to the cloud

Let’s talk about cloud storage.

Uploading your data to the cloud has become a popular option for both individuals and businesses that want to access data across multiple devices, removing the concern of losing a specific document, spreadsheet, or presentation because it only rests on one device.

Several popular cloud storage platforms today include iCloud, Google Drive, Dropbox, Box, and Amazon Drive.

Malwarebytes Labs has previously told readers that uploading their data to an encrypted cloud database is a secure model for protecting data, and we stand by that. But cloud storage presents another challenge: Privacy.

Uploading your data to the cloud is inherently not private because, rather than relying on only your device’s storage—whether it’s your phone’s memory or your laptop’s hard drive—you are relying on a separate company to hold your data.  

What this means is that using cloud storage is a balancing act of your needs. If your data is for your eyes only, that need won’t be met with cloud storage. But if you want to protect your sensitive data outside of just one device, cloud storage fits that need.

If you do use cloud storage, remember to choose a provider that encrypts users’ data, create and use a secure and complex password that can’t be guessed, and enable two-factor authentication.

What’s two-factor authentication? Oh, right.

Two-factor authentication

Two-factor authentication (2FA), or multi-factor authentication, is a feature you can enable on the important online accounts that hold your banking information, health care info, emails, and social media presence.

By turning on 2FA, you will be telling an online service provider, like Facebook or Gmail, that when you sign in to the service from a new device, you’ll need more than just your password to access your account. You’ll need a second authenticator, which, for many platforms, is delivered to you as a multi-digit code sent in a text message to your mobile device. After you’ve entered the code, only then can you access your online account.

But, because so many 2FA schemes rely on sending a text message with a second code, you have to remember the importance of securing your notification settings. When an online service texts you to verify your account, the 2FA code might show up on your device’s lock screen if you haven’t hidden your notifications. This means that a third party could still log into your account if they are simply near your device and able to read the notifications that pop up.

Takeaways

Protecting your device, and the data on it, can be a long, complicated process, but we hope that some of the tips above help you start that process. If you need help understanding your own safety—and thus, your own available device security options—you can call the domestic abuse advocates at the National Domestic Violence Hotline at 1-800-799-7233.

You’re not alone in this. Stay safe.

The post Data and device security for domestic abuse survivors appeared first on Malwarebytes Labs.

A week in security (August 5 – 11)

Last week on Malwarebytes Labs, we explained how brain-machine interface (BMI) technology could usher in a world of Internet of Thoughts, why having backdoors is problematic, and how we can improve the security of our smart homes.

To cap off Hacker Summer Camp week, the Labs team released a special ransomware edition of its quarterly cybercrime tactics and techniques report, which you can download here.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 5 – 11) appeared first on Malwarebytes Labs.

Facial recognition technology: force for good or privacy threat?

All across the world, governments and corporations are looking to invest in or develop facial recognition technology. From law enforcement to marketing campaigns, facial recognition is poised to make a splashy entrance into the mainstream. Biometrics are big business, and third party contracts generate significant profits for all. However, those profits often come at the expense of users.

There’s much to be said for ethics, privacy, and legality in facial recognition tech—unfortunately, not much of it is pretty. We thought it was high time we take a hard look at this burgeoning field to see exactly what’s going on around the world, behind the scenes and at the forefront.

As it turns out…quite a lot.

The next big thing in tech?

Wherever you look, government bodies, law enforcement, protestors, campaigners, pressure and policy groups, and even the tech developers themselves are at odds. Some want an increase in biometric surveillance, others highlight flaws due to bias in programming.

One US city has banned facial tech outright, while some nations want to embrace it fully. Airport closed-circuit TV (CCTV)? Fighting crime with shoulder-mounted cams? How about just selling products in a shopping mall using facial tracking to find interested customers? It’s a non-stop battlefield with new lines being drawn in the sand 24/7.

Setting the scene: the 1960s

Facial recognition tech is not new. It was first conceptualised and worked on seriously in the mid ’60s by pioneers such as Helen Chan Wolf and Woodrow Bledsoe. They did what they could to account for variances in imagery caused by degrees of head rotation, using RAND tablets to map 20 distances based on facial coordinates. From there, a name was assigned to each image. The computer then tried to remove the effect of changing the angle of the head from the distances it had already calculated, and recognise the correct individual placed before it.

Work continued throughout the ’60s, and was by all accounts successful. The computers used consistently outperformed humans where recognition tasks were concerned.

Moving on: the 1990s

By the mid to late ’90s, airports, banks, and government buildings were making use of tech essentially built on its original premise. A new tool, ZN-face, was designed to work with less-than-ideal angles of faces. It ignored obstructions, such as beards and glasses, to accurately determine the identity of the person in the lens. Previously, this type of technology could flounder without clear, unobstructed shots, which made it difficult for software operators to determine someone’s identity. ZN-face could determine whether it had a match in 13 seconds.

You can see a good rundown of these and other notable moments in early facial recognition development on this timeline. It runs from the ’60s right up to the mid ’90s.

The here and now

Looking at the global picture for a snapshot of current facial recognition tech reveals…well, chaos to be honest. Several distinct flavours inhabit various regions. In the UK, law enforcement rallies the banners for endless automated facial recognition trials. This despite test results so bad the universal response from researchers and even Members of Parliament is essentially “please stop.”

Reception in the United States is a little frostier. Corporations jostle for contracts, and individual cities either accept or totally reject what’s on offer. As for Asia, Hong Kong experiences something akin to actual dystopian cyberpunk. Protestors not only evade facial recognition tech but attempt to turn it back on the government.

Let’s begin with British police efforts to convince everyone that seemingly faulty tech is as good as they claim.

All around the world: The UK

The UK is no stranger to biometrics controversy, having made occasional forays into breach of privacy and stolen personal information. A region averse to identity cards and national databases, it still makes use of biometrics in other ways.

Here’s an example of a small slice of everyday biometric activity in the UK. Non-European residents pay for Biometric Residence Permits every visa renewal—typically every 30 months. Those cards contain biometric information alongside a photograph, visa conditions, and other pertinent information linked to several Home Office databases.

This Freedom of Information request reveals that information on one Biometric Residence Permit card is tied to four separate databases:

  • Immigration and Asylum Biometric System (Combined fingerprint and facial image database)
  • Her Majesty’s Passport Office Passports Main Index (Facial image only database)
  • Caseworking Immigration Database Image Store (Facial image only database)
  • Biometric Residence Permit document store (Combined fingerprint and facial image database)

It’s worth noting that these are just the ones they’re able to share. On top of this, the UK’s Data Protection Act contains an exemption that prevents immigrants from accessing data, or indeed from preventing others from processing it, as is their right under the General Data Protection Regulation (GDPR). In practice, this results in a two-tier system for personal data, and it means people can’t access their own case histories when challenging what they feel to be a bad visa decision.

UK: Some very testing trials

It is against this volatile backdrop that the UK government wants to introduce facial recognition to the wider public, and residents with biometric cards would almost certainly be the first to feel any impact or fallout should a scheme get out of hand.

British law enforcement have been trialling the technology for quite some time now, but with one problem: All the independent reports claim what’s been taking place is a bit of a disaster.

Big Brother Watch has conducted extensive research into the various trials, and found that an astonishing 98 percent of automated facial recognition matches at 2018’s Notting Hill carnival misidentified innocent people as criminals. Faring slightly (but not much) better than the Metropolitan Police were the South Wales Police, who managed to get it wrong 91 percent of the time—yet, just like other regions, continue to promote and roll out the technology. On top of that, no fewer than 2,451 people had their biometric photos taken and stored without their knowledge.

Those are some amazing numbers, and indeed the running theme here appears to be: “This doesn’t work very well and we’re not getting any better at it.”

Researchers at the University of Essex Human Rights Centre essentially tore the recent trials to pieces in a comprehensive rundown of the technology’s current failings.

  • Across six trials, 42 matches were made by the Live Facial Recognition (LFR) technology, but only eight of those were considered a definite match.
  • Approaching the tests as if the LFR tech was simply some sort of CCTV device didn’t account for its invasive-by-design nature, or indeed the presence of biometrics and long-term storage without clear disclosure.
  • An absence of clear guidance for the public, combined with the police’s general assumption that the tech is legal despite the lack of explicit legal basis in current law, leaves researchers thinking this would indeed be found unlawful in the courts.
  • The public might naturally be confounded, considering that if someone didn’t want to be included in the trial, law enforcement would assume that person was suspect. There’s no better example of this than a man who covered his face to avoid the LFR cameras and was fined £90 (US$115) for “disorderly behaviour” because police felt he was up to no good.

https://www.youtube.com/watch?v=KqFyBpcbH9A

A damning verdict

The UK’s Science and Technology Committee (made up of MPs and Lords) recently produced their own findings on the trials, and the results were pretty hard hitting. Some highlights from the report, somewhat boringly called “The work of the Biometrics Commissioner and the Forensic Science Regulator” (PDF):

  • Concerns were raised that UK law enforcement is either unaware of, or “struggling to comply” with, a 2012 High Court ruling that the indefinite retention of innocent people’s custody images was unlawful—yet the practice still continues. Those concerns are exacerbated when considering those images would potentially be included in matching watchlists for any LFR technology that draws on custody images. There is, seemingly, no money available for investing in the manual review and deletion of said images. There are currently some 21 million images of faces and tattoos on record, which will make for a gargantuan task. [Page 3]
  • From page 4, probably the biggest hammer blow for the trials: “We call on the Government to issue a moratorium on the current use of facial recognition technology and no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established”
  • The Forensic Science Regulator isn’t included on the lists it needs to be for whistleblowing protections to apply, so whistleblowers in (say) the LFR sector wouldn’t be as protected by legislation as they would be in other sectors. [Page 10]

There’s a lot more in there to digest but essentially, we have a situation where facial recognition technology is failing any and all available tests. We have academics, protest groups, and even MP committees opposing the trials, saying “The error rate is nearly 100 percent” and “We need to stop these trials.” We have a massive collection of images, many of which need to be purged instead of being fed into LFR testing. And to add insult to injury, there’s seemingly little scope for whistleblowers to call time on bad behaviour for technology potentially deployed to a nation’s police force by the government.

UKGOV: Keep on keeping on

This sounds like quite the recipe for disaster, yet nobody appears to be listening. Law enforcement insists human checks and balances will help address those appalling trial numbers, but so far it doesn’t appear to have helped much. The Home Office claims there is public support for the use of LFR to combat terrorism and other crimes, but will “support an open debate” on uses of the technology. What form this debate takes remains to be seen.

All around the world: the United States

The US experience with facial recognition tech is fast becoming a commercial one, as big players hope to roll out their custom-made systems to the masses. However, many of the same concerns that haunt UK operations are present here as well. Lack of oversight, ethics, failure rate of the technology, and bias against marginalised groups are all pressing concerns.

Corporate concerns

Amazon, potentially one of the biggest players in this space, has its own custom tech called Rekognition. It’s being licensed to businesses and law enforcement, and it’s entirely possible someone may have already experienced it without knowing. The American Civil Liberties Union wasn’t exactly thrilled about this prospect, and said as much.

Plans to roll out Amazon’s custom tech to law enforcement, and ICE specifically, were met with pushback from multiple groups, including Amazon’s own employees. As with many objections to facial recognition technology, the issue was one of human rights. From the open letter:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used.”

Even some shareholders have cold feet over the potential uses for this powerful AI-powered recognition system. However, the best response you’ll probably find to some of these concerns from Amazon is a blogpost from February called “Some thoughts on facial recognition legislation.”

And in the blue corner

Not everyone in US commercial tech is fully on board with facial technology, and it’s interesting to see some of the other tech giant responses to working in this field. In April, Microsoft revealed they’d refused to sell facial tech to Californian law enforcement. According to that article, Google flat out refused to sell it to law enforcement too, but they do have other AI-related deals that have caused backlash.

The overwhelming concerns were (again) anchored in possible civil rights abuses. Additionally, the already high error rates in LFR married to potential bias in gender and race played a part.

From city to city, the battle rages on

In a somewhat novel turn of events, San Francisco became the first US city to ban facial recognition technology entirely. Police, transport authorities, and anyone else who wishes to make use of it will need approval by city administrators. Elsewhere, Orlando passed on Amazon’s Rekognition tech after some 15 months of—you guessed it—glitches and technical problems. Apparently, things were so problematic that they never reached a point where they were able to test images.

Over in Brooklyn, NY, the pressure has started to bear down on facial tech on a much smaller, more niche level. The No Biometric Barriers to Housing act wants to:

…prohibit the use of biometric recognition technology in certain federally assisted dwelling units, and for other purposes.

This is a striking development. A growing number of landlords and building owners are inserting IoT/smart technology into people’s homes. This is happening whether residents want the devices or not, and regardless of how secure those devices may or may not be.

While I accept I may be sounding like a broken record, these concerns are valid. Perhaps, just perhaps, privacy isn’t quite as dead as some would like to think. Error rates, technical glitches, and the exploitation of certain communities as guinea pigs for emerging technology are all cited as reasons for the great United States LFR pushback of 2019.

All around the world: China

China is already a place deeply wedded to multiple tracking/surveillance systems.

There are 170 million CCTV cameras currently in China, with plans to add an additional 400 million between 2018 and 2021. This system is intended to be matched with facial recognition technology tied to multiple daily activities—everything from getting toilet roll in a public restroom to opening doors. Looping it all together will be 190 million identity cards, with an intended facial recognition accuracy rate of 90 percent.

https://www.youtube.com/watch?v=lH2gMNrUuEY

People are also attempting to use “hyper realistic face molds” to bypass biometric authentication payment systems. There’s certainly no end of innovation taking place from both government and the population at large.


Hong Kong

Hong Kong has already experienced a few run-ins with biometrics and facial technology, but mostly for promotional/marketing purposes. For example, in 2015, a campaign designed to raise awareness of littering across the region made use of DNA and technology produced in the US to shame litterbugs. Taking samples from rubbish found in the streets, they extracted DNA and produced facial reconstructions. Those face mockups were placed on billboards across Hong Kong in high traffic areas and places where the litter was originally recovered.

Mileage will vary drastically on how accurate these images were because, as has been noted, “DNA alone can only produce a high probability of what someone looks like,” and the idea was to generate debate, not point fingers.

All the same, wind forward a few years and the tech is being used to dispense toilet paper and shame jaywalkers. More seriously, we’re faced with daily protests in Hong Kong over the proposed extradition bill. With the ability to protest safely at the forefront of people’s minds, facial recognition technology steps up to the plate. Sadly, all it manages to achieve is to make the whole process even more fraught than it already is.

Protestors cover their faces, and phone owners disable facial recognition login technology. Police remove identification badges, so people on Telegram channels share personal information about officers and their families. Riot police carry cameras on poles because wall-mounted devices are hampered with laser pens and spray paint.


Rules and (bending) regulations

Hong Kong itself has a strict set of rules for Automatic Facial Recognition. One protestor attempted to build a homebrew facial recognition system using online photos of police officers. The project was eventually shelved for lack of time, but a regular resident escalating to developing recognition tech of their own is quite remarkable.

This may all sound a little bit out there or over the top. Even so, with 1,000 rounds of tear gas fired alongside hundreds of rubber bullets, protestors aren’t taking chances. For now, we’re getting a bird’s-eye view of what it would look like if LFR were placed front and center in a battle between government oversight and civil rights. Whether it tips the balance one way or the other remains to be seen.

Watching…and waiting

Slow, relentless legal rumblings in the UK are one thing. Cities embracing or rejecting the technology in the US is quite another—especially when stances range from citywide policy all the way down to individual housing. At the opposite end of the spectrum, seeing LFR in the Hong Kong protests is an alarming insight into where biometrics and facial recognition could lead if concerns aren’t addressed head on before implementation.

It seems technology, as it so often does, has raced far ahead of our ability to define its ethical use.

The question is: How do we catch up?

The post Facial recognition technology: force for good or privacy threat? appeared first on Malwarebytes Labs.

Backdoors are a security vulnerability

Last month, US Attorney General William Barr resurrected a government appeal to technology companies: Provide law enforcement with an infallible, “secure” method to access, unscramble, and read encrypted data stored on devices and sent across secure messaging services.

Barr asked, in more accurate, yet unspoken terms, for technology companies to develop encryption backdoors to their own services and products. Refusing to endorse any single implementation strategy, the Attorney General instead put the responsibility on cybersecurity researchers and technologists.  

“We are confident that there are technical solutions that will allow lawful access to encrypted data and communications by law enforcement without materially weakening the security provided by encryption,” Attorney General Barr said.

Cybersecurity researchers, to put it lightly, disagreed. To many, the idea of installing backdoors into encryption is antithetical to encryption’s very purpose—security.

Matt Blaze, cybersecurity researcher and University of Pennsylvania Distributed Systems Lab director, pushed back against the Attorney General’s remarks.

“As someone who’s been working on securing the ‘net for going on three decades now, having to repeatedly engage with this ‘why can’t you just weaken the one tool you have that actually works’ nonsense is utterly exhausting,” Blaze wrote on Twitter. He continued:

“And yes, I understand why law enforcement wants this. They have real, important problems too, and a magic decryption wand would surely help them if one could exist. But so would time travel, teleportation, and invisibility cloaks. Let’s stick to the reality-based world.”

Blaze was joined by a chorus of other cybersecurity researchers online, including Johns Hopkins University associate professor Matthew Green, who said plainly: “there is no safe backdoor solution on the table.”

The problem with backdoors is well known—any alternate channel devoted to access by one party will eventually be discovered, accessed, and abused by another. Cybersecurity researchers have argued for years that, when it comes to encryption technology, the risk of weakening the security of countless individuals is too high.
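
As a concrete illustration of why researchers keep saying this, consider how key escrow, one commonly proposed “lawful access” design, works at its simplest: the message key is wrapped once for the recipient and once more for an escrow holder, so anyone who obtains the escrow private key, lawfully or otherwise, can read everything. This is a minimal sketch assuming Python’s third-party cryptography package, not a description of any real product:

```python
# Minimal key-escrow sketch (assumes the third-party `cryptography` package).
# The message key is wrapped twice: once for the recipient and once for an
# escrow agent. Whoever obtains the escrow private key can read everything.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message_key = Fernet.generate_key()
ciphertext = Fernet(message_key).encrypt(b"end-to-end encrypted chat")

# The same symmetric key is wrapped for BOTH parties.
wrapped_for_recipient = recipient.public_key().encrypt(message_key, oaep)
wrapped_for_escrow = escrow.public_key().encrypt(message_key, oaep)

# An attacker who steals only the escrow key still recovers the plaintext.
stolen_key = escrow.decrypt(wrapped_for_escrow, oaep)
print(Fernet(stolen_key).decrypt(ciphertext))  # b'end-to-end encrypted chat'
```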

Encryption today

In 2014, Apple pushed privacy to a new standard. With the launch of its iOS 8 mobile operating system that year, no longer would the company be able to access the encrypted data stored on its consumer devices. If the company did not have the passcode to a device’s lock screen, it simply could not access the contents of the device.

“On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode,” the company said.

The same standard holds today for iOS devices, including the latest iPhone models. Data that lives on a device is encrypted by default, and any attempts to access that data require the device’s passcode. For Android devices, most users can choose to encrypt their locally-stored data, but the feature is not turned on by default.
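
The general pattern behind passcode-bound encryption is to derive the data key from the user’s passcode, so a vendor that never sees the passcode has nothing useful to hand over. The following is a rough sketch of that pattern only; Apple’s actual design is far more elaborate, with hardware-backed keys.

```python
# Rough sketch of passcode-bound encryption (not Apple's actual design):
# the data key is derived from the passcode, so without the passcode the
# vendor holds nothing that can unlock the data. Assumes the third-party
# `cryptography` package.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passcode(passcode: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passcode))

salt = os.urandom(16)                      # stored on-device; not secret
key = key_from_passcode(b"123456", salt)   # derived at unlock, never stored
locked = Fernet(key).encrypt(b"photos, messages, call history")

# Only someone who knows the passcode can re-derive the key and decrypt.
print(Fernet(key_from_passcode(b"123456", salt)).decrypt(locked))
```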

Within two years of the iOS 8 launch, Apple had a fight on its hands.

Following the 2015 terrorist shooting in San Bernardino, Apple hit an impasse with the FBI, which was investigating the attack. Apple said it was unable to access the messages sent on an iPhone 5C device that was owned by one of the attackers, and Apple also refused to build a version of its mobile operating system that would allow law enforcement to access the phone.

Though the FBI eventually relied on a third-party contractor to crack into the iPhone 5C, since then, numerous messaging apps for iOS and Android have provided users with end-to-end encryption that locks even third-party companies out from accessing sent messages and conversations.

Signal, WhatsApp, and iMessage all provide this feature to users.

Upset by their inability to access potentially vital evidence for criminal investigations, the federal government has, for years, pushed a campaign to convince tech companies to build backdoors that will, allegedly, only be used by law enforcement agencies.

The problem, cybersecurity researchers said, is that those backdoors do not stay reserved for their intended use.

Backdoor breakdown

In 1993, President Bill Clinton’s Administration proposed a technical plan to monitor Americans’ conversations. Telephones and other communications devices would carry chipsets called “Clipper Chips,” which escrowed encryption keys with the government so that, if used properly, only law enforcement agencies could listen in on certain phone calls.

But there were problems, as revealed by Blaze (the same cybersecurity researcher who criticized Attorney General Barr’s comments last month).  

In a lengthy analysis of the Clipper Chip system, Blaze found glaring vulnerabilities, such that the actual, built-in backdoor access could be circumvented.   

By 1996, adoption of the Clipper Chip was abandoned.

Years later, cybersecurity researchers witnessed other backdoor failures, and not just in encryption.

In 2010, the cybersecurity expert Steven Bellovin—who helped Blaze on his Clipper Chip analysis—warned readers of a fiasco in Greece in 2005, in which a hacker took advantage of a mechanism that was supposed to only be used by police.

“In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called ‘lawful intercept’ mechanisms in the switch—that is, the features designed to permit the police to wiretap calls easily—was abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister’s,” Bellovin wrote. “This attack would not have been possible if the vendor hadn’t written the lawful intercept code.”

In 2010, cybersecurity researcher Bruce Schneier placed blame on Google for suffering a breach by reported Chinese hackers, who were looking to see which of their government’s agents were under surveillance by US intelligence.

According to Schneier, the Chinese hackers were able to access sensitive emails because of a fatal flaw by Google—the company put a backdoor into its email service.

“In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts,” Schneier said. “This feature is what the Chinese hackers exploited to gain access.”

Interestingly, the insecurity of backdoors is not a problem reserved for the cybersecurity world.

In 2014, The Washington Post ran a story about where US travelers’ luggage goes once it gets checked in at the airport. More than 10 years earlier, the Transportation Security Administration had convinced luggage makers to install a new kind of lock on consumer bags—one that could be opened through a physical backdoor, using one of seven master keys that only TSA agents were supposed to hold. That Washington Post story, though, included a close-up photograph of all seven keys.

Within a year, that photograph had been analyzed and converted into 3D printing files that were quickly shared online. The keys had leaked, and the security of nearly every US luggage lock had been compromised. Breaking the entire system required nothing more exotic than human error: a single published photograph.

Worth the risk?

Attorney General Barr’s comments last month are part of a long-standing tradition in America, in which a representative of the Department of Justice (last year it was then-Deputy Attorney General Rod Rosenstein) makes a public appeal to technology companies, asking them to install backdoors as a means of preventing potential crime.

The arguments have lasted literal decades, invoking questions of the First Amendment, national security, and the right to privacy. That argument will continue, as it does today, but encryption may pick up a few surprising defenders along the way.

On July 23 on Twitter, the chief marketing officer of a company called SonicWall posted a link to a TechCrunch article about the Attorney General’s recent comments. The CMO commented on the piece:

“US attorney general #WilliamBarr says Americans should accept security risks of #encryption #backdoors”

Michael Hayden, the former director of the National Security Agency—the very agency responsible for mass surveillance around the world—replied:

“Not really. And I was the director of national security agency.”

The fight over backdoors is a game of cat-and-mouse: The government will, for the most part, want its own way to decrypt data when investigating crimes, and technologists will push back on that idea, calling it dangerous and risky. But as more major companies take the same stand as Apple—deliberately building systems that leave even themselves unable to retrieve users’ data—the public might slowly warm up to the idea, and value, of truly secure encryption.

The post Backdoors are a security vulnerability appeared first on Malwarebytes Labs.

Labs quarterly report finds ransomware’s gone rampant against businesses

Ransomware’s back—so much so that we created an entire report on it.

For 10 quarters, we’ve reported on cybercrime tactics and techniques, covering a wide range of threats lodged against consumers and businesses, as observed through our product telemetry, honeypots, and threat intelligence. We’ve looked at dangerous Trojans such as Emotet and TrickBot, the explosion and subsequent downfall of cryptomining, trends in Mac and Android malware, and everything in between.

But this quarter, we noticed one threat dominating the landscape so much that it deserved its own hard look over a longer period than a single quarter. Ransomware, which many researchers have noted took a long breather after its 2016 and 2017 heyday, is back in a big way—targeting businesses with fierce determination, custom code, and brute force.

Over the last year, we’ve witnessed an almost constant increase in business detections of ransomware, rising a shocking 365 percent from Q2 2018 to Q2 2019.

Therefore, this quarter, our Cybercrime Tactics and Techniques report is a full ransomware retrospective, looking at the top families causing the most damage for consumers, businesses, regions, countries, and even specific US states. We examine increases in attacks lodged against cities, healthcare organizations, and schools, as well as tactics for distribution that are most popular today. We also look at ransomware’s tactical shift from mass blanket campaigns against consumers to targeted attacks on organizations.

To dig into the full report, including our predictions for ransomware of the future, download the Cybercrime Tactics and Techniques: Ransomware Retrospective here.

The post Labs quarterly report finds ransomware’s gone rampant against businesses appeared first on Malwarebytes Labs.

8 ways to improve security on smart home devices

Every so often, a news story breaks that hackers have made their way into a smart home device and stolen personal data. Or that vulnerabilities in smart tech have been discovered that allow their makers (or cybercriminals) to spy on customers. We’ve seen it play out over and over with smart home assistants and other Internet of Things (IoT) devices, yet sales numbers for these items continue to climb.

Let’s face it: No matter how often we warn about the security concerns with smart home devices, they do make life more convenient—or at the very least, are a lot of fun to play with. It’s pretty clear this technology isn’t going away. So how can those who’ve embraced smart home technology do so while staying as secure as possible?

Here are eight easy ways to tighten up security on smart home devices so that users are as protected as possible while using the new technologies they love.

1. Switch up your passwords

Most smart home devices ship with default passwords. Some companies require that you change the default password before integrating the technology into your home—but not all. The first thing users can do to ensure hackers can’t brute force their way into their smart home device is change up the password, and make it something that is unique to that device. Once a hacker finds out one password, they’ll try to use it on every other account.

A few ways to create less hackable passwords:

  • Making them longer than eight characters
  • Creating passwords that are unrelated to pets, kids, birthdays, or other obvious combinations
  • Using a password manager so you don’t have to remember 27 different passwords
  • Using a password generator to create random combinations (see the sketch below)
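
For that last item, here’s a minimal sketch of a generator built on Python’s standard-library secrets module, which is designed for security-sensitive randomness. The 16-character default is our own arbitrary choice, not a universal recommendation.

```python
# Minimal password generator built on the standard library's `secrets`
# module, which provides cryptographically strong randomness.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k#P2w!x9...' -- different every run
```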

2. Enable two-step authentication

Many online sites and smart devices now allow users to opt into two-step authentication, which requires a second proof of identity, beyond the password, before allowing someone access to your account.

If you use a Chromecast or any other Google device, you can turn on this verification and receive email alerts. While you may be the only person to try logging into your account, it helps to know you’ll be notified if someone does try to hack in and get your information.
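
Under the hood, many authenticator apps implement time-based one-time passwords (TOTP, RFC 6238): the device and the server share a secret, and both derive a short code from it that changes every 30 seconds. A minimal sketch, assuming the third-party pyotp library:

```python
# TOTP (RFC 6238) in miniature, using the third-party `pyotp` library:
# both sides share a secret and derive a short-lived code from the clock.
import pyotp

secret = pyotp.random_base32()  # shared once, e.g. via a QR code
totp = pyotp.TOTP(secret)

code = totp.now()               # what your authenticator app displays
print(code, totp.verify(code))  # the server-side check -> True
```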

3. Disable unused features

Smart home tech, like the Amazon Echo or Google Nest, has made headlines for invasively recording users without their knowledge, or for shipping with undisclosed hardware, such as a microphone, that is later enabled. This makes trusting those devices implicitly a bit of a hazard.

While your voice assistant won’t be recording you all the time, it can be triggered by words used in a conversation between two people. Check your home assistant’s logs and delete your voice recordings if you find any you don’t approve of.

You can always turn off the voice control features on these devices. While their purpose is to respond to vocal commands, they can also be accessed through an app, remotely, or through a website instead.

4. Upgrade your devices

When was the last time you purchased smart tech for your home? If it was long enough ago that software updates are no longer compatible with the operating system, it might be time to upgrade.

Upgraded tech will always have new features, fewer malfunctions in capabilities that were once cutting-edge but are now standard, plus more advanced ways to secure the device that may not be available on earlier models.

Another benefit of upgrading is that there are far more players on the market today than there were just a couple of years ago. For example, many people use smart plugs to control their electricity usage, but not every brand considers security a top priority. Keep an eye out for those that have received positive reviews from tech and science publications. They’ll have fewer security issues than older models.

5. Check for software updates

Don’t worry if you don’t have the money for a big smart home upgrade right now. Many times, keeping software updated will do the trick—especially because security issues are most often fixed in periodic software updates, and not necessarily addressed in brand-new releases. Each new version of software released includes not only new functionality, but fixes to bugs and security patches.

These patches plug known vulnerabilities in the smart device that would otherwise allow hackers to drop malware or steal valuable data. To keep your device’s software current, go into your settings and select automatic software updates. If that’s not possible, set reminders to check for updates yourself at least once per month.

6. Use a VPN

If you have concerns about the security of your ISP-provided Wi-Fi network, you might consider using a VPN. A virtual private network (VPN) creates an encrypted tunnel over any Internet connection, including public ones where you’re most at risk.

A VPN keeps your Internet protocol (IP) address from being discovered. This prevents hackers from learning your location and makes your Internet activity much harder to trace.

Perhaps the most important benefit of using a VPN is that it creates secured, encrypted connections. No matter where you access Wi-Fi—say if you wanted to turn on the air-conditioning at home from the airport—a VPN keeps that traffic secure.

7. Monitor your data

Are your devices sending you reports on energy usage or the top songs you played at home this month? Are you storing or backing up smart home data on the cloud? If not, where does that data go and how is it secured?

Smart home devices may not have easy instructions for determining whether data produced from their usage is stored on the cloud or on private, corporate-facing servers. Rest assured that, whether it’s visible or not, smart device companies are collecting data, whether to improve their marketing efforts or simply to show their own value.

So how can users monitor how their data is collected, stored, and transmitted? Some devices may allow you to back up info to the cloud in settings. If so, you should create strong passwords with two-factor authentication in order to access that data and protect it from hackers. If not, you might need to dig through a device’s EULA or even contact the company to find out how they store the data at rest and in transit, and whether that data is encrypted to ensure anonymity.

If you’d prefer not to let your smart tech back up information to the cloud, you can often manually turn this off in settings. The question still remains: What happens to your data if it’s not in the cloud? That’s where poking around the company’s website or calling them to learn how personal information is stored can hopefully calm your fears.

8. Limit smart home device usage

The only way to guarantee your privacy and security at home is to avoid using devices that connect to the Internet—including your phone. Obviously, in today’s world, that’s a difficult task. Therefore, the second-best option is to consider which devices are absolutely necessary for work, pleasure, and convenience, and slim down the list of smart-enabled devices.

Perhaps it makes sense for an energy-conscious person to use a Nest device to regulate temperatures, but do they need Internet-connected smoke detectors? Maybe some folks couldn’t live without streaming, but could get by using a traditional key over a smart lock.

There’s no such thing as 100 percent protection from cybercrime—even if you don’t use the Internet. So if you want to embrace the wonders of smart home technology, be sure you’re smart about how and when you use it.

The post 8 ways to improve security on smart home devices appeared first on Malwarebytes Labs.

A week in security (July 29 – August 4)

Last week on Malwarebytes Labs we discussed the security and privacy changes in Android Q, how to get your Equifax money and stay safe doing it, and we looked at the strategy of getting a board of directors to invest in government cybersecurity. We also reviewed how a Capital One breach exposed over 100 million credit card applications, analyzed the exploit kit activity in the summer of 2019, and warned users about a QR code scam that can clean out your bank account.

The busy week in security continued with looks at Magecart and others intensifying web skimming, ATM attacks and fraud, and an examination of the Lord Exploit Kit.

Other cybersecurity news

  • The Georgia State Patrol was reportedly the target of a July 26 ransomware attack that has necessitated the precautionary shutdown of its servers and network. (Source: SC Magazine)
  • Houston County Schools in Alabama delayed the school year’s opening scheduled for August 1st due to a malware attack. (Source: Security Affairs)
  • Over 95% of the 1,600 vulnerabilities discovered by Google’s Project Zero were fixed within 90 days. (Source: Techspot)
  • Researchers who previously discovered several severe vulnerabilities have now uncovered two more flaws that could allow attackers to hack WPA3-protected WiFi passwords. (Source: The Hacker News)
  • Germany’s data protection commissioner is investigating revelations that Google contract workers were listening to recordings made via smart speakers. (Source: The Register)
  • Experts tend to recommend anti-malware protection for all mobile device users and platforms, but 47% of Android anti-malware apps are flawed. (Source: DarkReading)
  • Many companies don’t know the depth of their IoT-related risk exposure. (Source: Help Net Security)
  • Apple’s Siri follows Amazon Alexa and Google Home in facing backlash for its data retention policies. (Source: Threatpost)
  • There has been a 92% increase in the total number of vulnerabilities reported in the last year, while the average payout per vulnerability increased this year by 83%. (Source: InfoSecurity magazine)
  • Multiple German companies were off to a rough start last week when a phishing campaign targeted them with data-wiping malware dubbed GermanWiper, which then demanded a ransom. (Source: BleepingComputer)

Stay safe, everyone!

The post A week in security (July 29 – August 4) appeared first on Malwarebytes Labs.