7 Potential Security Concerns for Wearables

Is your organization safe from all these connected devices?

Wearables are rapidly invading the workplace in much the same way that smartphones did. Fitness trackers, smartwatches, head-mounted displays and other new form factors are beginning to capture the public imagination. Sales of wearable electronic devices topped 232 million in 2015, and Gartner forecasts sales will rise 18.4% this year, to 274.6 million devices.

These wearable devices represent some appealing opportunities for businesses to increase efficiency and gather data, but in the rush to win market share, security concerns are taking a backseat for many manufacturers and app developers. The potential ramifications of unchecked wearable device usage within the enterprise are alarming.

1. Easy Physical Access to Data

The fact that many wearables store data on the local device without encryption is a real issue. There’s often no PIN or password protection, no biometric security and no user authentication required to access data on a wearable. If it falls into the wrong hands, there’s a risk that sensitive data could be accessed very easily.
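Basic local authentication is cheap to implement. As a minimal sketch (not any particular vendor's firmware), the standard approach is to store only a salted, slow hash of the user's PIN, so that even someone who dumps the device's storage cannot read the PIN back:

```python
import hashlib
import hmac
import os

def hash_pin(pin, salt=None):
    """Derive a salted, slow hash of a PIN; only the salt and digest are stored."""
    if salt is None:
        salt = os.urandom(16)  # unique salt per device/user
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin, salt, stored_digest):
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

The same pattern applies whether the check runs on the wearable itself or in its companion app; the point is that the raw credential is never stored on the device.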

2. Ability to Capture Photos, Videos and Audio

Many modern wearable devices have discreet video and audio capture capabilities that surpass high-end spy gear from just a few years ago. It's easy for someone to surreptitiously take photographs or record video or audio using something like a smartwatch or smart glasses. Covert capture of confidential information, and of images and video of sensitive areas, is a very real possibility.

3. Insecure Wireless Connectivity

Wearable devices tend to connect to our smartphones or tablets wirelessly using protocols such as Bluetooth, NFC and Wi-Fi, creating another potential point of entry. We may leave Bluetooth on our smartphones turned on all the time now so they can sync with the wearable, but what else could be connecting? Many of these wireless communications are insufficiently secure to guard against a determined brute-force attack. The first step in securing networks is simply to gain visibility into how many connected devices there are. In a recent AT&T survey, one-third of organizations revealed they have more than 5,000 connected devices.

4. Lack of Encryption

We already mentioned the lack of encryption on many wearable devices, but there are also serious issues with data in transit when it's being synced, and with data stored on manufacturers' or service providers' cloud servers. Some third-party apps neglect basic security standards and send or store information unencrypted. The kind of data that wearables collect automatically is very valuable to the right people.
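For a companion app syncing data to a cloud service, "encrypted in transit" has a concrete minimum bar. As an illustrative sketch using Python's standard library (not any specific vendor's app), the client should demand certificate verification, hostname checking, and a modern TLS version before sending anything:

```python
import ssl

def secure_client_context():
    """Build a TLS context suitable for syncing sensitive data off-device."""
    # create_default_context() already enables certificate verification
    # and hostname checking -- the settings careless apps disable.
    ctx = ssl.create_default_context()
    # Refuse legacy protocol versions with known weaknesses.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Apps that skip these checks (for example, by disabling certificate validation to silence errors during development) are exactly the ones exposing synced health and location data to interception.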

5. No Regulation or Compliance

Because many of the security issues around wearables really have to be addressed by the manufacturers, the issue of whether they’ll self-regulate or be bound by government regulations is an important one. In either case, companies suffering a data breach that breaks compliance or regulatory requirements for their specific industry will not be able to shift the blame onto wearables. They’ll still be held fully accountable. Ignorance of wearable device security and manufacturer or third-party app policy is no defense.

6. Patching and Vulnerabilities

Many wearables run their own operating system and applications. As wearable devices become more common, they'll also become bigger targets for hackers. The same principles that apply to keeping the software on your desktops, laptops, smartphones and tablets fully patched and up to date to avoid the latest vulnerabilities also apply to wearables. But right now there's a lack of insight and policy to address this issue.
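Tracking patch currency across a fleet of wearables doesn't need to be complicated. A minimal sketch, assuming a hypothetical inventory of installed firmware versions and a known-latest version per model, is just a version comparison:

```python
def parse_version(version):
    """Turn a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(installed, latest):
    """True if the installed firmware is older than the latest release."""
    return parse_version(installed) < parse_version(latest)

# Hypothetical device inventory: (device id, installed firmware, latest firmware)
inventory = [
    ("watch-017", "1.4.2", "1.5.0"),
    ("tracker-203", "2.0.0", "2.0.0"),
]

out_of_date = [dev for dev, installed, latest in inventory
               if needs_patch(installed, latest)]
```

The hard part in practice is not the comparison but building the inventory at all, which is exactly the visibility gap described above.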

7. Current MDM Policies Don’t Cover Wearables

We can’t assume that MDM (mobile device management) systems developed to deal with the BYOD trend can also cater to this influx of wearables. For the sake of convenience, mobile platforms generally make it easy to share data between apps and devices. Because wearables work differently from smartphones, there are many unforeseen circumstances where they pose new security risks. Banning or restricting features is not a sound long-term strategy, so companies need to rethink policies, draft new plans and employ new services to deal with mobile device management.

The security challenge with wearable devices is by no means insurmountable, and the wearable trend will undoubtedly be a real boon for many industries, but it’s important that the enterprise starts to treat it more seriously. Cisco predicts there will be more than 600 million wearable devices in use by 2020.

We need a plan to make sure they’re safe and secure.


This article was recently published in Network World.

Image courtesy of Cutcaster.

Bugs for cash: Bounty hunters in the new wild west of security

How security researchers and programmers hunt software bugs for cash rewards


The business of bug hunting is a potentially lucrative one for both seasoned security researchers and amateurs with an interest in hacking. It’s an area that’s gaining legitimacy thanks to official bug bounty programs and hacking contests, but there’s still a seedy underbelly that unscrupulous bounty hunters can take advantage of if they successfully identify a vulnerability.

The average cost of a data breach is $3.8 million, according to research by the Ponemon Institute. It’s not hard to understand why so many companies are now stumping up bounties. It can also be very difficult, time consuming and expensive to root out bugs and flaws internally. Turning to the wider security community for help makes a lot of sense, and where there’s need there’s a market.

Let’s take a closer look at how the market works.


White market for bugs

Assuming you are a law-abiding, morally upright citizen, you have three options when you identify a serious flaw:

1. Submit directly to the vendor
2. Submit to a third-party bug-bounty program
3. Submit to a hacking contest

Big players such as Google, Samsung and Facebook all offer bounty programs. Back in 2014, Facebook fixed 61 high-severity flaws through its bug bounty program. Since its bug bounty program began in 2011, the social media giant has doled out more than $4.3 million to more than 800 researchers after receiving in excess of 2,400 valid submissions, according to its 2015 Highlights report.


A lot of flaws can earn a lot of money

We’re also seeing the rise of many third-party platforms, such as Bugcrowd. These companies allow clients to list applications they want tested and offer bounties that crowdsourced security talent compete for. Tesla, Western Union, Pinterest and many other companies are customers. Founded in 2012, Bugcrowd boasts that more than 27,000 researchers have identified more than 53,000 vulnerabilities for more than 250 companies since it started trading.

Hacking contests such as Pwn2Own are another option. Hackers demonstrated 21 new vulnerabilities in attacks on browsers and operating systems this year. There are sometimes large cash prizes, and job offers are likely to follow for anyone who finds a big vulnerability that doesn’t involve jumping through too many hoops. Sometimes companies, including Google and Microsoft, run their own hacking competitions.


The dark side of bug bounty hunting

Beyond the white market, there’s also a gray market, with questionable legality. Security researchers can sell vulnerabilities to private brokers with policies about only selling to ethical and approved sources. In that case, the vulnerability may end up being used to spy on private citizens suspected of crimes or used to shut down a terrorist organization, according to Hewlett Packard Enterprise’s Cyber Risk Report 2016. However, it’s often unclear, and sellers can only guess at how the vulnerability may have been used.

In the black market, which is unquestionably illegal, sellers simply offer vulnerabilities to the highest bidder. A flaw might be sold to a cybercriminal or a network of criminals. It might also be used for corporate or even national espionage. The seller generally has no insight into how the vulnerability will be used, but it's a safe bet that someone will end up at a disadvantage.


Slow to respond

Finding vulnerabilities is just the beginning. Far too many developers are slow to act to patch those flaws. This can lead the researchers who uncover them to disclose flaws publicly, piling on the pressure for the vendor to take action. They might lose out on a potential bounty, but they’ll still be able to discuss the flaw and benefit from making their discovery of it public.

Even when a developer does patch a vulnerability, far too many companies are slower still to remediate. You might think that known fixes would be applied immediately, but that's simply not the case. Known vulnerabilities often persist much longer than they should, allowing cybercriminals to continue exploiting them long after they've been disclosed. For example, hundreds of cloud apps were still vulnerable to DROWN weeks after it was revealed.

Offering bounties can be cost-effective for businesses, and it may go some way towards persuading researchers or hackers to aim for the white market, rather than the gray or black. But they have to act quickly to deal with vulnerabilities and protect their customers. The longer it takes to deal with flaws, the greater the risk that would-be attackers will weaponize them.

To push even further toward the good, white-hat-wearing side, a smart approach can incorporate systems development lifecycle (SDLC) practices and Open Web Application Security Project (OWASP) programming standards. A well-thought-out vulnerability management program that includes application penetration testing will also go a long way toward securing any and all applications.
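To make the OWASP point concrete: one of its most basic standards is to never splice user input into a query string. A minimal sketch using Python's built-in sqlite3 module (the same pattern applies to any database driver) shows the parameterized form:

```python
import sqlite3

def find_user(conn, username):
    """Look up a user with a parameterized query.

    The username is bound as data via the ? placeholder, never concatenated
    into the SQL text, which neutralizes classic injection payloads such as
    "' OR '1'='1".
    """
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

With string concatenation instead of placeholders, the injection payload would match every row; with the parameterized query it matches nothing, because it is treated as a literal (and unlikely) username.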


This article was originally posted on NetworkWorld.

Image credit: flickr/Nguyen Hung Vu

Towerwall Information Security Alert Vol 14.07 – Watch out for April Fools scamming on Friday


by Kevin Frey

Every year, businesses and organizations put up jokes or pranks for April Fools' Day. Google, Starbucks, Amazon and others are frequent participants.

For example, last year Amazon revamped its site to look like its original 1999 version, and Google (known for multiple pranks) turned its Maps app into the classic arcade game Pac-Man.

However, it is important to remember to think twice before clicking on anything you receive in email or see on websites on April 1st.

Like Christmas, New Year's and the Fourth of July in the US, April Fools' Day is another infamous day for hackers to release viruses and other types of malware. They can mimic well-known or reputable sites and emails for nefarious purposes under the guise of the holiday. One famous virus, Conficker, even threatened to activate a malicious payload on April 1, 2009, though the day came and went without any major issues.

Tax scams are often a major target as well, since April is also the month when taxes are due in the United States.

Fake software "updates" are another frequent offender.

Don't panic: If you read rumors of Facebook shutting down, or you get an email saying you are locked out of your bank account, check by going directly to the site or calling your bank, not via a "link" (that is, YOU initiate the connection or communication independently). Also check other trusted sources to see whether the story has been identified as a prank. Bottom line: Don't take things at face value tomorrow. "Real" events DO happen (after all, Marvin Gaye was killed on April 1st), but on the Internet this day has a special status for both well- and ill-intentioned pranksters.

A single click could make you vulnerable to phishing scams, data loss, identity theft, or worse.

Here are some quick tips/references:

It is already April 1 in the Far East, so please take this as a friendly warning, and always "think before you click."

Safe Internet’ing…


Towerwall is now partnering with PHISHME. To learn more, call 774-204-0700.