Facebook leaks are a lot leakier than Facebook is letting on


Remember last week, when Facebook admitted it had leaked email addresses and phone numbers for 6 million users, but said it was really a fairly modest leak, given that it’s a billion-user service?

OK, scratch the “modest” part.

The researchers who originally discovered that Facebook was compiling secret dossiers on its users now say the numbers don’t add up.

The number of affected users Facebook noted in a posting on its security blog is far less than what they themselves found, and Facebook is also “hoarding non-user contact information – seen when it was also shared and exposed in the leak,” writes ZDNet’s Violet Blue.

The bug involved the exposure of contact details when using the Download Your Information (DYI) tool to access data history records, which resulted in access to an address book with contacts users hadn’t provided to Facebook.

What that means is that even if you don’t share your own personal details with Facebook, Facebook may well have obtained them through other people in your network who have let it access their contact lists.

Facebook accidentally combined these “shadow” profiles with users’ own Facebook profiles and then blurted both data sets out to people who used the DYI tool and who had some connection to the people whose data was breached.
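The merging bug described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook’s actual code: all function names and fields are invented. The flaw is that an export tool naively unions a user’s own profile with contact details uploaded by *other* people, so the export leaks “shadow” data the user never shared.

```python
def export_contact(profile, uploaded_by_others):
    """Buggy export: merges shadow data into the visible record."""
    merged = dict(profile)
    for contact_record in uploaded_by_others:
        merged.update(contact_record)  # leaks fields the user never provided
    return merged

def export_contact_safe(profile, uploaded_by_others):
    """Safer export: return only the fields the user shared themselves."""
    return dict(profile)

# Invented example data: the user shared a name and email; other people
# uploaded a phone number and home address from their own address books.
profile = {"name": "John Smith", "email": "john@example.com"}
shadow = [{"phone": "+1-555-0100"}, {"home_address": "1 Main St"}]

leaky = export_contact(profile, shadow)       # contains the shadow fields
safe = export_contact_safe(profile, shadow)   # contains only shared fields
```

The safe variant simply refuses to blend the two data sets, which is the crux of what the researchers are asking for.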

It’s understandable why Facebook users are steamed.

Facebook has gotten information you didn’t choose to share, has retained it, and has inadvertently left it open for unauthorized access since at least 2012.

Some users, in fact, complained in comments that the bug persisted even after Facebook reportedly fixed it, according to Violet Blue.

Packet Storm reported on Wednesday that its researchers, who had prior test data verifying the leak, were able to compare what they knew was being leaked with what Facebook reported to its users.

Packet Storm claims that Facebook didn’t come clean about all the data involved.

From its posting:

“We compared Facebook email notification data to our test case data. In one case, they stated 1 additional email address was disclosed, though 4 pieces of data were actually disclosed. For another individual, they only told him about 3 out of 7 pieces of data disclosed. It would seem clear that they did not enumerate through the datasets to get an accurate total of the disclosure…

“Facebook claimed that information went unreported because they could not confirm it belonged to a given user. Facebook used its own discretion when notifying users of what data was disclosed, but there was apparently no discretion used by the ‘bug’ when it compiled your data. It does not appear that they will take any extra steps at this point to explain the real magnitude of the exposure and we suspect the numbers are much higher.”

Not only is the extent of the exposed data likely to grow, Packet Storm says, but the number of people affected is much higher than 6 million, given that Facebook contacted only its own users.

Here’s how Facebook replied when Packet Storm asked about contacting non-users about the breach:

“We asked Facebook if they enumerated the information in hopes that their reporting had a bug but we were told that they only notified users if the leaked information mapped to their name.

“We asked Facebook what this means for non-Facebook-users who had their information also disclosed. The answer was simple – they were not contacted and the information was not reported. Facebook felt that if they attempted to contact non-users, it would lead to more information disclosure.”

That’s a “weak, circular” argument, Packet Storm complains.

To better protect users’ contact and personal information, the researchers propose that Facebook adopt the following flow:

1. When a person uploads someone’s contact information, Facebook should automatically correlate it to what they have shared on their profile (and obviously only suggest them as a friend if their settings allow it). If their settings do not allow it, they should treat it as a user not in Facebook (see #2). If the information uploaded includes data specific to an individual who does not already have that data included in their profile, Facebook should provide a notification along the lines of: “You are attempting to add data about John Smith that he has not shared with Facebook. How do you want to handle this situation?”

Two options are provided:

  • A) “Ask John Smith’s permission to add this information”
  • B) “Discard additional information”

If they choose option A, John Smith is notified by Facebook the next time he logs in and gets to decide what he wants to do with HIS data. Seems simple enough.

2. When a person uploads someone’s contact information and it does not correlate to any Facebook user, they should be able to use it for the Invitation feature with the caveat that Facebook automatically deletes all the data within one week. The invite to the person can say “this link will expire in 1 week”, which it should anyway. When an individual uses the invitation link to sign up, THEY will decide what information to share with Facebook.
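The two-branch flow proposed above can be sketched as a small decision function. This is a minimal illustration of Packet Storm’s suggestion, not a real API: the function names, callbacks, and data shapes are all invented.

```python
def handle_contact_upload(uploaded, find_user, notify, send_invite):
    """Decide what to do with uploaded contact info, per the flow above."""
    user = find_user(uploaded)
    if user is None:
        # Step 2: non-user -- usable only for an expiring invitation,
        # after which Facebook would delete the data automatically.
        send_invite(uploaded, expires_days=7)
        return "invited, data auto-deleted in 1 week"
    # Step 1: isolate data the user has not shared on their own profile.
    extra = {k: v for k, v in uploaded.items() if k not in user["profile"]}
    if not extra:
        return "nothing new to add"
    choice = notify(user, extra)  # uploader picks option A or B
    if choice == "ask_permission":
        return "pending owner's decision at next login"
    return "extra data discarded"

# Invented stand-ins for the platform's lookup, prompt, and invite machinery.
user_db = {"john@example.com": {"profile": {"name": "John Smith",
                                            "email": "john@example.com"}}}

def find_user(contact):
    return user_db.get(contact.get("email"))

def notify(user, extra):
    return "discard"  # the uploader picks option B in this example

def send_invite(contact, expires_days):
    pass  # stub: would send an invitation link that expires

known = handle_contact_upload({"email": "john@example.com",
                               "phone": "555-0100"},
                              find_user, notify, send_invite)
unknown = handle_contact_upload({"email": "jane@example.org"},
                                find_user, notify, send_invite)
```

The key design point is that extra data about an existing user never enters their record without their consent, and data about non-users never outlives the invitation.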

That does seem simple enough, but Facebook hadn’t responded to the suggestion at the time of writing.

While we wait for Facebook to (maybe) fix a situation that seems far more widespread than originally reported, we can help each other out by immediately removing our imported contacts, to keep everybody’s personal data out of this swamp.

If you haven’t done so already, you can easily remove uploaded contacts here.

 

Introducing Towerwall’s Alternative Cloud-based File Sharing Solutions


Towerwall is proud to offer our new Alternative Cloud-based File Sharing Solutions.

Cloud-based file synchronization services have exploded. Organizations need to be able to provide a modern collaboration experience with the infrastructure they’ve already invested in, and that they already know how to manage and protect. Watch the video below for more:
 

 
To learn more about our Alternative Cloud-based File Sharing Solutions, and all of our Mobile and Wireless Services, visit our Mobile and Wireless Page.

Tips for testing your mobile app security

Wherever an app originates from, it is vital that you can vouch for its security before it is circulated

The enterprise has gone mobile and there’s no turning back. And while the BYOD movement has received plenty of attention, IT departments are getting a handle on the security risks of personal mobile devices in the workplace. The next challenge is “bring your own application” (BYOA), because many public app stores have serious malware problems.

Enterprise app stores could be the answer. Gartner is predicting that 25% of enterprises will have their own app store by 2017. This will enable companies to push out apps more efficiently, it will be a major boost for mobile device management, and it could offer a secure, automated process that works equally well for apps developed in-house and curated applications from third parties. Wherever an app originates from, it is vital that you can vouch for its security before it is circulated.

[ANALYSIS: Enterprise application store: There’s one in your future]

Broadly speaking, there are three types of mobile apps:

  •  Native applications — Written for a specific platform, native apps will only run on supported devices. This means an iOS app will only run on the iPhone, for example.
  •  Web applications — Any mobile device can access a Web app because they are built using standards like HTML5 and effectively housed online. The mobile app is often little more than a shortcut to the Web app.
  •  Hybrid applications — A Web-based user interface is wrapped in a layer of native application code to get the best of both worlds.

Companies are increasingly opting for the hybrid approach so they can cover a wide range of platforms, but also leverage the hardware capabilities of different mobile devices. Gartner analysts suggest that more than 50% of deployed apps will be hybrid by 2016. [Also see: “What enterprise mobile apps can learn from mobile games”]

As you may imagine, each type of app requires specific testing. In each case you’ll need to consider how to protect data as it travels across mobile networks. There’s always a split between what is actually deployed to the mobile device, and the central processing or data storage that’s deployed to a server. There’s a range of software out there designed to assist your IT department in testing an app’s security.

To cover all the bases and ensure effective penetration testing is carried out, your best option is to engage a third-party organization with the right expertise. They will put your app to the test, approaching it as a real attacker would — with no regard for how the system is intended to be used, just a determination to breach it.

Tips for testing vulnerabilities

There are many potential weak spots in mobile apps. Knowing where they are can get you off to a good start.

  •  Data flow — Can you establish an audit trail for data, what goes where, is data in transit protected, and who has access to it?
  •  Data storage — Where is data stored, and is it encrypted? Cloud solutions can be a weak link for data security.
  •  Data leakage — Is data leaking to log files, or out through notifications?
  •  Authentication — When and where are users challenged to authenticate, how are they authorized, and can you track password and IDs in the system?
  •  Server-side controls — Don’t focus on the client side and assume that the back end is secure.
  •  Points of entry — Are all potential client-side routes into the application being validated?
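One concrete way to start on the “data leakage” item above is to scan an app’s log output for values that look like personal data before the app ever ships. The sketch below is illustrative only: a real audit needs far more patterns, and the regexes here are deliberately simple assumptions.

```python
import re

# Illustrative patterns for two common kinds of leaked personal data.
LEAK_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_leaks(log_text):
    """Return (kind, match) pairs for anything in the log that looks like PII."""
    hits = []
    for kind, pattern in LEAK_PATTERNS.items():
        for match in pattern.findall(log_text):
            hits.append((kind, match))
    return hits

# Example log output from a hypothetical app build.
log = "DEBUG login ok for alice@example.com\nINFO cart total 12.99"
leaks = find_leaks(log)
```

Running a check like this over debug builds catches the easy cases; it does not replace a proper penetration test, but it is cheap enough to wire into a build pipeline.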

This is only the tip of the iceberg in terms of comprehensive security testing for mobile apps. Factor in the peculiar demands of compliance in your industry, because it is vital that you meet the right standards for regulations and mandates. The majority of internal IT departments are simply not equipped to carry out the rigorous testing that’s required to pass a mobile app as safe. [Also see: “Hardening Windows 8 Apps for the Windows Store“]

It’s also worth knowing that you can’t just test an app and forget about it. If you frequent the developer forums for all of the major mobile platforms, you’ll find that new security threats are emerging all the time, and it takes effort to stay abreast of the situation and take the necessary action to keep your apps and systems secure.

By Michelle Drolet, founder and CEO, Towerwall
Special to CSO

This article was recently published in CSO

Security Alert – Anonymous’ #OpPetrol: What is it, What to Expect, Why Care?

by Darin Dutcher (Threat Research)

Last month, the hacker collective Anonymous announced its intention to launch cyber-attacks against the petroleum industry (under the code name #OpPetrol), with the operation expected to last until June 20.

Their stated reason for the attack is primarily that petroleum is sold in US dollars rather than in the currency of the country where it originates. However, some chatter indicates there was also a desire to launch new attacks because both #OpIsrael and #OpUSA were regarded as ineffective.

Users should note that June 20 is only the day most attacks are expected to occur and/or be made public. As with last month’s #OpUSA, attackers have begun mobilizing prior to that date. Since the announcement of this operation, targets have been hit, credentials have been stolen, and the list of targets is already growing.

It is also not uncommon for these activities to be used as a distraction to mask other attacks. Based on the collateral damage recorded from previous operations and data leaks outside publicized attack dates, their targeting and timing aren’t always precise either.

An announced operation like this is a good opportunity for all existing and potential targets to take the necessary steps to protect themselves. Everyone is a target eventually; there will always be vulnerabilities to be exploited for cause or profit.

If your organization, or the country you defend, is a potential target in this operation, you should consider taking the following steps, and possibly more. If you’re in any way connected to the targeted industries or located in one of the potential target countries, we advise going through these steps anyway. Even if you are not affected or linked to the expected targets, you can use these steps as proactive measures against attacks like #OpPetrol.

Before June 20:

  • Ensure all IT systems (OSs, applications, websites, etc.) are updated.
  • Ensure IT security systems are current, have as wide a view as possible, and can inspect deeply. Can they detect and prevent each phase of an attack, and can they be integrated into a kill-chain analysis? Can they observe indicators over the network, on disk, and in memory?
  • Ensure relevant third party vendors are aware and accessible.
  • Investigate any anomalous network and system behavior. Reconnaissance phases of the attack are already in play: opportunities for exploitation are being logged and credentials are already being stolen. Solutions such as Trend Micro Deep Discovery can help you examine dubious network activity.
  • Remind your users to be particularly careful and watch out for phishing and spear-phishing emails.
  • Plan or review your incident response procedures with all necessary parties (not only IT groups). Explore how the planned response differs among DDoS, defacement, and disclosure.
  • Have your IT Security, Legal, and External Communications departments prepare or review public statements in the event your organization is affected. Ask how your statements and response might differ if the attacker wasn’t a hacktivist group, but a criminal, nation state, insider, or terrorist.
  • Monitor the many Anonymous sources for any changes in targeting, tools, or motives, as well as lists of accomplishments or data dumps.
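For the step above about probing anomalous behavior, even a crude log sweep helps surface credential-guessing before June 20. The sketch below flags source addresses with repeated failed logins; the log format and threshold are assumptions for illustration, not any particular product’s format.

```python
from collections import Counter

def flag_bruteforce(log_lines, threshold=5):
    """Return source IPs with more than `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            # assumed line format: "<timestamp> FAILED LOGIN from <ip>"
            source_ip = line.rsplit(" ", 1)[-1]
            failures[source_ip] += 1
    return [ip for ip, count in failures.items() if count > threshold]

# Simulated auth log: six failed attempts from one address.
lines = ["2013-06-18T10:00 FAILED LOGIN from 203.0.113.9"] * 6
suspects = flag_bruteforce(lines)
```

A real deployment would feed this from your SIEM or syslog stream and alert rather than just return a list, but the principle of baselining failures per source is the same.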

On June 20:

  • Note that attackers may operate across different time zones, so the attacks can last longer than 24 hours in your time zone.
  • Continue to monitor Anonymous’ sources for any changes in targeting, tools, or motives, as well as lists of accomplishments or data dumps.
  • Maintain a high level of awareness of your IT and IT security systems and their logs; keep questioning anything that looks unusual.
  • If you think your organization is affected, assume that you are affected by DDoS, defacement, and disclosure – and not just one of them.

After June 20:

  • Continue to monitor Anonymous’ sources for any lists of accomplishments or data dumps.
  • If you’ve made it into Anonymous’ news, you’ll be busy remediating and hardening against future occurrences.
  • If you didn’t make it into Anonymous’ news, review your systems for any sign of breach, compromise, or excessive probing.
  • Remain vigilant, especially if you’re in the target list. The attacks may not be over.

Similar to how DDoS, defacement, and disclosure tactics can distract and mask each other, so can threat actors. A hacktivist group’s activity can mask or distract criminal, nation state, insider, or even terrorist activity.

Announced operations like these, with their relatively open disclosure of tactics, tools, and procedures, are golden opportunities to evaluate and improve countermeasures in real-world scenarios. Taking advantage of these opportunities helps train people, processes, and technology to recognize the signals of a targeted attack, regardless of whether it is publicly disclosed or covert.

For more information on how targeted attacks work and how organizations can better protect themselves from such threats, you may refer to some of our previous entries here.

This is an opt-in Security Alert. To be removed, reply with “remove”.
Always,
Michelle

Towerwall Security / Vulnerability Alert: Microsoft announces five Bulletins for Patch Tuesday, including Office for Mac

Midsummer Patch Tuesday (or midwinter, depending on your latitude) takes place on Tuesday 11 June 2013.

As you probably already know, Microsoft publishes an official Advance Notification each month to give you early warning of what’s coming.

These early notifications generally don’t give any details, summarizing only the basics, such as:

  • The number of Bulletins (read: security patches) you’ll get.
  • The severity levels (read: urgency) of the patches.
  • The products or components being fixed.
  • Whether a reboot is required.

And June’s answers, as briefly as possible, are:

  • Five.
  • One critical and four important.
  • Windows and Office.
  • Yes.

So on the surface it sounds like a light month, with only two remote code execution (RCE) vulnerabilities to worry about.

Take note, however, that Microsoft’s Affected Software chart states that one of the RCEs is a vulnerability in Internet Explorer 6 to Internet Explorer 10, on platforms from Windows XP right up to Windows 8 and Windows RT.

That makes it a risk to almost every Windows user out there.

The other RCE, which isn’t rated critical, affects Office.

Interestingly, the versions at risk seem to be Office 2003 for Windows, and Office 2011 for Mac, meaning that this isn’t just a Windows Patch Tuesday.

→ As usual, Server Core installations aren’t affected by the vulnerability in Internet Explorer (nor by the hole in Office), because Server Core deliberately omits the graphical components required to run GUI-based software like browsers, file viewers and word processors. You won’t get caught out by surprise on Server Core when you visit a website, look at an image, or open a risky PDF file – for the compellingly simple reason that, by design, you can’t do any of those things. We recommend that you use Server Core whenever technically possible.

There’s also an update dealing with an elevation-of-privilege (EoP) flaw listed as being simply in “Windows.”

The burning question is whether this fix deals with a vulnerability in the Windows kernel recently disclosed by Google researcher Tavis Ormandy, who published a working exploit on the Full Disclosure mailing list about three weeks ago.

Ormandy’s initial Full Disclosure post appeared on 17 May 2013, noting that he had found a potentially exploitable vulnerability and asking for help to turn the bug into a working exploit.

Three days later, he’d solved his own problem and published what he claimed was a working exploit for all supported versions of Windows.

 

Note that EoPs don’t always get critical ratings because they’re often local exploits that can’t be triggered remotely.

In such cases, you have to land before you can expand: you need to break into your victim’s computer first, for example by using an RCE, and then use the EoP to “promote” yourself to administrator level.

Of course, if you’re able to pull off an RCE in the first place, you can still infect your victim and wreak plenty of havoc, because malware doesn’t need root-level access to log keystrokes, steal files, send spam and much more.

But an RCE followed by an EoP makes everything much worse, since any malware you unleash can do much more harm, such as altering system services, sucking data out of memory belonging to other processes, and even manipulating the operating system kernel itself.

Towerwall and the Information Security Summit highlighted in SearchSecurity.com Article

Check out Search Security’s article – “HIPAA Omnibus Rule, PPACA challenge enterprise compliance management”, where our own Natalie Kmit and the Information Security Summit 2013 are highlighted:

 

HIPAA Omnibus Rule, PPACA challenge enterprise compliance management

WELLESLEY, Mass. — For information security professionals, compliance-related tasks have often proved to be a trying yet necessary part of the job. However, Thursday at the MassBay Community College Information Security Summit, a panel of information security experts said new compliance mandates are making practitioners’ jobs even harder.

One thing I’ve learned is you can’t storm into the CIO’s office with a print out of legislation and say, ‘This is something we need to do.’

Steven Beaudrot,
IT director of regulatory management and compliance, Fresenius Medical Care

During a discussion on compliance and risk management, Natalie Kmit, an IT security services consultant with Framingham, Mass.-based consultancy Towerwall Inc., said the most recent compliance game-changer is the new Health Insurance Portability and Accountability Act (HIPAA) Omnibus Rule. Released in January, the rule stipulates that as of Sept. 23, not only will more stringent requirements for “business associates” of HIPAA-compliant organizations take effect, but it will also require breach notification when a covered entity or business associate experiences an impermissible use or disclosure of protected health information.

Kmit said the HIPAA Omnibus Rule has broadened the definition of a business associate, encompassing a variety of subcontractor organizations that weren’t previously included. She said this has created more work for subcontractors, as well as for the covered entities managing them.

“Many of my clients are small and midsized businesses, and so it’s about finding a way to stay within budget to do what’s necessary,” Kmit said. “Even to understand the 563-page piece of legislation is, I would say, very challenging.”

 Click here to read the entire article.