Friday, January 31, 2014

Risk Identification

Have you ever heard the phrase, “We don’t know what we don’t know”?  Risk management – and risk identification in particular – is the process of reducing the unknown as much as possible.  The problem is, how do you know when you are done?  The unknown is still the unknown.  The real trick is to eliminate or mitigate as many risks as you can reasonably foresee, and to take the time to actually run through several scenarios in which not everything goes as planned.

http://international.fhwa.dot.gov/riskassess/risk_hcm06_02.cfm

As a “technical” type of person, I used to expect things to work as designed and projects to be as easy as I envisioned.  From working on my car to installing server systems, experience has taught me that anything can happen, so I always try to add extra time to whatever I figure the project should take.  Most issues can be solved with enough time (and sometimes money), which is the paradox in project management: you try to build in buffer time to account for unforeseen issues, yet still present a short timeline to the customer.  As part of the risk mitigation process, I have always favored extra time in a schedule, since no one is ever upset if you finish early.

http://www.mitre.org/publications/systems-engineering-guide/acquisition-systems-engineering/risk-management/risk-identification
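
A rough sketch of what I mean by padding an estimate (the task names and hour values below are made up, and the three-point weighting is just one common way to do it, not something from the article above):

```python
# Three-point (PERT-style) estimate: a simple way to build buffer time
# into a schedule instead of quoting only the "everything goes right" number.
# The task names and hour values below are invented for illustration.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted average that leans toward the most likely value."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

tasks = {
    "rack and cable servers": (4, 6, 12),
    "install and patch OS":   (3, 5, 10),
    "configure application":  (6, 10, 24),
}

total = 0.0
for name, (opt, likely, pess) in tasks.items():
    est = pert_estimate(opt, likely, pess)
    total += est
    print(f"{name}: {est:.1f} hours (best case {opt})")

print(f"Total padded estimate: {total:.1f} hours")
```

Quoting the padded total rather than the best-case sum is the trade-off described above: a longer timeline up front, but far fewer surprises later.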

In risk management, the first thing we have to ask ourselves is, “What is a risk?”  A risk is the “effect of uncertainty on objectives,” and an effect is a positive or negative deviation from what is expected.  Most organizations focus on the negative effects of risk, which are what we are truly attempting to mitigate; however, risk by definition can also have a positive effect.  I have tried several times to enter a positive risk into our risk databases for projects I was working on; usually they are discarded because they are viewed as positive outcomes rather than risks.  Now I will admit that some of the “positive risks” I have envisioned involved unknown magical fairies breaking in at night and completing my work ahead of schedule, but some of them were actually possible and could have been leveraged to our advantage.

http://www.praxiom.com/iso-31000-terms.htm
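
To make the positive-versus-negative point concrete, here is a minimal sketch of how a risk register could score both threats and opportunities (the entries and the 1-5 scales are invented for illustration, not from any tool we actually use):

```python
# Minimal risk register sketch: risk is the effect of uncertainty on objectives,
# and that effect can be negative (threat) or positive (opportunity).
# All entries below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe), magnitude only
    effect: str       # "threat" or "opportunity"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Vendor ships hardware two weeks late", 3, 4, "threat"),
    Risk("Key admin unavailable during cutover", 2, 5, "threat"),
    Risk("Reused config from a prior site cuts build time in half", 2, 3, "opportunity"),
]

# Review the highest-scoring items first, regardless of which way the effect cuts.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.effect:11}] score {risk.score:2}  {risk.description}")
```

Scoring opportunities the same way as threats at least keeps them in the review instead of getting discarded out of hand.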

Murphy’s Law is real and applies more often than not in risk management.  There have been more times than I like to admit when a risk that seemed inconsequential or improbable later turned out to be a real issue, costing more time to correct than I would have ever imagined.  I believe you get better at identifying risks with experience – you only have to touch a hot stove so many times before you know to watch out for it.  I have found it helpful to have “Risk Storming” sessions where several engineers talk a project through and look for the “gotchas”; it is truly surprising what a fresh set of eyes and different perspectives can catch.

http://www.murphys-laws.com/murphy/murphy-laws.html

Saturday, January 25, 2014

Emerging Trends in Information Security Models

Back when I started in IT, security was considered an edge issue; the network team was responsible for the network firewall and email gateway.  Most of our files were on an internal file server or, more likely, stored on the local disk of the user’s stand-alone workstation.  Network and internet connectivity was extremely limited and was used mostly as a path for email and for downloading the latest patches.  Our security model was pretty simple - protect the edge and keep the users’ workstations and files private.  This left a lot to be desired, but the real risk in this scenario wasn’t attack so much as failure.  There was usually very little in the way of centralized backups, management, or control, so when a workstation failed, data was lost unless the user had backed up their files to a removable disk.

As network technology improved, we started using more centralized services.  Domains were created to control user access, and file servers were used to store user files in a more secure and reliable location.  Centrally managed backups were used to provide data integrity and reliability.  Our security model became more complex, addressing ways and requirements to keep user files separate and to prevent users from seeing other users’ files.  We not only had to protect our edge, we now had to secure our internal environment as well to prevent unauthorized file access and disclosure of private information.  Internet connectivity became robust enough to support more than just basic email traffic and patching, and users began to use the internet as part of their work functions as well as for personal use.  We now needed to monitor and control internet traffic and content, making our security model much more complex.

http://www.ebizq.net/topics/service_security/features/11428.html?page=1

Now we are seeing the move to the “Cloud” - internal office products are giving way to cloud-based products such as Office 365, email servers are moving out to hosted solutions, and applications are moving online more and more.  Databases are also being moved to hosted commercial solutions.  Every part of the enterprise is becoming more integrated with the internet, and the local computer is becoming more of a portal than a workstation.  This presents even more complex issues for the security model.  How do we secure data that is not located within our physical environment?  How do we limit internet usage and content and still allow the needed services?  How do we protect against phishing attacks, viruses, and hackers?  How do we secure the connections with these service providers?  We now need to provide reliable, secure external connectivity that will allow thousands of enterprise users to connect to their applications hosted in the cloud.  With the increasingly rapid evolution of technology, Information Security Models must adapt and change just as rapidly to address these questions and issues.

http://www.cioinsight.com/security/slideshows/mobile-and-cloud-computing-face-emerging-threats.html

Tuesday, January 14, 2014

Security Education, Training, and Awareness

Most organizations, I believe, put more effort into security policies and compliance with those policies than into actual security.  There is a belief that the more “security-type” settings we enforce on systems, the more secure they will be.  The security template I currently use is over 250 pages of settings that are required to be set on the base OS of each server we deploy.  Most of the time these settings are followed blindly because they are required, and nobody really knows what most of them are anyway.  The end result of this policy is a loss of functionality for our end users, along with confusion for our admins and security administrators.

http://iase.disa.mil/stigs/os/windows/2008r2.html
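
As a sketch of what checking a server against a baseline like that looks like in practice, here is a minimal example; the setting names and required values are invented placeholders, not actual items from the template linked above:

```python
# Sketch of auditing a server against a required baseline template.
# The setting names and required values are placeholders for illustration;
# a real template (like the STIG linked above) has far more items.

required_baseline = {
    "minimum_password_length": 14,
    "account_lockout_threshold": 3,
    "smbv1_enabled": False,
    "audit_logon_events": "Success, Failure",
}

# In reality these values would be read from the OS; here they are hard-coded.
current_settings = {
    "minimum_password_length": 8,
    "account_lockout_threshold": 3,
    "smbv1_enabled": True,
    "audit_logon_events": "Success, Failure",
}

findings = [
    (name, required, current_settings.get(name))
    for name, required in required_baseline.items()
    if current_settings.get(name) != required
]

for name, required, actual in findings:
    print(f"NON-COMPLIANT: {name} is {actual!r}, required {required!r}")
print(f"{len(findings)} finding(s) out of {len(required_baseline)} checks")
```

Even a simple report like this tells an admin which setting tripped and what the required value was, which is a step up from “because the template says so.”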

While we do have security personnel who are very good at their jobs and who, in reality, do secure our networks well, we could have a much better overall security stance if more in-depth education, training, and awareness programs were provided to the admins and users.

We currently have security training; however, it is always something like, “Don’t click unknown links.”  As an Admin, I often face questions from users about why certain websites, software programs, or behaviors are not allowed – usually things that are very simple on their home PCs.  A lot of the time the answer is, “Because the policy disallows it”; I honestly don’t have any idea why the latest version of some software is not allowed on our network, and yes, I know it works great at home.

https://www.sans.org/reading-room/whitepapers/awareness/security-awareness-training-privacy-394

Unfortunately, I believe this creates an “us against them” attitude for everyone.  As an Admin, I really would like to know that there is some reason for the policy other than someone ruling from on high (who may or may not have ever actually seen a computer).  And as a user, tell me why it is almost impossible to perform tasks at work that are commonplace everywhere else.  Just like kids who are told “No,” our first question is almost always, “Why?”

http://www.symantec.com/connect/blogs/awareness-education-and-training

If there were more security awareness (exposure), education (study and testing), and training (hands-on) – both upwards and downwards – a better understanding of our security policy could be achieved and a more secure environment would result.  As an Admin, I would know why I was applying a particular setting and why not to disregard it when it is inconvenient.  As a user, I would have a better understanding of why I can’t use the latest desktop widget – even though it might save me time and effort – and would be less likely to try to circumvent the system.  Communication is critical if you want everyone involved and on board with security initiatives.

http://www.sans.org/reading-room/whitepapers/awareness/developing-integrated-security-training-awareness-education-program-1160?show=developing-integrated-security-training-awareness-education-program-1160&cat=awareness

Saturday, January 11, 2014

Information Security Policy Standards and Guidelines

The need for good, solid security policies, standards, and guidelines is fairly obvious - without a framework in place, there can be no cohesive security in an enterprise.  However, as I have mentioned before, there is also a need to stay flexible and allow for changes and advancements in technology and business requirements.

http://searchsecurity.techtarget.com/feature/Information-security-policies-Distinct-from-guidelines-and-standards

That being said, the term “flexible” just begs to be abused.  Just because a policy can be changed doesn’t mean it should be or needs to be.  We need to avoid policy changes based on knee-jerk reactions, e.g., every time a news article or report appears about a large business getting hacked, I have to add three or four more characters to my password.

http://www.post-gazette.com/businessnews/2012/08/30/Password-length-is-more-beneficial-than-complexity/stories/201208300277
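
The arithmetic behind the length-versus-complexity argument is simple enough to sketch: entropy grows linearly with password length but only logarithmically with the size of the character set (the example lengths and character-set sizes below are just for illustration):

```python
import math

# Rough entropy estimate for a randomly chosen password:
# bits = length * log2(size of the character set).
def entropy_bits(length: int, charset_size: int) -> float:
    return length * math.log2(charset_size)

# 8 characters drawn from ~95 printable ASCII characters (full "complexity")
print(f"8 chars, 95-char set:  {entropy_bits(8, 95):.1f} bits")   # ~52.6

# 12 characters drawn from lowercase letters only
print(f"12 chars, 26-char set: {entropy_bits(12, 26):.1f} bits")  # ~56.4

# 16-character passphrase from lowercase letters plus space
print(f"16 chars, 27-char set: {entropy_bits(16, 27):.1f} bits")  # ~76.1
```

A few extra characters of length buy more entropy than forcing in another symbol class, which is the point of the article above.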

I feel like I’m beating this point to death, but a balance between security requirements – policies, standards, and guidelines – and user/business requirements must be achieved.  How much more secure are you, really, when most of your users have their username and password written down and stashed under their keyboards because you have forced an overly long and complex password requirement?

Users will always try to circumvent a policy or a system that either makes their jobs more difficult or prevents them from doing things the way they have always done them, creating a security nightmare. If a poorly planned policy actually prohibits users from efficiently doing their jobs, thereby forcing them to avoid or go around the requirement, then a policy or systems review is necessary to allow normal user activity in a secure fashion.

http://infosecisland.com/blogview/14329-Security-Stupid-Is-As-Stupid-Does.html

Policies and systems need to be reviewed periodically to determine whether they are still relevant; as technology advances and changes, we need to adapt our security policies to fit the new needs and requirements.  Technologies like biometrics and single sign-on can go a long way toward creating a more secure authentication step than a 27-character alphanumeric password with special characters.  Technologies need to be put into place that allow a secure environment with the least amount of burden on your users - if they don’t notice it, they won’t try to break it.  Not all changes can be implemented invisibly, but if we try to envision proposed changes from the viewpoint of the users, we can certainly make them as painless as possible.  In the end, we will see less pushback from users and an overall higher level of security in our environment.

Friday, January 3, 2014

Incident Response and Disaster Planning

Over the last few months I have been involved with a lot of discussions about Disaster Recovery versus Disaster Avoidance. I am surprised that I keep hearing the misconception that if we employ disaster avoidance, we no longer need disaster recovery plans or procedures. I can understand this misconception to a certain point… If I have my data and servers spread across multiple locations and datacenters, why would I need to have separate backups? I will just restore from another datacenter, right?

I believe this comes from the old mentality of hot-site/warm-site disaster recovery, where data is replicated from a primary site to an offsite location that has varying levels of equipment to restore critical systems.  We are now seeing more Active-Active disaster avoidance scenarios, where the data is replicated between multiple hot sites.  This allows a company to keep its hot sites active and actually use all of the equipment it is purchasing.

http://www.vclouds.nl/2012/04/16/understanding-stretched-clustering-and-disaster-avoidance/

But there is still a real need for disaster recovery plans and procedures, as well as incident response plans and procedures.  Data corruption and loss are still very real and painful, so a good, solid, tested backup solution is still necessary.  Data spills still happen, people still delete the wrong files, and equipment still fails.  The better your documentation is, the less painful an incident will be.  Equipment and technology can only take us so far - there is still the human factor to consider, and humans make mistakes.
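
A toy example of why replication alone is not a backup - a mistaken deletion is faithfully replicated to every site, while a point-in-time backup still has the data (this is a deliberately simplified sketch, not any particular product’s behavior):

```python
# Toy illustration: replication copies mistakes, backups preserve history.
# Site and file names are invented for the example.
import copy

primary = {"app-server.vmdk": "disk image", "db-server.vmdk": "disk image"}

# Active-active replication keeps every site identical to the primary.
def replicate(source):
    return copy.deepcopy(source)

nightly_backup = copy.deepcopy(primary)      # point-in-time copy
del primary["db-server.vmdk"]                # someone deletes the wrong server
site_b = replicate(primary)                  # the deletion replicates everywhere
site_c = replicate(primary)

print("site B still has db-server?", "db-server.vmdk" in site_b)          # False
print("site C still has db-server?", "db-server.vmdk" in site_c)          # False
print("backup still has db-server?", "db-server.vmdk" in nightly_backup)  # True
```

Every replica dutifully mirrors the mistake; only the point-in-time copy can bring the server back, which is exactly what saved me in the story below.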

About five years ago, I was working on a customer’s virtual environment and was asked to delete a server that was no longer needed.  The environment was replicated across four different sites globally, backups were done nightly, and all datacenters were hot sites with failover capacity for the alternate sites.  Pretty much bullet-proof - except that I deleted the wrong server.  I had little knowledge of their procedures, as I was just onsite performing some maintenance.  Luckily, I was working with one of the company’s engineers, who was able to pull up an incident response plan to have the server restored from backup.  It contained contact information, the proper notification procedures, which customers were affected, and so on.  We were able to have the system restored and operational again in less than an hour.  Had I been left alone to guess at it, it would have taken considerably longer.

Proper documentation with defined incident response procedures, as well as a comprehensive disaster recovery plan and policy, will make your life much easier and can ultimately save your bacon when failures occur.

http://www.7x24exchangedelval.org/pdf/What_to_protect_against_DA_Vs_DR.pdf