Saturday, December 6, 2014

Emerging Technology and Cybersecurity

As an engineer, I am often asked to research and implement new technologies, and I am frequently surprised by the lack of security awareness from vendors.  Most of the time, the lengthiest part of the entire implementation is getting a new piece of technology through the security process, as the vendors often have little in the way of documentation concerning what is acceptable to close or disable to address security risks.

Now, more and more companies are releasing products as appliances, both physical and virtual, that you just plug into your environment and begin using.  This is great from a deployment and management standpoint and is much easier for the vendor to support; however, any unapproved change can cause you to lose support from the vendor.  Surprisingly, multiple appliances from the same vendor often have dissimilar configurations and vulnerabilities.  These days, each product is made by a different group within the company, and little to no effort is made to standardize these products.

One product I recently started evaluating consists of eighteen virtual appliances.  All of these appliances run the same operating system, but each one is at a different patch level and revision and has a different set of unused services enabled.  This makes the security process a nightmare – forcing us to:
Evaluate each appliance and submit a report on each one to security
Determine exactly what they absolutely require us to fix
Submit that list to the vendors for permission or assistance to fix
Wait for their engineers to evaluate the fix
Fix what is allowed by the vendor (to keep support)
Submit back to our local security for approval

I would expect this lack of awareness from a startup or newer vendor that may not yet have had to deal with secure environments; most corporate enterprises are more forgiving when it comes to internal security.  But I am seeing this from large, longtime vendors who should have had security in mind when these products were designed and built.

http://searchsecurity.techtarget.com/tip/Information-security-policy-management-for-emerging-technologies

I understand that some security items may have to be adjusted for functionality; however, I am talking about basic cleanup – things like unused services left enabled, unneeded ports left open, and application or operating system patches that are not up to date, or at least not at the same levels across appliances.
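
To illustrate the kind of basic cleanup checks I mean, here is a minimal sketch (my own illustration, not anything the vendors provide) of a quick TCP sweep you can run against a new appliance before submitting it to security.  The appliance address and port range below are placeholders; a real evaluation would also compare patch levels and enabled services, but even a simple sweep like this will surface unneeded open ports.

    import socket

    def open_ports(host, ports, timeout=0.5):
        """Return the subset of 'ports' that accept a TCP connection on 'host'."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        # Hypothetical appliance address - replace with the appliance under review.
        appliance = "192.168.1.50"
        # Sweep the well-known port range; widen it as needed.
        print(open_ports(appliance, range(1, 1025)))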

Many of the new technologies, and current technologies that we have recently upgraded, now require Internet access to download patches or updates and to report health back to the vendor.  Vendor-hosted updating is nothing new, but there was always an offline option an admin could use to download patches and updates for networks without Internet connectivity.  Now I no longer have those options (at least not in an easy way).  This forces me to backdoor the systems to keep them current, possibly introducing more security risks, or at least the risk of application corruption.

I have said before, and I still strongly believe, that it is much easier to build with security in mind than to try to secure a product that was never designed from the standpoint of security.  With all the reports of data theft in the news, what company is not concerned with security?  Vendors need to make security a cornerstone of any new product to remain relevant and competitive in today’s environment.

https://www.gtisc.gatech.edu/pdf/Threats_Report_2014.pdf


Friday, September 5, 2014

Datacenter Security - Back to Basics

A few days ago at a local organization, a well-intentioned IT employee caused several thousand dollars’ worth of damage and hours of downtime.  He was trying to help organize a datacenter and damaged some fiber cables that provide the backbone between two environments.

Immediately, questions were asked.  Why was he in the datacenter unescorted?  How did he get access?  Why was he working on equipment he had no reason to touch?  In IT, we put a lot of effort into securing our systems and our networks.  We spend a lot of money purchasing intrusion detection systems, firewalls, anti-virus/malware systems, and so on.

It occurred to me that we often have no idea who is in a datacenter at any given time.  Especially in a large datacenter or an environment with multiple datacenters, several people from different departments, divisions, companies, or workgroups can gain access to the datacenters and everything in them.  Some time ago, I attended an ethical hacking class focused on preventing attacks; several of the methods covered involved the hacker actually having physical access to your equipment, cables, etc.  I kind of dismissed these threats, thinking that if a hacker had physical access to my datacenter, they would own me; they could just pick up my equipment and take it with them.  I still believe that to be true; however, an insider could very well walk out of a datacenter they have legitimate access to with data they don’t.

My first thought was that combination locks should be installed on all datacenter doors to ensure that nobody could gain unauthorized access, but what about shared datacenters?  Many equipment racks have locking doors that can be secured to protect the equipment and cabling inside, but most of those racks take the same key.

I have worked in datacenters that had card swipes on each rack so that you could access only the equipment to which you were granted rights and nothing more.  A record was also kept of who accessed each rack and when.  The only downside is that this is yet another system that needs to be managed.

Obviously you need to put some trust in the people you allow into your datacenter, but accidents do happen.  Most equipment and racks look very much alike, so mistakes can occur.  The challenge we now face is how to defend against and mitigate the risk of unauthorized datacenter access.  I have utilized several shared datacenters, and access to these datacenters was very controlled and limited… but most of the time when I went in there, someone else was there.  I often didn’t know who they were, and they didn’t know me.

There are lots of potential solutions that come to mind, but this just goes to show the importance of the basic physical security aspect of IT – which is, unfortunately, often overlooked when developing layered security and defense-in-depth strategies.

http://www.techrepublic.com/blog/it-security/understanding-layered-security-and-defense-in-depth/

https://www.nsa.gov/ia/_files/support/defenseindepth.pdf

http://www.sans.org/reading-room/whitepapers/warfare/defense-depth-impractical-strategy-cyber-world-33896

Thursday, June 26, 2014

Big Data, Big Data Loss

More and more we are hearing about people’s personal data being lost by big companies.  Recently Target lost forty million customers’ credit card information and seventy million home addresses.  My first reaction was that I was really glad that I didn’t have any information with Target, but then I got to thinking…  We do shop at Target, and we do use credit cards, so maybe they did get some of my info.  However, as far as we know, we were lucky and were not part of the data breach. Target is in no way the only business to lose personal data, just one of the biggest recently.  A while back I had to do some research on Data Mining and Big Data providers, and this got me to thinking about how to avoid being on the next compromised data list.

http://datalossdb.org/index/largest

http://www.businessweek.com/articles/2014-03-13/target-missed-alarms-in-epic-hack-of-credit-card-data

So I am a little paranoid by nature, and although I work in IT and am on computers all day most every day, I don’t use one at home for entertainment purposes.  I have no interest in Facebook, Pinterest, Twitter, and so on.  I do occasionally shop online, but only with companies I research first.  I do have a LinkedIn account for professional networking and a Google Plus account for school.  I believe that because of my reduced online footprint I am probably safer than most, but there is still way more information about me out there than I would like.  I was surprised to see that my home address and phone number were easily available for anyone to see, that an old Department of Natural Resources accident report was still out there from when my boat caught on fire 10+ years ago, that a quick search from my home county showed every (usually deserved) speeding ticket I ever had, etc.

Part of this I can understand – Court records are public records, but what could someone do with that information?   Some of the others I can’t – How did my home address and phone number get out there? Turns out companies make extra money from selling your data to these Big Data Providers, who in turn sell it to others.  So when I had my utilities turned on, I paid them to do it… then they got a bonus selling my information to someone else.

Something as small as that seems like no big deal, but when you keep collecting all this information and putting it together, a pretty comprehensive snapshot can be made of someone’s private life.  Put all this information together (home address, phone numbers, contacts, property records, criminal or civil court records, browsing history, shopping habits), and maybe a bad guy can use it for bad purposes.

http://humphreybc.com/post/54668654006/a-few-tips-to-reduce-your-online-footprint

Now we have these Big Data providers collecting and organizing all this data (supposedly for marketing and such), so what happens when they have a breach?  Instead of some customers at Target, it is now anyone who has ever been on the Internet, bought anything online, etc., who is at risk for having their identity stolen and privacy compromised.  The more data they have, the more they can lose.

http://www.nbcnews.com/tech/tech-news/big-data-breach-360-million-newly-stolen-credentials-sale-n38741

Recently in Europe, a court ruling essentially gave people the right to ask Google to remove personal information about them from its search results – kind of like a do-not-call list for the Internet.  This is a great start, but what about all the others?  How can I opt out, or control what is available?  I really hope similar regulations are enacted in the United States in the near future.

http://www.nytimes.com/2010/05/16/technology/16google.html?pagewanted=all

Wednesday, March 26, 2014

To Enable or Disable IPv6… Not Really a Question

When implemented, IPv6 will offer several security enhancements and benefits, such as natively supported end-to-end encryption and built-in integrity checking.  The SEcure Neighbor Discovery (SEND) protocol is capable of cryptographically confirming that a host is who it claims to be, rendering ARP-poisoning-style attacks much more difficult.  The move to IPv6 will also make man-in-the-middle attacks much harder to carry out.

http://www.sophos.com/en-us/security-news-trends/security-trends/why-switch-to-ipv6.aspx

While IPv6 will provide many benefits, it also presents several security issues.  Most networks are still designed around the IPv4 architecture, meaning all the monitoring and security systems and policies are still focused on IPv4 and will require extensive hardware upgrades to become IPv6 compatible.  The main issue with this is that most enterprises are still not able to monitor or manage IPv6 traffic, and the bad guys are taking advantage of it.  Because admins don’t or can’t monitor IPv6 traffic, attackers are exploiting this loophole by tunneling IPv4 traffic inside IPv6, creating malware that communicates over IPv6, or using IPv6’s auto-configuration capabilities to actually take control of devices.
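
As a quick illustration of how easy it is to find out whether IPv6 is quietly active on a host you thought was IPv4-only, here is a small Python sketch of my own (not taken from any vendor tool) that checks whether the machine can open an IPv6 socket and which IPv6 addresses its hostname resolves to.  If this turns up addresses on systems you are only monitoring over IPv4, you have traffic you probably cannot see.

    import socket

    def ipv6_status():
        """Report whether this host has IPv6 support and any IPv6 addresses."""
        if not socket.has_ipv6:
            return False, []
        try:
            # If the OS lets us create an AF_INET6 socket, the IPv6 stack is enabled.
            with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM):
                pass
        except OSError:
            return False, []
        try:
            infos = socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET6)
            addresses = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            addresses = []
        return True, addresses

    if __name__ == "__main__":
        enabled, addrs = ipv6_status()
        print("IPv6 stack enabled:", enabled)
        print("IPv6 addresses:", addrs or "none resolved for this hostname")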

The move to IPv6 will be costly, so most enterprises will attempt a gradual migration, replacing older IPv4 equipment with IPv6-compatible equipment.  While this is an understandable and prudent approach, it presents its own set of issues.  When running in a mixed environment, tunnels must be created to allow traffic to traverse both segments of the network, opening the door to misconfigurations and unintended security consequences.

The lack of understanding of IPv6 will also be an issue.  Most admins know it will provide a vastly larger address pool; however, the details are still foggy.  How will this affect how we currently manage our networks?  Does DHCP go away?  What about DNS?  Planning, training, and extensive preparation will be required when migrating.

http://www.techrepublic.com/blog/it-security/ipv6-oops-its-on-by-default/1955/

Most operating system vendors have jumped onto the IPv6 bandwagon with enthusiasm.  Red Hat Linux, Windows 7 and Server 2008 R2, Apple’s OS X, and Solaris now come with IPv6 enabled by default.  This creates a huge security risk by allowing unmonitored and unmanaged traffic onto your networks.  What is worse is that IPv6 can be difficult and time-consuming to disable.  With Windows there are GPO templates you can import to disable it across your domain, but the native solution is a registry hack at each machine – just disabling it in the network control panel does not completely disable it.  Linux, Solaris, and Apple use similar methods, forcing you to touch each computer, which is very time-intensive when you consider the number of systems you need to configure.  Most security experts now recommend disabling IPv6 until you have the ability to actually manage and use it - Until then, it is an unnecessary risk that can easily be avoided.

http://gcn.com/articles/2013/08/09/ipv6-attack.aspx
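
For what it is worth, the Windows-side registry hack boils down to a single value.  The sketch below (an illustration, not a deployment tool) sets the DisabledComponents value under the Tcpip6 parameters key, which is the setting Microsoft documents for turning off IPv6 components; 0xFF disables IPv6 on all interfaces except the loopback.  It has to run from an elevated prompt, a reboot is required, and in a domain you would push the same value through Group Policy rather than touching each machine.

    import winreg  # Windows only; run from an elevated (administrator) prompt

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"
    DISABLE_ALL_BUT_LOOPBACK = 0xFF  # Microsoft-documented bitmask for DisabledComponents

    def disable_ipv6():
        """Set DisabledComponents so IPv6 is turned off after the next reboot."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "DisabledComponents", 0, winreg.REG_DWORD,
                              DISABLE_ALL_BUT_LOOPBACK)

    if __name__ == "__main__":
        disable_ipv6()
        print("DisabledComponents set - reboot for the change to take effect.")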

Thursday, February 27, 2014

Cybersecurity Blog Review and Analysis

Over the last eleven weeks I have blogged about a variety of subjects with what I hope is one central theme: personal knowledge.   I try to choose topics that I have had personal experience with, since I feel this gives me a better insight into the subjects and the issues that surround them.  I have written about Security versus Functionality; Planning for Security and Functionality; Security Policies and Guidelines; Flexible Security; Emerging Trends in Information Security Models; and Security Education and Training with the theme that we need a balance between Security and Functionality. There are two points I wanted to emphasize.  The first is that Security and Functionality are not mutually exclusive – If proper planning is done before an implementation, most issues can be addressed and resolved to everyone’s satisfaction.  The second point is that we all need to remember why we are here and in the position that we hold… The Security Administrators need to remember that we are hired to provide a service or solution, and while it does need to be secure, it also has to work.  The Systems/Network Administrators and Engineers need to remember that without security, the system that performs well now will not perform for long.

I also wrote about Incident Response and Disaster Planning; Risk Identification and Management; Intrusion Detection/Prevention Systems and Strategies; and Skills, Requirements, and Certifications, with the idea that a good balance of all these subjects can prevent or mitigate most security issues. For example, with properly trained administrators and engineers, solutions will be more secure and stable, thus lowering the number of incident responses and disaster recoveries.  With proper risk management and identification, projects can run more smoothly, resulting in better implementation without sacrificing scope, budget, timeline, or quality.  The best intrusion detection and prevention systems are junk without a properly trained person to install and configure them; Nothing works out of the box. Although these security topics may seem somewhat disconnected, they all come into play when performing projects, implementing solutions, and planning your enterprises.

I get my reference sources from the “Database of Infinite Knowledge,” sometimes known as Google. Once I decide on a topic, I generally first write out what my thoughts are and then google the topic and read several articles.  I try to select the most credible articles that I believe do the best job explaining the topic I chose.  Not only does that allow me to pick a source to quote, but it also allows me to supplement my objective with points that I may not have initially considered.  I believe it is OK to revise my stance on a subject while doing this, since I don’t always just pick the articles that agree with my opinion.  For example, I am a Windows Engineer and have been one for years; I started out in Unix and progressed through Novell, Linux, and eventually Windows.  I was just given an article listing 10 reasons why Linux is better in my datacenter than Windows.  While I laughed a lot, it did have some valid points, mostly about having properly trained people to run your datacenter.

I do believe a blog like this can be beneficial to an IT professional, for both the reader and the author. As the writer, I get to fully explore and research topics to increase my personal knowledge and expertise.  As an IT Professional, I myself follow several blogs and often use them when troubleshooting an issue.  Very rarely am I the first to experience a particular problem, so why reinvent the wheel?  I have had a few IT Professionals I know comment on my blog and request posts on certain subjects.  Now that class is over, I intend to fulfill a request and post on an IPv6 issue a peer is experiencing.  My advice to the next group of students is to always choose a topic in which you have experience or interest.  I have very little to do with our intrusion detection and prevention systems, so that blog post was the hardest one that I had to write, as I had to rely mostly on other people’s documentation and my limited experience.  Make sure to blog about your own individual and unique interests, experiences, and viewpoints, and you will be surprised to discover the number of other professionals who share many of your same situations, frustrations, and opinions.

http://www.writersdigest.com/online-editor/the-12-dos-and-donts-of-writing-a-blog

Saturday, February 22, 2014

Skills, Requirements, and Certifications

When I started in the commercial world of IT, I was lucky.  Back then, experience counted more than certifications, education, and actual training.  Most IT professionals were self-trained or had a base amount of education with the rest filled in on their own.  I was trained in IT in the Navy and had several years’ experience by the time I got out, so getting jobs was never a problem.  At that time, most interviews were conducted by HR and a technical person, who would question you on your experience and knowledge, and they could tell if you actually had the experience you claimed.

http://www.cc-sd.edu/blog/the-great-debate-education-vs-experience

As time passed, I noticed more of a focus on certifications, and more interviews were conducted solely by an HR rep.  Certifications became your foot in the door, but you still had to perform.  There was a high level of suspicion reserved for certified people with little experience - We called them “paper techs.”  They would buy a study guide, pass a test, and suddenly be declared an expert in a system they had no experience with.  This would usually show up pretty quickly, and they would either move on or be paired with someone with actual experience to learn the ropes.  More recently, certification programs have added hands-on portions to their tests that help weed out individuals just trying to pass from a guide.  This forces you to have actual working experience with a system before you are granted expert status.

For a long time, I didn’t pay much attention to certifications.  I had twelve or fourteen years in the field by that time and was getting by on the fact that I had a lot of experience.  Then I was turned down for a job I was interviewing for simply because I didn’t have the proper certifications.  In the Navy, I had obtained an MCSE in Windows NT 3.5 but had not bothered to update anything since.  The strange thing is that I really wasn’t that interested in the job or the certs until I was denied.  I quickly updated my certs to Windows 2000, then 2003 and so on; I have also gained several other certs in various technologies, so this is no longer an issue.  I will agree that certifications are at least a fair indicator that a person has a base level of knowledge in their particular field.  One thing I don’t really trust is the “cert grabber” – like a Windows guy who has a Red Hat engineer certification and a Cisco cert, with a Solaris cert thrown in for good measure.  Pick a discipline and focus on it - There is nothing wrong with being well rounded, but a jack of all trades is a master of none.

http://www.avidtr.com/Job-Seekers/Industry-Articles/Work-Experience-vs--Certifications---What-Do-Emplo.aspx

Lately I have seen the trend shift from experience and certs to formal education.  Most job postings now list a four-year degree as the minimum, though it can be offset with enough actual experience.  The majority of the IT professionals I know are now chasing a degree, and with the focus on a BA or BS, most are going to the Master’s degree level (like me) just to try and stay ahead of the game.  So if you have a good mixture of experience, certs, and education, you have a better chance of filling in more of the HR person’s check boxes, getting an interview, and walking in through the door… Once in, though, you still have to prove yourself every time.

http://virtualizedgeek.com/2013/09/09/vendor-certification-vs-college-degree/

It does make me feel sorry for the guys trying to break into the IT field now.  How would you get started?  At one time I would have said, “Go get a basic cert.”  The A+ used to equate to about six months in the field.  Now I guess I would say the same, but I would also advise getting into at least a two-year program and building as much experience as possible.  In my opinion, formal education is more of a path for career advancement and progression to managerial levels.  But as the IT field develops more and more specialties, the certifications become increasingly important – especially those in the areas of risk, project management, and security.

http://www.cio.com/slideshow/detail/130807/18-Hot-IT-Certifications-for-2014

Thursday, February 13, 2014

Intrusion Detection/Prevention Systems and Strategies

I guess I like to have my cake and eat it too.  Often in conversations about Disaster Recovery versus Disaster Avoidance, I just can’t understand why I wouldn’t want both.  The same goes for Intrusion Detection Systems vs Intrusion Prevention Systems, as most devices today will do both for your network.

http://www.inetu.net/about/server-smarts-blog/february-2011/intrusion-detection-or-prevention-ids-vs-ips

The big difference is that an IDS scans a copy of your network traffic looking for signs of intrusion, while an IPS sits inline and inspects the live traffic in real time.  These can be viewed as active (IPS) and passive (IDS) systems.  Nothing is perfect, so things will get past an active system that the passive system may catch.  A passive system can generally do a more thorough job, digging deeper into traffic and logs than an active system, because the passive system is less time-sensitive.

https://www.sans.org/reading-room/whitepapers/detection/understanding-intrusion-detection-systems-337
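
To make the active/passive distinction concrete, here is a toy sketch of what the passive side does at its simplest: read a copy of captured traffic (here simplified to lines in a text export) and flag anything matching known signatures.  The file name and the three signatures are made up for the example, and a real IDS does vastly more – protocol decoding, anomaly detection, stateful analysis – but the “inspect a copy, report what you find” model is the same.

    import re

    # Toy signature list - real IDS rulesets (e.g., Snort rules) are far richer.
    SIGNATURES = {
        "sql_injection": re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE),
        "dir_traversal": re.compile(r"\.\./\.\./"),
        "shellshock":    re.compile(r"\(\)\s*\{\s*:;\s*\}"),
    }

    def scan_capture(path):
        """Yield (line_number, signature_name) for every match in a capture file."""
        with open(path, encoding="utf-8", errors="replace") as capture:
            for lineno, line in enumerate(capture, start=1):
                for name, pattern in SIGNATURES.items():
                    if pattern.search(line):
                        yield lineno, name

    if __name__ == "__main__":
        # Hypothetical capture file exported from a span/mirror port.
        for lineno, name in scan_capture("mirrored_traffic.log"):
            print(f"line {lineno}: possible {name}")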

Determining when and where to deploy these systems can be a source of contention.  The security guys want one on every network segment and every VLAN, with a couple monitoring the core and a few thrown in to monitor the monitoring traffic.  The network guys usually want one per entry point, POP, or VPN access.  A compromise between security, performance, and cost has to be reached, as each unit can cost from a few hundred to several thousand dollars, depending upon the requirements, functionality, and throughput of the unit.  Each inspection point also adds a performance penalty – Each inspection takes less than a millisecond, but add that up across your entire network for all your traffic, and it can end up being significant.

http://netsecurity.about.com/cs/hackertools/a/aa030504.htm
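
A little back-of-the-envelope arithmetic shows why that per-packet cost adds up.  The numbers below are purely illustrative – substitute your own measurements – but with 0.1 ms of inspection time per packet and a 1 Gbps link carrying average 800-byte packets, a single inspection engine runs out of headroom quickly.

    # Illustrative numbers only - substitute your own measurements.
    inspection_time_s = 0.0001          # 0.1 ms of inspection per packet
    link_bps = 1_000_000_000            # 1 Gbps link
    avg_packet_bytes = 800

    packets_per_second = link_bps / (avg_packet_bytes * 8)      # ~156,250 pps offered
    engine_capacity_pps = 1 / inspection_time_s                 # 10,000 pps per engine
    engines_needed = packets_per_second / engine_capacity_pps   # ~15.6 engines

    print(f"Offered load:      {packets_per_second:,.0f} packets/s")
    print(f"One engine handles {engine_capacity_pps:,.0f} packets/s")
    print(f"Parallel engines needed to keep up: {engines_needed:.1f}")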

No one piece of equipment or technology should comprise your entire security strategy.  An IDS or IPS should be part of a layered security approach that includes network, systems, applications, and physical security.  I once attended a class that spent several hours explaining how to defend against a hacker connecting directly into your fiber channel switches in the datacenter and stealing data.  I kind of figured if I was able to get into a datacenter and couldn’t break into the switch, I’d just cart the entire SAN out with me to a place where I could take my time with it.  The point of the story is that it will do no good to spend a lot of time securing servers from outside attack if someone could just walk in and plug into my unprotected network.

Security needs to be approached as a combination of protections, starting from the outside and working inward.  Secure the doors, secure the workstations, secure the servers and the network, and so on; Otherwise, you could be locking the doors but leaving the windows open.

http://seann.herdejurgen.com/resume/samag.com/html/v08/i09/a7.htm

Friday, February 7, 2014

Risk Management Specialization

The job of an IT Administrator has greatly changed over the years.  I have talked about how when I started in IT, the network admin did it all. We managed the servers, workstations, switches, and most of the applications, but over the years it was recognized that some separation of disciplines was needed.  We have all heard the old saying, “Jack of all trades, master of none.”  This is very true, especially in today’s IT environment.  I want my network person to be great at networks; I really don’t care if he knows any Windows stuff, outside of how to make the network talk to it.  The same is true of my systems, applications, and virtualization people – I want them to do what they do well and leave the other stuff to the people who do it well.

Risk management is one of those specialties that is still hovering in a gray area.  We all know it is important and that it is a huge job, but somehow it is often still pushed off onto the admin, engineer, or project manager.  As an administrator and later an engineer, I was more concerned with the technical issues of a project or system; I might have had a vague idea of the risks, legal issues, and policies associated with what I was working on, but I could never claim to be an expert.  Risk can be very hard to predict, especially on very complex systems with lots of users and dependencies.  Risk management professionals have developed several assessment strategies that can determine actual risk, rather than relying on the guesses of an admin, engineer, or project manager.

http://www.zurichna.com/internet/zna/SiteCollectionDocuments/en/media/inthenews/strategiesformanaginginformationsecurityrisks.pdf

I really don’t believe that it is the admin’s, engineer’s, or project manager’s place to determine the risk, cost of the risk occurrence, and cost of mitigation to a project, product, or system.  Should all these people – as SMEs – have input?  You bet.  But as an admin, I really never concerned myself with the cost of a risk or mitigation… just that it was bad if something went down, and that more people would complain if system “A” went down than system “B.”  A Risk Management professional working with management should set those priorities, because what may be important to me as an admin or engineer may not be as important to the company overall.

http://www.sans.edu/research/leadership-laboratory/article/risk-assessment

It seems contradictory that we would think nothing of bringing in a vendor or outsourcing a new application deployment or major project, yet we try to handle something as complex as risk management in-house.  It may be that we don’t want to air our dirty laundry, but once again, let trained and dedicated people do what they do well.  Ever heard the phrase, “a fresh pair of eyes”?  Same idea here:  look at it in-house and then let an objective outsider take a look and see what you missed.

http://outsourcemagazine.co.uk/using-outsourcing-to-address-risk-management/

We have already seen IT security and project management become specialties, with more dedicated IT security and project management positions being created.  I believe it is only a matter of time before risk management follows suit.  As we become more connected and more IT-reliant, a dedicated position to handle the complexities of risk management and how it interacts with IT security and project management will have to become the standard.  Just as it was decided that a network admin shouldn’t or couldn’t handle everything, we will need to reach the same conclusion about the project managers, engineers, and admins tasked with risk management as a side duty.

Friday, January 31, 2014

Risk Identification

Have you ever heard the phrase, “We don’t know what we don’t know”?  The idea of risk management – and risk identification in particular – is to reduce the unknown as much as possible.  The problem with that is: how do you know when you are done?  The unknown is still the unknown.  The real trick is to eliminate or mitigate as many risks as you can reasonably foresee, and to take the time to actually run through several scenarios in which not everything goes as planned.

http://international.fhwa.dot.gov/riskassess/risk_hcm06_02.cfm

As a “technical” type of person, I used to expect things to work as designed or for projects to be as easy as I envisioned.  From working on my car to installing server systems, experience has taught me that anything can happen, so I always try to add extra time to whatever I figure the project should take.  Most issues can be solved with enough time (and sometimes money), which is the paradox in project management; You try to add in buffer time to account for unforeseen issues, yet still present a short timeline for the customer.  As part of the risk mitigation process, I have always favored extra time in a schedule since no one is ever upset when and if you finish early.

http://www.mitre.org/publications/systems-engineering-guide/acquisition-systems-engineering/risk-management/risk-identification

In risk management, the first thing we have to ask ourselves is, “What is a risk?”  A risk is the “effect of uncertainty on objectives,” and an effect is a positive or negative deviation from what is expected.  Most organizations focus on the negative effects of risk, which is what we are truly attempting to mitigate; however, risk by definition can also have a positive effect.  I have tried several times to enter a positive risk into our risk databases for projects I was working on; usually they were discarded because a positive outcome wasn’t seen as a risk at all.  Now I will admit that some of the “positive risks” I have envisioned involved unknown magical fairies breaking in at night and completing my work ahead of schedule, but some of them were actually possible and could have been leveraged to our advantage.

http://www.praxiom.com/iso-31000-terms.htm

Murphy’s Law is real and applies more often than not in risk management.  There have been more times than I like to admit when a risk that seemed inconsequential or improbable later turned out to be a real issue, costing more time to correct than I would have ever imagined.  I believe that you get better at identifying risks with experience – You only have to touch a hot stove so many times before you know to watch out for it.  I have found it helpful to hold “risk storming” sessions where several engineers talk a project through and look for the “gotchas;” It is truly surprising what a fresh set of eyes and different perspectives can see.

http://www.murphys-laws.com/murphy/murphy-laws.html

Saturday, January 25, 2014

Emerging Trends in Information Security Models

Back when I started in IT, security was considered an edge issue; the network team was responsible for the network firewall and email gateway.  Most of our files were on an internal file server or, more likely, stored on the local disk of the user’s stand-alone workstation.  Network and internet connectivity was extremely limited and was used mostly as a path for email and to download the latest patches.  Our security model was pretty simple - protect the edge and keep the users’ workstations and files private.  This did leave a lot to be desired, but the real risk in this scenario wasn’t attacks but failures.  Usually there was very little in the way of centralized backups, management, or control, so when a workstation failed, data was lost unless users had backed up their files to a removable disk.

As network technology improved, we started using more centralized services.  Domains were created to control user access, and file servers were used to store user files in a more secure and reliable location.  Centralized, managed backups were used to provide data integrity and reliability.  Our security model became more complex, addressing ways and requirements to keep user files separate and prevent users from seeing other users’ files.  We not only had to protect our edge, but we now also had to secure our internal environment to prevent unauthorized file access and disclosure of private information.  Internet connectivity became more robust and available at a level that could support more than just basic email traffic and patching, and users began to use the internet as part of their work functions as well as for personal use.  We now needed to monitor and control internet traffic and content, making our security model much more complex.

http://www.ebizq.net/topics/service_security/features/11428.html?page=1

Now we are seeing the move to the “Cloud” - Internal office products are giving way to cloud-based products such as Office 365, email servers are moving out to hosted solutions, and applications are moving online more and more.  Databases are also being moved to hosted commercial solutions.  Every part of the enterprise is becoming more integrated with the internet, and the local computer is becoming more of a portal than a workstation.  This presents even more complex issues for the security model.  How do we secure data that is not located within our physical environment?  How do we limit internet usage and content and still allow the needed services?  How do we protect against phishing attacks, viruses, and hackers?  How do we secure the connections with these service providers?  We now need to provide more reliable, secure external connectivity that will allow thousands of enterprise users to connect to their applications hosted in the cloud.  With technology evolving this quickly, information security models must adapt and change just as rapidly to address these questions and issues.

http://www.cioinsight.com/security/slideshows/mobile-and-cloud-computing-face-emerging-threats.html

Tuesday, January 14, 2014

Security Education, Training, and Awareness

Most organizations, I believe, put more effort into security policies and compliance with those policies than into actual security.  There is a belief that the more “security-type” settings we enforce on systems, the more secure they will be.  The security template I currently use runs over 250 pages of settings that are required to be applied to the base OS of each server we deploy.  Most of the time, these settings are followed blindly because they are required, and nobody really knows what most of them do anyway.  The end result of this policy is a loss of functionality for our end users, along with confusion for our admins and security administrators.

http://iase.disa.mil/stigs/os/windows/2008r2.html

While we do have security personnel who are very good at their jobs and who, in reality, do secure our networks well, we could have a much better overall security stance if more in-depth education, training, and awareness programs were provided to admins and users.

We currently have security training; however, it is always something like, “Don’t click unknown links.”  As an Admin, I often face questions from users about why certain websites, software programs, or behaviors are not allowed – usually things that are trivial on their home PCs.  A lot of the time the answer is, “Because the policy disallows it;” I honestly don’t have any idea why the latest version of some software is not allowed on our network, and yes, I know it works great at home.

https://www.sans.org/reading-room/whitepapers/awareness/security-awareness-training-privacy-394

Unfortunately, I believe this creates an “us against them” attitude for everyone.  As an Admin, I really would like to know there was some reason for a policy other than a ruling from on high (by someone who may or may not have ever actually seen a computer).  And as a user, tell me why it is almost impossible to perform some tasks at work that are commonplace elsewhere.  Just like kids who are told “No,” our first question is almost always, “Why?”

http://www.symantec.com/connect/blogs/awareness-education-and-training

If there were more security awareness (exposure), education (study and testing), and training (hands-on) – both upwards and downwards – a better understanding of our security policy could be achieved and a more secure environment would result.  As an Admin, I would know why I was applying a particular setting and why I shouldn’t disregard it when it was inconvenient.  As a user, I would have a better understanding of why I can’t use the latest desktop widget – even though it might save me time and effort – and would be less likely to try to circumvent the system.  Communication is critical if you want everyone involved and on board with security initiatives.

http://www.sans.org/reading-room/whitepapers/awareness/developing-integrated-security-training-awareness-education-program-1160?show=developing-integrated-security-training-awareness-education-program-1160&cat=awareness

Saturday, January 11, 2014

Information Security Policy Standards and Guidelines

The need for good solid security policies, standards, and guidelines is fairly obvious - Without a framework in place, there can be no cohesive security in an enterprise. However, as I have mentioned before, there is the need to stay flexible and allow for changes and advancements in technology and business requirements.

http://searchsecurity.techtarget.com/feature/Information-security-policies-Distinct-from-guidelines-and-standards

That being said, the term “flexible” just begs to be abused.  Just because a policy can be changed doesn’t mean it should or needs to be changed.  We need to avoid policy changes based on knee-jerk reactions – e.g., every time a news article or report appears about a large business getting hacked, I have to add three or four more characters to my password.

http://www.post-gazette.com/businessnews/2012/08/30/Password-length-is-more-beneficial-than-complexity/stories/201208300277
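
The length-versus-complexity point from the article above is easy to sanity-check with a rough entropy estimate.  The sketch below uses the standard approximation of entropy as length multiplied by log2 of the character-set size – a crude model that ignores human password habits – but it shows why a few extra characters buy more than another symbol class does.

    import math

    def entropy_bits(length, charset_size):
        """Rough strength estimate: bits of entropy for a randomly chosen password."""
        return length * math.log2(charset_size)

    # 8 characters drawn from upper + lower + digits + symbols (~94 printable characters)
    complex_short = entropy_bits(8, 94)    # ~52 bits
    # 12 characters drawn from lowercase letters only
    simple_long = entropy_bits(12, 26)     # ~56 bits
    # 16-character lowercase passphrase
    passphrase = entropy_bits(16, 26)      # ~75 bits

    for label, bits in [("8 chars, full complexity", complex_short),
                        ("12 chars, lowercase only", simple_long),
                        ("16 chars, lowercase only", passphrase)]:
        print(f"{label}: {bits:.0f} bits")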

I feel like I’m beating this point to death, but a balance between security requirements – policies, standards, and guidelines – and user/business requirements must be achieved.  How much more secure are you really when most of your users have their username and password written down and stashed under their keyboards because you have forced an overly long and complex password requirement?

Users will always try to circumvent a policy or a system that either makes their jobs more difficult or prevents them from doing things the way they have always done them, creating a security nightmare. If a poorly planned policy actually prohibits users from efficiently doing their jobs, thereby forcing them to avoid or go around the requirement, then a policy or systems review is necessary to allow normal user activity in a secure fashion.

http://infosecisland.com/blogview/14329-Security-Stupid-Is-As-Stupid-Does.html

Policies and systems need to be reviewed periodically to determine if they are still relevant; as technology advances and changes, we need to adapt our security policies to fit the new needs and requirements.  Technologies like biometrics and single sign-on can go a long way toward creating a more secure authentication step than a 27-character alphanumeric password with special characters.  Technologies need to be put into place that allow a secure environment with the least amount of burden on your users.  If they don’t notice it, they won’t try to break it.  Not all changes can be implemented invisibly, but if we try to envision proposed changes from the viewpoint of the users, we can certainly try to make them as painless as possible.  In the end, we will experience less pushback from users and an overall higher level of security in our environment.

Friday, January 3, 2014

Incident Response and Disaster Planning

Over the last few months I have been involved with a lot of discussions about Disaster Recovery versus Disaster Avoidance. I am surprised that I keep hearing the misconception that if we employ disaster avoidance, we no longer need disaster recovery plans or procedures. I can understand this misconception to a certain point… If I have my data and servers spread across multiple locations and datacenters, why would I need to have separate backups? I will just restore from another datacenter, right?

I believe this comes from the old mentality of hot-site / warm-site disaster recovery, where data is replicated from a primary site to an offsite location stocked with varying levels of equipment for restoring critical systems.  We are now seeing more Active-Active disaster avoidance scenarios, where the data is replicated between multiple hot sites.  This allows a company to keep every site active and actually use all the equipment it is purchasing.

http://www.vclouds.nl/2012/04/16/understanding-stretched-clustering-and-disaster-avoidance/

But there is still a real need for disaster recovery plans and procedures, as well as incident response plans and procedures.  Data corruption and loss are still very real and painful, so a solid, tested backup solution is still necessary.  Data spills still happen, people still delete the wrong files, and equipment still fails.  The better your documentation is, the less painful an incident will be.  Equipment and technology can only take us so far - There is still the human factor to consider, and humans make mistakes.

About five years ago, I was working on a customer’s virtual environment and was asked to delete a server that was no longer needed. The environment was replicated over four different sites globally, backups were done nightly, and all datacenters were hot sites with failover capacity for the alternate sites. Pretty much bullet-proof - except that I deleted the wrong server. I had little knowledge of their procedures as I was just onsite performing some maintenance. Luckily, I was working with one of the company engineers who was able to pull an incident response plan to have the server restored from backup. It contained contact information, the proper procedures on who to notify, what customers it affected, and so on. We were able to have the system restored and operational again in less than an hour. Had I been alone to guess at it, it would have taken considerably longer.

Proper documentation with defined incident response procedures, as well as a comprehensive disaster recovery plan and policy, will make your life much easier and can ultimately save your bacon when failures occur.

http://www.7x24exchangedelval.org/pdf/What_to_protect_against_DA_Vs_DR.pdf