Monday, December 23, 2013

Vuurmuur and Centos 6.5

Today I got to play around with my CentOS box and decided to install Vuurmuur.  The install on CentOS is pretty easy.

Basically, go to the site and download the tarball.  Follow the steps located here.  Once you think you have everything ready to go, make sure to start Vuurmuur before you run the config.  I know this seems intuitive enough, but I was under the impression that I would configure it before I ran it.  Oh well.

The last piece would be installing conntrack-tools.  I did not have this in any repos that I had configured, so I decided to build it from scratch.  Note that the latest version of conntrack-tools requires dependency versions newer than those included in the base repos (such as libnfnetlink).  You are better off just building everything from scratch.  The default install location for the dependencies is /usr/local/lib.  pkg-config will not find this location by default, so I ended up using the PKG_CONFIG_PATH environment variable to point it at the .pc files.  Conveniently, they are located in /usr/local/lib/pkgconfig.
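For reference, the build sequence ends up looking roughly like this (tarball names and the dependency list will vary with the versions you grab; this is just the shape of it):

# build and install the dependencies first (libnfnetlink, libmnl, libnetfilter_conntrack, ...)
tar xzf libnfnetlink-*.tar.gz && cd libnfnetlink-*
./configure && make && make install      # lands in /usr/local/lib by default
cd ..

# point pkg-config at the freshly installed .pc files before configuring conntrack-tools
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
tar xzf conntrack-tools-*.tar.gz && cd conntrack-tools-*
./configure && make && make install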

I think the end goal here is to install suricata and plug it into Vuurmuur.  I do want to spend some time playing around with the base features, however.

So far I have built a few rules (SNAT and such) and played around with the logging and connection list features.  They seem easy enough to use and quite powerful, actually.  Vuurmuur also seems to have some built-in anti-spoof protection, among other things.  It is interesting to do an iptables --list and check out what Vuurmuur has done to it!
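If you want to poke at what it sets up, a couple of commands along these lines show the chains and NAT rules Vuurmuur generates:

iptables -L -n -v --line-numbers     # filter table: anti-spoof and rule chains, with counters
iptables -t nat -L -n -v             # nat table: the SNAT rules show up here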


Saturday, December 14, 2013

Troll Websites: The Insurance Salesmen of the IT Business

It is that time of year again: people are looking to spend their hard-earned money on the latest and greatest technology.  When I need to compare products, I, as I assume most people do these days, turn to Google and plug in a "product x vs product y" search.  Some of the search results yield technical reviews of the products, comparing the specific details against each other.  Others are people reviewing products they have purchased.  You can find many "unboxing" YouTube videos that provide an awesome amount of information.  In the end, I make a decision for myself based on the information I have.  It isn't the "best decision" and might not even be the best for me, but it is at least an informed decision.

Unfortunately, embedded in these search results are troll websites.  The insurance salesmen of the IT business.  They are generally easy to spot for the seasoned IT professional, but I'm sure the masses have a hard time distinguishing them.  One of the articles that prompted me to write this post is http://www.werockyourweb.com/best-tablet-for-kids-reading-students-work-gaming

Tell Tale Signs of a Troll Website

  • They are obsessed with "The Best"
    • I want to stress this point a little.  There is no "best" product on the market.  The market offers many different products which satisfy many different use cases.  There are tons of different people in the world who use the technology they buy for different tasks.  There is no "Best".  To quote one website in particular, you could say that the Surface 2 is the "best tablet for work".  Maybe, if you need Office to do your job.  Maybe it is, if your requirements are that you run Windows and need to run Windows apps.  What happens if all of your stuff is done in the cloud?  Google Docs?  There are many ways to do things, and the Surface 2 has its use cases.  If there were a single "best product" on the market, everyone would buy it.  The fact is, there isn't.  So stop advertising as such.
  • The author has no credentials whatsoever
    • Listen, we are talking about technology here.  I would expect the author to have some credentials in this area, be it experience or otherwise.  Some people have been reviewing technology for a long time, and I would probably trust their advice.  Others have no credentials whatsoever and are basically just regurgitating information that was either googled or paid for.  You wouldn't want me blogging about medicine or the law, would you?  No, I have little knowledge in those areas and a bunch of Google searches isn't going to change that.
  • They do not list the comparisons that they did
    • Many websites at least list the different products they were comparing in their review.  The website in question states: "We thought we would help our readers out by researching each tablet on the market and determining which tablet is best, depending on the user."  Umm, each and every one?  I doubt it, but if you did, could you provide a list?
  • They claim to "remove the geek speak".
    • Hey, I am all for this.  But misleading the people visiting your websites is just wrong.  There has to be a better way.
  • They do not cite any of their findings
    • The entire article is a big "citation please".  At the very least, the site could include some references to articles they used as research.  The best is the A7 chip.  According to the site in question, the "A7 chip provides up to two times faster CPU".  Compared to what?  The A6?  An iPhone 3?  It certainly isn't two times faster than the Galaxy Note 10.1 or the Asus Transformer.  I was going to go through the entire article and debunk all of their "pros" for the Air, but I figured that would just be a waste of time.  If you are reading THIS post, you m

Now before I continue, I want you to understand that I have no beef with this website in particular.  My issue is that we as a community have dropped down to this level.  It isn't just this website.  Many blogs/comments/articles basically disintegrate into a flame war of technology vs technology rather than trying to understand and embrace the differences between the products on the market.  We are all concerned with reassuring ourselves that the product we purchased is the "best" and that the others are inferior.  Lame.

I wonder what we as an industry can do about this problem.  Are there ways that we can fight back against these insurance salesmen?  Other than calling them out and generating awareness, what else can we do?




As an aside, I called out this particular website for receiving free iPad Airs in exchange for their useless reviews.

Their response: 

+Shamir C, no free iPads over here. Just our honest review after extensive research. But, we'd love to know if you thought different from what our review states. For instance, we said that the iPad Air is the  'Best Tablet for Gaming' would you consider that a fair choice? or would you place another tablet in its place as the best for gaming? & why? 

Their Policy:

This article may contain links and/or phone numbers to merchants (affiliate links), and we may receive compensation if you purchase a product or service from these merchants. Our credibility is very important to us, which is why we research and write our articles before inserting any affiliate links. For more information, please read our compensation disclosure notice.

I think they go light on the enforcement of the "research" in their policy, but probably heavy on the compensation they receive from merchants.




Thursday, November 7, 2013

MVC4: Remove unnecessary headers

For some reason, Microsoft by default loves to advertise that you are running their products.  This is probably so that they can skew the webserver stats in their favour.

Even so, it is a good idea to hide these headers, or use them to provide misleading information.

In an MVC application, there are generally three headers you are going to want to target.

The first one is the server header.  This one is IIS-specific.  Unfortunately, MS has not provided an easy way to change this header.  The two options you have are to use a WAF that will mask it for you, or to change it in code.  The code option involves creating an HTTP module and adding it to the pipeline in the appropriate place.


    // IHttpModule that overwrites the IIS "Server" response header just before headers are sent
    public class CustomServerHeaderModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            // Hook in right before IIS flushes the response headers to the client
            context.PreSendRequestHeaders += OnPreSendRequestHeaders;
        }

        private void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            // Replace the real server banner with a misleading one
            // (note: Response.Headers requires the IIS integrated pipeline)
            HttpContext.Current.Response.Headers.Set("Server", "Jetty(6.0.x)");
        }

        public void Dispose()
        {
            // Nothing to clean up
        }
    }

In the above case, I am setting the server header similar to that of Jetty.  Why?  Well, why not?
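One note on the plumbing: the module still has to be registered with the pipeline.  With the IIS integrated pipeline that is typically a web.config entry roughly like the following (the "MyApp" namespace/assembly is just a placeholder for wherever you put the class):

<system.webServer>
  <modules>
    <!-- "MyApp" is a placeholder namespace/assembly name -->
    <add name="CustomServerHeaderModule" type="MyApp.CustomServerHeaderModule, MyApp" />
  </modules>
</system.webServer>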

The second one you will want to target is the x-powered-by header.  I mean, who cares what powers your site.  Oh wait, an attacker does.  In any event, this is a custom header that is set in one of the .config files that IIS reads.  You can override this by adding a clear tag.


<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- clears inherited custom headers, including X-Powered-By -->
      <clear />
    </customHeaders>
  </httpProtocol>
</system.webServer>

The last one is the X-AspNetMvc-Version header.  Once again, I'm not sure why you would want to advertise this to anyone.  Luckily, this one can easily be disabled in code.  In the Application_Start of your Global.asax, simply add the following line.


MvcHandler.DisableMvcResponseHeader = true;
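For context, that ends up looking something like this in Global.asax.cs (just a sketch; the rest of Application_Start stays as it was):

protected void Application_Start()
{
    // Stop MVC from emitting the X-AspNetMvc-Version header
    MvcHandler.DisableMvcResponseHeader = true;

    // ... existing area/route/bundle registrations go here as usual
}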

Trying to minimize the amount of information leaked by your application is always a good thing.

MVC4 Cookie Obfuscation

When building an MVC4 application with authentication, there are two cookie values that will generally be issued by your app.

The first one has to do with forms authentication and allows the MVC framework to determine if a user is authenticated (useful for the authorize attribute, etc).  This one is, by default, set to .ASPXAUTH.

The second is used to store session state.  This is actually an IIS setting (although it can be controlled via the web config).  The default is ASP.NET_SessionId. 

When conducting a review of a site, looking at the default cookie names to get an idea of the underlying technology is one of the first things one would do.  It is a good idea for an Internet-facing production site to change these defaults.  Luckily, this is quite easy to do.

In order to change the forms authentication cookie name, simply add the name attribute to the forms tag in your authentication section.  For example, you could add name="myauthcookie".
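As a rough sketch (the loginUrl/timeout values are just typical defaults, and "myauthcookie" is obviously an arbitrary name), the forms tag ends up looking something like:

<system.web>
  <authentication mode="Forms">
    <!-- "name" overrides the default .ASPXAUTH cookie name -->
    <forms loginUrl="~/Account/Login" timeout="2880" name="myauthcookie" />
  </authentication>
</system.web>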

See this link for more info.

In order to change the session state one, you can add a sessionState tag to your system.web configuration section.  In this case, you will use the cookieName attribute, and you can try to emulate some other webserver's default.  For example, cookieName=".jsessionid".
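A minimal sketch of that (the cookie name is whatever you want to mimic):

<system.web>
  <!-- overrides the default ASP.NET_SessionId cookie name -->
  <sessionState cookieName=".jsessionid" />
</system.web>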

See this link for more info.

Tuesday, November 5, 2013

Installing NTOP NG on CentOS 6.4

With my Linux router in place between my cable modem and the safe@office, I'm ready to start playing around with some network IDS/IPS/visualization tools.  The first one is ntop.

For the most part, I followed this link.

NTOP has created a couple of YUM repos that store most of the binaries/etc you will need to run ntop on CentOS.  This makes it pretty easy to install.
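Once the ntop repo files are dropped into /etc/yum.repos.d, the install itself is pretty much a one-liner (package names from memory; double-check against what the repo actually provides):

yum install ntopng ntopng-data       # pulls in redis and the other dependencies
chkconfig redis on && service redis start
chkconfig ntopng on && service ntopng start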

Here is what my ntopng config looks like:


-G=/var/tmp/ntopng.gid
-i eth1
--data-dir /var/ntop
--local-networks 192.168.10.0/24,192.168.12.0/24,192.168.252.0/24


In my case, my inside interface is eth1.  Cable can be quite noisy, so I'd rather monitor the inside interface than the outside one.  The local-networks option just tells ntop what to consider local and what not to.  Make sure the data-dir is writable by the user that ntop switches to after startup (usually nobody).
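If the directory doesn't already exist, something like this sorts it out (assuming ntopng does drop privileges to nobody, as mentioned):

mkdir -p /var/ntop
chown nobody:nobody /var/ntop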

Other than that, have fun looking at the flows.  I've noticed that ntop is only taking up about 30 MB of RAM.  Nice!

Saturday, November 2, 2013

Centos 6.3 and static routes

I recently added a CentOS box inline between my cable modem and my Check Point safe@office.  One of the things that I want to be able to do is run ntop.  Unfortunately, my safe@office was set up to NAT my wireless and wired connections behind it.  According to ntop, all I had was one host.

Disabling NAT on the safe@office is easy, except that, by default, wireless clients and wired clients connect on different subnets.  I could probably disable DHCP on the safe@office and have everything run via the CentOS box... but that is a project for another day.  For right now, I need to configure some static routes so that my CentOS box will properly relay traffic.

Standard route commands work, and are a great way for getting a configuration up and running.

For example:


ip route add 192.168.111.0/24 via 192.168.11.2 dev eth1


In the above example, I am adding a route for the 192.168.111.0/24 network and instructing all traffic to be sent to 192.168.11.2 via eth1.  Pretty sweet.  The problem is that adding routes like this does not persist the configuration after reboot.

I tried to edit the appropriate files as found in this doc, but I found that the configuration did not persist.  I may have messed up somewhere, but I didn't get that method to work.
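For reference, the per-interface route file method usually boils down to a file like the following; this is the approach I couldn't get to stick:

# /etc/sysconfig/network-scripts/route-eth1
192.168.111.0/24 via 192.168.11.2 dev eth1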

I decided to go dirty and add the commands directly to rc.local.
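In practice that just means appending the same ip route commands to rc.local so they run at boot:

# /etc/rc.d/rc.local (runs at the end of boot)
ip route add 192.168.111.0/24 via 192.168.11.2 dev eth1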

Thursday, September 12, 2013

CCSK Study: Domain 9 - Incident Response

Notes
  • Self-Service nature of cloud may make CSPs unwilling to cooperate to the extent required during an IR
  • Dynamic pooling may complicate the IR process when it comes to things like forensics
  • Resource pooling may cause privacy issues (for other cloud tenants)
    • Technical challenge that should be addressed primarily by the provider
  • Data may cross jurisdictional boundaries which can complicate the IR process
    • Tasks may be prescribed by legislation
    • Tasks may be prohibited by legislation
    • Need legal staff on the IR team
  • Eradication and recovery steps may be sped up due to the nature of how systems are provisioned in the cloud
  • Some investigations can be sped up
    • VMs can be moved to local resources for analysis
    • VMs can be paused to preserve memory
  • Different cloud architectures pose different problems to IR teams/process
    • Consider the visibility and control a customer has in each cloud scenario (IaaS,PaaS,SaaS)
    • Even in IaaS there are various aspects of the infrastructure that the customer does not control
      • How do you get logs/data from those sources
  • IR LifeCycle (NIST Version)
    • Preparation
      • Most important phase (regardless of cloud or not, really)
      • Make sure both the physical and logical data flows are mapped out
      • Establish SLA with CSP
        • points of contact
          • and HOW to contact (out-of-band, etc)
        • incident definitions and criteria
        • CSPs support (available event data, notifications, etc)
        • definitions of roles and responsibilities
        • IR Testing (if allowed)
        • Scope of post-mortem activities
      • Agree on format of data exchange
        • IODEF
        • RID
    • Detection and Analysis
      • timely detection depends on availability of the data and the ability to correctly interpret that data
      • data may come from non-transparent, provider-owned infrastructure
      • Key point:  Customers must make sure they have access to the relevant data
        • What information should be logged
        • How are the logs stored (tamper-proof), and how can they be shown to be consistent and complete?
        • Logs should take into account the dynamic nature of the logged information
          • Is the time on the servers correct?
          • Are you getting logs from all components and can you put that together properly?
        •  Log retention settings?
        • What log format is being used?
          • CSS quoted but has since been abandoned
      • SLA should require CSP to provide notification of any breach detection of provider-hosted infrastructure / services
      • Forensic capabilities
        • Still an area under research (especially for SaaS and PaaS)
        • Customers should try and pick vendors that have forensic capabilities built in
    • Containment, Eradication, Recovery
      • Depending on deployment scenario, some tasks (such as eradication, recovery) can be made easy
        • IaaS: shutdown node, restore from snapshot, etc
      • SLA should include a "lessons learned" activity after the recovery
  • Recommendations
    • A clear understanding of how a CSP defines events vs incidents
    • CSP and customer should agree on communication channels
    • Customers must understand a CSPs role/support for IR
    • IaaS customers should leverage virtualization offerings for forensic analysis and IR
    • CSP should be included in the design phase for an IR
  • Requirements
    • eIRP should include the approach for detecting and handling incidents involving a CSP
    • SLA must guarantee support for incident handling
    • Yearly testing

Summary

I think this domain touches on some key points.  Although, as a customer, you want to utilize a CSP's resources, you must understand what impact that will have on your ability to respond to a breach.  One interesting situation would involve the policy to "disclose" that a breach has occurred.  A customer and CSP may disagree on the best way to handle this, and this could cause an embarrassment for one party.  This domain stresses that the SLA should be well defined and discussed, and I think this is an important step that is missed in most conversations.

Capturing data from sources in the cloud can pose another problem.  One must consider the costs of first collecting the data (processing, storage, memory) and then the cost of transmitting that data to the corporate office for analysis.  What if you are also doing Security-as-a-Service?  The more abstract you get (PaaS, SaaS), the more difficult this task becomes, as now you are relying on the logs the provider has given you access to.

As stated in the domain, the technical solutions to the technical problems are still up in the air.  A balance needs to be found between the cost of any given solution and the benefit received from it.

Friday, August 23, 2013

CCSK Study: Domain 8 - Data Center Operations

Notes
  • "Next Generation Data Center"
    • business intelligence 
    • understanding of all the applications running in a DC
  • Cloud Application Mission
    • The industry or application mission housed within the data center
    • HIPAA, PCI, etc
  • Data Center Dissemination
      • Cloud infrastructures that operate together but are in physically separate locations
  • different types of applications housed by data centers require automation (to varying degrees)
  • CSA Controls Matrix
    • number of physical requirements based upon different standards and regulatory requirements
  • Customers should request 3rd party audit of datacenter operations
    • ITIL and ITSM
  • New and Emerging Models
    • New CSP type services based off of SETI@home
    • cloud is increasingly being viewed as a commodity or as a utility
  • Recommendations
    • organizations building cloud data centers should incorporate management processes, practices, and software to understand and react to technology running inside the data center
    • customers should ensure CSPs have adopted service management processes and practices
    • understand the mission of what is running
      • consider the Cloud Control Matrix
Summary

I'm not entirely sure how to take this domain.  I think it is geared more to CSPs than customers.  Basically, it is saying that you should run your datacenter with appropriate policies and procedures, ideally following an ITSM framework such as ITIL.  Furthermore, you should use things like automation to ensure you deliver services to your customers in an agile way.  Customers may want to check to see what processes the CSP is following and should request an independent audit.

Wednesday, August 14, 2013

CCSK Study - Domain 7: Traditional security, Business Continuity, & Disaster Recovery

Notes
  • Traditional Security
    • the measures taken to ensure the safety and material existence of data and personnel against theft, espionage, sabotage, or harm
  • Physical protection is the initial step
    • can render all logical controls ineffective if implemented incorrectly
  • Security programs flow from well-developed series of risk assessments, vulnerability analysis, bcp/dr policies, processes, and procedures
    • reviewed on a regular basis
  •  Cloud service providers need to be tested regularly
    • Use industry-standard guidelines such as TOGAF, SABSA, ITIL, COSO, or COBIT
  • Establishing a physical security function
    • Responsibility should be assigned to a manager
      • Should be high-up (have power / bite)
      • personnel should be trained and constantly evaluated
    • As with general security, adopt a layered approach
      • include both active and passive defence
      • 4D's (detect, deter, delay, deny)
    • Several forms of design
      • Environmental design 
      • Mechanical, electronic, procedural controls
      • detection, response, and recovery procedures
      • personnel identification, authentication, access control
      • policies and procedures, training
      • Many of the above are similar to what you would take in the virtual world... (it is my opinion that too many security systems were designed based on physical parameters and that is why they are somewhat easy to bypass)
  • Evaluating CSP traditional physical security setup
    • There may be limits in what you can do and you should balance how much of this is done with the risk of the data being stored in the environment
    • Location
      • Do an analysis on the location of the primary/secondary data centers
        • Consider things such as seismic zones and flood plains
        • Also consider human factors (political landscape, crime, etc)
    • Documentation Review
      • Review all the documentation that you would have had to do yourself if this project was in house
        • Risk analysis, risk assessments, BCP Plans, DR Plans, Physical and environmental policies, user termination policies, contingency plans and tests, .... (lots more)
        • Essentially, because this company will be handling your data/services/applications you want to make sure their policies match or exceed your own
        • Eg:  Do they do background checks on all employees?, do they have technical documents of their environment? etc (there is a large list in the csa document)
        • Things to check
          • Are they up to date?
          • Are the policies distributed to employees and accessible by them?
          • Do they do training on their policies?
    • Compliance with Security Standards
      • ensure compliance with global security standards (ask for confirmation)
      • Verify the compliance certificate
      • Look for verifiable evidence of resource allocation, such as budget/manpower, to the compliance program
      • verify internal audit
    • Visual Walkthrough
      • If you want to, make sure you know what you are doing.  There is a checklist here of things to look at
  • Security Infrastructure
    • Applies more when selecting a physical infrastructure provider
    • Basically, you are looking for best practices in data center setup and security
    • Checklist in this section (7.1.2) should be considered
  • Human Resource Physical Security
    • purpose is to minimize the risk of the personnel closest to the data disrupting operations and compromising the cloud
    • Consider
      • Roles and responsibilities are clearly defined
      • Background verification and screenings are done
      • Employment agreements (NDA's)
      • Employment terminations
      • Training (security, code of conduct, etc)
  • Assessing CSP Security
    • This section contains various checklists on areas to assess when selecting a CSP
    • I'm not going to list them all out, read the doc
    • Procedures
      • Basically, are their procedures documented and made available for inspection on demand
      • Things like NDAs, background checks, policies for information sharing, etc
    • Security Guard Personnel
      • Verify the instructions given to security personnel on what they should be checking, etc
    • Environmental Security
      • What protections are in place against environmental hazards (protection or detection)?
      • Maintenance plans, humidity controls, physically secure locations, impact of near-by (next-door) disasters in plans, asset control policies, methods for destroying data
  • Business Continuity
    • Provisions should be put in place should a major outage occur 
      • Financial compensation should the SLAs not be met
    • Review the existence of 
      • Emergency Response Team (ERT)
      • Crisis Management Team
      • Incident Response Team
    • Restoration Priorities
      • Discuss, incorporate, and quantify the RPO and RTO
      • Understand the information security controls needed
  • Recommendations
    • There is a lot in this section and I will go over some key points.  This is another section you will want to just read
    • Policy Recommendations
      • "Stringent security practices should prove to be cost effective and quantified by reducing risk to personnel, revenue, reputation, and shareholder value"
      • Ensure that various policies meet or exceed the tenant's current implementations
        • ie: background checks, least privilege, NDAs are enforced, etc
    • Transparency Recommendations
      • Perform an on-site visit (preferably unannounced)
      • Acquire documentation prior to visit in order to be able to conduct a mini-audit
    • Human Resources Recommendations
      • Ensure security team has industry certifications
    • Business Continuity Recommendations
      • Review BCP Plans of the CSP
    • Disaster Recovery Recommendations
      • plans should account for supplier(CSP) failure and have planned for the ability to switch providers
      • full-site, system, disk, and file recovery should be implemented via a user-driven, self-service portal
      • SLA should be properly negotiated
Summary

It is amazing how similar all of these topics are to things you would/should do in your own datacenter or organization.  These are all important points to consider, however, when migrating to the cloud.  As pointed out in the document, one must pay attention to BCP and DR issues.  There have been several notable instances where cloud service providers have "gone down" for hours at a time.  One should either protect against this via a cloud broker type tool that allows for service migration across different providers, or protect against the loss in financial terms via the SLA.

The other main point in this section is around the review of practices and documents provided by the CSP.  One of the key points here is that the CSP should be able to provide most of these documents "on-demand".  It should not come as a surprise to them that you are requesting to see their policies and procedures.  IT can be "expensive" when done properly, but that is only if you are ignoring the risk to the data and services that IT supports.  As stated in the document, when done properly, security controls and IT in general can actually mitigate risk and save the company money in the event of unforeseen circumstances.

The last point to note here is around the policies and procedures of the CSP.  Ultimately, you need to ensure they are following the same or better standards that you are following.  There has been a lot of discussion lately as to whether the cloud is "secure" or not. Some say that it is more secure than traditional IT because CSPs actually put money into the things mentioned in this document.  I think the argument is ultimately flawed.  If an IT organization was not aware of these best practices, chances are, they are not looking for it in their cloud provider... or not able to make sure that the cloud provider is doing what they say they are doing.  I guess what I am trying to say is that bad IT breeds bad IT and the problem is just worse in the cloud than it is in traditional IT that you can control.  IT organizations with strong and mature policies would probably be able to strategically use cloud resources (if they wanted to) knowing that they have processes and policies that work in-house.  They would take those lessons learned, and look for a partner (notice I didn't say CSP) that shares their same values and can provide them service at a reasonable (NOT "cheap") price.

There are quite a few good lists in this section, probably all good exam questions too.  This is going to be a section that I have to come back and review before the test.

Tuesday, August 13, 2013

CCSK Study - Domain 6: Interoperability and Portability

Notes
  • Scenarios
    • interoperability and portability allows you to scale a service across multiple disparate providers on a global scale
    • could allow the easy movement of data/applications/services from one provider to another
  • Not a unique concept to cloud
  • Interoperability
    • High level: requirement for the components of a cloud eco-system to work together
    • mandates that components be replaceable by new or different components and continue to work
      • Sorta like how management views employees ??? ;)
    • also extends to exchange of data
    • Reasons to change providers (short-list)
      • Unacceptable increase in cost
      • New provider provides more/better features
      • Provider ceases business operations
      • Provider is shutdown due to legal/disaster
      • Unacceptable decrease in service quality
      • Dispute between cloud customer and provider
    • Remember, cloud companies are also in the business of making money!
    • Lack of interoperability will lead to vendor lock-in
  • Portability
    • ease with which application components can be moved and reused elsewhere regardless of provider, platform, OS, infrastructure, location, storage, data format, or APIs
    • Generally only feasible to be able to port from cloud providers in the same "class" (eg:  IaaS to IaaS)
      • referring to the octant of the cloud cube
  • Failure to plan for I & P Can lead to unforeseen costs
    • Vendor Lock-In
    • incompatibilities across different cloud infrastructure causing disruption of service
    • unexpected application re-engineering
    • costly data migrations or data conversion
    • retrain or retooling new applications or management software
    • loss of data or application security
  • Moving services to the cloud is a form of outsourcing; the golden rule of outsourcing is "understand up-front and plan for how to exit the contract"[sic]
  • Interoperability Recommendations
    • hardware
      • do not access direct hardware if you don't have to
      • virtualize when you can
    • physical network devices
      •  try to ensure APIs have the same functionality
      •  try to use network and security abstractions
    • virtualization
      • use the open virtualization format (OVF) when possible
      • Understand and document vendor customized virtualization hooks or extensions in use
    • Frameworks
      • investigate CSP APIs and plan for changes
      • use open and published APIs
    • Storage
      • use portable formats for unstructured data
      • understand database system used for structured data and conversion requirements
      • assess the need for encryption of data in transit
    • Security
      • Use SAML or WS-Security for auth controls (more portable)
      • Encrypt data, understand how keys are used/stored
      • Ensure that log data is portable and secured
      • Ensure data can be securely deleted from the original system
  • Portability Recommendations
    • Understand SLA differences
    • Understand different architectures
      • understand portability issues which may include API, hypervisors, application logics, and other restrictions
    • Understand encryption, keys, etc
    • Remember to check for metadata
  • Recommendations for different Cloud models
    • IaaS
      • use OVF
      • document/eliminate provider-specific extensions
      • understand the de-provisioning of VM process (secure?)
      • understand the decommissioning of storage
      • understand costs involved for moving data
      • understand the process/governance of encryption keys
    • PaaS
      • use platforms with standard syntax and apis and that use open standards such as OCCI
      • understand the tools available for secure data transfer/backup
      • understand how base services such as monitoring and logging transfer to a new provider
      • understand functionality of old provider vs new (control)
      • understand impact of performance and availability 
    • SaaS
      • Perform regular data extractions and backups
      • understand what metadata can be exported
      • understand custom tools that may need to be redeveloped
      • ensure backups of logs/access records are preserved for compliance reasons
    • Private Cloud
      • ensure interoperability between hypervisors
      • use standard APIs
    • Public Cloud
      • ensure cloud providers use open/common interfaces
    • Hybrid cloud
      • ensure the ability to federate with different cloud providers to enable higher levels of scalability
Summary

I personally found this chapter hard to get through.  Portability and interoperability are fundamental tenets of any solution, in my mind.  Being a developer, you use concepts such as abstraction to make your code more modular.  Modularity leads to code being able to be ported to different environments and allows for extensions to be built to handle specific scenarios.  I think that this chapter basically echoes those fundamental tenets (over and over again).  Although it is probably better as a checklist, there are some good points that are given.  Bottom line: use open standards.  Use open technologies.  Use open APIs.  Use industry standards.  Plan for any deviations from the above.

Friday, August 9, 2013

CCSK Study - Domain 5: Information Management and Data Security

Notes
  • This domain talks about the security of data in a global sense, with some emphasis on how data is secured as it moves into the cloud
  • Data security begins with managing internal data
  • Different cloud architectures offer different storage options
    • IaaS
      • Raw Storage: basically a physical drive
      • Volume Storage: virtual hard drive
      • Object Storage:  API access that stores data as "objects".
        • Sometimes called file storage
      • Content Delivery Network: Object storage which is then distributed to multiple geographically distributed nodes
    • PaaS
      • Database-as-a-Service
      • Big-Data-as-a-Service
        • Object storage with requirements such as broad distribution, heterogeneity, and currency/timeliness
      • Application Storage
        • Any storage that is consumable via API but does not conform to the above two
      • Consumes:
        • Databases
          • Information may be stored in databases directly that run on IaaS
        • Object/File Storage
          • IaaS object storage but only accessible via PaaS APIs
        • Volume Storage
          • May use IaaS Volume Storage
    • SaaS
      • As with PaaS, wide range of storage options/consumption models
      • Information Storage and Management
        • data is simply entered into the service
        • stored in a database (typically)
        • could provide some access to PaaS APIs for mass upload type functionality
      • Content/File Storage
        • File stores are made available via web-based user interface
      • Consumption
        • Database
        • Object/File Store
        • Volume Storage
        • Key is that the services that are consumed are only accessible via the SaaS service
  • Data Dispersion
    • Technique that can be used to secure data
    • Data is divided into chunks and those chunks are then signed
    • Chunks are distributed across multiple servers
    • In order to recreate the data, an attacker must be able to target all servers that contain the chunks of data
    • Or attack the API that puts it all together?
  • Information Management
    • includes the processes and policies for both understanding how your information is used, and governing that usage
  • Data Security Lifecycle
    • Basically, we need to understand the "states" data can be in, the location where the data lives, and the functions/actors/controls in place to control data
    • 6 phases
      • Not linear; data can pass through some stages multiple times, or some stages not at all
      • Create: generation of new or modification of existing content
      • Store: Committing data to some sort of storage
      • Use: Data is viewed, processed, etc
      • Share: Information is made accessible to others
      • Archive: data enters long-term storage
      • Destroy: data is permanently destroyed
    • Location and Access
      • Data can be accessed on a variety of end-user devices that all offer different security mechanisms
      • Data can live in traditional infrastructure
      • Data can live in cloud and hosting services
      • Key Questions
        • Who is accessing the data?
        • How can they access it?
    • Functions, Actors, and Controls
      • We need to identify what actions we can conduct on a given datum
        • Access
        • Process
        • Store
      • An Actor performs each function in a location
        • person, application, system, process
      • Controls
        • put in place to restrict the list of possible actions to the list of allowed actions
  • Information Governance
    • like information management, only different
    • Includes the definition and application of
      • Information Classification
        • Does not need to be super granular to work (ie: differentiate regulated content from non-regulated content)
      • Information Management Policies
        • Defines what types of actions are allowed on a given datum
      • Location and Jurisdictional Policies
        • defines where data may be located
      • Authorizations
        • Defines who is authorized to access which types of information
      • Ownership
      • Custodianship
  • Data Security
    • This section lists out some controls to protect data
    • Detecting and Preventing Migrations into the cloud
      • Monitoring Access to internal repositories
        • DAM: Database Activity Monitoring
        • FAM: File Activity Monitoring
      • Monitoring/Prevention of Data moving into the cloud
        • URL Filtering
          • Prevent access to mass upload apis, etc
        • Data Loss Prevention
      • Placement of network based tools must be understood and planned accordingly
    • Protecting data moving to the cloud or within it
      • Client/Application Encryption
        • Data is encrypted before it is sent to the cloud
      • Link/network encryption
        • Data is encrypted in transit (SSL)
      • Proxy-Based Encryption
        • Legacy apps
        • Not recommended
        • Data is sent to a proxy-based encryption device before being sent to the cloud
    • Protecting data in the cloud
      • Step 1: Detection
        • Content Discovery
          • Need to understand the content being stored in the cloud
      • Step 2: Encryption
        • The different cloud architectures offer different encryption options.
        • Generally: Volume encryption, object encryption
        • Key management is the important issue here
          • Provider-managed keys
          • Client-managed keys
          • Proxy-Managed keys
        • Should use per-customer keys if you have to use provider managed keys
          • SaaS and PaaS may not offer protections such as passphrases on the keys
    • Data Loss Prevention
      • Many different deployment options (endpoint, hypervisor, network, etc)
      • Definition:  Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis
    • Database and File Activity Monitoring
      • duh!
    • Application Security
      • Remember, most data breaches are due to poor application security
    • Privacy Preserving storage
      • A similar concept to VPN is VPS, or virtual private storage
      • Doesn't matter if someone intercepts the data, they cannot use it / understand it
      • Certs are good, but are bound to the identity of the user
        • May violate some regulations if the authentication requestor knows the identity of the person accessing the information
        • ABCs or attribute-based credentials
          • Sorta like claims based authentication, do not need to know the user anymore, just the "rights" they have been granted
    • Digital Rights Management
      • encrypts content and then applies a series of rights
        • For example, can play, but cannot share/copy
      • Consumer DRM
        • music industry! (ugh)
        • emphasis on one way distribution
      • Enterprise DRM
        • emphasis on more complex rights, policies, and integration
    • Recommendations
      • Understand the cloud storage architecture in use
      • choose data dispersion when available
      • use the Data Security Lifecycle as a guide for building controls
      • monitor internal data repositories with DAM/FAM
      • Use DLP and URL filtering to track employee activity
      • Use content discovery
      • Encrypt data ruthlessly (my words)
        • Transit, storage layer, and if possible against viewing of the CSP
      • Remember that most data breaches are because of weak application security
Summary
This domain was a little bit more involved than the last one, but, once again, I think it focuses more on common sense than anything else.  I think the key point here is that data is hard to manage internally.  And that is okay; most corporations do not have a good way to manage that data internally, but at least the data is internal and only accessible by employees who are under contract.  Once you move to CSPs (or enable the internet...) you need to start having the right tools in place to monitor activity and usage of your data.  These include concepts such as DAM/FAM/URL filtering/DLP.  I personally think that the best solutions these days are those that allow data to enforce its own "security".  IE: the data is encrypted and a client needs to be installed to decrypt it.  The client can then enforce policy, and nobody can access the data unless the client is installed, etc.  As stated in the document, this leads to expensive infrastructure.  There are also ways around this (copy and paste, for example).  To make the problem easier to tackle, create broad generalizations for the data (regulated vs not-regulated) and go from there.  Also understand the concepts of key management.  Ultimately, when you do PaaS or SaaS, the service on the other end will need to "understand" the data in order to be able to provide you a service.  Those risks need to be weighed out during the initial cloud discussions.

Thursday, August 8, 2013

CCSK Study - Domain 4: Compliance and Audit Management

Notes
  • Corporate Governance: The balance of control between stakeholders, directors, and managers to provide consistent management, cohesive application of policies, and enable effective decision making.
  • Enterprise Risk Management: Methods and processes (frameworks) used by organizations to balance decision making based on risks and opportunities
  • Compliance and Audit Assurance: Awareness and adherence to corporate obligations
  • Audit
    • key component to any proper organizational governance strategy
    • should be conducted independently
    • should be robustly designed
    • should take into consideration the cloud
      • scale and services provided
  • Recommendations
    • Understand that audit processes change when moving to the cloud
    • Understand the contractual responsibilities of each party
    • Determine how existing compliance requirements will be impacted by the use of cloud services
      • Who does what?
    • Be careful with PII data
    • Customers and CSPs must agree on how to collect, store, and share compliance evidence
      • Select auditors that are "cloud aware"
      • request SSAE 16 SOC2 or ISAE 3402 Type 2 Report
      • Understand how audits will be conducted
  • Requirements
    • Ensure a  "right to audit" clause
      • Audit framework may be adapted to use 3rd party frameworks such as ISO, IEC, etc
    • Ensure a "right to transparency" clause
      • should include provisions for automated information such as logs, reports and pushed information such as diagrams, architectures and schematics
    • mutually selected 3rd party auditors
    • some agreement on common certification assurance framework (ISO,COBIT,etc)
Summary
I'm glad this was a short section!  I think the definitions used in this section are fairly common and apply to any organization, not just one using cloud.  The points made in this section seem fairly straightforward.  Basically, make sure the audit process takes into account the cloud.  Make sure that you have provisions in your contract that allow you to be compliant and force the CSP to do its share.  All these things should be discussed up front with the CSP, and the risks/benefits should be weighed if the contract is just a "click-wrapper" or non-negotiable.

CCSK Study - Domain 3: Legal Issues: Contracts and Electronic Discovery

Notes
  • Legal Issues
    • Many different regions and countries have numerous laws in place to protect the privacy of personal data and the security of information and computer systems.
    • Most specify terms such as "Adopt reasonable technical, physical, and administrative measures in order to protect personal data from loss, misuse, or alteration"
    • Examples
      • OECD: Organization for economic cooperation and development
      • APEC: Asia Pacific Economic Cooperation's Privacy Framework
      • European Union Data Protection Directive
    • Organizations should be aware of the laws they are subject to
      • Even contractors of corporations may be subject to certain laws
      •  HIPAA, GLBA, PCI DSS, ISO 27001, COPPA
    • May not be in the form of laws, but rather contractual obligations
    • Some laws may prohibit the export of data/information outside of the country
      • Obviously comes into play with cloud providers
    • Key point: under many of these laws, the responsibility for protecting and securing the data typically remains with the collector or custodian of the data.  Before entering into a cloud computing arrangement, a company should evaluate its own processes.  A company should, and in some cases is legally bound to, conduct due diligence (DD) on the proposed cloud service provider.
    • Companies should keep in mind that CSPs are constantly updating, and they should continually monitor, test, and update their process to reflect any changes in the CSP
      • Example: CYBEX
    • E-Discovery Issues
      • I think that although these issues were brought up during a conversation about e-discovery, they are relevant to all types of data being stored in the cloud
      • ESI: Electronically stored information
      • Possession, Custody, and Control
        • Clients are expected to turn over all data in their control (that pertains)
        • Clients do not have access to CSPs DR locations, or certain metadata that the CSP has created about a document
        • Clients should have an understanding of what data is and is not available
      • Relevant Cloud Applications and Environment
        • The cloud app may come into scope and may require a separate subpoena
      • Searchability and E-discovery Tools
        • Certain tools will not work with the cloud, or may be expensive to run
        • Client may not have rights to search all data in the cloud
      • Preservation
        • Clients need to preserve the data (using all reasonable steps)
        • What about SLA's?  What happens if the SLA expires before the preservation term?
        • Monitoring of cloud provider?
        • What about the costs of storage for preservation?
        • Can the client effectively download the data in a forensically sound manner so it can be preserved off-line / near-line?
        • How is data tagged or scoped for preservation in the cloud?  Does the cloud provider offer that granularity?
      • Collection
        • Due to CSP, collection of data may be more difficult
        • Data may only be available in batches at a time
        • Access and bandwidth restrictions?
        • SLA may restrict the speed at which data is accessed or the manner in which it is accessed
        • Cannot do bit-by-bit forensics, if required
        • Client is subject to take reasonable steps to validate that its collection from its CSP is complete and accurate
      • CSP may deny direct access to its hardware
      • CSP may be able to produce "native production" of the data but it may not be in a usable format
      • Documents should not be considered more or less admissible or credible from the cloud (provided no evidence to contradict)
      • Clients should contract in provisions that they be notified and given sufficient time to fight subpoena or search warrant
Summary
This section brings up some good points about storing data in a CSP.  Although the focus here was more on the legal end, it is important to understand that these issues around the trust of data stored, how it is stored, and how it is accessible are applicable to all types of data.  The courts obviously require some degree of validation to be done that the data can be admissible in court.  Further to that, with respect to e-discovery, the courts need some degree of assurance that all the data that should have been submitted in fact was.  A subset of these issues may be important to other types of data based on contractual obligations or corporate policies.

Monday, August 5, 2013

CCSK Study - Domain 2: Governance & Enterprise Risk Management

As stated in the title, Domain 2 focuses on the issues of Governance and Enterprise Risk Management as it relates to the cloud.

Notes

  • Corporate Governance
    • is the set of processes, technologies, customs, policies, laws and institutions affecting the way an enterprise is directed, administered, or controlled
    • 5 basic principles
      • Auditing Supply Chains
      • Board and Management Structure and Process
      • Corporate responsibility and compliance
      • Financial transparency and information disclosure
      • Ownership structure and exercise of control rights
  • Enterprise Risk Management
    • is the process of measuring, managing and mitigating uncertainty or risk
    • Multiple methods to deal with risk
      • Avoidance
      • Reduction
      • Share / Insure
      • Accept
    • General goal: maximize value in line with risk appetite and strategy
    • Many benefits to cloud computing, however
      • Customers should view cloud service providers as supply chain security issues
      • Must evaluate providers incident management, disaster recovery policies, business continuity policies...
    • Companies should adopt an established risk framework
      • should use metrics to measure risk management
        • SCAP, CYBEX, GRC-XML
      • adopt risk centric viewpoint
      • framework should account for legal perspective across different jurisdictions
  • Recommendations
    •  Reinvest the cost savings from moving to the cloud into security
      • Detailed assessments
      • Application of Security Controls
      • Risk assessments, verifying provider capabilities, etc
    • Review security controls and vendor capabilities as part of DD
      • review for sufficiency, maturity, and consistency with the user's information security management processes
    • Ensure governance processes and structures are agreed upon by both the tenant and provider
    • Security departments should be engaged as part of the SLAs
      • Ensure that security requirements are contractually enforceable
    • Define appropriate cloud security metrics
      • Really? Do these exist?
    • Consider the effect of cloud limitations on audit policies and assessments
      • may have to change the way audit is conducted
      • remember to contract requirements in
    • Risk management should include identification and valuation of assets, identification and analysis of threats and vulnerabilities and their potential impact on assets, likelihoods of events/scenarios, and management-approved risk levels
    • Take into account vendor risk
      • business sustainability, portability of data/applications,
Summary
This section essentially defines enterprise risk management and corporate governance.  In theory, all organizations should already be doing this at some level.  I think the important point here is to make sure the enterprise is aware that moving to the cloud means a loss of control over every aspect of the technical solution.  This means, in some cases, changing the way audits or testing are done to accommodate the vendor's preferences or limitations.  Further to this, you need to pay your lawyers and ensure that all requirements you have are stipulated in some form or another in the contract.  CSPs are basically an extension of the enterprise, much in the same way outsourcing is, but they basically have full control over the data you place in their possession.  I like the point about re-investing "savings" into increased security.  Ultimately, as you lose full control over an asset, you must increase your vigilance (detection tools) to ensure that your wishes as stipulated in a contract are being followed.  You can try and hide behind a contract, saying that it was the CSP's responsibility to do something; however, in the courts you would have to prove that the CSP was negligent.  This may be harder than anticipated.

Wednesday, July 31, 2013

CCSK Study - Section 1: Cloud Architecture

I am currently studying for the CCSK and I thought I would post some of my notes here.

These notes are taken from the CSA : Security Guidance for Critical Areas of Focus in Cloud Computing V3.0.

Section 1 talks about cloud architecture and contains 1 domain which is titled Cloud Computing Architectural Framework.

The goal of Domain 1 is to establish a baseline of terminology as to facilitate the rest of the discussion around cloud security.

  • Cloud computing is defined as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, etc).
    • This is the definition from NIST 800-145
  • NIST further describes cloud computing by defining essential characteristics, cloud service models, and cloud deployment models.
  • Essential Characteristics
    • Broad Network Access
      • Capabilities are available over the network and accessed via standard protocols
    • On-Demand Self Service
      • Customer is in control to provision required services (network, storage, server) as needed
    • Resource Pooling
      • The provider's resources are pooled to provide service to multiple customers in a "multi-tenant" model
      • Provider provides a "location independence" 
        • Customer is not really aware of where the resources are being provided from
        • Customer has no control over this either (except at higher levels of abstraction)
      • Security Impact: visibility or trace of operations by other users or tenants
    • Rapid Elasticity
      • Capabilities can scale with demand (inward and outward)
      • Capabilities appear to the customer as "unlimited"
    • Measured Service
      • Cloud providers leverage a metering capability whereby usage can be monitored, controlled, and reported
  • Service Models
    • Service models build upon each other (ie:  PaaS builds upon IaaS and SaaS builds upon PaaS)
    • IaaS - Infrastructure as a service
      • Provides a set of API that allow management by consumers
    • PaaS - Platform as a service
      • Provides Integration and middleware services
      • Databases, messaging, queuing and development frameworks
    • SaaS - Software as a service
      • self-contained operating environment that is used to deliver the entire user experience including content, presentation, applications and management
    • Represents a tradeoff of security aspects between provider and tenant
    • "It should be clear in all cases that one can assign/transfer responsibility but not necessarily accountability"
  •  Deployment Models
    • Private Cloud
      • Cloud is provisioned for exclusive use by a single organization
      • Can be on or off premises
      • Can be serviced by a 3rd party
    • Community Cloud
      • Subset of Public cloud
      • Services are available to a specific community of consumers from organizations that have shared concerns
    • Public Cloud
      • Provisioned for use by the general public
    • Hybrid Cloud
      • Composition of two or more distinct cloud infrastructures
  •  Re-perimeterization of the network
    • Basically, trust boundaries are changing and so should the discussion and terms used
    • We cannot refer to services as "internal vs. external" anymore; for example, a private cloud offering could be considered internal because it is commissioned for a single customer, yet it could sit outside the traditional demarcation points.
    • Risk conversation now has to include
      • Types of assets that are being managed
      • Who manages them and how
      • Who consumes them
      • Which controls are selected and how they are integrated
      • Compliance issues
  •  Gap analysis for security controls becomes a collaborative effort
    • Basically, we need to rely on the cloud provider to accurately and transparently disclose the security controls in place (and to provide access to their output), and the organization must trust that it knows which security controls are required based on the compliance model it has chosen.
    • The ability to comply with any requirement is a direct result of the services and deployment model used and the design, deployment and management of the resources in scope.
  • Security controls are no different in the cloud than they are in traditional IT departments
    • maturity of posture defined by the completeness of the risk-adjusted security controls implemented
      • layered approach
      • Controls should be implemented at the people and process level as well as the technical level
Summary

My understanding is that the goal of this domain was to provide some basic definitions of cloud computing and to describe some global aspects and problems.  The NIST definition is pretty good but, as described in the recommendations section of this domain, it does not cover "Cloud Service Brokers".  CSBs seem to be a way of providing a unified model for security, governance, portability, etc. across a number of CSPs.  It will be interesting to see how this all takes shape.  The cloud presents the same problems as traditional IT, except that it doesn't all reside under your control.  The main point here is that while you can assign responsibility, you as the customer are still accountable for the security of the whole solution.  Another point is that many of the security controls you would typically put in place must now be placed "in a contract", and that contract must have sufficient provisions to give you access to, and transparency into, those controls.

Friday, July 19, 2013

Notes for a BitTorrent Sync Setup

I was looking into a few "self-hosted" cloud services and finally settled on btsync the other day.  I had taken a look at the Synology DiskStation, as well as Seafile / ownCloud.  Ultimately I think the fact that btsync has security built in from the ground up is why I chose it.  I don't need a ton of advanced features, but I don't want just anyone looking at my data.  I do, however, need something that runs on Windows, Linux, Synology, and Android.  BTSync meets all of that.


Btsync is now at version 1.1.42 and with every version it is getting better and better.  I just recently had a chance to test out the beta app for Android and it worked great!


A few notes to consider when using btsync.


1)  Use separate accounts to run the executable

I think this is important.  If you run btsync as yourself, you are giving it the same access that you have.  In reality, the btsync application only needs access to the folders it is syncing.  Ideally, you can create "service accounts" for this, but keep in mind that by default btsync tries to create .sync files in home directories.  Not a big deal; this is all configurable (at least on Linux).


I set up a btsync account on both my Linux and Windows boxes and then gave it access to one folder to sync.  This way, if the executable ever gets cracked, it won't (by default) have access to all of my user settings, etc.
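For reference, this is roughly what that looks like on Linux.  It is just a sketch under my own assumptions: the account name, paths, and the location of the btsync binary are examples, so adjust them to wherever you unpacked btsync.

# create a locked-down service account (name and paths are examples)
sudo useradd --system --create-home --shell /sbin/nologin btsync

# generate a sample config owned by that account, then edit it to taste
sudo -u btsync sh -c '/usr/local/bin/btsync --dump-sample-config > /home/btsync/sync.conf'

# run btsync as the service account using that config
sudo -u btsync /usr/local/bin/btsync --config /home/btsync/sync.conf

The important part is that the account only has write access to its own home directory and the folders it is meant to sync.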


2) TCP over LAN didn't work for me

I'm not sure if I did something wrong here, but I couldn't get my clients to connect using TCP over the LAN.  They kept defaulting to UDP regardless of the settings on both clients.  Keep this in mind if your LAN transfers aren't working as planned.

3)  Make sure you secure the webGUI

If you are using the webGUI (Linux users almost always will), be careful with it.  By default it listens on 0.0.0.0, port 8888.  The first thing you can do is ensure your firewall is on and protecting that port from the outside world.  The other thing you can do is set a basic username and password in the config file (see the user manual).
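The webui block in the config file (the one generated by --dump-sample-config) looks something like the snippet below.  The listen address and credentials are placeholders; binding to 127.0.0.1 or a LAN-only address instead of 0.0.0.0 limits exposure even further.

"webui" :
{
  "listen" : "127.0.0.1:8888",
  "login" : "pick-a-username",
  "password" : "pick-a-strong-password"
}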

4)  Monitor connections from the interweb

Remember, if you open a port on your firewall allowing the internet directly in, it is always good to have that traffic go through some type of proxy.  Ultimately this is not going to be feasible for most people, so we have to rely on the BitTorrent coders not making any buffer overflow mistakes!
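One cheap way to keep an eye on things is to have the firewall log new inbound connections to whatever port you forwarded.  A minimal iptables sketch, assuming a listening_port of 33333 (use whatever you actually set in the config, and repeat for UDP if needed):

# log new inbound connections to the example btsync port, then accept them
iptables -A INPUT -p tcp --dport 33333 -m state --state NEW -j LOG --log-prefix "btsync-in: "
iptables -A INPUT -p tcp --dport 33333 -m state --state NEW -j ACCEPT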

5)  If you can, stay away from the tracker service and the "relay service"

Try it without those features turned on.  I know in most cases, for most people, those settings are turned on because things will just work.  With the tracker service, you are broadcasting that your IP is hosting something and what port it is accessible on.  Yes, they may not have the secret, but if you don't need that service, just don't use it.  The second is the relay service.  You will need this if you are behind certain network architectures, but for the most part you should be okay without it.  This way your data is not traversing a server it doesn't have to.  I know the NSA is watching everything, but we might as well try to limit where we are sending our data.  These days, most "dynamic" IPs are fairly static, and there are also a few dynamic DNS services you could use.
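In the sample config these show up as per-folder flags.  The option names below are what I recall from the 1.1.x sample config, so double-check them against your own --dump-sample-config output; the secret and directory are placeholders.

"shared_folders" :
[
  {
    "secret" : "YOUR_SECRET_HERE",
    "dir" : "/home/btsync/sync",
    "use_relay_server" : false,
    "use_tracker" : false,
    "use_dht" : false,
    "search_lan" : true
  }
]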

6)  256-bit AES is a great choice

But I wonder how the key is derived from the secret and whether this could be figured out somehow.  According to the docs, you can substitute your own base64-encoded key that is more than 40 symbols long.  This might be easier than sharing the generated base64 secret, as you could come up with a poem line or something like that and share it with friends/family.  I do like how you can change the key at any time; they have really thought a lot of this stuff through.
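One easy way to generate such a key (my own approach, not something from the btsync docs) is to pull random bytes from openssl and base64-encode them; 33 random bytes encode to 44 base64 characters, which clears the 40-symbol minimum.

# 33 random bytes -> 44 base64 characters, comfortably over 40 symbols
openssl rand -base64 33

You would then paste the resulting string in as the folder secret instead of letting btsync generate one.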

All in all, so far I am really impressed by the product.  It works fast and is configurable to tweak in some of your own settings.  I look forward to future releases!


Friday, July 12, 2013

Technet Expiration

I must confess something: I am a technet subscriber!  Phew, at least now it is out in the open and I don't have to hide it anymore.

I'll be honest, I am not Microsoft's #1 fan.  Not by a long shot.  Whenever I can, I upgrade my machines to the Fedora/CentOS experience.  I rely as much as I can on FOSS tools to get me through my day.  I am always looking for ways to decrease my dependence on MS products.  When I am at work, I bond with the *nix guys.  We make fun of the point-and-click simplicity that is Windows.

But alas, the world runs on MS products and services.  That is why I subscribe to technet.  As a general IT enthusiast, I want to be able to play around with products.  I want to test their limits.  I want to try customizing them, to try new things, to try to integrate them with FOSS tools, and to try to get Windows and Linux to play nice together.  As a security enthusiast, I need Windows to be able to test out the latest malware and exploits.

I am saddened that the technet subscription service is expiring.  If anything, MS owes me money for the licenses that I have purchased (unavoidably) as part of my PC purchases.  I do like the idea of an affordable way for IT generalists and the like to tinker with MS products.  It is unfortunate that MS does not feel the same way.

If you like technet, please sign this petition.  I'm sure that because we don't have Premier support, we cannot expect anyone to get back to us within 4 hours, but it is worth a shot.

Monday, July 8, 2013

OpenCrowd Taxonomy

This post is more of a personal note, but I found it interesting so I thought I would share.

I found this "cloud taxonomy" while reading for my CCSK exam.  Although probably not a comprehensive list, it does outline some cool providers that may be worth checking out!

Saturday, July 6, 2013

Fedora 18 and Synology Shares Via CIFS

I finally got around to setting up my Synology shares via CIFS on my Fedora 18 box.  I tried following the instructions here, but they didn't work so well.

I kept getting the following:


mount error(5): Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

I finally stumbled upon this forum post which helped me out.

It looks like in newer versions of CIFS, the default security mode has changed from ntlm to ntlmssp.  Switching it back to ntlm worked.
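For anyone hitting the same error, the fix boils down to adding sec=ntlm to the mount options.  The server name, share, mount point, and username below are placeholders for my setup:

# one-off mount using the older ntlm security mode
mount -t cifs //diskstation/myshare /mnt/myshare -o username=myuser,sec=ntlm

# or the equivalent /etc/fstab entry, using a credentials file
//diskstation/myshare  /mnt/myshare  cifs  credentials=/root/.smbcredentials,sec=ntlm  0  0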

It will be interesting to see if the security settings on the Synology can be upgraded to support some of the stronger protocols.

Hope that helps!

Friday, March 8, 2013

SFCP Certified!

Well, I just wrote my SFCP exam and am proud to say that I am now SFCP certified for v5.1.1.

Time to tackle the world... maybe.