My buddy Sean Gowing sent me a video today regarding SCADA. I'm not much of a SCADA guy, but I'm always interested in a good security debate.
You can watch the original video here.
I wanted to share some of my thoughts on this issue. Unfortunately, the "apologists" are not limited to the SCADA world. The video was awesome, and there was good discussion from both parties. It is good to see, in this day and age, that we can still agree to disagree and do it respectfully.
In my mind, I think apologists are a bad thing, but more on that in a moment.
In the video, Eric presented a slide entitled "Welcome to the real world". I want to speak to that slide.
1) FACT: Large Industrial organizations are not agile
So? Is there actually a point here? I will give you that a lot of organizations are not agile, but are you saying that they have to be in order to be productive? Somehow these organizations have accomplished great technical feats, solved business problems, and remained profitable. All ... without ... being ... agile.
I think the point Eric is trying to make here is that large organizations move slowly. Okay, granted, but they should at least move. The point Dale is trying to make is that there has been little to no progress on the issue of SCADA security. In fact, the first audience question pointed out that the industry is capable of producing new protocols and technologies.
2) FACT: Costs and downtime matter
I think that this is a valid point. There is going to be a battle of budgets to get this problem fixed, even assuming we can define or agree on what "fixed" means. But I think the point to take away here is that this is the cost of doing it WRONG. Had it been done right in the first place, we wouldn't be in this mess.
Yes yes, I understand that SCADA systems are old, and we didn't know things way back when. But I want to step back from SCADA here for a moment.
Many systems are designed like this. They do not build in any abstractions, and this makes the system brittle and resistant to change. They do not plan for upgrades in features or functionality. They do not plan for security. This is a systemic problem. The only thing we know for sure is the cost of having done some of this work upfront will pale in comparison to the cost of fixing the problem now.
3) FACT: Money is tight
I guess you have to look at this from two aspects. The first is the "end-user", as Eric puts it. The second is the vendor. Actually, scratch that. Most of these companies make billions of dollars a year. Don't believe me? Check out the latest on Honeywell. I'm sure they can spend a little bit to improve their products.
The second point here is that Dale is clearly trying to focus the discussion on critical infrastructure. Dale brings up the point about the "cookie factory". If the CEO of the cookie company wants to accept the risk that his business could grind to a halt because of a few misplaced ping packets, more power to him. But when my water/power/heat is in jeopardy, you bet I want to be running robust systems. In some cases (most?) these utilities are handled by private companies who also make lots of money. Money is only tight because we allowed substandard products into the marketplace, and companies now need to show the same or better profitability while also increasing the quality of the product.
4) FACT: Security isn't the only challenge facing the boardroom
So? Point? CEOs and their cronies take home millions of dollars a year. They get paid to deal with the company's business and are entrusted to do so by their shareholders. Security of their systems is part of that business, just like HR or the bean-counter department. Like most things, the company should have a security policy, and it should enforce that policy. The cost of meeting the policy should be factored into project costs. It is time to grow up and look at the true cost of building IT systems, rather than always trying to shortchange it.
Conclusion
In the end, I'm mixed on the argument. I think, ultimately, I agree with both of them.
In my mind, the final message that Dale gave is the right message, and wins the argument.
When we are acting as consultants, we need to triage and do the best we can. We need to employ compensating controls, etc., etc. But we also need to help our customers define what they need. This will help our customers demand more of their vendors and hopefully push for real, meaningful change. I compare this to doing threat risk assessments. You identify the threat. You then list out potential solutions along with the mitigation each provides. You then weigh the cost of each solution against the mitigation it provides. (Please note I do not agree with this approach, but it is the way security is "done".) The first solution, always, should be to build a secure system from scratch. If all the CEO ever sees is a checkbox saying we are secure, then we have failed as consultants. Compensating controls are never as good as a proper solution designed to be secure.
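To make that weighing concrete, here is a minimal sketch of the cost-versus-mitigation arithmetic I just described. Every solution name, cost, and score in it is made up for illustration; it is the shape of the exercise, not a real assessment.

```python
# Illustrative only: a toy version of the cost-vs-mitigation weighing
# described above. Solution names, costs, and scores are invented.

solutions = [
    # (solution, implementation cost in $, mitigation score 0-1)
    ("Rebuild system with security designed in", 500_000, 0.95),
    ("Network segmentation + firewall rules",     50_000, 0.60),
    ("IDS monitoring (detective control only)",   20_000, 0.30),
]

# Rank by mitigation per dollar -- the kind of arithmetic a threat
# risk assessment boils down to, for better or worse.
for name, cost, mitigation in sorted(
        solutions, key=lambda s: s[2] / s[1], reverse=True):
    print(f"{name}: {mitigation / cost * 1_000_000:.1f} mitigation per $1M")
```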
Secondly, when we are talking in public, we need to be evangelists. In my mind, Eric basically provided the C-level with a whole list of excuses to use when they tell their shareholders/regulators that they can't make their systems secure. The fact is that this problem is solvable. There are meaningful steps that could be taken that would push us down the right path.
Unfortunately, like Dale, I think we will need something bad to happen before this problem is taken seriously.
As a last note, I would love to see Dale and Eric work together on some of these initiatives. I think that Eric could focus on the short-term (3-5 years) on how we can make existing SCADA deployments more secure. Dale could focus on the long term (5-10 years) on how to get companies to adopt proper requirements, and how to push vendors/industry to bake security in.
CCSK Study: Domain 11 - Encryption and Key Management
Notes
- Encryption is necessary in certain situations, so understanding how this works in the cloud is important
- Introduction to Encryption
- Moving data to the cloud does not remove any requirements for confidentiality and data protection
- Cloud Considerations
- Data should be protected in transit, at rest, and in use
- Important in cloud deployments
- Encryption should be applied directly to unstructured content
- Key management over the data lifecycle
- Keys should be under enterprise control, not that of the cloud provider or 3rd party
- consider protection of log files or metadata that could contain sensitive information
- Use open standards with sufficient strength
- Alternatives to Encryption
- Tokenization
- Basically, a public cloud service paired with a private cloud/service. Public data is tokenized, which reduces the value of the data stored (a toy tokenization sketch appears at the end of this post)
- Data Anonymization
- Strip sensitive data before deploying to public cloud. Could be useful for aggregate data collection
- Utilize cloud based controls
- They may be sufficient...
- Risks/Responsibilities of Data (not necessarily in the cloud)
- Accidental public disclosure
- whoops
- Accidental or malicious disclosure
- attack against
- Compelled disclosure to 3rd parties
- obligation to respond to requests
- Government disclosure
- either by law or court order
- Misuse of user or network profiles
- deriving sensitive information from seemingly benign traffic
- Inference misuse
- being able to draw inferences about a person's behavior or identity based on data
- Re-identification and de-anonymizing misuse
- Capturing enough information to infer the original subject
- Cryptography in Cloud Deployments
- Two Types
- Content Aware
- Basically used in DLP-type solutions. As content is being transmitted, it is scanned for sensitive content. That content is then encrypted before being sent out
- Generally works on email, etc
- Format Preserving Encryption
- encryption that preserves the format of the original content
- Better than content aware because it works over all protocols, etc
- Issues
- If data is encrypted, it might not be searchable
- Key management can be difficult if there is batch processing of sensitive data and THAT process is moved to the cloud
- Some cloud provider types will not work with "encrypted" data
- Encryption in Cloud Databases
- Consider if encryption is actually necessary
- Databases provide ACLs; if that is all that is necessary to protect your data, you don't need to use encryption
- ACLs won't protect your data from DBAs
- You may need to comply with legal frameworks
- If you need to store data in a schema whereby you cannot control access via ACLs
- SaaS
- Good luck!
- Use Object Security if possible (ACLs on a data row/table/object)
- Store a secure hash
- Sometimes all you need is to "verify" data. Store a hash in the cloud as opposed to the data itself
- Key Management
- Consider systems that encrypt data on the way out and decrypt on the way in
- enterprise users should have their own keys
- Use group level keys if groups are required to work on specific documents, etc
- What about the data life cycle?
- With encrypted data, it is easy to ensure that nobody can access it: simply delete (or lose) the encryption key
- consider segregation of duties around key services / process
- consider key encrypting keys (KEK); a minimal envelope-encryption sketch follows these notes
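To make the KEK idea concrete, here is a minimal envelope-encryption sketch, assuming the Python "cryptography" package (pip install cryptography). The key names and the record are illustrative only; real deployments would use an HSM or a key management service, not keys in variables.

```python
# A minimal envelope-encryption sketch using the "cryptography" package.
# The KEK stays under enterprise control; only the wrapped data key and
# the ciphertext would ever go to the cloud.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()          # key-encrypting key, kept on-prem
data_key = Fernet.generate_key()     # per-object data key

ciphertext = Fernet(data_key).encrypt(b"sensitive record")
wrapped_key = Fernet(kek).encrypt(data_key)   # data key encrypted under the KEK

# To read the data: unwrap the data key with the KEK, then decrypt.
plaintext = Fernet(Fernet(kek).decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"sensitive record"
```

Losing or deleting the KEK renders every wrapped data key, and thus all the data, unrecoverable, which is exactly the crypto-shredding point from the lifecycle bullet above.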
Summary
In short, encryption is hard. There are systems that employ data security at the file level. This is great from a security perspective, but it makes searching, indexing, etc. difficult. You need to balance the two. One idea is to use metadata for fields that you might want to search, leaving the actual data encrypted. Another would be an offline dump of data from the cloud for "searching" purposes. The more metadata you store, the more you run the risk of "re-identification" issues.
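Here is a minimal sketch of that metadata idea, again assuming the "cryptography" package. The field names, and the notion that "region" is safe to expose, are illustrative assumptions, not a recommendation.

```python
# Sketch: keep a few low-sensitivity fields in the clear as searchable
# metadata, and encrypt the actual record. The more such fields you
# expose, the greater the re-identification risk mentioned above.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held by the enterprise, not the provider
f = Fernet(key)

record = {"name": "Alice Example", "ssn": "000-00-0000", "region": "east"}

stored = {
    "region": record["region"],   # searchable, assumed low sensitivity
    "blob": f.encrypt(json.dumps(record).encode()).decode(),
}

# The provider can index/search on "region" without seeing the record.
matches = [s for s in [stored] if s["region"] == "east"]
original = json.loads(f.decrypt(matches[0]["blob"].encode()))
assert original == record
```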
A strong understanding of the reasons why you are encrypting data is necessary here. In some regulatory cases, you may be able to get away with enough compensating controls. If you find yourself having a hard time with this, maybe the cloud is not the right way to go for this particular solution.
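As a footnote to the tokenization bullet in the notes above, here is a toy sketch of that pattern. A real token vault would be a hardened, persistent service inside the private cloud; the in-memory dict and the "tok_" prefix here are purely illustrative.

```python
# Toy tokenization: sensitive values stay in a private vault; only
# opaque tokens go to the public cloud, reducing the value of the
# data stored there.
import secrets

vault = {}   # token -> real value, kept in the private cloud

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(16)
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]

card = tokenize("4111 1111 1111 1111")   # store only this in the public cloud
print(card)                               # e.g. tok_3f9a...
assert detokenize(card) == "4111 1111 1111 1111"
```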
CCSK Study: Domain 10 - Application Security
Notes
This was quite a long and involved chapter. It covered a few key points. Once again, most of this should already factor into traditional app deployments; cloud simply adds on some complexities not yet thought of.
An SDLC is key. Building security into the process is the best way to go. I think the two most important tasks in any SDLC are to ensure that the team has adequate training on the threats and mitigation techniques, and that there is enough work put into defining security requirements as per policy and best practices. Additional things to consider would be DR, data recovery, and incident response / monitoring.
Authentication and authorization are interesting in the cloud scenario. Ideally, you do not pass passwords to any cloud provider. In this case, federation is key. An extension of this is to not pass the identities either (in the traditional email / userid sense). This is where attribute- or claims-based authorization comes into play. One special case is when you start to daisy-chain cloud services together. In a traditional architecture, it is easy to take a back-end service (db, web) and block any access to it except from the front-end application. As everything lives in the cloud, this becomes harder to do. I think the discussion between identity-centric and protected-resource-centric authorization is an interesting one, and one that I need to research further.
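The notes below talk about SAML and WS-Fed; as a simpler stand-in for the same claims-based idea, here is a sketch using a JWT instead, assuming the PyJWT package (pip install pyjwt). The secret, claim names, and role are all illustrative. The point is that the application authorizes on claims asserted by the identity provider and never handles a password.

```python
# Claims-based authorization sketch. A JWT stands in for a SAML
# assertion here; the shared secret and HS256 are for illustration
# (real IdPs would typically sign with RS256 keys).
import jwt  # PyJWT

IDP_SECRET = "shared-secret-with-idp"   # hypothetical

# The identity provider would issue this token after authenticating the user.
token = jwt.encode(
    {"sub": "user-1234", "email_verified": True, "role": "approver"},
    IDP_SECRET, algorithm="HS256",
)

# The cloud app only validates the token and reads claims.
claims = jwt.decode(token, IDP_SECRET, algorithms=["HS256"])

# Authorize on claims, not on a local user record or password:
if claims.get("role") == "approver" and claims.get("email_verified"):
    print("access granted")
```

Note the "verified email vs email" distinction from the claims-security notes below: authorizing on email_verified is a more meaningful claim than merely having an email attribute.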
When I hear about application penetration testing in the cloud, I think "forget about it!". I think the best approach here is to do a paper pen test whereby you review source code and as-is documentation against the threat vectors. I did a pen test on a ServiceNow implementation only to find out that all controls were built client-side in JavaScript as opposed to server-side. Discovery of this would have been easier had I just had a chance to review the code and talk to the lead developers.
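That engagement is the classic lesson: anything enforced in client-side JavaScript can be bypassed with a proxy, so controls must live server-side. Here is a minimal sketch of the difference, assuming Flask (pip install flask); the route, header scheme, and lookup_role stub are all hypothetical.

```python
# Server-side enforcement sketch: never trust a flag or check that
# lives in the client, because the client is attacker-controlled.
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/admin/export")
def export_report():
    # WRONG: trusting something the client sends (the JS "control"):
    #   if request.args.get("is_admin") == "true": ...
    #
    # RIGHT: derive the role server-side from the authenticated session.
    role = lookup_role(request.headers.get("Authorization", ""))
    if role != "admin":
        abort(403)
    return "report data"

def lookup_role(auth_header: str) -> str:
    # Illustrative stub; a real app would validate a session or token here.
    return "admin" if auth_header == "Bearer valid-admin-token" else "user"
```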
As with pen testing, monitoring is another area where you are really at the mercy of the cloud provider. Security monitoring requires real, tangible things such as RAM/CPU/disk. Security enforcement, even more so. Cloud providers are in the business of delivering those resources and, as such, want to charge for everything. I think when designing a solution, we need to understand what hooks we will be provided, and how much it is going to cost to implement controls. Sure, in an IaaS environment you could run AV, firewall, etc. at the OS level. But think of the cost of that. It would be much better to run a majority of those services at the virtualization layer, but you may not have access to do that.
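One cheap thing the application itself can do, whatever hooks the provider exposes, is emit security events in a machine-parsable form (the notes below call for logs that are easily parsable, readable, and well documented). A stdlib-only sketch; the event names and fields are illustrative.

```python
# Structured security-event logging: one JSON object per line, so
# whatever monitoring access the provider grants has something
# useful to parse. Standard library only.
import datetime
import json
import logging

log = logging.getLogger("security")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def security_event(event: str, **fields):
    log.info(json.dumps({
        "ts": datetime.datetime.utcnow().isoformat() + "Z",
        "event": event,
        **fields,
    }))

security_event("login_failure", user="alice", source_ip="203.0.113.7")
# {"ts": "...", "event": "login_failure", "user": "alice", "source_ip": "203.0.113.7"}
```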
- Cloud environments challenge many fundamental assumptions about application security
- Can be a particular challenge for applications across the layers (SaaS, PaaS, IaaS)
- Application is solely responsible for providing security
- Cannot make any assumptions about external environment
- Application can be moved at any time
- Threat landscape is increased
- In addition to traditional attack vectors we must also consider attacks from within the cloud
- Example: You could be running your application on the same infrastructure as your competitor
- Secure SDLC
- Differences vs traditional apps
- No control over physical security
- Incompatibilities vs different providers
- Think about DR plans and migrations
- Protection across the Data lifecycle
- Data at rest is still in the cloud, so additional protections may need to be considered
- The combinations of web services in the cloud could lead to security vulnerabilities
- Maybe in the business logic domain?
- Each service now has to authenticate and secure itself
- Ability to access logs is more difficult
- Need to consider Incident Response here or even just basic troubleshooting
- Fail-over for data and data security in the cloud has to be more detailed
- Compliance related tasks can be more difficult
- It is not necessarily enough that the Service Provider is compliant; the whole process needs to be
- Organizations should have an application security assurance program
- Basically, we need to ensure we are developing applications with appropriate considerations and controls
- Goals and metrics should be defined
- Security and Privacy policies should be defined
- Consider any relevant regulations
- Appropriate hooks within SDLC to ensure security is "built-in"
- training, etc
- Perform security and privacy risk assessments to ensure appropriate requirements are defined
- Ensure appropriate verification steps are defined
- Configuration and Change management
- Requirements (security/privacy) have been met
- formal coding practices
- physical security
- Design Review
- Review design for the common secure-design principles
- See doc or "The Security Development Lifecycle" Chapter 8
- Also see Microsoft SDLC Design phase
- Code Review
- Pick guidelines to follow
- SAFECode, CERT, ISO Standards, OWASP
- Use both DAST and SAST
- http://blogs.gartner.com/neil_macdonald/2011/01/19/static-or-dynamic-application-security-testing-both/
- Other considerations
- Remove comments / names etc from code (it is being uploaded to a 3rd party)
- verify all input, including computer and inter-system input
- Security Testing
- Cloud architecture may prevent this step (check with vendor)
- Conduct black box penetration testing
- Use a wide scope (include remote access systems, and other related IT assets)
- Interoperability Testing
- Need to ensure that data can be exchanged between various components or applications
- Test against reference implementations
- ie: if you are using an open standard, test that the output is valid against that standard
- Test all pairs
- Test that the transfer is "secure"
- Quantitative Improvement
- We can only improve what we can measure
- Example metrics
- % of applications and data assets in the cloud evaluated for risk classification
- costs of the application security assurance program
- estimates of past loss due to security issues
- As much as possible, we should automate security controls / testing
- Application Security Architecture in the Cloud
- Considerations for cloud applications
- Lack of control
- Cloud subscriber does not control cloud provider security policy / enforcement
- Lack of visibility
- cloud subscriber cannot see cloud security policy enforcement and controls effectiveness
- Lack of manageability
- cloud subscriber cannot manage cloud provider app security (think audit and access policies)
- Loss of governance
- no direct control of infrastructure, cloud subscriber must "trust" cloud provider to do things right
- Compliance Risk
- cloud provider now becomes a "partner" in compliance, and its policy and process now become a part of the cloud subscriber's overall regulatory landscape
- Isolation failure
- New threat to application. Isolation failure could allow competitors to access/use protected data
- Data protection
- direct control over data is relinquished and the cloud subscriber now relies on the cloud provider
- Management interfaces and access configuration
- New threat vectors as management interfaces are now accessed over the Internet
- Technical Risks and Solutions
- Identity, Entitlement, and Access Management (IdEA) is a little more complex in cloud environments
- User management lifecycle is now extended into the cloud
- typical requirements
- Understand how on-boarding / off-boarding of users will be handled
- Special cases such as service-to-service integration or cloud-to-cloud integration
- Ability to use open standards (SAML, WS-FED)
- Risk-based decisions
- User, device, code, organization, agents, geo-location, etc
- Support for internal security / compliance requirements
- Audit of user activity (especially privileged users)
- Claims-based / role-based authentication
- Manageability of permissions
- Ideally, one permission store (on-prem) uses a "mechanism" to translate and sync entitlements to various cloud providers
- may want to consider open standards such as XACML
- Compliance Building Blocks
- Infrastructure Controls
- Ex: protecting facility from natural disasters, electrical grid failures, etc
- Auditing considerations
- Access to data center
- system admin auditing
- internal security reviews
- Application Controls
- Special considerations for regulatory frameworks
- Multiple levels of security is required
- IdEA Management for Cloud Application Security
- Traditional edge network security devices have limited effectiveness in cloud solutions
- New perimeter could be defined as data and the method by which that data is accessed
- Definition of identity should be broadened
- Includes other information such as source device
- How the source device is managed / administered
- external B2B considerations
- Authentication
- The process of asserting the user's identity to a given application
- Cloud considerations
- Plan to use open standards such as SAML vs traditional "authenticate to LDAP" type mechanisms
- Plan for BYOI (Identity) whereby 3rd parties and partners can use their authentication system rather than having to maintain separate username/passwords
- Consider alternative authentication methods to username/password
- Two-factor such as RSA (NSA?) tokens, OTP over SMS (has issues), smartcards, biometrics
- Plan for risk-based authentication
- Different security steps depending on device, user, location, heuristics
- Authorization
- the process of enforcing the rules by which access is granted to resources
- Plan to use open standards to communicate entitlements
- WS-Policy for defining security and management policy assertions
- WS-Security for enforcing access restrictions
- WS-Trust for implementing an STS
- Plan to use rule-based authorization model using claims and attributes
- Attribute security
- Ensure that shared attributes do not reveal the user's identity (privacy issues)
- Plan for attribute complexity
- There could be multiple attribute providers
- How to handle incomplete data
- Share only the minimum amount of information required
- Ensure that access policies and entitlements policies are manageable in addition to being technically enforceable
- Claims security
- Use meaningful claims
- Ex: verified email vs email
- Consider the type, surety, freshness and quality of the claim
- Ensure authority on claim based on context
- ex: some applications can make certain claims
- ex: telco can verify user phone number, others can't
- Plan for a disparate cloud landscape where multiple different authentication mechanisms need to be in place
- how do you seamlessly authenticate across all cloud applications?
- provide for granular control
- Audit / Compliance
- 3 questions
- What cloud resources does a user have access to?
- What cloud resources does a user actually use?
- Which access policy rules were used as a basis for a decision
- Considerations
- Build IdEA in from the beginning
- consider claims as the access mechanism
- SAPM (shared account password management) for managing highly privileged accounts
- Use open standards (SAML)
- Cloud apps should take into consideration various token types, OAuth, API Keys, etc
- Cloud apps could be dependent on others for services such as logging or db connectivity
- Ensure modular design during development
- Can switch out IdEA module in the future if need be
- consider STRIDE during threat modeling
- Spoofing controlled via strong authentication
- Tampering controlled via digital signatures
- Repudiation controlled via digital signatures and logging
- Information Disclosure controlled via SSL, encryption
- Denial of Service controlled via Security Gateways
- Elevation of privilege controlled via authorization
- Policy Management
- access policy management is the process of specifying and maintaining access policies to resources
- Consider attributes based on identity
- caller related
- context related
- target related
- Can also consider
- General state of IT landscape
- crisis level, or emergency situation
- other decisions
- prior approvals, etc
- attributes related to the protected target resource
- QoS, throttling
- 3 typical enforcement points (a toy PDP/PEP sketch appears after these notes)
- Using a Policy Enforcement Point (PEP)
- external, internal, as-a-Service
- Embedded as part of the cloud app
- using Identity-as-a-Service or Persona-aaS
- Cloud Issues
- Cloud subscriber at the mercy of enforcement points / decision making points already built into the application
- Ie: subscriber may not be able to define their own set
- Controlling access to multiple nodes / interconnected clouds
- Need to decide between identity/entitlement centric vs protected-resource centric
- Generally the PEP is the protected resource, so we need to find a way to package the information up and send it to that resource
- Identity is just one consideration
- Policies need to be in a manageable form
- Expressed in business terms and at a high enough level of abstraction that it can filter down and tools can be used to express the policy for each resource as required
- There could be many access policy providers
- How to integrate and manage them?
- Format (eg: XACML)
- Updating of PDP/PEP in a timely manner with correct (fresh) data is important
- Managing Access Policies For Cloud
- in short, this is hard
- No real mapping exists between technical control points and business language
- Tools are required to convert Business DSL to technical controls
- Best practices
- Decide between identity-centric and resource-centric security models
- Ensure manageability of resources
- Designate clear responsibilities for policy management vs policy auditing
- Aim to have subscriber specific authorization policies
- Consider use of Policy as a service
- Consider generation/update of policy
- Aim for automatic generation tools/capabilities
- Use open standards
- Consider the use of PEP/PDPs with hooks for policy monitoring points / audits / compliance
- Application Penetration Testing for the Cloud
- Where applicable, do pen testing
- SaaS providers may not allow for this, whereas PaaS and IaaS may have some support
- Use a framework
- OWASP ASVS
- OWASP Testing Guide V3
- Consider additional threats
- Isolation failure
- VM escapes
- Application Security Monitoring in the cloud
- Things to consider
- Log monitoring
- Performance Monitoring
- Monitoring for malicious use
- Monitoring for compromise
- monitoring for policy violations
- Requirements need to be defined and work needs to be done to see how the cloud provider can provide access/views to the information required
- Logs should be
- Easily parsable
- Easily readable
- Well documented
- Monitoring "ability" varies between cloud provider type
- IaaS == almost normal
- SaaS == OMG!
- Need to establish what access a cloud subscriber will have and how the cloud provider will notify/transmit information to the subscriber
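To ground the enforcement-point discussion above, here is a toy Policy Decision Point / Policy Enforcement Point split in plain Python. It is XACML-inspired in spirit only; the resources, attributes, and default-deny rule are illustrative assumptions, not a real policy engine.

```python
# Toy PDP/PEP split: the PEP sits in front of the resource and asks
# the PDP for a decision based on attributes (not just identity).

POLICIES = [
    # (resource, required attributes) -- attribute-based access
    ("payroll-db", {"role": "hr", "device_managed": True}),
    ("public-wiki", {}),
]

def pdp_decide(resource: str, attributes: dict) -> bool:
    """Policy Decision Point: evaluate attributes against policy."""
    for res, required in POLICIES:
        if res == resource:
            return all(attributes.get(k) == v for k, v in required.items())
    return False   # default deny for unknown resources

def pep_access(resource: str, attributes: dict) -> str:
    """Policy Enforcement Point: guards the protected resource."""
    if not pdp_decide(resource, attributes):
        raise PermissionError(f"denied: {resource}")
    return f"contents of {resource}"

print(pep_access("public-wiki", {}))
print(pep_access("payroll-db", {"role": "hr", "device_managed": True}))
```

Note how the policy is expressed as data separate from the enforcement code; that separation is what makes the "policy as a service" and automatic-generation ideas above possible.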