Hostage Negotiation and Cyber Security


 

In one of my past lives, before cyber security came along, I was a hostage negotiator.

To give you an idea of just how long ago that was: during my training we were negotiating (through an interpreter) with “terrorists” who were holding “hostages” in a (very real) airliner, surrounded by (also very real) armed police, parked just off one of the (very busy) runways at London’s Heathrow Airport.

Unexpectedly, I had an up close and personal experience of being very, very close to a BA Concorde taking off at dusk, on full afterburners. I’m not sure that my hearing has been the same ever since!

Many people will have formed their view of hostage negotiating from watching movies such as Dog Day Afternoon (my favourite), The Negotiator, and Inside Man, but in reality the art of negotiation is far less about a charismatic individual with a good haircut and a worrying appetite for risk and rule-breaking, and much more of a disciplined team sport.

I have been asked many times whether there is any similarity between the worlds of hostage negotiation and cyber security, and – having given this some considered thought – the answer is yes.

So, here are my top two similarities:

 

1) Both rely on a strategy and an agreed risk appetite

Hostage Negotiation

In a complex and protracted hostage negotiation, a Negotiator Co-ordinator (I was one) is appointed to the Command Team that determines the strategy and approach to resolving the incident. On occasion, such as in terrorist incidents, there can be political interest (which can affect the risk appetite) and the involvement of other agencies such as military special forces (which brings whole new dimensions of risk).

The job of the Negotiator Co-ordinator is firstly to advise the Command Team on how negotiation can support the overall incident resolution strategy and – once this has been agreed – to determine and execute a negotiation strategy.

As the incident unfolds, the strategy may have to evolve, and negotiation may be kept going to buy time whilst other options are made ready.

Cyber Security

In organisations where cyber security works well, the CIO or CISO will play an active part in determining the overall organisational strategy. They will advise on how effective security can be a business enabler, and determine what constraints are necessary to achieve the level of security needed to meet the organisation’s risk appetite, bearing in mind legislation and regulation. They are then accountable for the delivery of various aspects of that strategy.

Unfortunately, in my experience it rarely happens like this. More often, the CIO or CISO does not have a seat at the table where strategy is developed, and is instead tasked with somehow creating an “appropriate” level of security reactively, after decisions have already been made. This often leads to conflict, particularly in organisations embracing “agile” software development, where the very nature of the methodology can conflict with the organisation’s stated risk appetite and be almost impossible to secure.

Worse, in many cases there simply isn’t an agreed risk appetite at all, or there is only a blanket statement meant to cover every part of the organisation, from R&D (where the appetite is by definition high) to Audit (where it is by convention low).

In either world, things work more smoothly, and with a greater chance of success, if the Co-ordinator / CIO / CISO is an active and continuing part of the decision-making function.

 

2) Success is not achieved in isolation and works best when exercised

Hostage Negotiation

A peacefully negotiated outcome is always the aim, but contingency plans always need to be made for situations where negotiation is not possible or fails. These contingencies will always involve a host of other agencies, each of which needs to understand its role in a hostage-taking incident and how it will interact with the other agencies present.

A critical factor is absolute clarity about which organisation and which person in that organisation is in overall command of the incident, whilst having cognisance of the power vested in stakeholder groups. So, for example, the person in charge of a hostage taking incident (most likely a cop) can do most things, but deploying the SAS requires Ministerial approval (as I found out the hard way on an exercise).

Cyber Security

Responding to a major cyber security incident will involve many internal teams (e.g. IT, Comms, Legal), may well have Board oversight, and will probably involve external specialists. In addition, regulators, law enforcement, the media, customers, and investors (to name a few) will need to be kept up to date.

This is far from easy, and there are many high-profile examples of major corporations getting it wrong, particularly when it comes to communication strategy.

The simple answer, reinforced by governments around the world, is that this works much better if there is an Incident Response plan (worryingly, many organisations, even surprisingly big ones, fail at this), and better still if said plan is exercised regularly (very few organisations actually engage in Incident Response simulations).

I have run many such exercises and – spoiler alert – will often tell the key individuals everyone else looks to for decisions that they are 30 minutes into a 12-hour flight, and therefore out of play for the first 11 hours. Whilst this has never made me popular, it does tend to highlight how well decision-making works (or, often, doesn’t) in the absence of key players.

Just as in a hostage negotiation, roles and responsibilities need to be clearly defined if cyber security is to deal with a crisis: who can call in the SAS (or, more likely, turn off the company’s Internet connection) if the need arises?

One final note: whilst I had a very cool “Hostage Negotiator” baseball hat – for the very essential purpose of not being shot by friendly fire – I have yet to see a cyber security equivalent!

The industry guide to being a successful “bad actor”

We thought we would provide a guide for anyone looking to be a bad actor/malicious adversary/evil hacker, based on much of what we have been hearing from the security industry at large.

1) Discuss all your evil plans on any site that is probably visited by "threat intelligence analysts". A good place is the dark web: go there, talk about your plans, but make sure people can find you.
2) Talk about your evil actions at a con.
3) Always choose the most difficult, ultra-complicated vulnerability to exploit (even when you don't have to).
4) Don't try to buy any of the security tools your targets may use, because you can't.
5) Skip over the testimonials on any security company's website; what possible use is there in knowing who provides security assistance to your target?
6) Also, don't waste time trying to find out more about the aforementioned security companies; it's just going to slow down your attack succeeding.
7) Be sure to use the word 1337 a lot.
8) Disclose all your discovered vulnerabilities; it's only fair.
9) Participate in bug bounty programs and post on Twitter about it. Remember, fame is the key to being a successful bad actor.
10) Participate in corporate CTFs. You can win prizes and show off your attack methods.
11) Ignore all the APT and malware reports. This stuff is no use to you; just because everyone else collects intelligence doesn't mean you should.
12) Get some certs; you need certs to show other like-minded people that you are good at this stuff.
13) Worry about anti-virus. It's good, really.
14) Avoid all companies who have to meet published compliance standards. They know you know their controls, and are ready for you.
15) LinkedIn is useless for recon; it's just full of motivational quotes.
16) Stick to one communication method; why complicate matters for yourself?
17) When you send a phish, be sure it looks like an iTunes, Amazon, or other receipt.
18) By default, security monitoring/detection tools like well-known attack patterns, so be sure to use them.
19) Only perform your malicious activities between the hours of 9-5 in whatever time zone the target is in.
20) Pick out a flattering and intimidating costume and choose a cool handle.
21) Fear security awareness training; it has the magical power of overcoming traits developed over a lifetime – fear, obedience, greed, helpfulness – all succumb to the power of posters, PowerPoints, and catchy slogans.
22) Brag about everything you do; again, fame is the key. Ever seen a Bond movie? The bad guy always tells James Bond his plans in detail; this is a sign of confidence. Do the same.

***The authors of this article take no responsibility for its accuracy; the information contained within should not be considered as advice.***

Everyone can be taught new tricks – considerations for application pen tests

Application security is one of the areas we put a great deal of our consulting effort into, and we perform many web application penetration tests (WAPTs). Over time we've seen a shift in the technical landscape, and I wanted to write up something for others to chew on.

The way to approach a WAPT has changed because application architecture and security technologies are constantly evolving. Much of this is due to the adoption of the "cloud" and the seemingly endless new functionality being rolled out. We do quite a bit of in-house development and often see a disconnect between how applications are actually developed and deployed, versus the technologies assumed to be in use when WAPTs are performed and the way automated tools are used for testing.

Initially, deployments started off as monolithic architectures with separate, independent layers of concern: the presentation layer serves the client interface, the business layer handles server-side functionality and business logic, while the data layer is storage. This is where we have our LAMP stack deployments, along with Apache Tomcat RCE, WordPress code execution, webshells, and cgi-bin argument injection. You know, all the good stuff that we know and love. You obtain a foothold on the server, escalate privileges or move laterally, gain persistence, exfiltrate data, and call it a day. We continue to see a plethora of these deployments in the wild; however, the technology in use isn't standard anymore, and to be successful a WAPT needs to take that into account.

Let's say the application in scope for security testing is a Single Page Application (SPA) using React or Vue, with Redux and Redux Saga handling state management. The SPA is deployed in an S3 bucket with CloudFront for content delivery, and the shiny new AI-driven WAF handles the low-hanging fruit with signature detection and behavioral anomaly analysis. Authentication and authorization are handled using Amazon Cognito, Okta, or Auth0, with JWTs for identity. The application's client-side business logic issues GraphQL queries to a serverless API endpoint built on AWS API Gateway and Apollo running in Lambda functions. For data storage it uses Neptune.

What if the SPA interacts with an API Gateway, and behind that gateway is a micro-services fabric cluster with key rotation using Vault for authorization of inter-service communication?

These architecture examples highlight some of the ways in which applications are deployed into cloud infrastructure. They allow for rapid deployment, decoupled services for scalability, and increased agility and efficiency. A production JAMstack application using Gatsby and AWS Amplify can be deployed in no time, and the developer never has to worry about the underlying infrastructure, patch management, or scalability.

As a penetration tester, what is your WAPT approach? How do you ensure you've covered the application effectively?

The inner monologue of a penetration tester may, unfortunately, sound something like this when confronted with newer application architectures:

"Doesn't matter, I'll fuzz the crap out of it and find the vulnerabilities eventually"

"Input validation is still input validation, let me at it and I'll dump that database in no time."

"I'm going to pop a webshell on that server, dump credentials, and own that network."

"Okay, let's run the vulnerability scanner and see what we get back."

 

You get the point: this can lead to ruin. Assuming the technology in use based on previous training, books, and a lack of development exposure can lead you down the wrong path when doing WAPTs. Reconnaissance has always been an important part of an engagement, and this is even more true today.

**Understand the architecture before you waste your time and your client's.**

Why are you trying payloads for Angular template injection when the client side is written in React? How are you going to determine whether state management is mishandled, or even know it is a possibility? Why are you checking for authentication cookies when the application uses JWTs? Are those JWTs properly implemented?

If you happen to be an application developer reading this, you're at least cracking a smile. The application security folks are as well, since they likely understand these deployments and the security considerations that go into these architectures. The days of a developer not knowing how to handle authentication or authorization correctly are slowly going away with the shift in reliance on cloud providers and third-party services: why roll my own auth when I can use Cognito? What I'm saying is that the default security posture of applications is getting more mature as the application development and application security ecosystems continue to progress. More mature, but still not secure.

As mentioned before, the gap is in the offensive security tools and the assumptions we make when approaching WAPTs. Common security tools used on an engagement give traditional coverage of HTTP interfaces and tend to produce false positives, but they can't handle REST APIs without contextual awareness. Client-side state management and front-end logic handling throw the tools for a loop. Handling GraphQL endpoints in an automated fashion is almost out of the question.

On to manual testing – we are doing a pentest, right? The offensive approach to cloud-native deployments is different from that for traditional monolithic on-prem deployments, and it puts a heavier focus on client-side input validation, API testing, authentication, authorization, and business logic. There are no CSRF attacks when using JWTs. Directory traversal is out of the question with dynamic routing in an SPA. You can't SQLi to dump credentials from a Cognito endpoint. RCE against the S3 bucket serving the front end is beside the point; what you want is code execution in the application's server-side run-time environment. When approaching Lambda functions, S3, and container instances at the OS level, there has to be an understanding of what is AWS territory and what is client territory – see the shared responsibility model. Of course, these are somewhat forced examples, but they make the point that the landscape has shifted.

Ultimately the point I'm trying to make is:

**The more you understand about development and deployment capabilities, and overall application security within the SDLC, the more effective the penetration test will be.**

It's an obvious statement, I'm sure, but you can't be a truly effective offensive player if you don't keep up with defensive and production capabilities.

Now, it is important to understand that cloud-native architectures are not bulletproof; the attack surface has simply shifted. Increased complexity brings a new array of issues, such as mishandling of JWTs (https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/), improper input validation, secrets and access tokens leaked into source control, and over-permissive authorization. That names a few, but the list goes on.
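To make the JWT point a little more concrete, here is a minimal sketch of the sort of triage a tester might run on a token captured during normal, authorised use of an application, before deciding what to chase. It uses only the Python standard library; the specific checks mirror the classic library flaws described in the Auth0 post linked above, and nothing in it is tied to any particular product or engagement.

```python
# Hypothetical JWT triage helper - a quick look at a captured token,
# not a verdict on the API that issued it.
import base64
import json
import sys


def b64url_decode(segment: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def triage_jwt(token: str) -> None:
    header_b64, payload_b64, signature_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))

    print("header :", json.dumps(header, indent=2))
    print("payload:", json.dumps(payload, indent=2))

    # Classic library flaw: tokens accepted with the signature stripped.
    if header.get("alg", "").lower() == "none" or not signature_b64:
        print("[!] 'alg: none' / empty signature - check whether the API still accepts it")

    # HS256 tokens are only as strong as the shared secret - worth an offline
    # wordlist attempt (with permission) if the app rolled its own signing.
    if header.get("alg") == "HS256":
        print("[*] HS256 in use - consider testing for a guessable signing secret")

    # A missing expiry means a stolen token never dies.
    if "exp" not in payload:
        print("[!] no 'exp' claim - token appears to live forever")


if __name__ == "__main__":
    triage_jwt(sys.argv[1])
```

None of this proves a vulnerability on its own; it simply tells you which JWT-handling questions are worth asking of the actual API.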
Keep an eye on HackerOne's Hacktivity (https://hackerone.com/hacktivity) for real-world examples. Remember: combine the human element with the increased need for speed and scalability, and mistakes will always happen.

To test an environment effectively, consider your threat model and emulate those threats in your penetration tests, with a testing approach that stems from your understanding of the ecosystem. Additionally, consider doing a secure architecture review with the client to provide input on best practices and highlight potential issues that reside behind the scenes. We've found a combination of these to be effective in ensuring coverage. Again, the more you understand about the application's architecture and its business objectives, the more effective your security work will be.

Take the time to keep up with full-stack development capabilities, cloud-native architectures, and the application security strategies that go with them. It's not always about security testing, and it's easy to get left behind if you aren't careful.

At OccamSec, we are always striving to stay up to date on both development and app sec practices, to steer our continued security research and to ensure our WAPT methodology is effective when testing the most bleeding-edge deployments.
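As one last illustration of what architecture-aware reconnaissance can look like in practice, here is a sketch of probing a GraphQL endpoint with an introspection query before any payloads are thrown at it. The endpoint URL, the bearer-token handling, and the use of the third-party requests package are assumptions made for the example, not details of any real engagement or a prescribed part of our methodology.

```python
# Hedged recon sketch: ask a GraphQL schema what operations exist before
# testing anything. Requires the 'requests' package.
import requests

# Hypothetical target - in practice this comes from recon of the SPA bundle.
GRAPHQL_URL = "https://api.example.com/graphql"
TOKEN = "eyJ..."  # a JWT obtained through normal, authorised use of the app

INTROSPECTION_QUERY = """
query {
  __schema {
    queryType { name }
    mutationType { name }
    types { name kind fields { name } }
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": INTROSPECTION_QUERY},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)

if resp.status_code == 200 and "__schema" in resp.text:
    schema = resp.json()["data"]["__schema"]
    # Mutations are usually where the interesting authorisation and
    # business-logic testing lives.
    print("Mutation type:", schema.get("mutationType"))
    for t in schema["types"]:
        if not t["name"].startswith("__") and t.get("fields"):
            print(t["name"], "->", [f["name"] for f in t["fields"]])
else:
    print("Introspection not available:", resp.status_code)
```

If introspection turns out to be disabled, that is still useful information: the fallback is to harvest queries and mutations from the SPA's JavaScript bundle, which is exactly the kind of manual, context-driven work that automated scanners miss.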

LinkedIn Pwnage: why we can’t all be friends


 

Last July an article appeared on The Outline entitled “How to Beat LinkedIn: The Game”; it’s an entertaining read regardless of how you feel about LinkedIn. Ever since reading it, we’ve been thinking about writing up something on how we have used LinkedIn during our work.

 

How do we highlight the risk of a platform, one we use, in a constructive way?

 

LinkedIn is really good for recon. Project managers like to post updates on new technology deployments, companies post job ads, people are constantly updating their resumes with new skills they have learned from their jobs, and so on.

 

On one project, we were engaged to infiltrate a client using only social media. The diagram below shows how this was achieved (note: this company had a highly active security awareness program).

 

[Diagram: LinkedIn attack walkthrough]

It was that simple: “hello” to login in two weeks.

 

Total effort: a couple of hours

 

Advanced technical knowledge required: zero

 

This is another example of social engineering; given the addictive nature of social media, these platforms are excellent forums for this kind of attack. People want connections; they want “likes”, “shares”, and followers. Deciding whether someone is worth connecting to often seems to come down to the connections they already have – “this person has connections, therefore they must be real”. Some people will accept any connection (“The more connections I have, the cooler I am!”). These are the initial targets: once you find a few of them you can connect, and then fan out from there.

 

After that it’s a question of how much information you can glean from connections without raising suspicion. We have learnt to avoid sending requests to anyone in legal, and IT security folk do tend to be trickier (although not as much as you would think). Marketing people are fantastic, as is anyone in sales.

 

As the walkthrough above showed, plain old human interaction can go a long way (the request for a date was entirely unsolicited, we should add).

 

So how do you deal with this?

 

Ban social media use.

 

OK, so you can’t do that. Social media can be another topic for security awareness training; however, because of the basic human brain functions these sites stimulate, only a program of constant awareness stands any chance. People want to be helpful, they want to make friends, and the dopamine buzz from receiving messages and getting likes is extremely powerful.

 

Alternatively, counter-intelligence (or disinformation) programs can be used to pollute the information available about your organization on social media. This is difficult given the terms of service of these sites; however, adversaries are unlikely to follow them. So, in the name of protecting your organization, you may want to investigate what can be done.

 

We’re not saying don’t use LinkedIn, or any other social media, but be aware that it is another conduit into your organization through those who work there, and it needs to be part of your security plan.

M&A Cybersecurity

Articles appear almost daily detailing yet another significant merger or acquisition, a trend common across sectors and geographies. Also growing are the numbers of de-mergers and spin-outs, with new entities being created from a larger parent organisation. Over the last couple of years I have supported organisations going through all three of these changes, focusing on the potential impact of the change on the cybersecurity and business resilience of the organisation concerned. The results have consistently been concerning.

The first learning point is that very few mergers are completed in the expected timescale, and the process of fully merging infrastructure can take several years. As an example, one organisation I work with was formed from a merger of 9 organisations but still has over 15 legacy HR & Finance systems active, because some of the 9 original organisations were themselves formed from yet more historical mergers which were never really completed. Trying to coherently map risks or produce an enterprise security plan for this type of environment is incredibly challenging, yet I rarely see such risks documented in an Enterprise Risk Register.

The standard “merger” due-diligence template goes into great detail on financial and legal status issues, but rarely seems to consider the potential liability of linking into an organisation with a seriously compromised infrastructure. This is doubly surprising if you consider the well-reported finding that, having penetrated an organisation, most attackers reside within its network for over 100 days before discovery. There is therefore a very real risk of starting work on merging infrastructure whilst being observed by an interested resident attacker, who will be keenly looking for an opportunity to vector into the core organisation’s networks.

It isn’t as if this is particularly difficult to address. There are many vendors (including OccamSec) who understand this space, and several relatively lightweight tools available to conduct a remote vulnerability assessment (including OccamSec’s excellent Vendor Assessment Tool) as an initial due-diligence exercise, which is likely to show what further investigative and remedial work is necessary (in my experience it always is!).

So why isn’t this being done? In my view it is predominantly because the processes of providing strategic due diligence (and indeed internal and external audit) during a merger or acquisition simply haven’t kept pace with the level of threat and the potential organisation-breaking impact of an ineffective cybersecurity regime.

Most Boards struggle to understand cybersecurity, but in my opinion there is a simple test which any Board Member can apply: ask where cybersecurity and business continuity risk features on the Enterprise Risk Register. In my experience, those organisations where cybersecurity risk is visible at Board level do more to monitor and mitigate the risk. Those where the risk is not featured in the ERR live to regret it later.

Copyright  © 2020 OccamSec

524 Broadway, New York, NY 10012