Application security is one of the areas we put a great deal of our consulting effort into, and we perform many web application penetration tests (WAPTs). Over time, we've seen a shift in the technical landscape, and I wanted to write something up for others to chew on.

The way to approach a WAPT has changed because application architecture and security technologies are constantly evolving. Much of this is due to the adoption of the "cloud" and the seemingly endless new functionality being rolled out. We do quite a bit of in-house development and often see a disconnect between how applications are actually developed and deployed, versus the technologies assumed to be in use when WAPTs are performed and how automated tools are used for testing.

Initially, deployments started off as monolithic architectures with separate, independent layers of concern: the presentation layer serves the client interface, the business layer handles server-side functionality and business logic, and the data layer handles storage. This is where we have our LAMP stack deployments, along with Apache Tomcat RCE, WordPress code execution, webshells, and cgi-bin argument injection. You know, all the good stuff we know and love. You obtain a foothold on the server, escalate privileges or move laterally, gain persistence, exfiltrate data, and call it a day. We continue to see a plethora of these deployments in the wild; however, the technology in use is no longer the default, and a successful WAPT needs to account for this.

Let’s say the application in scope for security testing is a single-page application (SPA) built with React or Vue, using Redux and Redux Saga for state management. The SPA is deployed in an S3 bucket with CloudFront handling content delivery, and a shiny new AI-driven WAF handling the low-hanging fruit with signature detection and behavioral anomaly analysis. Authentication and authorization are handled by Amazon Cognito, Okta, or Auth0, with JWTs for identity. The client-side business logic issues GraphQL queries to a serverless API built on AWS API Gateway and Apollo running in Lambda functions. Data storage is handled by Amazon Neptune.
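To make this concrete, in an architecture like this every client interaction the tester sees on the wire boils down to an authenticated GraphQL POST. A minimal sketch of what that request looks like; the endpoint URL and token below are hypothetical placeholders, not part of any real deployment:

```python
import json
import urllib.request

# Hypothetical values -- on a real engagement, the endpoint and JWT come
# from observing the SPA's network traffic through an intercepting proxy.
GRAPHQL_ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/graphql"

def build_graphql_request(query: str, jwt_token: str) -> urllib.request.Request:
    """Build the kind of authenticated GraphQL POST an SPA front end sends."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Cognito/Okta/Auth0 issue the JWT; the SPA attaches it per call.
            "Authorization": f"Bearer {jwt_token}",
        },
        method="POST",
    )

req = build_graphql_request("{ me { id email } }", "eyJhbGciOiJSUzI1NiJ9.e30.sig")
```

Notice there is no session cookie, no HTML form, and no server-rendered page here, which is exactly why tooling built around those assumptions struggles.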

What if the SPA interacts with an API Gateway, and behind that gateway sits a microservices fabric cluster using Vault for key rotation and authorization of inter-service communication?

These architecture examples highlight some of the ways applications are deployed into cloud infrastructure. The approach allows for rapid deployment, decoupled services for scalability, and increased agility and efficiency. A production JAMstack application using Gatsby and AWS Amplify can be deployed in no time, and the developer never has to worry about the underlying infrastructure, patch management, or scalability.

As a penetration tester, what is your WAPT approach? How do you ensure you’ve covered the application effectively?

The inner monologue of a penetration tester may unfortunately sound something like this when confronted with newer application architectures:

“Doesn’t matter, I’ll fuzz the crap out of it and find the vulnerabilities eventually”

“Input validation is still input validation, let me at it and I’ll dump that database in no time.”

“I’m going to pop a webshell on that server, dump credentials, and own that network.”

“Okay, let’s run the vulnerability scanner and see what we get back.”

You get the point; this mindset can lead to ruin. Assuming you know the technology in play, based on previous training, books, and a lack of development exposure, can lead you down the wrong path during a WAPT. Reconnaissance has always been an important part of an engagement, and this is even more true today.

**Understand the architecture before you waste your time and your client's time.**

Why are you trying payloads for Angular template injection when the client side is written in React? How would you determine whether state management is mishandled? Why are you checking for authentication cookies when the application uses JWTs? Are those JWTs properly implemented?
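Answering "are the JWTs properly implemented?" usually starts with simply decoding one: the header and payload are base64url-encoded, not encrypted, so they are readable without the signing key. A minimal stdlib sketch, with a fabricated demo token built inline for illustration:

```python
import base64
import json

def decode_jwt_unverified(token: str) -> tuple[dict, dict]:
    """Decode a JWT's header and payload WITHOUT verifying the signature,
    to inspect the signing algorithm ('alg') and claims during recon."""
    def b64url(part: str) -> bytes:
        # Restore the padding that JWT encoding strips.
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))
    header_b64, payload_b64, _sig = token.split(".")
    return json.loads(b64url(header_b64)), json.loads(b64url(payload_b64))

# Fabricated demo token: header {"alg":"HS256","typ":"JWT"}, payload {"sub":"123"}.
def _b64url_encode(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

demo = ".".join([_b64url_encode({"alg": "HS256", "typ": "JWT"}),
                 _b64url_encode({"sub": "123"}),
                 "fake-signature"])
header, payload = decode_jwt_unverified(demo)
# An "alg" of "none", a weak HMAC secret, or sensitive data in the claims
# are the kinds of implementation issues this first look can surface.
```

This is reconnaissance, not an attack: it tells you what to test next (algorithm confusion, expiry handling, claim tampering) before you fire a single payload.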

If you happen to be an application developer reading this, you're at least cracking a smile. The application security folks are as well, since they likely understand these deployments and the security considerations that go into these architectures. The days of a developer not knowing how to handle authentication or authorization correctly are slowly going away with the shift in reliance on cloud providers and third-party services. Why roll my own auth when I can use Cognito? What I'm saying is that the default security posture of applications is getting more mature as the application development and application security ecosystems continue to progress. More mature, but still not secure.

As mentioned before, the gap is in the offensive security tools and the assumptions we make when approaching WAPTs. Common security tools tend to produce false positives on these engagements: they cover traditional HTTP interfaces well, but can't handle REST APIs without contextual awareness. Client-side state management and front-end logic throw these tools for a loop, and handling GraphQL endpoints in an automated fashion is almost out of the question.

On to manual testing, since we are doing a pentest, right? The offensive approach to cloud-native deployments differs from traditional monolithic on-prem deployments and puts a heavier focus on client-side input validation, API testing, authentication, authorization, and business logic. Classic CSRF attacks don't apply when the JWT is sent in an Authorization header rather than a cookie. Directory traversal is out of the question with dynamic routing in an SPA. You can't SQLi your way to dumped credentials when authentication lives behind a Cognito endpoint. There is no server to pop a webshell on when the front end is a static S3 bucket; instead, you want code execution in the application's run-time environment. When approaching Lambda functions, S3, and container instances at the OS level, there has to be an understanding of what is AWS territory and what is client territory (see the shared responsibility model). Of course, these are somewhat forced examples, but they make the point that the landscape has shifted.
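As one example of the manual approach where scanners give up: a GraphQL endpoint can often be mapped by hand with the standard introspection query. The sketch below uses an abridged version of that query (real engagements send the full one, enumerating arguments, types, and directives):

```python
import json

# Abridged form of the standard GraphQL introspection query -- a common
# first manual step when automated tools can't map a GraphQL API.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    mutationType { name }
    types {
      name
      kind
      fields { name }
    }
  }
}
"""

def introspection_payload() -> bytes:
    """JSON body to POST to the GraphQL endpoint."""
    return json.dumps({"query": INTROSPECTION_QUERY}).encode("utf-8")
```

If introspection is enabled in production (it frequently is), the response enumerates every queryable type and mutation, which then drives targeted authorization, business-logic, and input-validation testing instead of blind fuzzing.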

Ultimately the point I’m trying to make is:

**The more you understand about development and deployment capabilities, and overall application security within the SDLC, the more effective the penetration test will be.**

It’s an obvious statement, I’m sure, but you can’t be a truly effective offensive player if you don’t keep up with the defensive and production capabilities.

Now, it is important to understand that cloud-native architectures are not bulletproof; it’s just a shift in the attack surface. Increased complexity brings a new array of issues: mishandling of JWTs, improper input validation, secrets and access tokens leaked in source control, and over-permissive authorization, to name a few, but the list goes on. Keep an eye on HackerOne’s Hacktivity for some examples. Remember, combine the human element with the increased need for speed and scalability, and mistakes will always happen.
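To illustrate the "secrets leaked in source control" class: AWS access key IDs follow a well-known format, so even a naive pattern scan over a repository catches accidental commits. A minimal sketch; the sample key below is AWS's documented example key, not a real credential:

```python
import re

# AWS access key IDs start with a known prefix (AKIA for long-term keys,
# ASIA for temporary STS keys) followed by 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_keys(text: str) -> list[str]:
    """Return candidate AWS access key IDs found in the given text."""
    return AWS_KEY_RE.findall(text)

# AWS's documented example key, safe to use in docs and tests.
hits = find_aws_keys('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"')
```

Dedicated tools (truffleHog, git-secrets, and the like) scan full git history with many such patterns plus entropy checks, but the underlying idea is this simple.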

To effectively test an environment, consider your threat model and emulate those threats in your penetration tests, along with a testing approach that stems from your understanding of the ecosystem. Additionally, consider doing a secure architecture review with the client to provide input on security best practices and highlight potential issues that reside behind the scenes. We’ve found a combination of these to be effective in ensuring coverage. Again, the more you understand about the application’s architecture and its business objectives, the more effective your security work will be. Take the time to keep up with full-stack development capabilities, cloud-native architectures, and the application security strategies that go with them. It’s not always about security testing, and it’s easy to get left behind if you aren’t careful.

At OccamSec, we are always striving to stay up to date on both development and application security practices to steer our continued security research and ensure our WAPT methodology is effective when testing the most bleeding-edge deployments.