Humble application security advice from an old school practitioner


Here I share some humble application security advice from an old school practitioner. This advice is for practitioners and cybersecurity leaders (CISO, CSO, etc) alike. I have been a player in the application security (appsec) space for many years and I see it through a fairly wide lens spanning both the offensive and defensive arenas. My lens factors in secure coding, layer 7 protective mechanisms, processes, and things like pen testing. My background in these areas goes back to before the days of automated pen testing tools; we did things manually and actually had to deeply understand what was going on under the hood. Back then it was both an art and a science; I am not so sure that holds these days. By 2005 I had professionally performed enough pen tests that I confidently wrote a book on the subject.

I can’t think of any modern day organization that does not have a business-centric Internet presence. This of course implies some web app or web site, and these of course need protection. This protective journey starts way before the first request is responded to via some port open to the Internet (whether directly or via proxy). From my perspective, there are a few areas that are critical to securing software.

Before this discussion goes any further, let me clearly state something. Appsec is a journey. One that you either wholeheartedly embrace or don’t bother with at all. Too many appsec initiatives are driven by some externally mandated compliance requirement, or by the scenario where a software engineering team is forced into it and frankly just doesn’t want to be. Straight talk – out of all the software engineers you have met, how many actually give a crap about security? 28 years in for me and that number is minuscule.

This means it is on us to be advisors and aim to positively influence. It’s on us to weave this into the day to day reality of other teams, but we have to be all in. You have to be deep about appsec and be committed. Otherwise feel free to just stop reading here.

Another point I will make here may upset some and that is ok. More straight talk for my security peers ….. you do no one any justice when you, or any “security” expert, come to the table with software engineers to discuss their “insecure” coding when it is obvious that you have never actually coded anything yourself. A software engineer will see right through you and will silently (hopefully at least) be thinking, “what the %$#& do you know about what you are actually saying?”

Maturity is a big factor when pursuing the build-out of an appsec program. One could argue that a certain level of maturity is inherent when an organization is even thinking about appsec formally. One major challenge will be your ability to positively influence the engineering culture of your organization. Proper influence is key here because a force-fed program will get you limited results, and I assure you there is not enough time to review every line of code written in any given cycle. Hence, your goal is to positively influence the relevant people and processes so that they want to be a part of this. A “shift left” process, for instance, should become a mutually desired business enabler.

Both sides, software engineering and cybersecurity, should be after the same goal of a secure, resilient, functional piece of customer facing software. So building a successful appsec program requires commitment from each side. This will require some tactful education on the side of cybersecurity leadership. And let’s face it, some organizations are just not wired, culturally, to actually have a good appsec program, if any at all.

Organizational Culture

Let’s start with the organizational culture. To keep this relevant let me be clear that not all organizations need a dedicated appsec team. The ones that do are generally building something and pushing that something out for customers to utilize. But if, for instance, your organization’s entire tech stack consists of a bunch of integrated SaaS solutions, there may not be much for an appsec team to do. Those orgs can probably get away with periodic consultants reviewing security configurations from the SaaS vendors and combing through the integrations for weak controls. But honestly, those orgs are at the mercy of each SaaS vendor’s security.

Mature organizations want application security, and security in general, to silently just be there. This is ultimately the goal of security as a business enabler. The more silent it is the better. The challenge, however, is that security hurts and it costs (time, money, effort and resources). As much as people love to push the enablement angle, security does not directly enable anything in most typical businesses. Hence they see security as a necessary evil. Changing organizational culture is critical, but you have to factor in the reality of what I just mentioned. Weaving security thinking into an organization’s culture, especially in the engineering space, is foundational. Without this an appsec program will fail.

In the spirit of not being that “department of NO”, or a blocking entity, it’s up to us to figure out how to be “enablers”. This “how” is not trivial and is organizationally subjective. Shifting left is a very common way to ease into enablement. To me, this requires moving security elements (code scanning, vulnerability scans, DAST/SAST, pen tests, etc) earlier in the engineering and/or automated build cycle, where they can have real impact. Accomplish that and you may just have enabled a better solution to be built and deployed.

Focal Areas

In order to build this appsec program you now realize you have to positively change the culture of an organization. This will take time and perseverance. It will also take focus and a sound strategy. Here are a few areas that should be front and center in your appsec journey:

SSDLC – The key focal area for positive impact will be the Software Development Life Cycle (SDLC). You need to help your organization’s engineering entities transform this into a Secure SDLC (SSDLC). I suggest you take slow and small steps here as change is difficult to accept. This is especially so if you are an outsider (and you most likely are, from their perspective) asking them to change.

Focusing on adding security value with changes/additions to an SDLC, the typical areas you will hear experts speak about are:

  • secure code training (for software engineers)
  • secure design reviews (typically at an architectural level)
  • pen tests (internal and/or external)
  • risk assessments
  • regular advisories
  • secure code review (when/where possible)

For those of us who have really done this, we know it is never that cut and dried. Moreover, to accomplish all of that your appsec team and budget had better be pretty hefty. Software engineers are not going to welcome with open arms what you are asking of them (more work, longer deadlines, harder testing, etc). You must choose wisely which of those areas you will initially push for and look to make allies who willingly engage. The iron fist approach hardly ever works.

Depending on the size of your team, your budget, the size and number of the target engineering teams, and company support (organizational culture), you may find that a great approach is to embed appsec folks into the engineering teams/squads. This builds institutional knowledge and lets you tailor the appsec program based on the domain expertise they will gain. The downside is that it takes resources away from your normal operations, but my experience is that this is an acceptable cost.

Measure Maturity – Maturity matters and you should set a goal of formally tracking progress with respect to your appsec program. Two popular frameworks for establishing a baseline and measuring it over time are:

  • OWASP SAMM (Software Assurance Maturity Model)
  • BSIMM (Building Security In Maturity Model)

There is a nice side by side comparison of the two here. While both of these frameworks seem straightforward, give them some thought. See which one fits best based on your intimacy with the culture of your organization. One note: it’s ok for your scores to drop every now and then. This is a space that is highly impacted by certain events. Take a Merger & Acquisition (M&A) event, for example: you have little control over what you inherit. This could instantly drop your scores through no fault of your own. So the scores are a great metric, but one that requires you to go with the flow a bit.

Build relationships – This area really encompasses two distinct areas, testing and operational work. Building a relationship with Quality Assurance (QA) testing teams may prove very beneficial. After all, functional testing can very well go hand in hand with some security testing. Having security functions injected into other areas, such as regression testing, may prove valuable as well.

While automation plays a big role here, you may be disappointed to find that in 2022 there is still a lot of manual QA testing being performed. As humbling as that may be, do you actually think your automated appsec scanners can catch everything? Regardless, building relationships such that discoveries are made prior to production releases will prove invaluable over time. So work with QA to, for instance, have pen test elements built into some of their processes. They can become total allies over time.

Part of the relationship ecosystem is all about ownership. We are in no way trying to “pass the buck”, but other teams, such as software engineering, need to own part of the security responsibility. They will ultimately have more of an impact on day to day security decision making than anyone outside of their team(s). RACI charts can be effective in clearly identifying ownership boundaries, so consider them part of your arsenal. The appsec team is there to advise and try to influence the best decisions, but it generally won’t have the authority to force much of anything.

Invest in the right tooling – Focus on solving problems and building solutions, not implementing products. This area is broad and really spans numerous key areas of an appsec program. The goal is to positively impact an entire ecosystem from the left and the right. On the left there is the SSDLC and all the goals already mentioned; you will need to invest in some tooling there. On the right there are architectural components, pen testing, and a host of other initiatives that range from systems thinking to active protection. Active protection, for instance, will require tooling as well. But remember, build solutions.

Automation is going to play a key role in the overall impact of your appsec program. Building security components into X-as-code (where X can be build, infrastructure, etc) initiatives can add a lot of value. Injecting blocking security mechanisms into your organization’s CI/CD pipelines is a great way of designing guardrails directly into those processes.
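
As a concrete illustration, here is a minimal sketch of one such guardrail: a gate script that fails a pipeline stage when a scanner report contains high severity findings. The report file name and JSON schema are assumptions for illustration only; adapt them to whatever your SAST or dependency scanning tool actually emits.

```python
# Minimal CI/CD guardrail sketch, assuming your scanner can emit a JSON report
# with a "findings" list where each finding carries a "severity" field.
# (The file name and schema are hypothetical.) Exiting non-zero fails the
# pipeline step and blocks the build.
import json
import sys
from pathlib import Path

BLOCKING_SEVERITIES = {"critical", "high"}  # tune to your risk appetite
MAX_BLOCKING_FINDINGS = 0                   # zero tolerance by default


def count_blocking_findings(report_path: str) -> int:
    """Count findings at or above the blocking severity levels."""
    report = json.loads(Path(report_path).read_text())
    return sum(
        1
        for finding in report.get("findings", [])
        if finding.get("severity", "").lower() in BLOCKING_SEVERITIES
    )


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"
    blocking = count_blocking_findings(path)
    if blocking > MAX_BLOCKING_FINDINGS:
        print(f"FAIL: {blocking} high/critical finding(s) -- blocking the build")
        sys.exit(1)
    print("PASS: no blocking findings")
```

The point is not this particular script; it is that the guardrail runs automatically, every build, with no human gatekeeping required.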

Other Areas

Set boundaries – The scope of your appsec program will be critical. Scope is very subjective per organization. In order to set your team up for success, set the boundaries early. For example, is your program going to cover database security? What about protecting the connections from apps, APIs, etc to databases? What about file security and protection? In some organizations those elements belong to an appsec team, and this needs to be defined clearly.

Gain intimacy – Make sure your appsec team gains some intimacy with all relevant software engineering processes (including tech stacks, 3rd party libs, hosting environments, file transfer mechanisms, build processes, etc). This intimacy will allow your program to be effective with some of its goals, especially the one related to building and implementing an SSDLC.

Intimacy also has a direct impact and/or outcome related to the relationships you want to build. Bi-directional communication will prove invaluable. Get into the weeds with software engineering teams and, assuming you earn their respect, you will quickly identify who will be an ally and a voice for your appsec program(s).

Build inventories – You can’t effectively protect what you are not aware of. In my experience software engineers are generally bad at documentation. Part of your appsec program should aim at inventorying the application landscape and what it is made of. Pay close attention to the hidden “gotchas”, such as APIs that are built into an app’s infrastructure and hosted on the same server and transport mechanism (i.e. HTTPS). Don’t limit your viewpoint here; consider infrastructure components along with supply chain elements as part of your inventory. A lot of interesting security problems are most likely lurking there. A good inventory should at least expose some of these areas of interest.

As part of the inventory, consider also building a control inventory. Frameworks (e.g. MITRE ATT&CK) can help keep this organized as well as help make sense of adversarial tactics, techniques, and procedures (TTP). Creating an inventory of this sort can expose areas in your attack surface that may need attention.
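
If you are starting from nothing, even a lightweight, structured inventory beats a spreadsheet nobody updates. Below is a minimal sketch of what application and control inventory records might look like; the field names are illustrative rather than any standard, and the ATT&CK technique mapping is just an example.

```python
# Minimal sketch of application and control inventory records (illustrative
# field names, not a standard). A coverage gap shows up wherever an app has
# no control that references it.
from dataclasses import dataclass, field


@dataclass
class AppInventoryItem:
    name: str
    owner_team: str
    tech_stack: list[str] = field(default_factory=list)    # languages, frameworks
    third_party_libs: list[str] = field(default_factory=list)
    exposed_apis: list[str] = field(default_factory=list)   # the hidden "gotchas"
    hosting: str = ""                                        # e.g. VPC, SaaS, on-prem


@dataclass
class ControlInventoryItem:
    name: str                                                # e.g. "edge WAF"
    covers_apps: list[str] = field(default_factory=list)
    attack_techniques: list[str] = field(default_factory=list)  # MITRE ATT&CK IDs


# Hypothetical usage:
inventory = [AppInventoryItem(name="billing-portal", owner_team="payments",
                              exposed_apis=["/api/v1/invoices"])]
controls = [ControlInventoryItem(name="edge WAF", covers_apps=["billing-portal"],
                                 attack_techniques=["T1190"])]  # Exploit Public-Facing Application
```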

Build a risk register – This risk register will be focused on your apps and solutions. It should provide clarity in terms of the risk the organization faces on a regular basis.

Create opportunities – Take every opportunity to get your message across and show how some security initiatives can be seamless and painless. For example, if you have an engineering team that creates compiled code, why not have a library built (i.e. a shared or static lib) that performs numerous security related functions (input validation, header setting, encoding/decoding, etc)? Then have the engineers take a look at how easy certain things become when they call exposed functions in that library. That is a seamless way of getting your objective some traction.
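
To make the idea more concrete, here is a minimal sketch of what such a library’s surface might look like. The article has a compiled shared or static lib in mind; Python is used here purely for illustration, and the function names and header baseline are my own assumptions, not a prescription.

```python
# Sketch of a shared security helper library: allow-list input validation,
# contextual output encoding, and a baseline set of response security headers.
import html
import re

# Conservative allow-list pattern for simple identifiers (tune per use case).
_IDENTIFIER_RE = re.compile(r"^[A-Za-z0-9_\-]{1,64}$")

SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
}


def is_valid_identifier(value: str) -> bool:
    """Allow-list validation for simple identifiers."""
    return bool(_IDENTIFIER_RE.match(value))


def encode_for_html(value: str) -> str:
    """Output encoding for values rendered into HTML bodies."""
    return html.escape(value, quote=True)


def apply_security_headers(headers: dict) -> dict:
    """Merge baseline security headers into a response header dict."""
    merged = dict(SECURITY_HEADERS)
    merged.update(headers)
    return merged
```

Once engineers see that one call gets them validation or a sane header baseline, the “security is extra work” argument loses a lot of its steam.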

Gather metrics that matter – You will need to relay the value add, and effectiveness, of your appsec program to the corporate executives and/or the C-suite. Focus on metrics that matter. Not all metrics will be relevant to an appsec program. Any type of cost savings is always welcome, and you can also measure and track program adoption. Strategically, use one of the frameworks (SAMM or BSIMM) discussed earlier to show progress in maturity.
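
As one example of an adoption metric, the sketch below rolls per-team SSDLC activity adoption into a single percentage you can trend quarter over quarter. The team names and activity list are hypothetical placeholders; the value lies in picking a small set of activities and measuring them consistently.

```python
# Sketch of a simple SSDLC adoption metric: the percentage of (team, activity)
# pairs that are actually in place, suitable for trending over time.
SSDLC_ACTIVITIES = ["secure code training", "design review", "SAST in pipeline", "pen test"]

# Hypothetical current state per team.
adoption = {
    "payments": {"secure code training", "SAST in pipeline"},
    "identity": {"secure code training", "design review", "SAST in pipeline", "pen test"},
}


def adoption_rate(team_activities: dict[str, set]) -> float:
    """Percentage of (team, activity) pairs currently covered."""
    total = len(team_activities) * len(SSDLC_ACTIVITIES)
    covered = sum(len(done & set(SSDLC_ACTIVITIES)) for done in team_activities.values())
    return round(100 * covered / total, 1) if total else 0.0


print(f"SSDLC adoption: {adoption_rate(adoption)}%")  # prints 75.0% for the data above
```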

Final Thoughts

Security leaders set strategy and create programs, but more importantly we advise. A solid appsec program is one of those areas where we advise an organization. It is on us to mold an appsec program and focus it to map to, and benefit, the business. The points I have made here should give you a good sense of how you can enable a business via an appsec program.

Designing a program takes effort and thought. Factor in the people, processes, technology, and culture of the organization. Factoring these elements in is a continuous process as things, and organizations, change. They have to adapt and overcome constantly. So does your program. As long as you are continuously improving, and keeping step with the business, you will get positive results.

Understand that the biggest impact will come from the partnerships you pursue. This will be crucial in terms of having positive influence on the culture of your organization. Support this with some, hopefully many, of the technical components discussed in this writing and you will not be disappointed with the results. Stay focused on the fact that your appsec team should exist to be advisory, as subject matter experts.

Cybersecurity, and appsec specifically, mean different things to different organizations. For some organizations these elements are simply part of modern day business success. My humble advice can be wrapped up as this: position yourself as a facilitator and an advisor, one who can transparently (as much as is possible) leverage the advice provided here to actually enable safe application-based business. I wish you the best with your appsec initiatives.

Cybersecurity metrics, the challenge, measure what matters


Warning: there are a number of somewhat abstract concepts presented here. I know some people have a hard time with abstraction so please read with an open mind. There are a lot of open ended questions as well. This article is intended to spark thought outside of the norm as it relates to cybersecurity metrics.

As an industry we ([cyber | information] security) have struggled to pin down the art, and science, of cybersecurity metrics. It is a struggle that some feel they have mastered. I am not entirely convinced. Moreover, I see the general consensus sadly playing in the safe zone when it comes to this subject. It takes courage to measure what matters as opposed to measuring what is possible, or easy. It also takes courage to measure elements that are somewhat in the abstract because of the difficulty at hand.

I acknowledge that “what matters” is subjective to four entities: the organization, its C-Suite, its various board members, and us (cybersecurity leadership). We can steer the conversation once there is internal clarity regarding the items that really matter.

One of the enemies we have to contend with is our indoctrination to always strive for 100%. This score, level, grade, is simply unachievable in most environments. And what really constitutes 100%? Is it that our organization has been event-less by way of incidents, breaches and/or data exfiltration? What constitutes the opposite, a score of 0 (zero)? We have to stop thinking like this in order to get a realistic sense of metrics that matter.

My contention is that we need a small, tight set of metrics that are representative of real world elements of importance. This comes with a fear alert, because in some cases measuring these areas will show results that come off as some type of failure. We need not feel like this is reflective of our work; we are merely reporting the facts to those who need them. “Those” would generally be the board and the C-Suite. They will probably have a hard time initially understanding some of these areas, and admittedly they are very difficult to measure/quantify.

It is the job of an effective CISO to make sense of these difficult to understand areas and educate those folks. But the education aspect is not just about understanding them; it is about how to extract value from them. This is where the courage comes in, because a lot of people have a hard time accepting that which is different from what they are accustomed to.

Subjectivity is important here. There are few formulas in the world of cybersecurity and what matters to one organization may have little relevance elsewhere. Organizations need to tailor their goals, and in turn the measuring mechanisms, based on what matters to them. This of course has a direct impact on what risk areas come to light, which ones need to be addressed with urgency and those that can wait. Hitting these subjective goals (that should be defined by the metrics) could also bring about ancillary benefits. For instance this could force the issue of addressing technical debt or force a technology refresh.

Here are some suggestions (nowhere near exhaustive) that are top of mind with respect to metrics we tend not to pursue (mainly due to the difficulty of measuring them):

Effectiveness of active protection mechanisms – This one seems obvious at face value. Grab some statistics after the implementation of some solution, for instance a Web Application Firewall (WAF), and show how many awful things it has prevented. But this is such a fragmented perspective that it may provide a false sense of security. What about your machine to machine communications deeper in your network (VPC or otherwise)? How are you actively protecting those entities (API requests/responses, etc)?

I find the bigger challenge here is ecosystem wide coverage and how you show the relevant effectiveness. There are other difficult to measure areas that directly impact this one, such as attack surface management. But if we, as an industry, are ever going to get ahead of attackers, even in the slightest way, this is an area we all need to consider.

Reproducibility – The “X as a Service” reality is here and quite useful. “X” can be infrastructure, it can be software, it can be many things depending on the maturity and creativity of an organization.

From the software perspective, what percentage of your build process exists within a CI/CD pipeline, or process? This strongly establishes a baseline for reproducibility. Within a CI/CD process many areas, such as security, resilience and DR, can be covered in automated fashion. Vulnerability management, and patching, can be included here as well. It’s 2022, and if your organization hasn’t invested in this area you need to put some metrics together to make a case for it.

Attack Surface Management – What does your organization look like to an outsider? What does it look like to an insider? What does it look like when you factor in ephemeral entities (such as elastic cloud resources)? Does your attack surface data factor in all assets in your ecosystem? What about interdependencies? Asset inventories are seldom accurate and so possibly your attack surface is a snapshot in time as opposed to something holistic.

There is a lot to consider in terms of attack surface metrics yet it is such a key component to a healthy cybersecurity program. Please don’t think that any one specific product will cover you in this area, most are focused on external perspectives and miss the insider threat vector entirely.

Software Security – This is an enormous subject and one that deserves an entire write-up of its own. The maturity of software security can certainly be measured with maturity models (one such model is OWASP SAMM). Creating, and implementing, an SSDLC goes a long way in integrating security into the core software development process. Underlying any of these techniques is the need to map software to business processes. Otherwise you have a purely technical set of metrics that no one outside of tech will be able to digest.

Technical Debt – This area is complex as it can contextually refer to software that needs to be refactored or it can refer to legacy systems (stagnant or otherwise). Regardless of the context how does one measure the level, or severity, of technical debt within an organization? If a successful relevant model is created it will probably create a strong argument for some budget 🙂

Distance Vector – How far into your ecosystem can an attack get before it is detected and handled (whatever handling means to your organization)? The logic here is simple: the longer it takes to detect something, the more you need to pay attention to that area. Think of APTs and how long some of them exist inside of your network before there is detection and response.

Time vector – Who is faster: you, the defender, or the attackers? There is a reality to the time factor, and every time your organization is targeted there is a bit of a race that takes place. Where you end up in the final results of that race dictates, to an extent, the success factor of an attack. This is very hard to measure. But spending time figuring out ways to measure this will yield an understanding of the threats you face and how well you will fare against them.

One great benefit of spending time assessing your time vector is that it will force you to measure your ability to successfully address entire families, or classes, of attacks. Having the macro focus, as opposed to the typical micro focus, may bring about an interesting level of discipline with your technical teams. Basically, they will be forced to think big and not focus exclusively on edge, or corner, cases.

Repeatability – How repeatable are key functions within your organization? Measuring repeatability is super difficult, and yet this is a foundational aspect of mature cybersecurity programs. Playbooks exist for this exact reason and we do invest resources into creating, and maintaining, them. This means repeatability is undeniably important, and yet how do we quantify it?

Budgeting – How do we know if enough is being funneled into a security program? At the end of the day we can’t plug every hole. One strategy is to perform crown jewel assessments and focus on those resources. Another one is to analyze attack surface data and cover areas of importance. But how do we measure the effectiveness of these, and any other related, strategies?

Insufficient budget obviously reduces the ability of a security team to implement protective mechanisms. The metrics we focus on need to push for clarity in terms of what becomes possible. There is most likely no “correct” amount of budget; we get a slice of some larger budget, and what we get becomes a variable amount over some period of time. But the budget itself needs to be treated as a metric. Think of it this way: if you don’t get enough budget to cover the areas you know need attention, then there will be gaps that are directly attributable.

Sadly a lot of budget increases come about because something bad has happened. But this (the fact that something bad happened) means more work needs to be done. And yet we struggle with the necessary quantification. Ultimately we should pursue business aligned initiatives irrespective of the difficulty of trying to pin down an accurate budget.

All-Out Mean Time to Recovery (MTTR) – Imagine the absolute nightmare scenario that your entire organization is decimated somehow. Imagine you are brought back to the stone ages of bare metal and have nothing but a few backups to recover from. How long will it take you to get your organization back to an operating business? Some organizations are well positioned to recover from isolated incidents, like a ransomware event. My thought process is around something far more catastrophic.

I am not sure there is an organization on the planet that can answer this question at breadth and depth. I fear that there is also a lot of hubris around this subject, and some may feel this is not a situation they need to account for. The more typical all-out scenarios you may encounter focus on operational areas. For instance, if all servers become unusable, there is a DR plan that has been designed, tested, and tweaked to reach an acceptable MTTR.

From a positive vantage point, the very act of trying to measure some of these admittedly challenging areas of operation will likely reveal many areas of improvement. That in and of itself may prove valuable to your organization in the long run. There are so many more we could come up with but the areas presented here are a decent starting point.

On the negative end there is an enormous challenge in that the board and the C-Suite might not understand these metrics. Hell, I can think of many IT leaders that won’t understand some of them. But these are not reasons to shy away from the challenge of educating folks. I understand the notion, and am a practitioner, of talking to the board on their terms, in their language. But is it truly beyond their capabilities to understand some of these points? Is it unreasonable for us to push for a deeper level of understanding and interaction from the board and the C-Suite on these metrics?

One suggestion is to be super consistent in the metrics you choose. By consistent I mean stick with them and show changes over time. The changes can be negative, and that’s ok. No one is delusional enough to expect total positive momentum all the time. Your presentation of the metrics you choose will be an investment, and in time the board and the C-Suite will start to see the value, and that you are one persistent individual.

Ultimately, there are many superficial security metrics that keep you in a safe zone. I challenge all of us, myself included, to do better and be more creative. This will be difficult but I find it is well worth it. The outcomes may surprise you and the ancillary benefits (areas you will be driven to address, etc) may as well. There will of course be push back that these are difficult to understand. Or maybe the arguments revolve around the effectiveness of the message to the board and the C-Suite. But the fact that something is difficult is no reason to not tackle it.

Some of the silliest cybersecurity strategies

We all make mistakes but some cybersecurity decisions and/or strategies are just downright silly. They are of course dangerous as well. Here are some of the silliest ones I have encountered.

Cloud security – relying on some automagical security posture simply because you have digitally transformed to a specific cloud provider is just ___ (you fill in the blank). I have actually been told, with a straight face, “we are protected because we are hosting on cloud provider X”. Silly strategy.

Lack of incidents means we are safe – some executives take the lack of incidents as a reason not to invest in cybersecurity. So the strategy is purely reactive: their goal is to save money until there is an incident (material or otherwise). Then of course they will be mortified when something bad happens. Silliness.

Cyber-Insurance – having insurance in no way makes an organization secure, not even close. Not worrying about some negatively impacting event because one has insurance coverage is just ___ (again, you fill in the blank). So the notion of transferring risk (as if that is even a real possibility) rather than addressing it is ….. silly.

Reliance on tools – deploying even the best tools does not mean a “set it and forget it” approach will make them successful. Assuming security is easy because products and/or tools will handle everything is a silly strategy.

Reliance on THE tool – there is of course security by obscurity, but this is security by marketing. Some sales people are very good and of course their one product can solve all of your security issues. Actually believing that one specific tool can solve even a large portion of security issues within a mature/developed ecosystem is silly.

Obscurity – security by obscurity has been a “thing” for a long time. History has proven that this approach hardly ever works. But it is inexpensive and easier to pursue than building proper security controls. And some people out there underestimate the intelligence of the attackers we face on a regular basis. And no, the fact that you may be a small company means nothing to a cyber criminal. Trying to fly under the attackers’ radar, or assuming your obscure methods will outsmart them, are both silly strategies.

Ignoring it – one just can’t ignore security problems and hope they don’t become real. Hope is not a strategy, or at least it’s a silly one.

We will come back to that – this just never seems to happen. The notion of coming back to something problematic, at some future time, and “tightening things up” is just ___ (once again, you fill in the blank). Ignoring something you are aware of is downright irresponsible and of course, silly.

Assuming the vendor has you covered – assuming that a vendor does security right is off the mark on many levels. Products often get delivered and/or deployed with horrible security configurations (and plenty of easily guessed default credentials) all the time. This is yet another silly strategy.

Compliance equals secure – regulatory compliance does not equate to being secure or protected. Having an ISO-27001 certificate, a SOC-2 Type 2 report, and a host of other related compliance credentials is not going to stop an attacker from being successful. Relying on looking good on some piece of paper is …. silly.

As a cybersecurity leader you must steer things towards a long term, sensible strategy. The foundation of this strategy should take into account all of the silliness I just wrote about. Otherwise, failure is inevitable.