Without a doubt, I had a great time talking shop (Transforming and Securing Education Through Tech) with the team from Cyber Magazine (https://cybermagazine.com). You can read the interview at this link to the magazine article.
We talk about why security is critical to the present and future of education, especially considering that the face of education is changing.
I recently had a great Q&A session with Education Technology Insights where I shared some thoughts. The subject was cybersecurity: some general thoughts on what is current and what may be coming. This was enjoyable in that it had me step back a bit and think about the bigger, more abstract, picture.
The questions they asked me:
1. What are some of the major challenges and trends that have been impacting the Cybersecurity space lately?
2. What keeps you up at night when it comes to some of the major predicaments in the Cybersecurity space?
3. Can you tell us about the latest project that you have been working on and what are some of the technological and process elements that you leveraged to make the project successful?
4. Which are some of the technological trends which excite you for the future of the Cybersecurity space?
5. How can the budding and evolving companies reach you for suggestions to streamline their business?
The name of the article with my perspectives is “Protecting Critical Space Assets from Cyber Threats” and it can be found here: link.
Here I share some humble application security advice from an old-school practitioner. This advice is for practitioners and cybersecurity leaders (CISO, CSO, etc) alike. I have been a player in the application security (appsec) space for many years and I see the appsec space through a fairly wide lens of both the offensive and defensive arenas. My lens factors in secure coding, layer 7 protective mechanisms, processes, and things like pen testing. My background in these areas spans back to before the days of automated tools in pen testing; we did things manually and actually had to deeply understand stuff under the hood. Back then it was both an art and a science. I am not so sure these days. By 2005 I had professionally performed enough pen tests that I confidently wrote a book on the subject.
I can’t think of any modern day organization that does not have a business-centric Internet presence. This of course implies some web app, or web site. And these of course need protection. This protective journey starts way before the first request is responded to via some port open to the Internet (whether directly or via proxy). From my perspective, there are a few key areas relevant to securing software that are critical.
Before this discussion goes any further let me clearly state something. Appsec is a journey. One that you either wholeheartedly embrace or don’t bother at all. Too many appsec initiatives are driven by some externally mandated compliance, or it’s the scenario where some software engineering team is forced to do this, and frankly just doesn’t want to. Straight talk – out of all the software engineers you have met how many actually give a crap about security? 28 years in for me and that number is minuscule.
This means it is on us to be advisors and aim to positively influence. It’s on us to weave this into the day to day reality of other teams, but we have to be all in. You have to be deep about appsec and be committed. Otherwise feel free to just stop reading here.
Another point I will make here may upset some and that is ok. More straight talk for my security peers ….. you do no one any justice when you, or any “security” expert, comes to the table with software engineers to discuss their “insecure” coding, yet it is obvious that you have never actually coded anything yourself. A software engineer will see right through you and will silently (hopefully at least) be thinking, “what the %$#& do you know about what you are actually saying?”
Maturity is a big factor when pursuing the build out of an appsec program. One could argue that a certain level of maturity is inherent when an organization is even thinking about appsec formally. One major challenge will be your ability to positively influence the engineering culture of your organization. Proper influence is key here because a force fed program will get you limited results and I assure you there is not enough time to review every line of code written for some given cycle. Hence, your goal is to positively influence the relevant people and the process to want to be a part of this. A “shift left” process, for instance, should become a mutually desired business enabler.
Both sides, software engineering and cybersecurity, should be after the same goal of a secure, resilient, functional piece of customer facing software. So building a successful appsec program requires commitment from each side. This will require some tactful education on the side of cybersecurity leadership. And let’s face it, some organizations are just not wired, culturally, to actually have a good appsec program, if any at all.
Organizational Culture
Let’s start with the organizational culture. To keep this relevant, let me be clear that not all organizations need a dedicated appsec team. The ones that do are generally building something and pushing that something out for customers to utilize. If, for instance, your organization’s entire tech stack consists of a bunch of integrated SaaS solutions, there may not be much for an appsec team to do. Those orgs can probably get away with periodic consultants reviewing security configurations from the SaaS vendors and perusing the integrations for weak controls. But honestly, those orgs are at the security mercy of the SaaS vendor.
Mature organizations want application security, and security in general, to silently just be there. This is ultimately the goal of security as a business enabler: the more silent it is, the better. The challenge, however, is that security hurts and it costs (time, money, effort and resources). As much as people love to push the enablement angle, security simply doesn’t enable anything on its own in most typical businesses, hence why it is seen as a necessary evil. Changing organizational culture is critical, but you have to factor in the reality of what I just mentioned. Weaving security thinking into an organization’s culture, especially in the engineering space, is foundational. Without this an appsec program will fail.
In the spirit of not being that “department of NO”, or not being a blocking entity, it’s up to us to figure out how to be “enablers”. This “how” is not trivial and organizationally subjective. Shifting left is a very common way to ease into enablement. To me, this requires moving security elements (code scanning, vulnerability scans, DAST/SAST, pen tests, etc) to be impactful earlier in an engineering and/or automated build cycle. Accomplish that and you may just have enabled a better solution to be built and deployed.
Focal Areas
In order to build this appsec program you now realize you have to positively change the culture of an organization. This will take time and perseverance. It will also take focus and a sound strategy. Here are a few areas that should be front and center in your appsec journey:
SSDLC – The key focal area for positive impact will be the Software Development Life Cycle (SDLC). You need to help your organization’s engineering entities transform this into a Secure SDLC (SSDLC). I suggest you take slow and small steps here, as change is difficult to accept. This is especially so if you are an outsider (and you most likely are from their perspective) asking them to change.
Focusing on adding security value with changes/additions to an SDLC, the typical areas you will hear experts speak about are:
secure code training (for software engineers)
secure design reviews (typically at an architectural level)
pen tests (internal and/or external)
risk assessments
regular advisories
secure code review (when/where possible)
For those of us who have really done this, we know it is never that cut and dry. Moreover, to accomplish all of that, your appsec team and budget had better be pretty hefty. Software engineers are not going to welcome with open arms what you are asking them to do (more work, longer deadlines, harder testing, etc). You must choose wisely which of those areas you will initially push for and look to make allies who willingly engage. The iron fist approach hardly ever works.
Depending on the size of your team, your budget, the size and number of the target engineering teams, and company support (organizational culture), you may find a great approach is to embed appsec folks into the engineering teams/squads. This will create institutional knowledge and tailor the appsec program based on the domain expertise they will gain. The downside is that it takes resources away from your normal operations, but my experience is that this is an acceptable cost.
Measure Maturity – Maturity matters and you should set a goal of formally tracking progress in respect to your appsec program. Two popular frameworks for establishing a baseline and measuring this over time are:
OWASP Software Assurance Maturity Model (SAMM)
Building Security In Maturity Model (BSIMM)
There is a nice side-by-side comparison of the two here. While both of these frameworks seem straightforward, give them some thought. See which one fits best based on your intimacy with the culture of your organization. One note: it’s ok for your scores to drop every now and then. This is a space that is highly impacted by certain events. Take a Merger & Acquisition (M&A) event, for example; you have little control over what you inherit. This could instantly drop your scores through no fault of your own. So the scores are a great metric, but one that requires you to go with the flow a bit.
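If it helps to picture the tracking itself, here is a minimal sketch that compares two maturity snapshots over time. The practice names and scores are made up for illustration; this is not an official SAMM or BSIMM scoring tool.

```python
# Minimal sketch: tracking SAMM-style maturity scores over time.
# The practice names and scores are illustrative placeholders,
# not real assessment data or an official scoring tool.

from statistics import mean

# Two hypothetical assessment snapshots (scores on a 0-3 scale, as in OWASP SAMM).
baseline = {"Governance": 1.0, "Design": 0.5, "Implementation": 1.5,
            "Verification": 1.0, "Operations": 2.0}
current  = {"Governance": 1.5, "Design": 1.0, "Implementation": 1.25,
            "Verification": 1.5, "Operations": 1.75}

for practice in baseline:
    delta = current[practice] - baseline[practice]
    trend = "up" if delta > 0 else "down" if delta < 0 else "flat"
    print(f"{practice:15s} {baseline[practice]:.2f} -> {current[practice]:.2f} ({trend})")

print(f"Overall maturity: {mean(baseline.values()):.2f} -> {mean(current.values()):.2f}")
```

Note that a dip in one practice (Operations, in this made-up data) is exactly the kind of movement an M&A event can cause, which is why trends matter more than any single number.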
Build relationships – This area really encompasses two distinct areas, testing and operational work. Building a relationship with Quality Assurance (QA) testing teams may prove very beneficial. After all, functional testing can very well go hand in hand with some security testing. Having security functions injected into other areas, such as regression testing, may prove valuable as well.
While automation plays a big role here, you may be disappointed to find that in 2022 there is still a lot of manual QA testing being performed. As humbling as that may be, do you actually think your automated appsec scanners can catch everything? Irrespective, building relationships such that discoveries are made prior to production releases will prove invaluable over time. So work with QA to, for instance, have pen test elements built into some of their processes. They can become total allies over time.
Part of the relationship ecosystem is all about ownership. We are in no way trying to “pass the buck”, but other teams, such as software engineering, need to own part of the security responsibility. They will ultimately have more of an impact on day to day security decision making than anyone outside of their team(s). RACI charts can be effective in identifying ownership borders clearly, so consider them as part of your arsenal. The appsec team is there to advise and try to influence best decisions but generally won’t have the authority to force much of anything.
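To make the RACI point a bit more concrete, here is a minimal sketch of what an ownership mapping for a few appsec activities might look like as data. The activities, team names, and assignments are purely illustrative, not a prescription.

```python
# Minimal sketch: a RACI mapping for a few appsec activities.
# Team names and assignments are illustrative, not a prescription.

raci = {
    # activity: Responsible, Accountable, Consulted, Informed
    "Secure code review":  {"R": "Software Engineering", "A": "Engineering Lead",
                            "C": "AppSec",               "I": "CISO"},
    "Pen testing":         {"R": "AppSec",               "A": "CISO",
                            "C": "Software Engineering", "I": "QA"},
    "Dependency patching": {"R": "Software Engineering", "A": "Engineering Lead",
                            "C": "AppSec",               "I": "Operations"},
}

for activity, roles in raci.items():
    print(f"{activity:22s} " + "  ".join(f"{k}={v}" for k, v in roles.items()))
```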
Invest in the right tooling – Focus on solving problems and building solutions, not implementing products. This area is broad and really spans numerous key areas of an appsec program. The goal is to positively impact an entire ecosystem from the left and the right. On the left there is the SSDLC and all the goals already mentioned; you will need to invest in some tooling there. On the right there are architectural components, pen testing, and a host of other initiatives that range from systems thinking to active protection. Active protection, for instance, will require tooling as well. But remember, build solutions.
Automation is going to play a key role in the overall impact of your appsec program. Building security components into X-as-code (where X can be build or infrastructure, etc) initiatives can add a lot of value. Injecting blocking security mechanisms into your organization’s CI/CD pipelines is a great way of designing guardrails directly into some processes.
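As an illustration of that kind of guardrail, here is a minimal sketch of a blocking pipeline step. The scanner command and its JSON output format are assumptions made for the sake of the example; swap in whatever SAST or dependency scanning tooling your organization actually uses.

```python
# Minimal sketch of a blocking CI/CD guardrail step.
# Assumptions: "some-sast-scanner" is a placeholder command for whatever
# SAST/dependency scanner you actually use, and its JSON output format
# (a list of findings with a "severity" field) is invented for illustration.

import json
import subprocess
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def run_scan() -> list[dict]:
    # Replace with your real scanner invocation and output parsing.
    result = subprocess.run(
        ["some-sast-scanner", "--format", "json", "."],
        capture_output=True, text=True, check=False,
    )
    return json.loads(result.stdout or "[]")

def main() -> int:
    findings = run_scan()
    blockers = [f for f in findings if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blockers:
        print(f"[BLOCKED] {f.get('rule')}: {f.get('file')}:{f.get('line')}")
    # A non-zero exit code is what actually makes the pipeline stage fail.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(main())
```

The design choice worth noting is that the gate is just an exit code; the pipeline itself decides whether a failed stage blocks the deploy or merely warns, which gives you a gentle on-ramp with engineering teams.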
Other Areas
Set boundaries – The scope of your appsec program will be critical. Scope is very subjective per organization. In order to set your team up for success set the boundaries early. For example, is your program going to cover database security? What about protecting the connections from apps, APIs, etc to databases? What about file security and protection? In some organizations those elements belong to an appsec team and this needs to be defined clearly.
Gain intimacy – Make sure your appsec team gains some intimacy with all relevant software engineering processes (to include tech stacks, 3rd party libs, hosting environments, file transfer mechanisms, build processes, etc). This intimacy will allow your program to be effective with some of its goals, especially the one related to building and implementing an SSDLC.
Intimacy also has a direct impact and/or outcome related to the relationships you want to build. Bi-directional communication will prove invaluable. Get into the weeds with software engineering teams and, assuming you earn their respect, you will quickly identify who will be an ally and a voice for your appsec program(s).
Build inventories – You can’t effectively protect what you are not aware of. In my experience software engineers are generally bad at documentation. Part of your appsec program should aim at inventorying the application landscape and what it is made of. Pay close attention to the hidden “gotchas”, such as the APIs that are built into an app’s infrastructure, and hosted on the same server and transport (i.e. HTTPS) mechanism. Don’t limit your viewpoint here and consider infrastructure components along with supply chain elements as a part of your inventory. A lot of interesting security problems are most likely lurking there. A good inventory should at least expose some of these areas of interest.
As part of the inventory, consider also building a control inventory. Frameworks (e.g. MITRE ATT&CK) can help keep this organized as well as help make sense of adversarial tactics, techniques, and procedures (TTP). Creating an inventory of this sort can expose areas in your attack surface that may need attention.
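To give a feel for what such an inventory entry might capture, here is a minimal sketch. The application, its components, and the ATT&CK technique IDs mapped to controls are illustrative placeholders, not a real inventory.

```python
# Minimal sketch of an application + control inventory entry.
# The application, its components, and the ATT&CK technique IDs mapped to
# controls are illustrative placeholders, not a real inventory.

from dataclasses import dataclass, field

@dataclass
class AppInventoryEntry:
    name: str
    owner_team: str
    apis: list[str] = field(default_factory=list)           # the hidden "gotchas"
    third_party_libs: list[str] = field(default_factory=list)
    hosting: str = ""
    controls: dict[str, list[str]] = field(default_factory=dict)  # control -> ATT&CK technique IDs

portal = AppInventoryEntry(
    name="customer-portal",
    owner_team="payments-engineering",
    apis=["/api/v1/accounts", "/api/v1/transfers"],   # same host, same HTTPS listener
    third_party_libs=["openssl", "log4j", "jackson-databind"],
    hosting="aws-vpc-prod",
    controls={"WAF": ["T1190"], "MFA": ["T1078"]},    # e.g. Exploit Public-Facing App, Valid Accounts
)

# A quick gap check: anything exposed with no mapped control is worth a look.
if portal.apis and not portal.controls:
    print(f"{portal.name}: exposed APIs with no mapped controls")
```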
Build a risk register – This risk register will be focused on your apps and solutions. This should provide clarity in terms of risk the organization faces on a regular basis.
Create opportunities – Take every opportunity to get your message across and show how some security initiatives can be seamless and painless. For example, if you have an engineering team that creates compiled code, why not have a library built (i.e. shared or static lib) that performs numerous security related functions (i.e. input validation, header setting, encoding/decoding, etc)? Then have the engineers take a look at how easy certain things become when they call exposed functions in that library. That is a seamless way of getting your objective some traction.
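The point above is about a compiled shared or static lib, but just to illustrate the kinds of helper functions such a library might expose, here is a minimal sketch in Python with invented function names. Treat it as a shape, not an implementation.

```python
# Minimal sketch of the kind of helper functions a shared security library
# might expose. The original point is about a compiled shared/static lib;
# Python is used here purely for illustration, and the function names are invented.

import base64
import html
import re

# Allow-list validation: reject anything outside an expected pattern.
_USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_username(value: str) -> bool:
    return bool(_USERNAME_RE.fullmatch(value))

def encode_for_html(value: str) -> str:
    # Output encoding so user-supplied data can't break out of the HTML context.
    return html.escape(value, quote=True)

def decode_b64(value: str) -> bytes:
    # validate=True rejects characters outside the base64 alphabet.
    return base64.b64decode(value, validate=True)

def security_headers() -> dict[str, str]:
    # A conservative default header set the app can merge into every response.
    return {
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        "Content-Security-Policy": "default-src 'self'",
    }
```

Once engineers see that a one-line call gets them validation or sane headers, the “security is extra work” argument starts to lose steam.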
Gather metrics that matter – You will need to relay the value add, and effectiveness, of your appsec program to the corporate executives and/or the C-suite. Focus on metrics that matter. Not all metrics will be relevant to an appsec program. Any type of cost savings is always welcome, while you can also measure and track program adoption. Strategically, use one of the frameworks (SAMM or BSIMM) discussed earlier to show progress of maturity.
Final Thoughts
Security leaders set strategy and create programs, but more importantly we advise. A solid appsec program is one of those areas where we advise an organization. It is on us to mold an appsec program and focus it to map to, and benefit, the business. The points I have made here should give you a good sense of how you can enable a business via an appsec program.
Designing a program takes effort and thought. Factor in the people, processes, technology, and culture of the organization. Factoring these elements in is a continuous process as things, and organizations, change. They have to adapt and overcome constantly. So does your program. As long as you are continuously improving, and keeping step with the business, you will get positive results.
Understand that the biggest impact will come from the partnerships you pursue. This will be crucial in terms of having positive influence on the culture of your organization. Support this with some, hopefully many, of the technical components discussed in this writing and you will not be disappointed with the results. Stay focused on the fact that your appsec team should exist to be advisory, as subject matter experts.
Cybersecurity, and appsec specifically, subjectively mean different things to organizations. For some organizations these elements are simply part of modern day business success. My humble advice can get wrapped up as this: position yourself as a facilitator and an advisor, one that can transparently (as much as is possible) leverage the advice provided here to actually enable safe application based business. I wish you the best with your appsec initiatives.
Cybersecurity metrics: the challenge of measuring what matters.
Warning: there are a number of somewhat abstract concepts presented here. I know some people have a hard time with abstraction so please read with an open mind. There are a lot of open ended questions as well. This article is intended to spark thought outside of the norm as it relates to cybersecurity metrics.
As an industry we ([cyber | information] security) have struggled to pin down the art, and science, of cybersecurity metrics. It is a struggle that some feel they have mastered. I am not entirely convinced. Moreover, I see the general consensus sadly playing in the safe zone when it comes to this subject. It takes courage to measure what matters as opposed to measuring what is possible, or easy. It also takes courage to measure elements that are somewhat in the abstract because of the difficulty at hand.
I acknowledge that “what matters” is subjective to four entities, the organization, its C-Suite, its varying board members and us (Cybersecurity leadership). We can steer the conversation once there is internal clarity in reference to the items that really matter.
One of the enemies we have to contend with is our indoctrination to always strive for 100%. This score, level, or grade is simply unachievable in most environments. And what really constitutes 100%? Is it that our organization has been event-less by way of incidents, breaches and/or data exfiltration? What constitutes the opposite, or a score of 0 (zero)? We have to stop thinking like this in order to get a realistic sense of metrics that matter.
My contention is that we need a small, tight, set of metrics that are representative of real world elements of importance. This comes with a fear alert, because in some cases measuring these areas will show results that come off as some type of failure. We need not feel like this is reflective of our work, we are merely reporting the facts to those who need them. “Those” would generally be the board and the C-Suite. They will probably have a hard time initially understanding some of these areas and admittedly they are very difficult to measure/quantify.
It is the job of an effective CISO to make sense of these difficult to understand areas and educate those folks. But the education aspect is not just about understanding them; it is about how to extract value from them. This is where the courage comes in, because a lot of people have a hard time accepting that which is different from what they are accustomed to.
Subjectivity is important here. There are few formulas in the world of cybersecurity and what matters to one organization may have little relevance elsewhere. Organizations need to tailor their goals, and in turn the measuring mechanisms, based on what matters to them. This of course has a direct impact on what risk areas come to light, which ones need to be addressed with urgency and those that can wait. Hitting these subjective goals (that should be defined by the metrics) could also bring about ancillary benefits. For instance this could force the issue of addressing technical debt or force a technology refresh.
Here are some suggestions (nowhere near exhaustive) that are top of mind in respect to metrics we tend not to pursue (mainly due to the difficulty of measuring them):
Effectiveness of active protection mechanisms – This one seems obvious at face value. Grab some statistics after the implementation of some solution, for instance a Web Application Firewall (WAF) and show how many awful things it has prevented. But this is such a fragmented perspective that it may provide a false sense of security. What about your machine to machine communications deeper in your network (VPC or otherwise)? How are you actively protecting those (API requests/responses, etc) entities?
I find the bigger challenge here is ecosystem wide coverage and how you show the relevant effectiveness. There are other difficult to measure areas that directly impact this one, such as attack surface management. But if we, as an industry, are ever going to get ahead of attackers, even in the slightest way, this is an area we all need to consider.
Reproducibility – The “X as a Service” reality is here and quite useful. “X” can be infrastructure, it can be software, it can be many things depending on the maturity and creativity of an organization.
From the software perspective, what percentage of your build process exists within a CI/CD pipeline, or process? This strongly sets a reproducibility perspective. Within a CI/CD process many areas, such as security, resilience and DR, can be covered in automated fashion. Vulnerability management, and patching, can be included here as well. It’s 2022 and if your organization hasn’t invested in this area you need to put some metrics together to make a case for this.
Attack Surface Management – What does your organization look like to an outsider? What does it look like to an insider? What does it look like when you factor in ephemeral entities (such as elastic cloud resources)? Does your attack surface data factor in all assets in your ecosystem? What about interdependencies? Asset inventories are seldom accurate and so possibly your attack surface is a snapshot in time as opposed to something holistic.
There is a lot to consider in terms of attack surface metrics yet it is such a key component to a healthy cybersecurity program. Please don’t think that any one specific product will cover you in this area, most are focused on external perspectives and miss the insider threat vector entirely.
Software Security – This is an enormous subject and one that deserves an entire write-up itself. The maturity of software security can certainly be measured with a software assurance maturity model (such as OWASP SAMM). Creating, and implementing, an SSDLC goes a long way in integrating security into the core software development process. Underlying any of these techniques is the need to map software to business processes. Otherwise you have a purely technical set of metrics that no one outside of tech will be able to digest.
Technical Debt – This area is complex as it can contextually refer to software that needs to be refactored or it can refer to legacy systems (stagnant or otherwise). Regardless of the context how does one measure the level, or severity, of technical debt within an organization? If a successful relevant model is created it will probably create a strong argument for some budget 🙂
Distance Vector – How far into your ecosystem can an attack get before it is detected and handled (whatever handling means to your organization)? The logic here is simple, the longer it takes to detect something the more you need to pay attention to that area. Think of APTs and how long some of them exist inside of your network before there is detection and response.
Time vector – Who is faster: you, the defender, or the attackers? There is a reality to the time factor, and every time your organization is targeted there is a bit of a race that takes place. Where you end up in the final results of that race dictates, to an extent, the success factor of an attack. This is very hard to measure. But spending time figuring out ways to measure this will yield an understanding of the threats you face and how well you will fare against them.
One great benefit of spending time assessing your time vector is that it will force you to measure your ability to successfully address entire families, or classes, of attacks. Having the macro focus, as opposed to the typical micro focus may bring about an interesting level of discipline with your technical teams. Basically, they will be forced to think big and not exclusively on edge, or corner, cases.
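One simple way to start quantifying the time vector is plain dwell time, the gap between initial compromise and detection, tracked per incident. Here is a minimal sketch with made-up timestamps; in practice the inputs would come from your incident records.

```python
# Minimal sketch: quantifying the "time vector" as dwell time per incident
# (time from initial compromise to detection). Timestamps are made up.

from datetime import datetime
from statistics import mean, median

incidents = [
    # (initial compromise, detected)
    (datetime(2022, 1, 3, 9, 15),  datetime(2022, 1, 3, 17, 40)),
    (datetime(2022, 2, 11, 2, 5),  datetime(2022, 2, 14, 8, 30)),
    (datetime(2022, 3, 22, 13, 0), datetime(2022, 3, 22, 13, 45)),
]

dwell_hours = [(found - start).total_seconds() / 3600 for start, found in incidents]

print(f"Mean dwell time:   {mean(dwell_hours):.1f} hours")
print(f"Median dwell time: {median(dwell_hours):.1f} hours")
print(f"Worst case:        {max(dwell_hours):.1f} hours")
```

Even this crude number, trended over quarters and broken out by attack class, starts to tell the macro story described above.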
Repeatability – How repeatable are key functions within your organization? Measuring repeatability is super difficult and yet this is a foundational aspect of mature cybersecurity programs. Playbooks exist for this exact reason and we do invest resources into creating, and maintaining, them. This means repeatability is undeniably important, and yet how do we quantify it?
Budgeting – How do we know if enough is being funneled into a security program? At the end of the day we can’t plug every hole. One strategy is to perform crown jewel assessments and focus on those resources. Another one is to analyze attack surface data and cover areas of importance. But how do we measure the effectiveness of these, and any other related, strategies?
Insufficient budget obviously reduces the ability of a security team to implement protective mechanisms. The metrics we focus on need to push for clarity in terms of what becomes possible. There’s most likely no correct amount of budget but we get a slice of some larger budget. What we get becomes a variable amount over some period of time. But the budget itself needs to be treated as a metric. Think of it this way, if you don’t get enough budget to cover the areas you know need attention then there will be gaps that are directly attributable.
Sadly a lot of budget increases come about because something bad has happened. But this (the fact that something bad happened) means more work needs to be done. And yet we struggle with the necessary quantification. Ultimately we should pursue business aligned initiatives irrespective of the difficulty of trying to pin down an accurate budget.
All-Out Mean time to recovery (MTTR) – Imagine the absolute nightmare scenario that your entire organization is decimated somehow. Imagine you are brought back to the stone ages of bare metal and have nothing but a few back-ups to recover from. How long will it take you to get your organization back to an operating business? Some organizations are well positioned to recover from isolated incidents, like a ransomware event. My thought process is around something far more catastrophic.
I am not sure there is an organization on the planet that can answer this question at breadth and depth. I fear that there is also a lot of hubris around this subject and some may feel this is not a situation they need to account for. The more typical all-out scenarios you may encounter focus on operational areas. For instance if all servers become unusable there is a DR plan that has been designed, tested, and tweaked to reach acceptable MTTR.
From a positive vantage point, the very act of trying to measure some of these admittedly challenging areas of operation will likely reveal many areas of improvement. That in and of itself may prove valuable to your organization in the long run. There are so many more we could come up with but the areas presented here are a decent starting point.
On the negative end there is an enormous challenge in that the board and the C-Suite might not understand these metrics. Hell, I can think of many IT leaders that won’t understand some of them. But these are not reasons to shy away from the challenge of educating folks. I understand the notion, and am a practitioner, of talking to the board on their terms, in their language. But is it truly beyond their capabilities to understand some of these points? Is it unreasonable for us to push for a deeper level of understanding and interaction from the board and the C-Suite on these metrics?
One suggestion is to be super consistent in the metrics you choose. By consistent I mean stick with them and show changes over time. The changes can be negative; that’s ok. No one is delusional enough to expect total positive momentum all the time. Your presentation of the metrics you choose will be an investment, and in time the board and the C-Suite will start to see the value, and that you are one persistent individual.
Ultimately, there are many superficial security metrics that keep you in a safe zone. I challenge all of us, myself included, to do better and be more creative. This will be difficult but I find it is well worth it. The outcomes may surprise you and the ancillary benefits (areas you will be driven to address, etc) may as well. There will of course be push back that these are difficult to understand. Or maybe the arguments revolve around the effectiveness of the message to the board and the C-Suite. But the fact that something is difficult is no reason to not tackle it.
Are we always pursuing real protective measures? Real cybersecurity, or the pursuit of an optical illusion? It is Q2 of 2022, and somehow there are corporate leaders (executives, board members, etc) that still don’t take cybersecurity seriously. As a result they are not interested in security (i.e. a mature program, actual protective mechanisms, etc) but are instead satisfied with the illusion of it. They want to invest the least possible in this area and yet have the best results.
I find this a fascinating, and disturbing, dynamic. In fact, I don’t understand how this is even possible given the reality of today’s corporate environments. A number of SEC proposed rules have made it abundantly clear that this needs to change. Moreover, the mainstream media coverage of cybersecurity related issues is very real. This alone should make cybersecurity an “in your face”, “top of mind” area of concern. It is an area directly linked to the survival of most modern-day businesses. And yet, some corporate leaders still see it as overhead, not worth great investment because it is difficult to link it to revenue generation.
In thinking about this I can’t help but link it to some of the horrible strategies I have run across over time. There is a message to corporate leaders here, and the formula is simple: you get what you pay for. It is delusional to expect stellar results on a shoestring budget. Furthermore, we are here to protect the company, its people, and its assets; we are not the enemy. Often we are perceived as such because these folks are just protecting the dollars and cents. Security hurts. And it costs money.
One humorous point to me is based on the introduction image at the top of this blog. The person trying to hold back the wolf clearly represents the corporate leaders I am writing about, while the wolf represents those attackers we are sure to face at some point in our cybersecurity leadership journey. The formula is simple and the outcome is obvious.
We all make mistakes but some cybersecurity decisions and/or strategies are just downright silly. They are of course dangerous as well. Here are some of the silliest ones I have encountered.
Cloud security – relying on some automagical security posture simply because you have digitally transformed to a specific cloud provider is just ___ (you fill in the blank). I have actually been told, with a straight face, “we are protected because we are hosting on cloud provider X”. Silly strategy.
Lack of incident means we are safe – some executives take the lack of incidents as the impetus to not have to invest in cybersecurity. So the strategy is purely reactive, their goal is to save money until there is an incident (material or otherwise). Then of course they will be mortified when something bad happens. Silliness.
Cyber-Insurance – having insurance in no way makes an organization secure, not even close. Not worrying about some negatively impacting event because one has insurance coverage is just ___ (again, you fill in the blank). So the notion of transferring risk (as if that is even a real possibility) rather than addressing it is ….. silly.
Reliance on tools – deploying even the best tools does not mean a “set it and forget it” approach will make them successful. Assuming security is easy because products and/or tools will handle everything is a silly strategy.
Reliance on THE tool – there is of course security by obscurity, but this is security by marketing. Some sales people are very good and of course their one product can solve all of your security issues. Actually believing that one specific tool can solve even a large portion of security issues within a mature/developed ecosystem is silly.
Obscurity – security by obscurity has been a “thing” for a long time. History has proven that this approach hardly ever works. But it is inexpensive and easier to pursue than building proper security controls. And some people out there underestimate the intelligence of the attackers we face on a regular basis. And no, the fact that you may be a small company means nothing to a cyber criminal. Trying to fly under the attackers’ radar, or assuming your obscure methods will outsmart them, are both silly strategies.
Ignoring it – one just can’t ignore security problems and hope they don’t become real. Hope is not a strategy, or at least it’s a silly one.
We will come back to that – this just never seems to happen. The notion of coming back to something problematic, at some future time, and “tightening things up” is just ___ (once again, you fill in the blank). Ignoring something you are aware of is downright irresponsible and of course, silly.
Assuming the vendor has you covered – assuming that a vendor does security right is off the mark on many levels. Products often get delivered and/or deployed with horrible security configurations (and plenty of easily guessed default credentials) all the time. This is yet another silly strategy.
Compliance equals secure – regulatory compliance does not equate to being secure or protected. Having an ISO-27001 certificate, and a SOC-2 Type 2 report, and a host of any other related compliance credentials, is not going to stop an attacker from being successful. Relying on looking good on some piece of paper is …. silly.
As a cybersecurity leader you must steer things towards a long term, sensible strategy. The foundation of this strategy should take into account all of the silliness I just wrote about. Otherwise, failure is inevitable.
Allow me to offer you a piece of Cybersecurity advice: be threat-led, but stay grounded in the real world. This is purely pragmatic advice. In my interactions with peers I often encounter those that worry about threats that most likely will never affect them. This is generally an honest mistake but can lead to some serious misguidance. Defensive security teams have limited resources and need to stay focused on threats that matter, that are real.
Over time we develop keen instincts that lead us to think of many angles and imagine many edge cases. But focusing on events that are unlikely to happen can lead to wasted resources and important areas left unprotected. Some of my observations and commentary below might be cynical. But when you have been in this game since the 90’s certain real world perspectives develop.
Nation State Threats
The bad news is that, yes, these are very real. The good news is that to a nation state the majority of businesses out there are probably not interesting. For example, if you are selling Pokémon cards online, I doubt a nation state is targeting your online presence and/or database(s) right now.
FOSS Threats
There has been a lot of talk lately about securing Free Open Source Software (FOSS). And yes there are security issues in that space (as with most pieces of software). But, if one steps back and analyzes the sheer volume of code that exists in FOSS projects, do you really think your focusing on this issue is going to have an impact? Couple the volume with the fact that in 2022 so many developers, especially FOSS contributors, still honestly don’t care about security, and you have to ask yourself if this is really a space worth your resources (energy, attention, etc).
Insider Threats
I have given this area much thought throughout my career. When I worked in the federal government this was a major concern. But truth be told most mere mortals are downright scared of getting in deep trouble (prison time, etc). This puts them in a conflicted state where their level of disgruntlement takes a back seat to the fear of being taken away in handcuffs.
Now there are of course those anomalous cases where an insider (typically a malicious/disgruntled employee or contractor) does lose touch with reality and takes nefarious action. They do have a great advantage in the domain knowledge gathered throughout their employment. But, an “insider” can also be an infiltrated foreign/corporate spy. Hah, caught ya 🙂
Admittedly, if you are an important enough entity to attract the attention of a nation state this type of threat is real.
Vulnerabilities in SaaS products
There is an undeniable trend to not want the headaches of hosting anything these days. As such, cloud based solutions, in particular Software as a Service (SaaS) solutions, are very popular. But they have bugs and security weaknesses too. Should you focus on getting those fixed? Can you actually get any of them fixed? Maybe you can, if you are Netflix or Disney and you go to AWS asking them to fix/address something. But normal sized companies, especially in the SMB size range, don’t have that kind of influence and probably have limited security resources. This means SaaS security might be an area that will yield very little for possibly a lot of effort.
Zero-Day Vulnerabilities
A zero-day vulnerability is simply an issue that has not yet been discovered or disclosed. Once it is used and discovered, that vulnerability loses its status as a zero-day. This means there is a window of time when nefarious actors can actually take advantage of the vulnerability in question. But these are not easy to discover and the discovery process typically requires a very advanced skill set.
In the past many CISOs led with FUD, and zero-days very nicely fed into that horrible strategy. Zero-days are stoppable if your security program is properly thought out and has enough protective layers. The bottom line with zero-days is that they are few and far between. Frankly, there isn’t much you can directly do about them unless you have a research team hunting them down full time.
Real-world Threats
It’s March of 2022 and relatively speaking gone are the days of hacking for bragging rights; this has become organized and is now big business. Many of those script kiddies grew up and realized they could make money off this stuff. But like most businesses there are reality based rules and constraints. This means the bad guys have agendas, bosses and resource constraints just like we, the defenders, do. Phil Venables did a great write up on this exact subject, definitely worth a read.
So taking the business approach to analyzing the world of cyber crime leads us to acknowledge that efficiency is a positive for these nefarious actors. Efficiency leads to smooth money. And so, for instance, we now see frameworks providing attack technology for rent. Why write custom code when you can just rent it? Re-usability comes into play as well; if something becomes repeatable then it leads to good business.
It isn’t just about the threats. If we take emotion out of the equation it becomes pretty clear that what most organizations need is basic Cyber hygiene. They just need to get back to the basics and build from there. They need to pragmatically put controls in place that are relevant. Relevance is important here. For instance implementing a specific SIEM “because this is what mature organizations have/do” is a wrong reason for this action.
Some areas to focus on
Modern day Cybersecurity is challenging and complex in both breadth and depth. We have to cover many areas while remaining pragmatic and focused. Moreover, every organization has different needs, even if they are just slightly unique. The following sections are high level thoughts and they by no means represent an exhaustive list:
Ransomware and phishing
Ransomware and phishing attacks are obviously very real. They require much attention due to the evolution of the sophistication we are encountering. The days of the solution being “having good backups” may be behind us. We need to be far more proactive. End user awareness has proven to wear off over time, so we continuously have to remind users to be vigilant (amongst other things). Going back to the hygiene point, and spilling into the reactive, we have to also make sure all relevant sources of log data are being covered. And then, once centrally available, all of that log data has to become actionable intelligence via analytics.
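As a tiny illustration of turning centralized log data into something actionable, here is a minimal sketch. The log format and threshold are invented for the example; in practice this logic would live in your SIEM or detection pipeline.

```python
# Minimal sketch: turning centralized log data into something actionable.
# The log format and threshold are invented for illustration; real analytics
# would live in your SIEM / detection pipeline.

from collections import Counter

log_lines = [
    "2022-03-01T10:01:02 auth FAIL user=alice src=203.0.113.7",
    "2022-03-01T10:01:05 auth FAIL user=alice src=203.0.113.7",
    "2022-03-01T10:01:09 auth FAIL user=bob src=203.0.113.7",
    "2022-03-01T10:02:00 auth OK user=carol src=198.51.100.3",
]

# Count authentication failures per source address.
failures_per_source = Counter(
    line.split("src=")[1].strip()
    for line in log_lines
    if " FAIL " in line
)

THRESHOLD = 2  # illustrative; tune to your environment
for src, count in failures_per_source.items():
    if count >= THRESHOLD:
        print(f"Possible brute force / credential stuffing from {src}: {count} failures")
```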
Endpoint
Endpoint concerns (including the human at that endpoint in the case of laptops/desktops) are in abundance. Preventative measures against standard-fare malware are table stakes now. This just has to be in place. Limiting other attack surfaces, such as macros, requires some diligence but can go a long way toward a stronger posture for an organization. Another obvious control that is now becoming commonplace is multi-factor authentication (MFA).
Users
User concerns will always be around. Until an organization can go fully passwordless, and/or implement a real zero trust environment (not a trivial task, and no, product X by itself cannot solve all of your zero trust needs), using a password manager will prove very useful.
Incident Response
There will be breaches, problems, outages, events, incidents, etc and so having a formal and documented incident response plan/program will prove invaluable. When something happens, you need a tested and repeatable way of responding irrespective of what human is at the helm.
Resilience
Resilience is an area that requires some focus. Roughly speaking, resilience is equal to a combination of an organization’s Business Continuity Plans (BCP), Disaster Recovery (DR) plans, and any proactive measures (Global Server Load Balancing (GSLB), high availability architectures, etc) aimed at increasing the availability of its solutions. Sometimes resource constraints force a focus on just those elements that are critical, and those are sometimes identified by a crown jewel analysis.
Proactivity
Being proactive should be a goal for every Cybersecurity and/or Information Security program to strive for. This hopefully puts you in a position such that when something happens you have a way of preventing things from escalating to an all out breach. Techniques here range from the use of Web Application Firewalls (WAF) to implementations of Intrusion [Detection | Prevention] Systems (IDS/IPS) that help you detect and possibly block nefarious activity. Automation is another area where some investments may prove worthwhile; you just have to be strategic about it and focus on relevant areas for your organization.
Patching
The age old practice of patching is still a must that can spare you some serious heartache. Following some simple steps can mitigate the risk of patching. And yes patching sometimes causes problems. So have non-production instances of your solutions available. This way patches will be safely applied and tested with your automated regression suites, or an army of manual testers, before getting pushed to production systems.
API
Irrespective of the size of your organization, at this stage in the tech game your organization most likely has a web presence (i.e. web applications, web sites, etc) and possibly web based Application Programming Interfaces (API). Securing those areas is obviously critical and approaches range from code level reviews (i.e. shift left, SAST, etc) to implementations of Web App Firewalls (WAF) to the use of Dynamic Application Security Testing (DAST) tools. Pay close attention to protecting your APIs as that can get tricky.
Cloud
Cloud security. Many of the elements already covered apply to securing cloud hosted resources. In particular we can focus on securing/protecting data stored on cloud resources. Typically this is data at-rest (i.e. files stored on some cloud storage) or data stored in some data store (i.e. relational database (DB), key/value or no-sql DB, etc). In the case of files, let’s be explicitly clear – volume/disk encryption is NOT the same as file encryption. When folks claim their data is secure at-rest, and the basis for this claim is volume encryption, their claim is arguable. Native file level encryption is different and a hybrid approach makes a lot of sense. Here is a good simple breakdown of this area.
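To make the volume-versus-file distinction concrete, here is a minimal sketch of native file level encryption using the third-party Python cryptography package (Fernet). Key management, which is the hard part in practice, is deliberately left out of the sketch.

```python
# Minimal sketch of file-level encryption, to make the volume-vs-file
# distinction concrete. Uses the third-party "cryptography" package (Fernet).
# Key management (where the key lives, how it rotates) is deliberately
# omitted and is the hard part in practice.

from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    f = Fernet(key)
    with open(path, "rb") as fh:
        plaintext = fh.read()
    with open(path + ".enc", "wb") as fh:
        fh.write(f.encrypt(plaintext))

def decrypt_file(enc_path: str, key: bytes) -> bytes:
    f = Fernet(key)
    with open(enc_path, "rb") as fh:
        return f.decrypt(fh.read())

if __name__ == "__main__":
    key = Fernet.generate_key()   # in reality this comes from a KMS/HSM, not the script
    encrypt_file("customer-report.csv", key)
    print(decrypt_file("customer-report.csv.enc", key)[:80])
```

Unlike volume encryption, a file protected this way stays ciphertext even when the volume is mounted and an attacker is reading files through a compromised application or account.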
Third Party Risk Management
Closely scrutinize your third-party vendors. Your vendor on-boarding process should be fairly thorough and followed up with periodic checks to ensure that all is still in order. Your procurement folks won’t like the delays caused by thorough checks but it is a core component of a good strategy to protect your organization.
Attack Surface Management
Having an asset / software inventory is absolutely critical in this day and age. But more important is what you do with that data.
Your asset inventory should be a critical component of your overall attack surface management strategy. The awareness you develop about your attack surface, and its continuous evolution, is super important and should be a major source of directive data in terms of where to focus some protective resources.
Your software inventory gives you a more granular perspective within those assets. If you have good threat intelligence sources, or have a dedicated team doing vulnerability management/discovery, then coupling some of those data points with elements from your software inventory can again help you be strategic in terms of where to expend resources.
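A minimal sketch of that coupling, with an illustrative inventory and a made-up “known vulnerable” feed, might look like the following; the assets, packages, and versions are placeholders, not real data.

```python
# Minimal sketch: cross-referencing a software inventory with vulnerability
# intelligence to decide where to spend resources first. Inventory contents
# and the "known vulnerable" versions are illustrative placeholders.

software_inventory = {
    "billing-api":     {"openssl": "1.1.1k", "log4j": "2.14.1"},
    "internal-wiki":   {"nginx": "1.18.0"},
    "customer-portal": {"log4j": "2.17.1", "nginx": "1.22.0"},
}

# Hypothetical feed of (package, vulnerable version) pairs from your
# threat intelligence / vulnerability management process.
known_vulnerable = {("log4j", "2.14.1"), ("openssl", "1.1.1k")}

for asset, packages in software_inventory.items():
    hits = [(pkg, ver) for pkg, ver in packages.items() if (pkg, ver) in known_vulnerable]
    if hits:
        print(f"{asset}: prioritize remediation of {hits}")
```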
Obviously, I can say much more about many of these areas. I hope to invest time in doing exactly that as things progress. Stay safe, vigilant and focus on those real threats.
Let’s face it, titles matter. By title, I am referring to your business title. In this case the reference is to the title of the one in charge of Cybersecurity. This may seem trivial. But outside of your place of employment this matters. It tells the outside world a lot about who they are interacting with and where you are in your career. And if you are very much involved in the industry (outside of your day job) then your business title is that much more important.
The Chief Information Security Officer (CISO) business title is probably the most known related to Cybersecurity and/or Information Security, this is especially so at executive levels.
But what about a CISO that also oversees areas like Site Reliability Engineering (SRE), DevOps, DevSecOps and/or IT Security? As of late I have seen a role titled “Chief Cyber Officer” and that may cover these other areas, as they could be considered not purely security in nature. This is of course arguable given that SRE and DevOps functions ultimately play into the “availability” tenet of the CIA triad.
Here are some examples of other relevant business titles I have encountered. They ultimately play the same role as a CISO (even if at varying levels):
Business Information Security Officer (BISO)
VP of Cybersecurity
SVP of Cybersecurity
Senior Director of [Cybersecurity | Information Security]
Director of [Cybersecurity | Information Security]
Ultimately, an organization needs someone in a role that has the last word on cyber/information security matters. Whatever form that role takes, the person in charge of Cybersecurity deserves a proper business title. If your role is not properly titled, your external persona is being done an injustice.
Proficiency in translation for a Chief Information Security Officer: the need for it is one of the most important lessons I have learned as my career has progressed. Translation is a critical skill. Not Spanish to English, or Russian to Japanese, but tech (geek) talk to business speak. In today’s world (this is written in Jan 2022) the CISO’s (or whatever term your organization uses) role is becoming more and more business-centric (my feelings on this will be the subject of a different post). Moreover, Cybersecurity, Risk and Resilience are now legitimately boardroom matters (well, for mature companies they are), which makes this ability to perform effective translation, as discussed here, essential if you desire success in a modern day business setting.
Hard Lesson
A hard lesson to learn is that no one cares about how smart you are. Especially not board members or business people who mostly see the world through a lens very different than ours. Most of us who have come up the technical ranks take pride in our in depth knowledge of tech. A lot of hard work and late nights went into acquiring that corpus of knowledge. So it is no surprise that when we first get into leadership and/or management we innately try to sound really smart with impressive tech verbiage. What we don’t realize during that stage of our development is that really …. NO ONE CARES. I assure you that for instance talking about malloc and free (if you come from the application security realm) is not going to impress some MBA who thinks in terms of spreadsheets and bottom lines.
Realization
As maturity sets in a bit ( hopefully 🙂 ) we realize that translating the message from geek talk to something digestible by business people, or those MBA-types, becomes a very valuable skill. It is all about conveying a message and if your message gets lost in translation you have failed.
/* For all of you technical purists who will look at the examples provided here and complain about them not being technically accurate – it doesn’t matter!! The technical accuracy of the verbiage does not matter, what matters is conveying an appropriate message to a person, or group of people, who need to be empowered by information in a way they can digest. Admittedly it took me years to come to terms with, and accept, that. Oh, and by the way that audience you need to empower are most likely the same people who control your Cybersecurity budget. */
Examples
Some examples of effective translations:
Web or Cloud hosted application = customer facing solution
Web Application Firewall (WAF) = protective mechanism for customer facing solutions
Security Information and Event Management (SIEM) solution = centralized repository of event data
Purpose of a WAF = to protect revenue generating resources
The few examples I have shared by no means represent an exhaustive list. If you have some good ones that you would like to see added to the list, email me (contact info) and I will add them. Use this format so that you can get credit:
Term = translated data (Credit: Your Name)
The bottom line is that proficiency in translation, for a Chief Information Security Officer or for cybersecurity leaders at all experience levels, is an essential skill.