Cybersecurity Empowers Businesses to Soar, this is how

Cybersecurity empowers businesses to soar: this is how. The modern-day notion that “cybersecurity is a business enabler” is a popular one. The problem is that most of the people singing that tune are cybersecurity leaders trying to get their message out. The C-suite and business folks may not agree, or even see this as actually being the case. So how exactly is cybersecurity a business enabler? This is an exploration of some concepts, with examples.

Cybersecurity teams typically focus on implementing guardrails and protective controls; that is the overt nature of the business. To be an enabler, the opposite (opening things up) may need to be a focal area. Reducing or eliminating excessive or unnecessary guardrails and controls could prove very effective. The targets of this exercise are controls that hinder innovation and new initiatives but don’t add any tangible value. Sometimes these exist in the form of technical debt because a predecessor thought they made sense.

In modern-day business, cybersecurity is no longer just about protective mechanisms. As technology advances and adds value to businesses, cybersecurity must be prioritized so that the organizational goal isn’t “business” but “safe business”. Safe business implies that asset and customer protection are important areas for an organization. Safe business is where cybersecurity empowers businesses to soar, this is how:

Risk Reduction

Any cyber event (attack, incident, etc) can have a significant negative impact both operationally and financially. There is also the potential reputational consequence, although there are studies that refute this; here is one example. To be objective, the reputational impact of a cybersecurity event cannot be solely measured by stock price. Irrespective, robust cybersecurity measures are a common way to reduce cyber-related risk, in turn providing some protection to organizational reputation and financial health (avoidance of costly legal fees, loss of revenue, etc).


Example: An organization goes through the pain of implementing native, end-to-end encryption (E2EE) at the database level. This protects customer data both at rest and in transit. This added layer of protection raises the work factor for any nefarious entity targeting the organization that has custodial responsibilities over the relevant data sets. This move stands to increase confidence in the organization, amongst other benefits, by reducing the risk of data leakage and/or exposure.

Asset protection

Cybersecurity controls and protective mechanisms can protect an organization’s assets. The definition of asset varies by organization, but generally covers data (customer data, PII, intellectual property, etc), people, technology equipment, and so on. By protecting assets and preventing data breaches, a business can maintain a level of integrity related to its assets. Moreover, the organization can avoid potential financial and reputational damage. Once an organization does not have to worry about that potential damage, it can operate in a safe and focused fashion.


Example: A data discovery exercise reveals that specific columns in a database store personally identifying information, and that this data is stored in the clear. Cybersecurity works with the respective engineering teams to implement native column-level encryption and then appropriately modify all the relevant touch-points (apps, APIs, etc). This strong protective mechanism shields sensitive data from nefarious entities. That protection in turn builds trust with customers and partners, enabling the business to grow.
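The discovery step described above can be sketched with simple regex heuristics. This is a hedged illustration, not a production scanner: the column names, sample rows, and patterns are all hypothetical, and real data discovery tools use far richer detection (checksums, context, ML).

```python
import re

# Illustrative regex heuristics for spotting PII stored in the clear.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii_columns(rows: list[dict]) -> dict[str, set[str]]:
    """Return {column_name: {pii types found}} for a sample of rows."""
    findings: dict[str, set[str]] = {}
    for row in rows:
        for column, value in row.items():
            for pii_type, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings.setdefault(column, set()).add(pii_type)
    return findings

# Hypothetical sample pulled from the database under review.
sample = [
    {"id": "1", "notes": "call back", "contact": "jane@example.com"},
    {"id": "2", "notes": "ssn 123-45-6789 on file", "contact": "n/a"},
]
print(flag_pii_columns(sample))
```

Columns flagged this way become the candidates for the column-level encryption work.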

Adherence to regulations

I will never confuse compliance with security. However, there is business value in being compliant with regulations, when appropriate. A number of industries have regulations regarding data privacy and security; these are sometimes encountered in the form of HIPAA, GDPR, and CCPA. One way cybersecurity can add value to, or enable, a business is by ensuring adherence to the appropriate regulations. A company that complies with regulations stands a good chance of avoiding fines, penalties, and legal issues. This section should be self-explanatory and does not need an example.


Competitive advantage

Many organizations have been through the unfortunate circumstances that come with some sort of negative security event. The fact that they have is an indicator of some gap, or deficiency, in their security posture. Contrarily, some organizations are rarely heard of within the context of negative security events. All organizations have security gaps, but the latter have probably invested more resources in differentiating themselves from the others. It is cybersecurity that can provide this competitive advantage to an organization. By implementing strong protective measures, an entity can demonstrate to customers and partners that it takes security seriously. This in turn makes it a more appealing business partner.


Example: An organization engages in honest, objective, and continuous assessments and penetration tests against its customer-facing environments. Those reports, untouched by the organization, are then published for the world to consume. This type of transparency shows goodwill and confidence on the part of the organization. Moreover, it differentiates them from competitors who haven’t been as forthcoming and transparent.

Customer / partner confidence

Customers, and partners, are becoming more aware of cyber risks. These risks are becoming part of normal life. As such, potential partners and customers now prioritize cybersecurity when considering engaging in business. By implementing effective cybersecurity measures, a company can improve the confidence these potential customers and partners have in it. The goal is to use that as a solid basis for business relationships. Over time this will also lead to increased loyalty and trust. Modern day customers, and partners, will trust a company that takes cybersecurity seriously and is committed to protecting their personal data.


Example: A data discovery exercise exposes many years of technical debt by way of dangling backup files. These files are mostly unencrypted, since file encryption wasn’t a big thing a decade ago, and they contain personal data from databases. There is obvious risk here. Moreover, there is needless spending on the necessary storage. The intelligence gathered from discovery leads to engagements with relevant engineering teams to clean this up. Sharing this discovery, and subsequent action, with relevant parties demonstrates a commitment to data protection. This protects the organization from potential, needless data exposure and builds trust with customers and partners, enabling growth.
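A first pass at this kind of discovery can be as simple as walking a filesystem for old, backup-looking files. A minimal sketch, assuming an illustrative extension list and an arbitrary one-year staleness threshold:

```python
import time
from pathlib import Path

# Extensions that commonly indicate database dumps or ad-hoc backups.
# The list and the age threshold are illustrative assumptions.
BACKUP_SUFFIXES = (".bak", ".dump", ".sql", ".tar.gz")

def find_stale_backups(root: str, max_age_days: int = 365) -> list[Path]:
    """Walk a tree and flag backup-looking files older than the threshold."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        # endswith handles double suffixes like .tar.gz as well.
        if path.name.lower().endswith(BACKUP_SUFFIXES):
            if path.stat().st_mtime < cutoff:
                stale.append(path)
    return stale
```

Each hit is then a candidate for deletion, archival under encryption, or escalation to the owning engineering team.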

Enabling innovation

As companies continue to move forward in competitive fashion, innovation is a differentiator. Safe innovation makes cybersecurity involvement even more critical than under normal circumstances. Sound cybersecurity mechanisms can enable a company to innovate with confidence. This could consist of adopting new technologies, such as cloud computing, Internet of Things (IoT), and generative artificial intelligence. By implementing cybersecurity measures that align with business strategies, a company can improve their agility, level of innovation, and competitiveness.


Example: An IoT manufacturing company is building sensors that will automatically send telemetry data to a cloud-based ecosystem for storage and eventual analytics. The cybersecurity team works with software developers to make sure that data gets transmitted in the safest possible manner: a combination of orthogonal encryption covering both transmission streams and payloads. Given that, self-contained executables can be compiled natively for multiple platforms, and that protected mode of transmission becomes portable. The hardware design team can then shift gears and change embedded platforms as needed without worrying about the data transport mechanism. This improves research & development and production efficiency, increases agility, reduces cost, and enables the business to scale.


In conclusion, cybersecurity is no longer a technology practice, nor is it just a set of defensive measures. It has become a legitimate business enabler that, when done right, can bring significant benefits to a company. From reducing enterprise risk to protecting corporate assets and ensuring adherence to regulations; from creating differentiators to improving customer/partner confidence; from enabling innovation to enhancing competitive advantage, strong cybersecurity measures can help a company thrive in today’s digital age. Hence, cybersecurity empowers businesses to soar, this is how.

Compliance does not equate to security, or protected

Compliance does not equate to security, or protected. They are not the same thing, yet the dividing line has become somewhat blurred. Parts of the cybersecurity industry are in crisis due to an over-reliance on compliance frameworks. The unfortunate reality is that many cybersecurity leaders rely on these frameworks because they just don’t know any better. Maybe they were never practitioners and approach cybersecurity from a purely business perspective. Yes, we cybersecurity leaders need to have a business perspective. But we also need to know our craft and be able to drive true protective initiatives. Over-focusing on compliance efforts may actually hurt security. Even worse, it may give an organization a false sense of being protected.

Compliance has sort of lost its way

My understanding is that compliance initiatives never intended to give a false sense of security. The intentions were to:

  • provide leaders and practitioners guided mechanisms that could promote better security
  • provide a maturity gauge
  • possibly expose areas of deficiency

One of the areas where they intended to help is that of introducing industry standards. The well intended thought process was that adherence to some well defined standards could make organizations “more secure”.

While I commend the intent, the end result, our current state, could not have been foreseen. Earlier generations of cybersecurity leaders, those who came up through the practitioner ranks, treated these as mere guiding tools. Today, there are some with great-sounding titles who rely on these as evidence of their brilliance and fine work. The problem is that the organizations they lead have been led to believe that they are secure and protected.

This creates a misleading culture of compliance=security. And those who know no better push this rhetoric, leave a trail of insecure environments, but move their careers forward. The funniest part here is that often those leaders bring in an external entity to assess if the organization in question is compliant. This is supposed to somehow add credibility to this weak approach. And this box checking exercise is often pursued in lieu of encouraging a real focus on security and protective controls.

Secure companies might be compliant

A secure company may or may not be compliant. I have encountered environments that are well protected and have not been through any compliance exercise. I have also seen compliant organizations go through the unpleasant experience of one or more negatively impacting incidents. Look no further than Uber. According to this, they have received, and maintain, the highly coveted, and difficult to get, ISO-27001 certification. Yet here is an incomplete list of incidents showing that this does not equate to security: 2014, 2016, 2022.

How they differ

Objectives and target audience

The objective of a typical compliance exercise is to measure an organization against a model, or set of recommendations. These might be industry standards. Audiences interested in these types of results can range from internal audit to the C-Suite. Generally speaking, technologists are not interested in these data points. But, some industries place great weight on these types of results and will not even consider business opportunities with organizations that do not have them.

Security, on the other hand, strives either to reach certain levels of Confidentiality, Integrity, and Availability (CIA) and/or to be Distributed, Immutable, and Ephemeral (DIE). The objectives here revolve around threats, risks, and building a resilient organization. The audience is those who see security by way of actual protection and resilience.

Rules of engagement

In security, there are none. Nefarious actors hardly ever play fair.

Compliance, however, has clear rules of engagement. In some cases the compliance exercise is even based on the honor system; think of the “self-attestation” documents we have put together over the years. There is also a cadence where organizations know when relevant cycles begin and end. These are structured practices attempting to measure environments where the real battlefields have very little structure.

External motivations

When it comes to compliance, the external entities (auditors, assessors, etc) involved are motivated to generate a report or certificate. In some cases there are degrees of freedom for cutting corners or adjusting the wording of controls. This opens up many possibilities that aren’t all positive. Things can get manipulated so that a specific outcome (leaning towards the positive) is achieved. These folks are running a business and looking for repeat business.

The external entities on the security side are motivated towards results (destruction, theft of your data, etc).

Focal areas

Security is usually focused on protection against threats. This focus could very well reach very granular levels. Granularity levels aside, protection could be both active and reactive in nature. Prioritization plays a big role here. This is especially so in organizations of larger sizes. A cybersecurity leader ultimately directs protective dollars to focal areas of priority.

Conversely, compliance takes a broad and high level look at things, treating all areas equally and so there is very little by way of focus. These exercises are more about checking boxes against a list of requirements that are all of equal importance.

Rate of change

Compliance is generally static and operates on long cycles. Moreover, the process requirements don’t change often at all. The challenge here is that internal changes are not factored in until subsequent cycles, and that could be in one year’s time. In the world of security that feels like an eternity.

Security needs to keep up with constant changes. Simply factor in automated deployments and/or ephemeral cloud entities and it’s easy to see the impact of rate of change. Externally, nefarious entities are always changing, improving, adapting, learning. This means that an organization’s attack surface, and risk profile, are constantly changing. This alone negates the usefulness of a point in time compliance validation.

Organizational culture and the mindset


Organizational culture drives the mindset people adopt at work. This can be a subtle dynamic, but it is powerful nonetheless. A healthy relationship between compliance and security is the right mindset and approach, as each is important. Compliance is necessary for modern-day business; without it an organization may not even be able to bid for certain business opportunities. Different from this modern-day construct is security, which is necessary to actually protect an organization, its assets, users, and customers. The nebulous tie between the two is that compliance often represents the only set of data points external entities (e.g. potential business partners, consumers, etc) can consume to construct a maturity picture.

A company can have the best product on the planet, but if it cannot prove to a potential business partner that the product is safe to use, the opportunity may not flourish. Here are a few simple examples:

  • SOC-2 reports are commonly requested as proof of security maturity in the United States of America (USA)
  • in the United Kingdom there is generally the requirement of a Cyber Essentials certification
  • to interact with credit card providers, an organization needs to prove (at varying degrees) adherence to the Payment Card Industry Data Security Standard (PCI DSS).


The organizational culture point here is that there are security-first and compliance-first mindsets. Security actually protects the organization, while compliance aims to prove that the protection exists and is effective.

This space has evolved into an erroneous approach where a compliance-first mindset is treated as actually meaning an organization is somehow secure. Sadly, this mindset merely provides protection against auditors, not attackers. There is no framework I am aware of that actually leads to anything protective. To expect security, or protection, from some compliance initiative, or framework, is foolish and amateurish. Worse yet, it can lead to a false sense of security for an organization.

Does a compliance-first mindset hurt security?

Yes, compliance leads to a false sense of security

Security hurts. It adds time to external projects (such as software development), costs money and takes up human resources. Moreover, it is never a static exercise as the entire space is always in flux (new sophisticated attacks, evolving internal changes, etc). Falling back to compliance as a way to alleviate this pain is just a mistake, but it happens. Compliance provides executive leadership a convenient way to create the illusion of security. This attitude can actually hurt a security program and minimize the effectiveness and/or support of certain projects that can add real protective value.

Yes, compliance often wins the prioritization battle

Theoretically, compliance and security should be synchronized in the goal of improving an organization’s ability to conduct business. More often than not, the two become competing entities for priority. At that point the synchronized goal ceases to exist. In so many organizations, when this prioritization conflict arises, compliance will win. Some executive will compare cost, effort, and ultimate benefit, and will fall for the allure of a compliance report or certificate. The perception at that executive level will come down to that person seeing the certificate or report as adding more value to the top/bottom line.

Yes, top security talent sees compliance as soul-draining

Compliance work is not exciting to talented security practitioners, especially younger ones. Let’s face it, that is boring work in comparison to blue/red teaming, for instance. These folks come to the table ready to protect an organization’s environment. They absolutely do not come to an organization to check boxes on a document while adding screenshots for evidential purposes. From this perspective compliance hurts security by becoming an obstacle in the way of real protective work. Worse yet, it creates an environment that is unpleasant for talented security team members.

Is a compliance-first mindset shocking?

Absolutely not. It is fair to say that entities (humans, companies, etc) will generally pick the path of least resistance and pain in order to reach some goal. Contextually, compliance represents that path towards what the security uneducated consider to be something positive. We cannot blame these people for confusing compliance and security. To the security uneducated, the two can actually look identical. After all, they sound alike, talk the same game, and are sometimes spoken of interchangeably by some who should know better. Some executives mistakenly perceive a strong correlation between compliance and security. Because of this, a checkbox exercise can easily seem like the right thing to do. 

Those of us in the security-educated space cannot take a judgmental approach to this unfortunate wave of development. It is entirely on us to educate these people and do our part to clear up the confusion that is permeating this subset of our domains. Educating these people selfishly benefits us: it future-proofs us from executives getting excited after reading the marketing campaign that promises to secure their business with an ISO-27001 certificate.

Security-first mindset and making compliance work for you

Pursuing an organization-wide security-first mindset is a must in today’s world. Nothing is digitally safe and you should trust nothing. In order to foster a pervasive cultural change towards a security-first mindset, you have to be very much in sync with the organization’s culture. More importantly, you need to understand the business itself so that your work enhances it by adding safety. The last thing you want to do is push a security-first mindset incorrectly, where it hinders business operations.

Internally, an understanding of the business factors in an understanding of an organization’s crown jewels and its holistic (inside-out and outside-in) attack surface. The key question is: what do we want to protect in this organization? The mindset discussed here then carries a subliminal bias, with the organization as a whole always leaning in that direction.

Measuring the effectiveness of this mindset and cultural change is very important. This has to be continuous and happen over time. The results may not be linear as they can be impacted by many external factors (M&A, etc). But the key here is you have a deep connection to, and understanding of, the business and its culture.

Compliance frameworks, processes, and standards cannot provide answers to the questions that lead to the necessary, and change-impacting, connections discussed here. So compliance becomes a tool you manipulate and control in order to make it beneficial. Make it a value add to your security program and the organization as a whole. Make those processes become sources of valuable data points that, for example, can point you at deficient areas. There is inherent value if you learn anything from one of these exercises. This is certainly more beneficial than a checkbox exercise that becomes a pass/fail journey.

Final thoughts

It is understandable that many leaders see compliance frameworks, processes, and standards as something useful. They add structure to a space they may not truly understand. Don’t forget that a percentage of cyber and information security leaders did not come up the ranks as practitioners. They, understandably so, perceive compliance exercises as valuable. For those in the know, the value add from the compliance space comes in the form of posture improvements based on provided data points. However, when compliance efforts are prioritized, and treated as a source of security, harmful situations will arise.

I ask my peers to ponder this question: Who exactly is your adversary? You are security-first in nature if your adversary is an actual nefarious entity. Contrarily, you are not if your adversary is some auditor whom you need a passing grade from. We should all be striving to implement a security-first mindset, regardless of the state of compliance within an organization because compliance does not equate to security, or protected.

Attack Surface Management in the Cloud Era

Attack Surface Management in the Cloud Era: The Many Angles to Consider. This was the title of my talk at an invitation-only “Executive Insights” conference (2/21/2023). In this blog I share the material while trying to recount what I actually said during the live session. The session was 15 minutes long with 5 minutes of Q&A; hence, the content is not intended to be granular or exhaustive.

Note: This is an ultra important topic for cybersecurity leaders who are truly focused on understanding their ecosystem and where to focus protective resources (human and dollars).

Slide 1


Attack surface management is a complex endeavor. Unfortunately a number of products in this space have marketed very well. In turn, some people in cybersecurity think that the purchase and deployment of some specific product actually gives them a handle on their attack surface. What I would like to share with you now are some angles to consider so that you may possibly expand your program to cover your organization in a more thorough way.

Slide 2


External perspective

The overt, and best known, space with regard to attack surface management is that of the external perspective. By that I mean what your organization looks like to the outside world, in particular to nefarious actors. This outside-in perspective generally focuses on public internet-facing resources, especially for businesses that actually sell, or host, web-based products. The angles here mostly focus on which hosts are public-facing and, in turn, which ports are actively listening per host. A good solution here actually probes the listening ports to verify the service being hosted. For the sake of accuracy and thoroughness, please don’t assume that everyone adheres to IANA’s well-known port list. It is entirely possible, for instance, to host an SSH server on TCP port 80, which would otherwise inaccurately imply that a web server is at play.
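The probing point can be illustrated with a small banner classifier: identify the service from what it actually says on the wire, not from the port it listens on. The signature list here is an illustrative assumption, far from exhaustive:

```python
# Identify a service from its banner rather than its port number.
# An SSH daemon answering on TCP/80 still announces itself as "SSH-2.0-...".
BANNER_SIGNATURES = [
    ("ssh", lambda b: b.startswith(b"SSH-")),
    ("http", lambda b: b.startswith(b"HTTP/") or b.startswith(b"<")),
    # SMTP and FTP both greet with "220"; SMTP banners usually say so.
    ("smtp", lambda b: b.startswith(b"220 ") and b"SMTP" in b.upper()),
    ("ftp", lambda b: b.startswith(b"220")),
]

def identify_service(banner: bytes) -> str:
    """Classify the first bytes a listening service sends back."""
    for name, matches in BANNER_SIGNATURES:
        if matches(banner):
            return name
    return "unknown"
```

A scanner would feed this the first bytes read from each open port (after a harmless request, for protocols that expect the client to speak first) and compare the result against what the port number implies.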

Shadow IT

A benefit of focusing on this outside-in angle is that it is a great way to expose shadow IT elements, if they exist and are hosting in a public facing capacity. I should also state that for this set of angles to be effective this has to be a continuous process such that new public facing hosts and ports are discovered in a rapid fashion. There are many products that serve this space and provide relevant solutions.

B2C / B2B

From the external perspective the natural focus is on the Business to Consumer (B2C) angle. This is where the space is predominantly based on end-users/customers and web applications; all of what makes up a web app stack comes into play there. But from a Business to Business (B2B) perspective there is the less familiar area of APIs. Whether you run a REST shop or a GraphQL shop, there are unique challenges when protecting APIs. Some of those challenges revolve around authentication, authorization, and possible payload encryption in transit. For instance, is TLS enough by way of protecting data in transit? Or do you consider an orthogonal level of protection via payload encryption, something like JSON Web Encryption (JWE) if you use JSON Web Tokens (JWT)? It’s certainly an angle that needs consideration.
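To make the token-handling part of those challenges concrete, here is a minimal, stdlib-only sketch of HS256 JWT signing and constant-time verification. It is illustrative only; a real service should use a vetted JOSE library, and JWE in particular requires one:

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, key: bytes) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, key: bytes) -> Optional[dict]:
    """Return the payload if the signature checks out, else None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, sig):
        return None
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

The point for the attack surface discussion: signature verification only authenticates the payload; it does not hide it, which is exactly the gap payload encryption (JWE) is meant to close.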

Slide 3


Covertly, there are a number of angles that require attention. Starting with the humans, let’s focus on insiders (meaning those inside of your network).


Insider threats

There are threats, and there are employees. Sometimes an employee becomes a threat, and the angle here is that they are already inside your network. This angle is typically high risk because employees have authenticated access to sensitive systems and data. To complicate this, enter hybrid or work-from-home setups. Now your attack surface has expanded to areas outside of your control. Home networks are clearly high-risk environments, and since our traditional network perimeters no longer exist, those home networks are now part of your attack surface. Imagine a home network with kids downloading all kinds of crazy stuff. Then imagine a user that has your hardened work laptop at home. Then imagine the day that person feels like accessing work content from their personal machine. Or better yet, they figure out how to copy crypto certificates and VPN software to their personal machine. Now the angle is an insecure machine, with direct access to your corporate network, that in turn has direct access to your cloud environments.

Non-generic computing devices

In reference to non-standard computing devices, there are risks based on this equipment being generally unknown to traditional IT teams. Imagine the HVAC controllers, motors, or PLCs required to operate the building you work in. Most of those devices are networked with IP addresses; they typically reside on your network and are part of your attack surface. Now let’s also consider the administrators of said equipment and the fact that they have remote access capabilities. Some VPN paths bring traffic in via your cloud environments with paths back to the building equipment. That is one angle. Then there are direct remote access scenarios, which put people onto your network and, in turn, create the possibility of access to your cloud environment. Misconfigurations like these happen all the time and are angles to be considered when studying your attack surface.


Supply chain

Supply chain angles are now a big thing. Actually, they have been for some time, but recently they are getting a lot of industry attention. Let’s start with SaaS solutions. Are they your responsibility from a security perspective? Maybe, maybe not. But your data goes into these systems. While it is an organizationally subjective decision whether your data in a SaaS solution is part of your attack surface, it is an angle to consider. You should at least scrutinize the security configuration of the SaaS components that get accessed by your employees. The goal is to make sure the tightest security configurations possible are in use. Too often, consultants get brought in to set SaaS environments up and they may take the easiest path to a solution, meaning the least secure. It happens.

SaaS integrations are even riskier. Now your data is being exchanged by N number of SaaS solutions outside of your control and, again, it’s your data being sent around. Were those integrations, typically API-based, configured to protect your data or purely to be functional? After all my years in the industry, what I have seen puts me firmly in the cynical category when it comes to others doing what is most secure on your behalf.


Open source code

Part of the modern-day supply chain is open source code. We all know of the higher profile cases that involved negative events based on abuse of things like open source libraries. The angles here vary, but I assure you most cloud environments are full of open source code. It is an angle that cannot be avoided in the modern world.

Slide 4


Alternate ingress pathways

A typical cloud setup only accepts sensitive network connectivity from specific networks. This is by design, so that sensitive communication paths, like SSH, are not accessible via the public internet. Typically these specific networks are our corporate networks and those, in turn, get remotely accessed via VPNs. Or it could also be the case that your VPN traffic itself flows into, and/or through, your cloud environment. So an angle to scrutinize is exactly where VPN connections get placed on your network. This may be leaving pathways open to your cloud infrastructure.

Another pathway of concern is direct, browser-based access to your cloud infrastructure. A successful authentication could give a user a privileged console to get work done. If this account gets compromised, then there is substantial risk. The real danger with this ability is that it lets users log in and do their work from personal machines that may not have the same protective controls as a work computer.

Privileged Users

SREs and sysadmins typically have elevated privileges and the ability to make impactful changes in cloud environments. The scripts and tools they use need to be considered part of your attack surface, because I am sure your teams didn’t write every piece of software they use. And so these scripts, the SRE’s machine, etc, all become possible alternate access paths to sensitive elements.

Database Administrators (DBAs) are valued team members. They typically have the most dangerous levels of access from a data perspective. This is obvious given their role, but it should also raise your risk alarms. Imagine a DBA working from a home-based machine that, for instance, has a key logger on it. This is a machine that will also have data dumps on it at any given time. And of course we all know that DBAs and software engineers always sanitize data dumps for local working copies *[Sarcasm]*.


Git – there are so many known examples of sloppy engineering practices around hard coded, and in turn leaked, elements of sensitive data. One important angle to study is the use of secrets managers. Analysis must take place to sniff out hard coded credentials, API keys, passwords, encryption keys, etc., and then re-engineering must take place to remove those angles from your attack surface. The removal is, obviously, a migration to a secrets manager as opposed to statically storing sensitive elements where they are easy to access.
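A minimal sketch of that analysis step: a naive pattern scan for hard coded secrets in source text. The patterns here are purely illustrative; real tooling (gitleaks, truffleHog, and the like) uses far richer rule sets and entropy checks:

```python
import re

# Illustrative secret patterns -- a real rule set would be much larger.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"""password\s*=\s*['"][^'"]+['"]""", re.I),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return the names of secret patterns found in a blob of source code."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

snippet = 'db_password = "hunter2"  # TODO move to secrets manager'
print(scan(snippet))  # -> ['password_assignment']
```

Wired into a pre-commit hook or CI job, even a simple scan like this catches the low-hanging fruit before it lands in git history.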


SSH tunnels and back-doors are both interesting and challenging. Unquestionably they represent a set of angles you need to hone in on. Detecting whether some SSH traffic is a reverse tunnel is not trivial, yet few things introduce this much risk from an attack surface perspective. This scenario can expand your attack surface in a dangerous and stealthy way.

Ephemeral port openings

Temporary openings are a real problem. Ephemeral entities within cloud deployments are challenging enough, but the legitimate ones are typically not public facing. So, for example, let’s say you have containerized your web servers, and you are using elastic technology and possibly orchestration to successfully react to traffic spikes. That is an ephemeral use case that is acceptable, and typically behind protective layers (Web App Firewall, etc). But what happens when the human factor creates bypasses to controls in order to facilitate ease of use in a specific scenario?
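One way to hunt for these human-created bypasses is to diff periodic port scan snapshots and surface openings that come and go. A minimal sketch, with illustrative hosts and ports:

```python
from collections import Counter

def intermittent_openings(snapshots):
    """Return (host, port) pairs seen in some scan snapshots but not all.

    Each snapshot is a set of (host, port) pairs from one scheduled scan.
    Pairs present in every snapshot are steady-state; pairs present in
    only some are the intermittent openings worth investigating.
    """
    counts = Counter(pair for snap in snapshots for pair in snap)
    total = len(snapshots)
    return sorted(pair for pair, n in counts.items() if 0 < n < total)

snapshots = [
    {("web-1", 443), ("db-1", 5432)},   # Monday scan
    {("web-1", 443)},                   # Tuesday scan: db-1 closed
    {("web-1", 443), ("db-1", 5432)},   # Wednesday scan: db-1 open again
]
print(intermittent_openings(snapshots))  # -> [('db-1', 5432)]
```

A recurring database port opening like the one flagged here is exactly the kind of pattern described in the story below.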

Story – While talking with some members of a start-up playing in this space, they told me about one of their interesting findings on a project. They discovered a case where a specific host, on a specific port, was open on specific days for a limited amount of time. Their investigation revealed that the SRE and DBA teams had an automated process that allowed a remote maintenance script to hit a database server directly. The SRE/DBA teams felt the exposure was so limited that there was no risk. An interesting angle from an attack surface perspective, and maybe one more people need to look for.

Slide 5


Data. The real target and the heart of our challenges. This is also where multiple angles exist. Let’s start with some simple questions; each one should make you realize the angles at play. In regards to all of the data you are responsible for, do you definitively know where it all is? Do you know all of the databases at play? Do you know every storage location where files are stored? Within those answers, do you truly know where all of your Personally Identifiable Information (PII) and/or Protected Health Information (PHI) data is? If you don’t, then those are angles you need to cover ASAP.

Once you know where the data is … then for each location you need to at least map ingress pathways. What can touch database X? Web apps, admin scripts, APIs, people? And what about egress pathways? Once someone touches a data store how can they exfiltrate? The angles within the ingress / egress challenge can be vast and some skilled analysis needs to take place in order to properly understand this part of your attack surface.
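A minimal sketch of such an ingress/egress map. All of the store, pathway, and classification names are illustrative; the point is making every pathway explicit so it can be analyzed:

```python
# Hypothetical data map: each store lists what can touch it (ingress)
# and where its data can flow out (egress).
data_map = {
    "orders_db": {
        "classification": "PII",
        "ingress": ["web-app", "admin-scripts", "orders-api"],
        "egress": ["analytics-etl", "dba-workstations"],
    },
    "invoice_bucket": {
        "classification": "financial",
        "ingress": ["billing-service"],
        "egress": ["sftp-export"],
    },
}

def pii_stores_with_human_egress(dm, human_paths=frozenset({"dba-workstations"})):
    """Flag PII stores whose egress includes a human-operated pathway."""
    return [name for name, meta in dm.items()
            if meta["classification"] == "PII"
            and human_paths & set(meta["egress"])]

print(pii_stores_with_human_egress(data_map))  # -> ['orders_db']
```

Even a simple structure like this lets you ask pointed questions, such as which PII stores can be reached from a human workstation, and answer them programmatically.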

In Transit

Another data related angle to consider is that of your data in transit. But not to the outside world; on the inside of your cloud environment. More often than not the following scenario is real: you have strongly protected TLS 1.2+ streams from the outside to a tier of load balancers controlled by your cloud provider. The load balancers terminate the relevant sockets and in turn terminate the stream encryption at play. From that point to the back-end, all of the streams are in the clear. A lot of people assume that is a trusted part of the network. I am not in the business of trust, so that angle bothers me, and I push for encrypted streams on the inside of my cloud environments. Otherwise that part of your attack surface is susceptible to prying eyes.
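If your back-end services happen to be Python-based, one small step toward encrypted internal streams is a shared client TLS context that enforces modern protocol versions everywhere, not just at the edge. A sketch; the internal CA bundle is an assumption you would supply:

```python
import ssl

def internal_tls_context(ca_bundle=None):
    """Client-side TLS context for service-to-service calls inside the VPC.

    ca_bundle would point at your internal CA bundle (an assumption for
    this sketch); None falls back to the system trust store.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    # create_default_context already enables hostname checking and
    # certificate verification (CERT_REQUIRED); we keep both on.
    return ctx
```

Shipping a helper like this in an internal library makes the secure option the path of least resistance for every team writing service-to-service calls.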


Finally, I would like to stress the potential value of proper analysis of metadata. Take encrypted channels of communication as an example. If an encrypted egress path is identified, then some skillful reconstruction of metadata can yield valuable intelligence, even though you obviously can’t see the content that moved along the discovered channels.
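As a sketch of that kind of metadata analysis, the example below aggregates outbound byte counts per destination across flow records and flags unusually heavy egress. The record format and threshold are illustrative; real flow data would come from VPC flow logs or similar:

```python
from collections import defaultdict

def heavy_egress(flows, threshold_bytes=100_000_000):
    """Return destinations whose cumulative outbound volume exceeds a threshold.

    Works purely on flow metadata -- no payload visibility required.
    """
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["dst"]] += flow["bytes_out"]
    return sorted(dst for dst, b in totals.items() if b > threshold_bytes)

flows = [
    {"dst": "api.partner.example", "bytes_out": 2_000_000},
    {"dst": "203.0.113.77", "bytes_out": 80_000_000},
    {"dst": "203.0.113.77", "bytes_out": 60_000_000},  # cumulative 140 MB
]
print(heavy_egress(flows))  # -> ['203.0.113.77']
```

Volume is only one dimension; timing, periodicity, and destination reputation are other metadata angles worth reconstructing.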

Thank you for having me, I will take questions now …

Cuban Refugee to Cybersecurity Leader, My unique journey

Cuban Refugee to Cybersecurity Leader. From Cuba to the streets of New York to global Cybersecurity leadership.

Hispanic Heritage is celebrated in the U.S. at this time of the year (September). That makes this article, about my unique journey – Cuban Refugee to Cybersecurity Leader, that much more special. Growing up on the streets of New York as a Cuban immigrant was an instructive experience. Hispanic Executive published this article.

We discuss some of the challenges we (Latino immigrants) encounter coming up in the U.S. professional/corporate ranks. Most immigrants run into similar, sometimes worse, situations; I am cognizant of that. Personally, my many years of training in Judo brought me many relevant benefits, due to the discipline of the training but also the flowing, yielding, and toughness developed along the Judo journey.

Commonly a path to a technology career is neither linear nor straightforward. The article touches upon this and how I flow with my life’s directed momentum. My journey traverses multiple disciplines within technology. The challenges have been abundant. The lessons learned, priceless. The federal government space, the corporate world, the cybersecurity startup / product world, and consulting for some high profile entities, are worlds I have lived in.

Furthermore, I speak about two other areas in the article that are of paramount importance to me. Those areas are translation and balance.

Translation is of great importance and is relevant in multiple directions. Undoubtedly a successful leader is constantly translating information because part of achieving success is building bridges between entities that don’t speak the same language (i.e. business and tech). I blogged about some of this earlier in the year.

Unquestionably, balance can be elusive, especially within the context of work / life balance. Judo training forces one to understand their own balance on multiple fronts, physically, spiritually and emotionally. Within the context of cybersecurity leadership I speak of the need for balance between technical capacity and business acumen.

Humble application security advice from an old school practitioner

Humble appsec perspective

Here I share some humble application security advice from an old school practitioner. This advice is for practitioners and cybersecurity leaders (CISO, CSO, etc) alike. I have been a player in the application security (appsec) space for many years, and I see the appsec space through a fairly wide lens covering both the offensive and defensive arenas. My lens factors in secure coding, layer 7 protective mechanisms, processes, and things like pen testing. My background in these areas spans back to before the days of automated pen testing tools; we did things manually and actually had to deeply understand what was under the hood. Back then it was both an art and a science; I am not so sure these days. By 2005 I had professionally performed enough pen tests that I confidently wrote a book on the subject.

I can’t think of any modern day organization that does not have a business-centric Internet presence. This of course implies some web app, or web site, and these need protection. This protective journey starts way before the first request is responded to via some port open to the Internet (whether directly or via proxy). From my perspective, there are a few key areas critical to securing software.

Before this discussion goes any further let me clearly state something. Appsec is a journey. One that you either wholeheartedly embrace or don’t bother at all. Too many appsec initiatives are driven by some externally mandated compliance, or it’s the scenario where some software engineering team is forced to do this, and frankly just doesn’t want to. Straight talk – out of all the software engineers you have met how many actually give a crap about security? 28 years in for me and that number is minuscule.

This means it is on us to be advisors and aim to positively influence. It’s on us to weave this into the day to day reality of other teams, but we have to be all in. You have to be deep about appsec and be committed. Otherwise feel free to just stop reading here.

Another point I will make here may upset some, and that is ok. More straight talk for my security peers: you do no one any justice when you, or any “security” expert, come to the table with software engineers to discuss their “insecure” coding, yet it is obvious that you have never actually coded anything yourself. A software engineer will see right through you and will silently (hopefully at least) be thinking, “what the %$#& do you know about what you are actually saying?”

Maturity is a big factor when pursuing the build out of an appsec program. One could argue that a certain level of maturity is inherent when an organization is even thinking about appsec formally. One major challenge will be your ability to positively influence the engineering culture of your organization. Proper influence is key here because a force fed program will get you limited results and I assure you there is not enough time to review every line of code written for some given cycle. Hence, your goal is to positively influence the relevant people and the process to want to be a part of this. A “shift left” process, for instance, should become a mutually desired business enabler.

Both sides, software engineering and cybersecurity, should be after the same goal of a secure, resilient, functional piece of customer facing software. So building a successful appsec program requires commitment from each side. This will require some tactful education on the side of cybersecurity leadership. And let’s face it, some organizations are just not wired, culturally, to actually have a good appsec program, if any at all.

Organizational Culture

Let’s start with the organizational culture. To keep this relevant, let me be clear that not all organizations need a dedicated appsec team. The ones that do are generally building something and pushing that something out for customers to utilize. But if, for instance, your organization’s entire tech stack consists of a bunch of integrated SaaS solutions, there may not be much for an appsec team to do. Those orgs can probably get away with periodic consultants reviewing security configurations from the SaaS vendors and perusing the integrations for weak controls. But honestly, those orgs are at the security mercy of the SaaS vendor.

Mature organizations want application security, and security in general, to silently just be there. This is ultimately the goal of security as a business enabler; the more silent it is, the better. The challenge, however, is that security hurts and it costs (time, money, effort and resources). As much as people love to push the enablement issue, security simply doesn’t actually enable anything in most typical businesses. Hence they see security as a necessary evil. Changing organizational culture is critical, but you have to factor in the reality of what I just mentioned. Weaving security thinking into an organization’s culture, especially in the engineering space, is foundational. Without this an appsec program will fail.

In the spirit of not being that “department of NO”, or not being a blocking entity, it’s up to us to figure out how to be “enablers”. This “how” is not trivial and organizationally subjective. Shifting left is a very common way to ease into enablement. To me, this requires moving security elements (code scanning, vulnerability scans, DAST/SAST, pen tests, etc) to be impactful earlier in an engineering and/or automated build cycle. Accomplish that and you may just have enabled a better solution to be built and deployed.

Focal Areas

In order to build this appsec program you now realize you have to positively change the culture of an organization. This will take time and perseverance. It will also take focus and a sound strategy. Here are a few areas that should be front and center in your appsec journey:

SSDLC – The key focal area for positive impact will be the Software Development Life Cycle (SDLC). You need to help your organization’s engineering entities transform this into a Secure SDLC (SSDLC). I suggest you take slow and small steps here, as change is difficult to accept. This is especially so if you are an outsider (and you most likely are from their perspective) asking them to change.

Focusing on adding security value with changes/additions to an SDLC, the typical areas you will hear experts speak about are:

  • secure code training (for software engineers)
  • secure design reviews (typically at an architectural level)
  • pen tests (internal and/or external)
  • risk assessments
  • regular advisories
  • secure code review (when/where possible)

For those of us who have really done this, we know it is never that cut and dried. Moreover, to accomplish all of that, your appsec team and budget had better be pretty hefty. Software engineers are not going to welcome what you are asking them to do with open arms (more work, longer deadlines, harder testing, etc). You must choose wisely which of those areas you will initially push for, and look to make allies who willingly engage. The iron fist approach hardly ever works.

Depending on the size of your team, budget, the size and numbers of the target engineering teams, and company support (organizational culture) you may find a great approach is to embed appsec folks into the engineering teams/squads. This will create institutional knowledge and tailor the appsec program based on that domain expertise they will gain. The downside is that it takes resources away from your normal operations, but my experience is that this is an acceptable cost.

Measure Maturity – Maturity matters and you should set a goal of formally tracking progress with respect to your appsec program. Two popular frameworks for establishing a baseline and measuring this over time are:

  • OWASP SAMM (Software Assurance Maturity Model)
  • BSIMM (Building Security In Maturity Model)

There is a nice side by side comparison of the two here. While both of these frameworks seem straightforward, give them some thought. See which one fits best based on your intimacy with the culture of your organization. One note: it’s ok for your scores to drop every now and then. This is a space that is highly impacted by certain events. Take a Merger & Acquisition (M&A) event for example; you have little control over what you inherit. This could instantly drop your scores through no fault of yours. So the scores are a great metric, but one that requires you to go with the flow a bit.

Build relationships – This area really encompasses two distinct areas, testing and operational work. Building a relationship with Quality Assurance (QA) testing teams may prove very beneficial. After all, functional testing can very well go hand in hand with some security testing. Having security functions injected into other areas, such as regression testing, may prove valuable as well.

While automation plays a big role here, you may be disappointed to find that in 2022 there is still a lot of manual QA testing being performed. As humbling as that may be, do you actually think your automated appsec scanners can catch everything? Irrespective, building relationships such that discoveries are made prior to production releases will prove invaluable over time. So work with QA to, for instance, build pen test elements into some of their processes. They can become total allies over time.

Part of the relationship ecosystem is all about ownership. We are in no way trying to “pass the buck”, but other teams, such as software engineering, need to own part of the security responsibility. They will ultimately have more of an impact on day to day security decision making than anyone outside of their team(s). RACI charts can be effective in clearly identifying ownership borders, so consider them part of your arsenal. The appsec team is there to advise and try to influence best decisions but generally won’t have the authority to force much of anything.

Invest in the right tooling – Focus on solving problems and building solutions, not implementing products. This area is broad and really spans numerous key areas of an appsec program. The goal is to positively impact an entire ecosystem from the left and right. On the left there is the SSDLC and all the goals already mentioned, you will need to invest in some tooling. On the right there are architectural components, pen testing, and a host of other initiatives that range from the systems thinking to the active protective. Active protection, for instance, will require tooling as well. But remember, build solutions.

Automation is going to play a key role in the overall impact of your appsec program. Building security components into X-as-code initiatives (where X can be build, infrastructure, etc) can add a lot of value. Injecting blocking security mechanisms into your organization’s CI/CD pipelines is a great way of designing guardrails directly into processes.
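A minimal sketch of such a blocking mechanism: a gate that fails the pipeline when scanner findings at or above a severity threshold appear. The findings format is an assumption; in practice you would parse your scanner's actual report output:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return the findings that should block the build."""
    floor = SEVERITY_RANK[fail_at]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]

findings = [
    {"id": "CVE-2021-44228", "severity": "critical"},
    {"id": "weak-cipher", "severity": "medium"},
]
blockers = gate(findings)
if blockers:
    print(f"Build blocked by {len(blockers)} finding(s)")
    # in a real pipeline step this would be: sys.exit(1)
```

The `fail_at` threshold is the tuning knob: start lenient to build goodwill, then tighten it as the engineering culture adapts.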

Other Areas

Set boundaries – The scope of your appsec program will be critical. Scope is very subjective per organization. In order to set your team up for success set the boundaries early. For example, is your program going to cover database security? What about protecting the connections from apps, APIs, etc to databases? What about file security and protection? In some organizations those elements belong to an appsec team and this needs to be defined clearly.

Gain intimacy – Make sure your appsec team gains some intimacy with all relevant software engineering processes (to include tech stacks, 3rd party libs, hosting environments, file transfer mechanism, build processes, etc). This intimacy will allow your program to be effective with some of its goals, especially the one related to building and implementing a SSDLC.

Intimacy also has a direct impact and/or outcome related to the relationships you want to build. Bi-directional communication will prove invaluable. Get into the weeds with software engineering teams and, assuming you earn their respect, you will quickly identify who will be an ally and a voice for your appsec program(s).

Build inventories – You can’t effectively protect what you are not aware of. In my experience software engineers are generally bad at documentation. Part of your appsec program should aim at inventorying the application landscape and what it is made of. Pay close attention to the hidden “gotchas”, such as the APIs that are built into an app’s infrastructure, and hosted on the same server and transport (i.e. HTTPS) mechanism. Don’t limit your viewpoint here and consider infrastructure components along with supply chain elements as a part of your inventory. A lot of interesting security problems are most likely lurking there. A good inventory should at least expose some of these areas of interest.

As part of the inventory, consider also building a control inventory. Frameworks (e.g. MITRE ATT&CK) can help keep this organized as well as help make sense of adversarial tactics, techniques, and procedures (TTP). Creating an inventory of this sort can expose areas in your attack surface that may need attention.

Build a risk register – This risk register will be focused on your apps and solutions. This should provide clarity in terms of risk the organization faces on a regular basis.

Create opportunities – Take every opportunity to get your message across and show how some security initiatives can be seamless and painless. For example, if you have an engineering team that creates compiled code, why not have a library built (i.e. shared or static lib) that performs numerous security related functions (e.g. input validation, header setting, encoding/decoding, etc)? Then have the engineers take a look at how easy certain things become when they call exposed functions in that library. That is a seamless way of getting your objective some traction.
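A sketch of what such a shared security library might expose, in Python for illustration. The function names, validation rules, and header values are assumptions; a real library would be tailored to the organization's stack and languages:

```python
import html
import re

# Baseline response headers a security team might standardize on.
SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

_USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_username(value):
    """Allow-list validation: reject anything outside the expected charset."""
    return bool(_USERNAME_RE.match(value))

def encode_for_html(value):
    """Output encoding before rendering untrusted input in HTML."""
    return html.escape(value, quote=True)

def apply_security_headers(headers):
    """Merge the baseline security headers into a response header dict."""
    return {**headers, **SECURITY_HEADERS}

print(validate_username("alice_01"))   # -> True
print(encode_for_html("<script>"))     # -> &lt;script&gt;
```

When input validation or header setting becomes a one-line call, engineers adopt it because it is easier than writing their own, which is exactly the traction described above.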

Gather metrics that matter – You will need to relay the value add, and effectiveness, of your appsec program to the corporate executives and/or the C-suite. Focus on metrics that matter. Not all metrics will be relevant to an appsec program. Any type of cost savings is always welcome, while you can also measure and track program adoption. Strategically, use one of the frameworks (SAMM or BSIMM) discussed earlier to show progress of maturity.

Final Thoughts

Security leaders set strategy and create programs, but more importantly we advise. A solid appsec program is one of those areas where we advise an organization. It is on us to mold an appsec program and focus it to map to, and benefit, the business. The points I have made here should give you a good sense of how you can enable a business via an appsec program.

Designing a program takes effort and thought. Factor in the people, processes, technology, and culture of the organization. Factoring these elements in is a continuous process as things, and organizations, change. They have to adapt and overcome constantly. So does your program. As long as you are continuously improving, and keeping step with the business, you will get positive results.

Understand that the biggest impact will come from the partnerships you pursue. This will be crucial in terms of having positive influence on the culture of your organization. Support this with some, hopefully many, of the technical components discussed in this writing and you will not be disappointed with the results. Stay focused on the fact that your appsec team should exist to be advisory, as subject matter experts.

Cybersecurity, and appsec specifically, subjectively mean different things to organizations. For some organizations these elements are simply part of modern day business success. My humble advice can get wrapped up as this: position yourself as a facilitator and an advisor, one that can transparently (as much as is possible) leverage the advice provided here to actually enable safe application based business. I wish you the best with your appsec initiatives.

Cybersecurity metrics, the challenge, measure what matters

Cybersecurity metrics

Cybersecurity metrics, the challenge, measure what matters.

Warning: there are a number of somewhat abstract concepts presented here. I know some people have a hard time with abstraction so please read with an open mind. There are a lot of open ended questions as well. This article is intended to spark thought outside of the norm as it relates to cybersecurity metrics.

As an industry we ([cyber | information] security) have struggled to pin down the art, and science, of cybersecurity metrics. It is a struggle that some feel they have mastered. I am not entirely convinced. Moreover, I see the general consensus sadly playing in the safe zone when it comes to this subject. It takes courage to measure what matters as opposed to measuring what is possible, or easy. It also takes courage to measure elements that are somewhat in the abstract because of the difficulty at hand.

I acknowledge that “what matters” is subjective to four entities: the organization, its C-Suite, its varying board members, and us (cybersecurity leadership). We can steer the conversation once there is internal clarity in reference to the items that really matter.

One of the enemies we have to contend with is our indoctrination to always strive for 100%. This score, level, grade, is simply unachievable in most environments. And what really constitutes 100%? Is it that our organization has been event-less by way of incidents, breaches and/or data exfiltration? What constitutes the opposite, a score of 0 (zero)? We have to stop thinking like this in order to get a realistic sense of metrics that matter.

My contention is that we need a small, tight, set of metrics that are representative of real world elements of importance. This comes with a fear alert, because in some cases measuring these areas will show results that come off as some type of failure. We need not feel like this is reflective of our work, we are merely reporting the facts to those who need them. “Those” would generally be the board and the C-Suite. They will probably have a hard time initially understanding some of these areas and admittedly they are very difficult to measure/quantify.

It is the job of an effective CISO to make sense of these difficult to understand areas and educate those folks. But the education aspect is not just about understanding these areas; it is about how to extract value from them. This is where the courage comes in, because a lot of people have a hard time accepting that which is different from what they are accustomed to.

Subjectivity is important here. There are few formulas in the world of cybersecurity and what matters to one organization may have little relevance elsewhere. Organizations need to tailor their goals, and in turn the measuring mechanisms, based on what matters to them. This of course has a direct impact on what risk areas come to light, which ones need to be addressed with urgency and those that can wait. Hitting these subjective goals (that should be defined by the metrics) could also bring about ancillary benefits. For instance this could force the issue of addressing technical debt or force a technology refresh.

Here are some suggestions (nowhere near exhaustive) that are top of mind in respect to metrics we tend not to pursue (mainly due to the difficulty of measuring them):

Effectiveness of active protection mechanisms – This one seems obvious at face value. Grab some statistics after the implementation of some solution, for instance a Web Application Firewall (WAF) and show how many awful things it has prevented. But this is such a fragmented perspective that it may provide a false sense of security. What about your machine to machine communications deeper in your network (VPC or otherwise)? How are you actively protecting those (API requests/responses, etc) entities?

I find the bigger challenge here is ecosystem wide coverage and how you show the relevant effectiveness. There are other difficult to measure areas that directly impact this one, such as attack surface management. But if we, as an industry, are ever going to get ahead of attackers, even in the slightest way, this is an area we all need to consider.

Reproducibility – The “X as a Service” reality is here and quite useful. “X” can be infrastructure, it can be software, it can be many things depending on the maturity and creativity of an organization.

From the software perspective, what percentage of your build process exists within a CI/CD pipeline, or process? This strongly sets a reproducibility perspective. Within a CI/CD process many areas, such as security, resilience and DR, can be covered in automated fashion. Vulnerability management, and patching, can be included here as well. It’s 2022 and if your organization hasn’t invested in this area you need to put some metrics together to make a case for this.

Attack Surface Management – What does your organization look like to an outsider? What does it look like to an insider? What does it look like when you factor in ephemeral entities (such as elastic cloud resources)? Does your attack surface data factor in all assets in your ecosystem? What about interdependencies? Asset inventories are seldom accurate and so possibly your attack surface is a snapshot in time as opposed to something holistic.

There is a lot to consider in terms of attack surface metrics yet it is such a key component to a healthy cybersecurity program. Please don’t think that any one specific product will cover you in this area, most are focused on external perspectives and miss the insider threat vector entirely.

Software Security – This is an enormous subject and one that deserves an entire write-up itself. The maturity of software security can certainly be measured with techniques like SAMM (one such model is OWASP SAMM). Creating, and implementing, a SSDLC goes a long way in integrating security into the core software development process. Underlying any of these techniques is the need to map software to business processes. Otherwise you have a purely technical set of metrics that no one outside of tech will be able to digest.

Technical Debt – This area is complex as it can contextually refer to software that needs to be refactored or it can refer to legacy systems (stagnant or otherwise). Regardless of the context how does one measure the level, or severity, of technical debt within an organization? If a successful relevant model is created it will probably create a strong argument for some budget 🙂

Distance Vector – How far into your ecosystem can an attack get before it is detected and handled (whatever handling means to your organization)? The logic here is simple, the longer it takes to detect something the more you need to pay attention to that area. Think of APTs and how long some of them exist inside of your network before there is detection and response.

Time vector – Who is faster: you, the defender, or the attackers? There is a reality to the time factor, and every time your organization is targeted a bit of a race takes place. Where you end up in the final results of that race dictates, to an extent, the success factor of an attack. This is very hard to measure. But spending time figuring out ways to measure this will yield an understanding of the threats you face and how well you will fare against them.
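One concrete, if simple, way to start measuring the distance and time vectors is mean time to detect (MTTD) across incident records. A sketch, with illustrative timestamps; in practice the data would come from your incident tracking system:

```python
from datetime import datetime, timedelta

def mean_time_to_detect(incidents):
    """Average gap between when an incident started and when it was detected."""
    deltas = [i["detected"] - i["started"] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    {"started": datetime(2022, 3, 1, 9, 0), "detected": datetime(2022, 3, 1, 13, 0)},
    {"started": datetime(2022, 3, 8, 2, 0), "detected": datetime(2022, 3, 8, 4, 0)},
]
print(mean_time_to_detect(incidents))  # -> 3:00:00
```

Tracked consistently over quarters, a trend line on a metric like this says far more to a board than any single snapshot.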

One great benefit of spending time assessing your time vector is that it will force you to measure your ability to successfully address entire families, or classes, of attacks. Having the macro focus, as opposed to the typical micro focus may bring about an interesting level of discipline with your technical teams. Basically, they will be forced to think big and not exclusively on edge, or corner, cases.

Repeatability – How repeatable are key functions within your organization? Measuring repeatability is super difficult, and yet this is a foundational aspect of mature cybersecurity programs. Playbooks exist for this exact reason, and we invest resources into creating, and maintaining, them. This means repeatability is undeniably important, and yet how do we quantify it?

Budgeting – How do we know if enough is being funneled into a security program? At the end of the day we can’t plug every hole. One strategy is to perform crown jewel assessments and focus on those resources. Another one is to analyze attack surface data and cover areas of importance. But how do we measure the effectiveness of these, and any other related, strategies?

Insufficient budget obviously reduces the ability of a security team to implement protective mechanisms. The metrics we focus on need to push for clarity in terms of what becomes possible. There’s most likely no “correct” amount of budget; we get a slice of some larger budget, and what we get becomes a variable amount over some period of time. But the budget itself needs to be treated as a metric. Think of it this way: if you don’t get enough budget to cover the areas you know need attention, then there will be gaps that are directly attributable.

Sadly a lot of budget increases come about because something bad has happened. But this (the fact that something bad happened) means more work needs to be done. And yet we struggle with the necessary quantification. Ultimately we should pursue business aligned initiatives irrespective of the difficulty of trying to pin down an accurate budget.

All-Out Mean Time to Recovery (MTTR) – Imagine the absolute nightmare scenario in which your entire organization is somehow decimated. Imagine you are brought back to the stone age of bare metal and have nothing but a few backups to recover from. How long will it take you to get your organization back to being an operating business? Some organizations are well positioned to recover from isolated incidents, like a ransomware event. My thought process is around something far more catastrophic.

I am not sure there is an organization on the planet that can answer this question with both breadth and depth. I fear there is also a lot of hubris around this subject, and some may feel this is not a situation they need to account for. The more typical all-out scenarios you may encounter focus on operational areas. For instance, if all servers become unusable, there is a disaster recovery (DR) plan that has been designed, tested, and tweaked to reach an acceptable MTTR.
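For reference, the underlying MTTR metric is simple arithmetic: total recovery time divided by the number of incidents. A minimal sketch, using invented incident timestamps (the hard part in the all-out scenario is not this math, it is knowing your real recovery times):

```python
# A minimal MTTR sketch. Incident timestamps are hypothetical.
from datetime import datetime

incidents = [
    # (detected, fully recovered)
    (datetime(2024, 1, 5, 9, 0), datetime(2024, 1, 5, 17, 30)),
    (datetime(2024, 2, 12, 2, 15), datetime(2024, 2, 13, 6, 45)),
    (datetime(2024, 3, 1, 11, 0), datetime(2024, 3, 1, 14, 0)),
]

def mttr_hours(incidents):
    """Mean time to recovery: total downtime divided by incident count."""
    total_seconds = sum((end - start).total_seconds() for start, end in incidents)
    return total_seconds / len(incidents) / 3600

print(f"MTTR: {mttr_hours(incidents):.1f} hours")
```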

From a positive vantage point, the very act of trying to measure some of these admittedly challenging areas of operation will likely reveal many areas of improvement. That in and of itself may prove valuable to your organization in the long run. There are so many more we could come up with but the areas presented here are a decent starting point.

On the negative end, there is an enormous challenge in that the board and the C-Suite might not understand these metrics. Hell, I can think of many IT leaders who won’t understand some of them. But these are not reasons to shy away from the challenge of educating folks. I understand the notion, and am a practitioner, of talking to the board on their terms, in their language. But is it truly beyond their capabilities to understand some of these points? Is it unreasonable for us to push for a deeper level of understanding and interaction from the board and the C-Suite on these metrics?

One suggestion is to be super consistent in the metrics you choose. By consistent I mean stick with them and show changes over time. The changes can be negative, and that’s OK. No one is delusional enough to expect total positive momentum all the time. Your presentation of the metrics you choose will be an investment, and in time the board and the C-Suite will start to see the value, and that you are one persistent individual.
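The “show changes over time” idea can be as simple as reporting period-over-period deltas for the same metric, quarter after quarter. A minimal sketch with hypothetical quarterly MTTR values (for MTTR, lower is better, so a negative delta is an improvement):

```python
# Hypothetical sketch: the same metric presented consistently over time,
# with the change each quarter called out (negative changes included).
quarterly_mttr_hours = {"Q1": 30.0, "Q2": 24.5, "Q3": 27.0, "Q4": 21.0}

periods = list(quarterly_mttr_hours)
for prev, cur in zip(periods, periods[1:]):
    delta = quarterly_mttr_hours[cur] - quarterly_mttr_hours[prev]
    trend = "improved" if delta < 0 else "regressed"  # lower MTTR is better
    print(f"{cur}: {quarterly_mttr_hours[cur]:.1f}h ({delta:+.1f}h, {trend})")
```

Notice that Q3 regresses; showing it anyway is exactly the consistency being argued for here.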

Ultimately, there are many superficial security metrics that keep you in a safe zone. I challenge all of us, myself included, to do better and be more creative. This will be difficult, but I find it is well worth it. The outcomes may surprise you, and the ancillary benefits (areas you will be driven to address, etc.) may as well. There will of course be pushback that these metrics are difficult to understand. Or maybe the arguments will revolve around the effectiveness of the message to the board and the C-Suite. But the fact that something is difficult is no reason not to tackle it.

Some of the silliest cybersecurity strategies

We all make mistakes, but some cybersecurity decisions and/or strategies are just downright silly. They are, of course, dangerous as well. Here are some of the silliest ones I have encountered.

Cloud security – relying on some automagical security posture simply because you have digitally transformed to a specific cloud provider is just ___ (you fill in the blank). I have actually been told, with a straight face, “we are protected because we are hosting on cloud provider X”. Silly strategy.

Lack of incidents means we are safe – some executives take the absence of incidents as a reason not to invest in cybersecurity. The strategy is purely reactive; the goal is to save money until there is an incident (material or otherwise). Then, of course, they are mortified when something bad happens. Silliness.

Cyber-Insurance – having insurance in no way makes an organization secure, not even close. Not worrying about some negatively impacting event because one has insurance coverage is just ___ (again, you fill in the blank). So the notion of transferring risk (as if that is even a real possibility) rather than addressing it is … silly.

Reliance on tools – deploying even the best tools does not mean a “set it and forget it” approach will make them successful. Assuming security is easy because products and/or tools will handle everything is a silly strategy.

Reliance on THE tool – there is of course security by obscurity, but this is security by marketing. Some salespeople are very good, and of course their one product can solve all of your security issues. Actually believing that one specific tool can solve even a large portion of the security issues within a mature/developed ecosystem is silly.

Obscurity – security by obscurity has been a “thing” for a long time. History has proven that this approach hardly ever works, but it is inexpensive and easier to pursue than building proper security controls. Some people out there underestimate the intelligence of the attackers we face on a regular basis. And no, the fact that you may be a small company means nothing to a cyber criminal. Trying to fly under the attackers’ radar, or assuming your obscure methods will outsmart them, are both silly strategies.

Ignoring it – one just can’t ignore security problems and hope they don’t become real. Hope is not a strategy, or at least it’s a silly one.

We will come back to that – this just never seems to happen. The notion of coming back to something problematic, at some future time, and “tightening things up” is just ___ (once again, you fill in the blank). Ignoring something you are aware of is downright irresponsible and of course, silly.

Assuming the vendor has you covered – assuming that a vendor does security right is off the mark on many levels. Products regularly get delivered and/or deployed with horrible security configurations (and plenty of easily guessed default credentials). This is yet another silly strategy.

Compliance equals secure – regulatory compliance does not equate to being secure or protected. Having an ISO-27001 certificate, a SOC 2 Type 2 report, and a host of other related compliance credentials is not going to stop an attacker from being successful. Relying on looking good on some piece of paper is … silly.

As a cybersecurity leader, you must steer things towards a long-term, sensible strategy. The foundation of that strategy should take into account all of the silliness I just wrote about. Otherwise, failure is inevitable.