Cybersecurity is often associated with firewalls, strong passwords, and complex systems. But in practice, protecting an organization goes far beyond technology. It also means understanding how attackers think, where processes fail, and how the human factor can open doors that once seemed securely closed.
We sat down with our consultant Diogo Sousa, an Offensive Cybersecurity Specialist working on a project in the financial sector, to talk about offensive security, red teaming, and the work of professionals who test not only systems but also people and processes, with the goal of reducing the impact and likelihood of cyberattacks. From his curiosity sparked by early hacking challenges to his work with Active Directory, cloud environments, access control, and stealthier simulations, this conversation offers a glimpse into a demanding, highly technical, and increasingly essential field for companies today.
When someone asks what you do, what’s the “simplified” version of the answer?
[What I do is] reduce the impact and likelihood of cyberattacks.
If I had to explain it to my grandmother, I’d say there are people who make money by breaking into systems and often selling that access. My job is to do everything we can to reduce the chances of that happening. Both public and private systems have vulnerabilities, and a large part of our day-to-day work focuses on identifying and mitigating them.
I work in cybersecurity, which is a very broad field with several specializations. Within that field, I work in offensive security and, more specifically, in red teaming. In offensive security, the most well-known roles are penetration testers and ethical hackers who analyze systems to identify vulnerabilities, report those flaws to the teams responsible for the tested assets, and support risk mitigation efforts.
In my case, the work is slightly different. Red teaming is also offensive in nature, but the approach tends to be more discreet, because the goal is not only to test systems or identify vulnerabilities. It’s also about testing processes and people.
For example, I could run a red team operation where the infrastructure is well protected and there are no known technical vulnerabilities. Even so, the campaign could still succeed because I managed to exploit another weakness, not technical, but social.
A simple example would be calling the helpdesk asking for a password reset while pretending to be user X, when in reality I’m not. If I manage to convince them to make that change, I now have access to the account. That’s not a technical vulnerability, but rather a human or social weakness that allowed me into the system.
Did you always want to work in cybersecurity, or was it an unexpected path?
It goes back a long time. I think it was around 9th grade that I first came into contact with this field through my cousin’s husband. He had made a career change: he used to be a Math teacher and eventually moved into Computer Engineering. At the time, he took a hacking class and became really interested in it.
Later, someone introduced him to a website called hackthissite.org, I think. It was one of the first websites designed to be “hacked,” allowing people to test their skills, especially in web hacking. Since he used to tutor me in Math and was also going through that transition phase himself, he ended up learning while teaching. That was when he showed me the website, and I became curious.
I liked the field, and over time other platforms appeared, like TryHackMe and Hack The Box, which are also CTF platforms. For a while, I bounced between programming and cybersecurity, but I was always more drawn to security.
By the 11th grade, I started focusing more seriously on it. Before that, it was more of a hobby.
After university, I joined a software development company as part of the cybersecurity team, mostly working on web pentesting. The company provided pentesting services to external clients. The work was primarily focused on pentests, but I also had the opportunity to take part in a red team campaign.
What’s the difference between pentesting and red teaming?
Red teaming is not just about finding vulnerabilities. The goal is to identify weaknesses in the broadest sense and achieve a specific objective. That means I might reach that goal without exploiting any technical vulnerability, simply by taking advantage of process failures.
In pentesting, on the other hand, the goal is to identify vulnerabilities so they can later be fixed. It’s a more focused engagement with a specific scope, an application, a system, or a network, and the testing stays within that perimeter.
In red teaming, there’s usually a concrete objective rather than a single technical target. That objective might involve compromising a database or encrypting systems, for example. How you get there is less important, as long as the objective is achieved.
What was your first contact with offensive security?
Legally? [laughs] I don’t remember.
It’s also important to understand that legislation in this area still has a long way to go, especially in Portugal. Technically, if I search something on Google and a website appears, all I did was search for something and click a link.
If that website has weak defenses, I might end up accessing a database directly just because it showed up in the search results, without any malicious intent. Even so, technically, that could still be considered a crime because I accessed something without authorization.
Within cybersecurity, there’s offensive security, but there are other areas too, right?
We usually simplify cybersecurity into three colors: red, blue, and purple. Red represents the offensive side, blue the defensive side, and purple the collaboration between both.
Typically, offensive operations fall under the red team. On the defensive side, you have everything related to protecting and monitoring the environment, especially SOC teams.
On a daily basis, SOC teams focus on understanding what’s happening in the environment, analyzing alerts, and determining whether they’re false positives or actual attacks.
Then there’s the purple team, which focuses specifically on collaboration between both sides. We identify problems in the organization’s security posture, but that work only has real value if concrete lessons can be extracted from it. And in red teaming, that generally only happens through close collaboration with the blue team. That’s why it’s called purple, a mix of both colors.
We present our findings, explain what we did, why we did it that way, and how similar activity could be detected in the future.
Was there a decisive moment when you realized you wanted to specialize in offensive security?
It depends a lot on the stage at which we’re introduced to the field and on personality as well.
In my case, because I was introduced to it very early on, there wasn’t really anything for me to defend, so the offensive side naturally became the only one I could focus on at the time.
I’m also a fairly dynamic person. I don’t like standing still, and I think my personality naturally pulls me more toward the offensive side. That said, the defensive side is also extremely interesting.
Was there any mistake or failure that stood out during your journey?
I think I spent too long avoiding a specific area of study. Most companies, especially large ones, rely heavily on Microsoft technologies. One of the most common is Active Directory. In my case, because I was introduced to the field through the websites I mentioned earlier, most of the systems I interacted with were Linux-based. That was the environment I felt most comfortable in.
Whenever a Windows machine appeared, I’d usually avoid it because I didn’t feel like researching and learning about it.
Eventually, I realized I couldn’t keep avoiding it forever. At one point I thought, “Alright, I really need to study this, even if it’s just to understand it.” The initial idea was to learn the basics and then go back to Linux. But once I started studying it, I actually enjoyed it a lot. Nowadays, it’s basically what I work with. I barely touch Linux anymore; I work mostly with Active Directory.
What types of vulnerabilities appear most often?
Access control issues are among the most common. In other words, people or systems having access to things they shouldn’t.
And often, we’re not even talking about technical vulnerabilities in the traditional sense. It’s not necessarily outdated software, missing patches, or old components. Everything can be fully updated and apparently secure.
The issue lies in how the solution was implemented. There’s a logical flaw that allows certain people to access things they shouldn’t.
This happens a lot in organizations. There might be a shared drive where people store files and, whenever someone new joins, the assumption is: “This person will probably need access to this drive.” So they’re given permissions to read, write, share, remove, or edit content.
The problem is that those permissions are often granted and then never reviewed or removed.
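The stale-permissions problem described above is easy to illustrate. Here is a minimal, hypothetical sketch in Python: the ACL entries and the one-year review policy are invented for the example (in a real audit they would come from the directory service or file server), but the logic, flagging grants that were never re-reviewed, is the same.

```python
from datetime import date, timedelta

# Hypothetical ACL entries: (user, resource, date the permission was granted).
# Hard-coded here for illustration only.
acl = [
    ("alice", "finance-share", date(2019, 3, 1)),
    ("bob",   "finance-share", date(2024, 11, 20)),
    ("carol", "hr-share",      date(2020, 6, 15)),
]

# Assumed review policy: any grant older than a year should be re-checked.
REVIEW_AFTER = timedelta(days=365)

def stale_grants(entries, today):
    """Return (user, resource) pairs whose grant is past the review window."""
    return [
        (user, resource)
        for user, resource, granted in entries
        if today - granted > REVIEW_AFTER
    ]

print(stale_grants(acl, date(2025, 1, 1)))
```

Running this flags alice's and carol's years-old grants while leaving bob's recent one alone, which is exactly the kind of review that, as Diogo notes, often never happens after onboarding.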
Have you ever found something critical that surprised the client?
The situation that surprised me the most was, once again, related to access control.
It involved an account with the lowest possible privilege level that somehow had direct access to the highest level of privileges. In other words, there was someone outside the technical department who had highly privileged access normally reserved for top-level technical profiles within the company.
That really shocked me, especially because those privileges had existed for years without anyone noticing.
What’s harder: the technical side or explaining the risk to the business?
As you progress in your career, it becomes increasingly clear how important it is to explain technical concepts in non-technical language.
At the beginning, of course, the technical side is the hardest because there’s an entirely new world to learn and absorb. There’s a huge amount of information, concepts, and layers of complexity coming at you all at once.
But at this stage, I’d say one of the biggest challenges is translating that technical complexity into language the business can actually understand.
The real challenge lies in balancing operational efficiency and security. In an ideal world, every action would need verification. But that would introduce enormous delays into everything. If I have to enter a password and go through 2FA for every single task, that may be the most secure scenario possible, but people still need to get work done.
It’s possible to have an extremely secure company, but if security compromises operations too much, it ends up affecting the business itself. And without business, there’s no money. Finding that balance is one of the biggest challenges.
Then there’s also the issue of bias. Security teams naturally tend to prioritize security, while operational teams prioritize efficiency and usability.
Which languages and tools are essential in your day-to-day work?
I’d say the technology most associated with offensive security is Kali Linux. It’s a Linux distribution specifically designed for offensive operations.
I think it’s what connects all of us. Anyone working in red teaming, web application security, infrastructure, or cloud security knows Kali Linux. It’s something almost everyone in the field has used at some point, and in my case, I use it daily.
In terms of programming languages, Python is heavily used, especially because it’s such a versatile scripting language that makes many tasks easier. With the rise of AI, that has become even more evident.
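As an example of the kind of small task Python makes easy in this field, here is a minimal TCP port check using only the standard library. The hostname is a placeholder; a check like this should only ever be pointed at systems you are authorized to test.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Attempt a TCP connection; return True if the port accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

# Hypothetical target, for illustration only.
for p in (22, 80, 443):
    print(p, port_open("target.example", p))
```

Wrapping `socket.create_connection` in a function like this is a common building block; real engagements would typically reach for dedicated tooling, but a ten-line script is often all a quick one-off check needs.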
Then there’s the entire Microsoft cloud ecosystem, especially Azure and Entra ID, which are becoming increasingly important. AWS also remains the world’s most widely used cloud platform.
---
Did you find this area interesting? 💡 Explore all our open Cybersecurity projects here!
Want to hear more about topics like careers, productivity, technology, management, or leadership? Check out our podcast.