In the past year, we've had a few discussions in my research group about joining various competitions, such as AIxCC, Pwn2Own, and others. These competitions are testing grounds for many of our research topics, as our group primarily focuses on testing strategies for a wide variety of targets. We ultimately decided against it, largely out of limited interest in taking part. For my part, my decision was guided by a singular concern: these competitions often require the exploitation of vulnerabilities.

The purpose of testing research

I have previously discussed the nature of responsible disclosure and its relationship to academic research. The other half of this conversation is how we treat vulnerabilities in practice.

Ultimately, when we develop new techniques for testing, our goals are to find and fix bugs. Both of these goals serve a common purpose: to provide better tools and strategies with which developers and users can protect themselves. Nowhere in this process is exploitation necessary.

When exploitation is included as part of the testing process, it is a weapon without a cause; it may of course demonstrate the severity of a bug, but it also inherently creates the opportunity for active abuse of our discoveries. For this reason, I believe it to be irresponsible to develop exploits during testing research.

The status quo of vulnerability reporting

In part, the reason that many testing groups have taken on exploit development as part of their work is industry-driven. Without a proof-of-concept, many developers will choose not to acknowledge our findings. If you've ever submitted bugs to Microsoft, you'll know what I'm talking about.

As external testers, we are often not taken seriously until we provide the means by which to harm others. In the case of most software development companies, exploits are merely a token; we develop a weapon which, realistically, will only affect their numerous customers. At worst, these companies would experience exploitation of these bugs as a minor dip in stock price, promptly "fixed" by shipping some other vulnerable service that we are then distracted by. This is a wildly irresponsible practice: findings are only accepted once we have effectively packaged together a "point and click" tool for harming people whom, ultimately, the company won't care about.

This practice appears most prominently in large software companies that treat security as an added cost rather than a priority. Open-source groups and smaller companies are thankfully less affected, but the problem persists wherever developers are unaware of, or deprioritise, security concerns. In these cases, it is often still necessary to develop an exploit to communicate the severity of an issue. Even then, a severely limited exploit is usually sufficient.

Why is this a concern in responsible disclosure?

I seriously question the assertion that disclosure can be responsible when exploits are developed.

For major companies, the presence of malicious actors is indisputable; whether through insider threats or existing unauthorised access, reporting portals are often compromised in some way. For these companies, which employ competent security staff, the full context and severity of a vulnerability can be communicated without an exploit. Instead, exploits are used merely as a "barrier to entry" on vulnerability reporting: a way to cut expenses by discarding reports without review, regardless of actual impact. As reporters, we are effectively required to develop the means of harming their customers before the company receiving the report will stop ignoring the potential harm. We implicitly "do the work" for the malicious actors who would exploit the very vulnerabilities we are trying to prevent being exploited.

For smaller organisations, especially volunteer open-source groups, even less security is available. We've seen this year just how severe compromises of open-source projects can be, but when security reports come packaged with exploits, there is almost no need for a malicious actor to attempt to backdoor these projects at all.

Similarly, at least in academia, groups that do testing can be and likely are compromised, either by insiders or through unauthorised access. Our systems are often less locked down in the name of research freedom, which is a good thing, but that freedom comes with a responsibility to minimise the damage our research could do. If nothing else, we are inarguably a target (we can and do find new vulnerabilities at a relatively high pace) and only make ourselves a greater target by developing exploits.

It is for these reasons that we must publicly reject the development of exploits; the groups we provide these exploits to can be and often are compromised, and we simply hand exploits to malicious actors on a platter. Even if we never disclose these exploits, their mere development is problematic when we ourselves may already be compromised. Moreover, we must adhere to and enforce a strict responsible disclosure timeline for each and every vulnerability we discover, so as to mitigate the effects of compromise both for ourselves and for those we report to.

Caveats

I, of course, have developed exploits and will continue to do so when absolutely necessary to convince a developer to fix a bug. It is better that a bug is fixed later with a known exploit than never fixed without one. Developing exploits is undeniably a thrilling process and, frankly, nothing is more satisfying than seeing an exploit succeed for the first time. But developing these exploits before it is necessary, or before the corresponding vulnerability has an upstream patch, can only be harmful, and we need to stop this practice.

This is also not to say that exploitation is completely without benefit. Exploitation, as a research topic, is another type of work entirely -- improving mitigations, understanding what vulnerabilities may exist. It has its place in research: without knowledge of what may be exploited, we have no knowledge of which bugs to test for (and thereby how to test for them) or which techniques may be used by attackers (and thus what mitigation research must defend against). Nevertheless, this is research independent of testing, and we shouldn't exploit at every opportunity.