Naturally, we want our software and the services that we use to be secure, and this often requires discovering vulnerabilities and taking actions to fix them. But what is the right way to disclose vulnerabilities to vendors and the public?
Many researchers will find themselves in the position of finding and disclosing vulnerabilities, often accidentally. What is less well known is that such activities can have legal and ethical implications that vary depending, for example, on how vulnerabilities are discovered, to whom they are disclosed, and how the public is informed.
Well-intentioned researchers seek to improve the security of software products in collaboration with vendors, while minimising the risk of someone taking advantage of a vulnerability. However, in some countries researchers are forced to defend themselves in legal proceedings, because their security research is considered criminal hacking. In others, it is unclear whether such research is welcome or whether researchers are protected. As a result, they may not know how to behave when they discover a vulnerability.
We spoke with the Head of the research group IRiSC (Interdisciplinary Research on Sociotechnical Cybersecurity), Prof. Gabriele Lenzini, to find out more about how researchers can safely disclose software vulnerabilities, and about the ethical debate surrounding the topic.
Gabriele: I believe that we, as cybersecurity researchers, should educate ourselves to be aware of the consequences of our work. We should also be knowledgeable about professional codes of conduct and best practices. In certain areas of computer security, such as cybersecurity research, we may discover vulnerabilities and ask ourselves how to deal with them. Shall we investigate further to collect evidence on how the vulnerability works and to find out the extent of the problem? Or shall we immediately inform the vendor even if we do not yet have a strong argument? What if the vendor ignores our attempt to establish a dialogue? Shall we publish our findings? What should we publish, and in what level of detail? Shall we wait until the vulnerability is fixed? What if someone exploits it in the meantime? What if the vendor does not like to be associated with what is clearly a serious flaw? Should we publish in venues that attract only specialists, or shall we seek a larger audience? After all, publish or perish is unfortunately how we are evaluated, and we need to publish to progress in our careers.
The debate on ethical conduct in computer science is growing, but there is still little guidance in Luxembourg and other countries on how researchers should act when they discover vulnerabilities. Developing an ethical standpoint is therefore imperative.
Researchers should have the necessary support to investigate and reveal software vulnerabilities, and this includes support from companies as well. This is called, in technical terms, Coordinated Vulnerability Disclosure (CVD). In April 2022, ENISA, the European Union Agency for Cybersecurity, published a map of CVD policies in the EU member states, and reported that there is still a wide disparity in practice within the EU. In certain countries, finding a vulnerability is a criminal offence; in others, it is rewarded and the researchers are acknowledged.
In this respect, the Netherlands has the most advanced national policy, and serves as an example for other countries. Companies, the government, and universities formed an alliance to foster research on security vulnerabilities, while at the same time protecting and advising researchers who pursue such activities. The Netherlands also has a history of judicial cases that clearly establish the boundaries within which researchers can safely operate.
Several Big Tech companies also recognise that software vulnerabilities are critical. They already offer bounties and public recognition to researchers who discover vulnerabilities. Sometimes, they even open competitions to find vulnerabilities. These companies want to be competitive, and they want to fix problems together with researchers.
So, ideally, researchers, software vendors, and other stakeholders should cooperate and coordinate on how to disclose findings for mutual benefit. CVD associations can provide legal advice and public recognition to researchers who have spent time and effort investigating how to make systems more secure, as well as monetary rewards (bounties) to those who have acted responsibly. Vendors can guarantee that they will not seek legal action, and instead create the conditions to learn from research how to improve the security of their products and services. Open debate and dialogue will guide researchers, who can then align their goals: helping to correct the vulnerability while also publishing their findings. This way of discussing security issues in a coordinated manner can make the world more secure.
In computer science, there is plenty of room to improve the education of students and researchers on ethical principles. Other fields began this process some time ago. For instance, education in ethics is quite common in the medical sciences and biology, while codes of conduct exist for dual-use research related to bio-weapons. Together with Dr. Wilhelmina Maria Botes, we at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) have examined these tools and compared them to our situation to see which ideas could be adopted in cybersecurity. There is already a move towards ethics in data science, but we should also extend it to cybersecurity.
At SnT, we conduct research on cybersecurity, and we have started to ask ourselves how we can improve our work by taking an ethical stance. We educate our researchers to consider ethical dilemmas and to see their work as carrying social responsibility. This is particularly relevant for our research on cybersecurity. We aim to be a beacon not only in technology and innovation, but also in responsible research conduct.
Together with other excellent players in Luxembourg, SnT could ideally help implement an appropriate European-wide framework, and set up a safe environment where researchers can discuss vulnerabilities and their appropriate disclosure, which is, after all, what EU directives like NIS and its forthcoming revisions are trying to promote. A coordinated approach is vital, so that in the future researchers benefit from more support and protection.
Prof. Gabriele Lenzini is Head of the IRiSC (Interdisciplinary Research in Sociotechnical Cybersecurity) research group at SnT, member of the working group in ethics of the Informatics Europe Association, and vice chair of the Ethical Review Panel (ERP) at the University of Luxembourg. He will be a keynote speaker on vulnerability disclosure at the Cybersecurity Week Luxembourg.