eLearnSecurity Blog

How to Conduct a Responsible Disclosure

A software developer found a flaw in Verizon’s mobile application and submitted his findings to the broadband and telecommunications company. According to the researcher, Randy Westergren, an issue in the Verizon My FiOS app allowed access to any user’s email account. This included reading their inbox and individual messages, and even sending messages on their behalf.

Westergren contacted Verizon and within 48 hours, the issue was fixed. The vulnerability was then disclosed to the public. You can read the story of the critical vulnerability here – Critical Vulnerability in Verizon Mobile API Compromising User Email Accounts


Proper Disclosure

We asked IT Security instructor, Davide “GiRa” Girardi, about his thoughts on the article and the difference between penetration tests and bug disclosures.

Did the researcher follow the correct procedure when conducting a Penetration Test?

This was not a penetration test; it was a security-related disclosure. Westergren discovered some bugs in a mobile application, studied how to exploit them, and disclosed them to the vendor in a responsible manner.

During a penetration test, you conduct a series of planned attacks on computer networks, systems, applications and more to find existing vulnerabilities. This includes proper analysis and reporting, and you are limited to a defined scope. You cannot simply test a system wherever you like; you have to follow strict rules of engagement.

How long should a researcher wait before publicly disclosing a vulnerability if the company does not act on it? Is there a standard timeline for this?

It really depends on how the vendor behaves. Many responsible disclosures include a timeline telling people how communication with the vendor went. The timeline usually contains every step of the disclosure process (i.e. private disclosure, acknowledgment of the bug by the vendor, patching, public disclosure).

For example:

  • 22 September: Report to the vendor
  • 25 September: Acknowledgement of the bug from the vendor
  • 7 October: Bug fixed
  • 8 October: Patch issued to the public
  • 9 October: Public disclosure
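A timeline like the one above can also be tracked against a fixed disclosure window. As a minimal sketch (the 90-day default is an assumption here, mirroring the policy popularized by programs such as Google Project Zero, not a rule from this article):

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, window_days: int = 90) -> date:
    """Hypothetical helper: the date on which the researcher would
    go public if the vendor has not issued a fix by then."""
    return reported + timedelta(days=window_days)

# 22 September: report to the vendor
reported = date(2015, 9, 22)
print(disclosure_deadline(reported).isoformat())  # prints "2015-12-21"
```

The window is only a default; as the interview notes, hard-to-fix bugs or internal vendor processes can justify extending it.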

Sometimes a bug is very hard to fix, or internal rules slow down the patching process, so the vendor may need months before issuing a patch to the public. Here’s another example of a bug disclosure with a longer timeline – Microsoft Windows Server 2003 SP2 Arbitrary Write Privilege Escalation | KoreLogic

On the other hand, sometimes vendors simply do not reply to private disclosures. In that case, I would try to contact them a couple more times and then publish the bug with a proof of concept (PoC), so everyone can be aware of its security implications.

The reason some researchers disclose a bug publicly is that black hat hackers could already be exploiting the vulnerability in the wild. At the end of the day every situation is different, and it is up to the researcher to do what is best for the community.

Can you give us more tips on conducting a proper bug disclosure?

First, know that no one may test a network or an infrastructure without proper authorization. There are, however, public bug bounty programs, as well as applications you can install and test in your own lab. It can be a grey area; being respectful and knowing what you are doing is key.

Always keep this in mind: carry out your tests knowing exactly what you are doing and what impact they have on the tested system(s). You must never, ever, cause trouble for the owner of the service(s) you are testing.

Then, if you discover a security bug, disclose it privately to the vendor. Do not send videos; give the company a well-written report with detailed steps to reproduce the bug. Your PoC must also be meaningful and easy to understand: you are trying to help them, and no one else is inside your head!

Finally, after the vendor fixes the bug, you can publicly disclose it.


Davide “Gira” Girardi is a security researcher and instructor. He has 8+ years of experience in system hardening and security consultancy on Linux, Windows, OS X and mixed environments.

LinkedIn: https://www.linkedin.com/pub/davide-girardi/76/652/744



