There have been three discrete schools of thought on disclosing vulnerabilities: totally open, partially open, and no disclosure. Fairly logical, that.
No disclosure is the school of thought that the best means of security is no public, and only limited private, dissemination of vulnerabilities. "Security through obscurity" is the phrase most associated with this school. The logic is quite simple on the surface. If "no one knows" about the problem, it doesn't exist as far as virtually everyone is concerned. That means there will be less chance of said vulnerability being exploited, as few people will know about it.
Partially open disclosure means the issue is acknowledged in very general terms, but no details whatsoever are given. In theory, it's supposed to be a compromise between the two camps. In practice, the majority hates it.
Full disclosure is just that. Open, complete discussion of vulnerabilities. All or nearly all details in the open to all parties. It's not considered inconsistent to give the manufacturer or other responsible party a defined period of time to resolve the issue before publication of the vulnerability. The problem is that the vulnerability can then be exploited by virtually anyone interested in doing so. Finding a flaw can be difficult; replicating it is often trivial.
Security through obscurity sounds like a very reasonable argument. The only problem is… knowledge always leaks, given enough time. Unless the person who found the vulnerability is a hermit, he or she is going to tell someone else. And if that person exploits the vulnerability multiple times, it will likely be noticed eventually. A vulnerability that isn't or can't be exploited is of limited value to the bad people.
Another consideration is that virtually everyone, including black hats, is motivated by MICE: Money, Ideology, Coercion, and Ego. Black hats are motivated to do what they do. Historically, they did their work for ideology or ego. These days, black hats motivated by ego are the minority (in terms of being a threat). The majority are motivated by money, plain and simple. Primarily spam, but also harvesting personal, corporate, or government information for resale or private exploitation.
The "individual" non-profit black hats are also starting to die off. They still exist, but are an extreme minority compared to the folks acting as independent or dependent contractors or specialists. Specifically, organized groups that have taken to information and electronic exploitation: organized crime, intelligence services, military units specializing in IEW, paramilitaries (security services, terrorists, PMCs, et al.), corporate espionage groups, etc. They have a specific motivation, whatever it is. These motivations (MICE) have existed since the dawn of civilization and will not disappear until humanity does. The electronic medium is a new playing field, but the overall themes are extremely old.
Outlawing full disclosure is akin to outlawing firearms. People will still engage in the behavior, and the only people hindered are the victims. Throughout time, people have reacted to bad news by shooting the messenger in hopes that the underlying information or situation will expire with the messenger. This is never the case, but the mentality survives.
Full disclosure is very painful to virtually every party in some way. The manufacturer responsible for the vulnerability must fix it; the researcher who discovered the vulnerability faces legal or reputational liability; the criminal must now deal with a potentially informed and prepared victim; the potential victim must mitigate the vulnerability.
This sounds like a major pain in the fourth point of contact. So why would any sane person advocate it?
Because it has been shown to be the only historical way of gaining real security.
This is not a new debate. The first published debate on full disclosure is traced to the 1850s, but the issue existed long before that. Guilds had elaborate procedures for restricting information to only acceptable parties in order to maximize profit at the expense of the consumer and public. Often, dangerously so. The milk processing guild restricted knowledge of its milk-adulterating procedures, which happened to be very dangerous and not infrequently life threatening.
We are in the same boat as the public in the 1850s. The overwhelming majority of people do not have the time, training, or ability to thoroughly examine every bit of their operating system, every aspect of their locks, etc. It can and often does take a decade or more to master just one area of study. As humans are not immortal, it is impossible to have mastery of all subjects.
It is in the public's interest, as well as the manufacturer's long-term interest, to openly disclose vulnerabilities. If a batch of milk was contaminated, the people who purchased it must be told. If a lock can be bypassed trivially, the owners should know. If a car has faulty brakes, the driver must know. If there is a major hole in a computer system, the operator must know it exists. Without this knowledge, it is impossible to mitigate the risk. The public will suffer. After being burned, they will lose trust and will extract retribution (hopefully through the courts) on the responsible party, the manufacturer.
While it is painful, the manufacturer who discloses a vulnerability greatly reduces its long-term liability for a defective product. It then builds a better product. Short-term loss, long-term gain.
Unfortunately, the "shoot the messenger" instinct is still very, very strong. In the US, there are laws in place that severely restrict reverse engineering. Free speech prohibits blanket bans on security publications, but Congress does its best to infringe on behalf of people who focus solely on the short term. This has extended to the point of security researchers literally being dragged off the podium in handcuffs (Sklyarov). It is not infrequent for the manufacturer responsible for the vulnerability to threaten or engage in legal proceedings to silence security researchers (MIT students v. MTA MetroCard, et al.). People just naturally get angry when they are given bad news, especially if the bad news is directly attributable to the person receiving it.
If you think hackers get treated unfairly, try giving open disclosure lectures on locks. People are absolutely shocked, horrified, and angry that their $20 pot metal piece of garbage lock is easily bypassed. Rather than accept personal responsibility and take reasonable steps to mitigate the issue, it's just plain easier to be angry at the person who told you the information. It doesn't change the reality of the situation. The vulnerability exists, whether folks know about it or not.
Professionals inform each other. Criminals circulate information. When open disclosure is banned, only the consumer or potential victim is in the dark. Exactly like gun control: when you attempt to infringe on or ban firearms, you do not stop the police or criminals from owning them. Only the public is hurt. Information on vulnerabilities is no different.