How researchers report vulnerabilities -- and how companies react to those reports -- may be one of the most hotly debated topics at this week's Black Hat security conference.
The debate isn't new -- researchers and vendors have quarreled over bug reporting philosophies for as long as the former have found bugs in the latter's software -- but the subject kicked into a higher gear last month.
That was when Tavis Ormandy, a security researcher employed by Google, went public with a critical Windows bug just five days after reporting it to Microsoft. Ormandy said he disclosed the vulnerability when the company wouldn't commit to a patching deadline; Microsoft has disputed that, claiming that it only told Ormandy it would need the rest of that week to decide.
Whether it was a breakdown in communications between the two parties or a misunderstanding, Ormandy's publication of attack code for a Windows XP vulnerability -- since patched by Microsoft -- unleashed a heated debate.
Some security researchers criticized Ormandy for taking the bug public, while others rose to his defense, blasting both Microsoft and the press -- including Computerworld -- for linking Ormandy to his employer.
"The upsetting trend, which I imagine has been keeping security companies playing along with Microsoft's silly game, is for Microsoft to call into question the ethics of the reporter, and even if that reporter was acting independently, tying that question of ethics to the reporter's employer," wrote researcher Brad Spengler in an epistle to the Dailydave security mailing list.
Spengler later declined to be interviewed by Computerworld.
But his post was influential: It was widely circulated among security researchers and rekindled the conversation about the Ormandy-Microsoft incident, as well as the larger conversation about when and how bug finders report their discoveries, and how vendors react to those reports.
For years, the debate has been between two concepts: "full disclosure" and "responsible disclosure."
In the former, researchers release information about a vulnerability when they see fit, or after a vendor balks at or delays a patch. The logic: When a bug goes public, companies fix flaws faster under the pressure, which may include the fact that the publication of the flaw has led to actual attacks.
"It's been shown that vendors can move much quicker when there's an exploitation in the wild," said Dino Dai Zovi, a security researcher who will be presenting Thursday at Black Hat.
Responsible disclosure, on the other hand, holds researchers on a tighter rein. Under that philosophy, a researcher privately reports a bug to the software maker -- or to some other organization that reports the vulnerability for them -- then waits for the developer to patch it before publishing details and exploit proof-of-concept code.
It's no surprise that Microsoft, and virtually all other software makers, have touted the latter as safer for customers and more likely to produce reliable patches.
"[Microsoft's] customers look to the security community to help them," said Mike Reavey, director of the Microsoft Security Response Center (MSRC), Microsoft's in-house security team, in an interview last week. "They don't want the risk amplified by information [going public] before a high-quality update is ready."
But the Ormandy-Microsoft imbroglio did produce some changes, important ones in the eyes of many security researchers.
Google led off last week when it published a missive called "Rebooting Responsible Disclosure," a proposal that featured, among other elements, a call for researchers to set a 60-day deadline. If a vendor hasn't patched by then, researchers would be free to take their findings public.
The search giant also raised the maximum payment it would make for bugs in its Chrome browser, following a similar move by Firefox maker Mozilla.
Days later, Microsoft responded by announcing it wanted to change the term "responsible disclosure" to "coordinated vulnerability disclosure," a decision it denied came from the Ormandy event.
Microsoft's Reavey argued, as others have before him, that the word "responsible" carried too much baggage. "When folks use charged words, a lot of the focus then is on the disclosure, and not on the problem at hand," he said.
Researchers applauded the moves by both Google and Microsoft.
"What's really important [about Google's deadline proposal] is that this is coming from a vendor, not a researcher," said Dai Zovi. "Google's saying, 'This is a standard we're going to hold ourselves because we think it's right.' They're leading by example."
Jeremiah Grossman, chief technology officer at White Hat Security, gave Microsoft kudos for tossing the word "responsible" from the discussion. "Even if it's only a renaming and repositioning of Microsoft's position, that's a good thing," he said.
Like Dai Zovi, Grossman is slated to present his vulnerability research at Black Hat on Thursday.
"What a change from a year ago," Grossman added, pointing to the increased bug bounties, Google's demand for a patching deadline and Microsoft's acknowledgment that "responsible disclosure" was an inappropriate label. "What a radical departure."
"I don't know if this will be a big topic at Black Hat," said Charlie Miller, a researcher set to present Wednesday at Black Hat. "It may not. But all sides have put their cards on the table now."
Gregg Keizer covers Microsoft, security issues, Apple, Web browsers and general technology breaking news for Computerworld. Follow Gregg on Twitter at @gkeizer or subscribe to Gregg's RSS feed. His e-mail address is email@example.com.