A journalist protects a source; a doctor secures patient information; a lawyer maintains client confidentiality. So, shouldn't tech companies be required to guard the privacy of their consumers? According to Adweek, 79% of Americans agree, saying tech companies should be regulated in the same way the news media are. Further, a recent Pew Research Center survey found that 47% of Americans lacked confidence that they understood what companies would do with their personal information, and had mixed feelings about whether to share it. Yet unlike journalism, medicine, and law, the field of cybersecurity has no ethical code that its professionals are obligated to follow.
A problem of vulnerability
With houses filled with smart devices – phones, home security systems, sprinklers, televisions, baby monitors, smoke detectors and so many others – sending data to the cloud, it's no surprise that we are increasingly vulnerable to cyberattack. It's unsettling to think that hackers can access anything from your house to your pacemaker. While it's true that nothing is 100 percent secure, there's no need to become a Luddite just yet. Though many people are convinced that Alexa is spying on them, this is most likely not the case.
That being said, why are we so vulnerable to these hacks? According to a study by IBM Security and the Ponemon Institute, 80% of organizations do not routinely test their IoT apps for security vulnerabilities. The tech and software environment pushes companies to prioritize rapid product releases over strong security practices, so these devices are often weakly secured from the start. Even so, many cybersecurity and tech businesses strongly oppose stricter security regulations, arguing that they would slow the pace of innovation.
Further, there have been cases where consumers were not made aware of existing vulnerabilities. In 2015, a dental-practice management software vendor was ordered to pay the US Federal Trade Commission $250,000 for claiming that its program protected data when it did not. The company had spent two years assuring customers that its software secured their data and met regulatory standards, while aware that it did not meet HIPAA's encryption requirements.
Disclosing a vulnerability
When a vulnerability is discovered, ethics shapes how a cybersecurity professional addresses the problem. Researchers typically handle the information in one of two ways – responsible disclosure or full disclosure. Full disclosure means publishing details of the vulnerability as soon as it comes to light, so that users immediately know which software or product is at risk. Companies argue against this method on the basis that it also alerts hackers to a flaw which may not yet have a fix. Proponents counter that full disclosure creates public pressure that pushes vendors to patch bugs faster, and that it gives users the opportunity to stop using a compromised product.
Responsible disclosure, by contrast, means the researcher contacts the vendor privately before the vulnerability is made public. From a brand standpoint, the vendor benefits: it can control the narrative, releasing information on its own timeline along with a fix. However, this leaves an extended window in which users are unaware they have been exposed and unknowingly continue to use vulnerable products. To critics, the vendor appears to prioritize brand image over integrity and the safety of its consumers' information.
The need for cybersecurity professionals is greater than ever. The William Woods University Online BS in Cybersecurity program is designed to prepare students to thrive in this field, filled with high-demand, lucrative positions.