I had an interesting conversation on the plane last week with a retired choir director/professor who had recently experienced fraudulent charges on his bank account. Since I had disclosed my profession, he wanted to know how this could have happened, and I struggled to answer the question in a way that he - a non-techie - could easily understand.
The conversation made me wonder once again: what should/can we reasonably expect the average person to understand? Do we really need to reduce to the lowest common denominator, or do we at some point draw a line, with the caveat that a certain percent of the population will never "get it"? If so, what percent is reasonable and appropriate?
Or, instead, is it a matter similar to our cars? Do we set a minimum set of "safety" requirements with punishments and then relegate anything too technical to a "mechanic"? We need to know how to operate our vehicles, and in an allegedly safe manner, but beyond that what is our duty?
I've read a few articles lately that tie into this topic, though from a variety of angles. One such article, titled "Compliance with Information Security Policies: An Empirical Investigation" in the February 2010 issue of IEEE's Computer magazine, talks about how rewarding people for policy compliance is ineffective, and how punishments actually have the best results. The authors key in on peer pressure in particular, but the overall message is clear: rules without consequences are ineffective.
It seems that we need something comparable in infosec, and not just in the corporate world. Certainly, enterprise security policies need teeth that are actually used to bite, but this time around I'm talking about the real world. Sure, the notion of licensing end-users isn't new - it's in fact been bandied about for decades. But the real question is this: at what point do we start getting serious about setting reasonable expectations?
There are a few quick thoughts:
* Assumption: The average end-user cannot be held accountable for underlying product defects.
* Assumption: The average end-user can be held accountable for bad decisions.
* Problem: Definitively tying end-users to their bad decisions.
Legislation on this matter seems increasingly necessary. We're already seeing more calls for increased regulation of enterprises and for government intervention against botnets and spambots. It's time to also start looking at sane laws that require end-users to behave responsibly and intelligently (well, as reasonably intelligently as possible). If you are found to be negligent in using or maintaining your computer, you should be held responsible.
So, how do we define negligence for average end-users? Again, a few quick thoughts...
* Patching: Every major OS has automated patching capabilities.
* AV and Anti-spyware: AV and anti-spyware packages are widely available, many free for personal use.
* Reasonable clicking: This is an awareness issue. Part of the problem is poorly coded apps, but the other part is basic awareness and intelligence.
* Reasonable suspicion: People are clearly still too trusting of the Internet. Consider the story from Brian Krebs about a man in North Carolina who willingly served as a money mule in a laundering scheme.
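To make the patching expectation above concrete, here is a minimal shell sketch. It is a hypothetical helper, not part of any standard tool: it only prints which built-in auto-update mechanism applies to the current platform (the mechanisms it names - unattended-upgrades, dnf-automatic, and macOS's `softwareupdate --schedule` - are real), and it changes no system settings.

```shell
#!/bin/sh
# Hypothetical helper: report the built-in automatic-update mechanism
# for the current platform. It only prints advice; it changes nothing.
case "$(uname -s)" in
  Linux)
    # Debian/Ubuntu ship unattended-upgrades; Fedora/RHEL ship dnf-automatic.
    echo "Linux: enable unattended-upgrades (Debian/Ubuntu) or dnf-automatic (Fedora/RHEL)"
    ;;
  Darwin)
    # macOS exposes automatic update checks via the softwareupdate CLI.
    echo "macOS: run 'sudo softwareupdate --schedule on' to enable automatic checks"
    ;;
  *)
    echo "Other (e.g. Windows): use the OS's built-in automatic update settings"
    ;;
esac
```

The point of the sketch is simply that every mainstream OS already offers a one-command (or one-setting) path to automatic patching, which is what makes unpatched systems a reasonable component of a negligence standard.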
At the same time, we also need to provide facilities to back these requirements up. Most of the pieces are already in place, so we really just need to fill in a few gaps. Vendors already provide automated patching tools, and a wide variety of AV and anti-spyware packages exist, many free and others available for relatively little money. That leaves training and awareness, along with some sort of consumer-oriented reporting capability. Also, tied to patching, there needs to be a path that facilitates upgrading; I wonder, for example, how many people are still running Windows 95.
Some things that I think would help:
* Privacy regulations: If privacy protection is codified and enhanced, then people will have a better reason to protect their data. Such a movement could help roll back some of the privacy losses of the past decade, as well as help modify the Gen Y (and younger) culture that does not put a premium on privacy. Part of this would be shifting away from protection against intrusion toward a more control-oriented approach (see my post "The New School of Privacy" from last year).
* A national awareness program: Right now our only mechanism for alerting consumers is the hype amplifier that is mainstream media. We need a sensible, non-inflammatory national program that is seen as legitimate and constructive, and that spends its time offering training and awareness to consumers about how to protect themselves (this becomes all the more important if consumers are afforded more protection while also being subject to consequences for bad behavior).
* Vendor-oriented regulations: There need to be increased regulations that codify the responsibility of vendors to provide consumers with reasonable protection against defect and compromise. While it's imperative that consumers share the burden, it is also necessary to act proactively to prevent vendors from sloughing off their responsibilities in this shared ecosystem.
* More aggressive government intervention: The government needs to start intervening more aggressively to protect consumers from malware, scams, and fraud. At the same time, the government also needs to aggressively protect consumers from defective products or vendors who aren't taking responsibility for their own systems and applications. It seems that there should be an insurance vertical around this area, too. Perhaps a model along the lines of FEMA's flood insurance could be used for protecting consumers.
* Consumer protection regulations: We need new regulations that provide consumers additional protection, primarily against vendors who don't demonstrate a reasonable standard of care, but also to ensure that consumers are not punished for the evil perpetrated on them by others, at least so long as they aren't making foreseeably bad decisions. Some form of insurance may be helpful, but so would a consumer protection program geared toward cybercrime and related areas.
* Regulatory consequences for bad decisions: While affording the consumer protection is worthwhile, it seems that we have also reached a point where we need to develop reasonable "safety" standards for use of computers and the Internet. As such, just as we have laws governing operation of motor vehicles, so it also seems that we need some sort of comparable program - maybe short of licensing - that requires people to operate their computers up to a minimum level of sanity. What this would look like is unclear, but what is clear is that the current state is neither adequate nor acceptable.
It's a fine line to balance between protecting consumers, requiring consumers to act responsibly, and creating an environment that ends up being unnecessarily exclusionary. However, what is clear is that the Internet and computing will only become more pervasive in our lives. Toward that end, it's time to start effecting cultural change. Of course, perhaps none of this helps me explain why someone had fraudulent charges on his bank account, but it does at least give us a starting point for conversations on safe(r) computing practices in the consumer sector.
Comments (1)
I agree that it's a fine line to balance. Unbelievably though, there are many people who still don't understand "if it seems too good to be true, it probably is" and they get scammed. The balance line between naiveté and stupidity is hard to figure out.
Have fun at RSA, maybe I'll see you there again next year.
Posted by Steve Lodin | February 25, 2010 1:18 PM