"Because technology can evolve much faster than we can, our natural capacity to process information is likely to be increasingly inadequate to handle the surfeit of change, choice, and challenge that is characteristic of modern life. More and more frequently, we will find ourselves in the position of the lower animals -- with a mental apparatus that is unequipped to deal thoroughly with the intricacy and richness of the outside environment. Unlike the animals, whose cognitive powers have always been relatively deficient, we have created our own deficiency by constructing a radically more complex world." (Influence, p.277)I've recently completed Robert Cialdini's Influence: The Psychology of Persuasion (Collins Business Essentials). The book covers in detail what Cialdini has identified as the six most common methods of influence that are used or abused in compliance situations. A compliance situation would be any scenario where someone is trying to get something from someone else, either for themselves or their organization. Each of the six methods has its own chapter, which provides copious anecdotal and academic backing.
This work fascinated me from an information security perspective. One of the primary threats to average users today is from phishing and spam, which often lead to various types of fraud. These attack vectors often leverage one or more of the six methods, as I'll describe below. Following are my notes from the reading, with additional thoughts and anecdotes added as applicable.
Cialdini begins the book by providing an example of influence at play. He talks about how humans rely on mental shortcuts to reduce the amount of mental processing required to go through life, and how these shortcuts can be exploited as "weapons of influence." In one example, he tells how a friend's note to her employees accidentally resulted in jewelry being doubled, instead of halved, in price. As a result, tourists were drawn to the jewelry on the premise that expensive = good. He called this phenomenon "betting the odds" and remarked that it exemplifies how the brain tries to simplify things.
The six "weapons of influence" described in the book are:
* Reciprocation
* Commitment & Consistency
* Social Proof
* Liking
* Authority
* Scarcity
I'll try to summarize each of these below and add my thoughts, particularly with respect to their applicability to information security.
Reciprocation
This method can best be thought of as currying favor. You offer something, and then you can ask for something in return, even if the return request is disproportionately larger. Most commonly we see this in negotiations, I believe, where one side will offer a compromise on one issue while asking for a compromise on another. Cialdini notes that it can be as simple as offering a free beverage or snack and later asking the recipient to do something for you. In principle, the recipient of the original gift feels indebted and is then eager to rid themselves of that feeling of indebtedness, even if it means agreeing to an unequal exchange.
There are a couple of key features of this approach that I've seen consistently violated in my years working in information security. First, being the one to offer the initial compromise is all well and good, so long as you're not compromising beyond the appropriate position. For example, if your organization is discussing minimum acceptable password lengths and your security department has concluded that 8 characters is the minimum acceptable length, then it is not useful to offer a compromise of 6 characters, even if you plan to request a return compromise of requiring complexity. The reason is that you end up compromising your fundamental position, which, in this scenario, was an 8-character minimum. Instead, I believe it would be better to argue for a 10-character (or greater) minimum length and then offer a compromise to 8 (or, one could argue for a 12-character minimum using a passphrase approach and then compromise on the complexity requirement, provided that the available character set is sufficiently large).
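To make the length-versus-complexity tradeoff concrete, here's a minimal Python sketch (my own illustration, not from the book) that estimates the theoretical entropy of a randomly chosen password as length × log2(charset size). The charset sizes are assumptions, and real passphrases built from dictionary words carry less entropy than random characters, so treat these figures as upper bounds.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Theoretical entropy of a randomly chosen password: length * log2(charset)."""
    return length * math.log2(charset_size)

# Assumed character-set sizes, for illustration only:
LOWER = 26        # a-z only, e.g. a simple passphrase
ALNUM = 62        # a-z, A-Z, 0-9
FULL_ASCII = 94   # all printable ASCII, i.e. full "complexity" rules

for label, length, charset in [
    ("8-char complex password", 8, FULL_ASCII),
    ("10-char mixed-case alphanumeric", 10, ALNUM),
    ("12-char lowercase passphrase", 12, LOWER),
]:
    print(f"{label}: ~{entropy_bits(length, charset):.1f} bits")
```

Under these assumptions, the 12-character lowercase passphrase (~56 bits) already edges out the 8-character fully complex password (~52 bits), which is why trading the complexity requirement for length can be a defensible compromise.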
The other use of reciprocation that concerns me is when one side is consistently the first to compromise, sending the subtextual message that you believe your side has a weak position that cannot be successfully defended. This weak position within the business seems to be common to information security. We're not generating income, but rather trying to keep bad things from happening. Unfortunately, proving a negative is not possible, leaving us in the unenviable position of advocating for principles that, when they're working, are nearly impossible to prove effective.
Thus, we're often put into a position of needing to compromise in order to achieve anything; oftentimes, compromising with the business over real or perceived costs. I, however, offer an alternative negotiation technique that, when used correctly, can lead to a win-win scenario. As discussed already, it's important not to compromise beyond your base objective. One tactic I've employed successfully is to hold the line on a deliberately overstated objective (such as a 15-character minimum password) and push the business hard to make the first offer of compromise. Once you've set the other side up to begin a reciprocation maneuver, you can then respond in kind by backing off of your stance (sometimes considerably), while still achieving your base objective.
This tactic is useful for three reasons. First, it allows you to approach a negotiation from a position of strength - even if you are actually coming in from a standpoint of weakness (such as asking for costly measures that will benefit the business in intangible ways, or ways that are difficult to measure). Second, it creates the illusion that the other side is in control by letting them take the lead. They believe that they are on the beneficial end of the reciprocation arrangement, even though you were coming from the weaker position. Third, it creates a win-win for everyone, allowing your side to meet your objectives, while the other side feels like they've successfully negotiated a better deal for themselves.
Regrettably, this approach requires a degree of tenacity rarely found in security organizations. Also, if executed poorly, it can set you up to look like a royal jackass, which is probably not overly productive for the old career. There is, however, an alternative spin on this approach that can save the day. By creating such a standoff as an individual contributor, you set up your management to step in as a peacemaker and negotiate the end solution. As long as everybody is on the same page (rather important), this can be an equally successful approach, though it still potentially leaves you, the individual contributor, with a slightly tainted reputation. If you reserve this tactic for rare occasions, though, and act reasonably in all other scenarios, you'll help compensate for standing on principle.
Commitment & Consistency
The method of "commitment and consistency" is based on the underlying believe (demonstrated through studies) that once someone commits to a decision or position, they will then work overtime to ensure they act consistently with that commitment. The more active, public, and effortful the commitment is, the more "owned" people will be by their commitment, causing them to fight for it at all costs. Even more important, consistency is a socially reinforced value: just look at what happens when politicians change their opinion on an issue. How many times have we bought into media labels of "flip-flopper" when someone has changed their stance on abortion, gay marriage, the "war on terror" and ongoing Iraq conflict, and so on. (of course, ironically, almost all politicians hedge their bets, often voting for an issue in one case, then against the funding for it in the next instance -- or vice versa).
According to Cialdini, once you commit to something in writing, no matter how small, it will cause you to modify your self-image to be consistent with that commitment. Companies have apparently been using this for decades, such as with customer testimonials. Though I don't recall such contests, Cialdini points to companies having run short-essay contests promoting their products, which have the net effect of getting consumers to act consistently with their written support of a product by purchasing more of it. I also see the self-rating features of certain web sites (Amazon, eBay, Pricegrabber) as having the same effect (though they may have more to do with the next method -- social proof). What I wonder, though, is how effective these written testimonials are today. Are they still as effective on the Internet? It's hard to know.
Another note from my reading... I'm concerned that the American media has become a tool for anti-national interests by making public concessions on the deficiencies of our leaders and way of life. Cialdini talks at length about how the Chinese used psychological methods to manipulate Korean War POWs into collaboration by first asking the POWs to talk about their thoughts on America. Once a POW agreed that everything wasn't rosy, it was a short slippery slope down to colluding with the enemy in deriding democracy and upholding the virtues of communism. In this same vein, I worry that the media's concessions and critiques of America are in fact working against national interests by leading people to believe that this is not, in fact, a great country (despite the cretins in charge these days).
Mind you, I'm an ardent advocate of the First Amendment right to free speech. However, with freedom comes responsibility. How often does the media betray those responsibilities, thus playing into the hands of our enemies? It's something of which to be cognizant. I have to wonder if our melting pot culture is particularly susceptible to these types of erosions of support.
From an information security standpoint, I think that commitment and consistency could be used frequently to support our efforts. Think of the training and awareness programs being run throughout various organizations. If you can get people to interact during these sessions, conceding that there are merits to the security perspective, then you begin to shape a commitment that people will seek to be consistent with. Beyond this, I wonder if it would be useful to model a program after the Chinese treatment of POWs, asking participants to write essays about information security topics. Getting people to commit in writing to desired patterns of behavior would then cause them to modify their self-image in order to be consistent.
The overall goal of using this method within information security seems to be to push out principles, get commitment from key personnel, and then let the support grow from a grassroots perspective, with the committed personnel acting consistently with their commitments. Important, however, is the need to reinforce these commitments over time. One possible method might be through train-the-trainer or mentorship. Overall, it's something to keep in the back of one's mind: that people will strive to be consistent once they have made a commitment.
Social Proof
"The greater the number of people who find an idea correct, the more the idea will be correct." (p.128)I found the chapter on social proof quite interesting -- perhaps the most interesting of the entire book. This technique relies on large groups of people acting in a similar manner, with the end-result being that newcomers who are uncertain about how to act will follow the prevalent behavior. This truth is particularly scary when considering that an entire group may be uncertain how to act, resulting in the wrong behavior emerging. From a promotional standpoint, however, it benefits the promoter to have a majority of people doing what is being promoted so that no further direct promotion is required to encourage following the prevailing behavior.
In training situations (live or online), it's not uncommon to see this tactic at work. Ever been in a course where some cheesy video (or series of videos) is shown to model a certain behavior? It turns out that these videos are, in fact, successful, no matter how ridiculous they may seem. The same is true of laugh tracks on television comedies. Cialdini even cites a study in which anti-social preschoolers were individually shown a short (less than 30 minutes) video of a preschool classroom where an anti-social child became socially active in an appropriate manner, with the end result that these previously anti-social children (boys in particular, it seems) became socially integrated and, even better, became social leaders (in a positive manner).
There can be huge negatives to social proof, as alluded to earlier. Cialdini talks considerably about the phenomenon of "pluralistic ignorance," wherein a group of uncertain people will decide collectively not to do the right thing, believing falsely that someone else is doing the right thing. He points to the infamous 1964 case of Kitty Genovese in New York, where a woman was heard running down the streets of her neighborhood, being attacked three times by the same assailant. More than 30 neighbors observed the incident, and yet none of them called the police, nor did they attempt to offer direct help. Douglas Adams even refers to this comically in one of his books from the Hitchhiker's series as the "Somebody Else's Problem" effect (Slartibartfast has a generator on his spaceship that can create an SEP field, causing people not to see something right in front of them, such as a spaceship).
Anyway, this matter of pluralistic ignorance can be quite important in emergency situations. Specifically, Cialdini relates a personal story in which he was in a major two-car accident and suddenly realized that pluralistic ignorance was happening, with cars proceeding through the intersection around the accident scene. He relates that the best way to fight this situation, which is built on uncertainty, is to remove the uncertainty. In his case, he pointed at the driver of a nearby vehicle and instructed them to immediately call for help. Once he'd taken this first step, others realized that an emergency existed and began pulling over to help out.
Within information security, this points to a risk that I believe affects average users all the time. The fact of the matter is that people are uncertain about the Internet and the things that come from it (spam, phishing, identity theft). Because they are not within a social context with other people to look to for guidance or modeling, they're unable to make a good decision, and thus will more often than not make a bad one. Within a business environment, security practitioners have a means to address this concern, but outside the business it is a major problem. I'll come back to this with an idea momentarily. First, let's look at what the business can do.
The best way to address pluralistic ignorance, leveraging social proof, is to make a concerted effort to educate people about the difference between emergency and non-emergency situations. Provide them with mechanisms to report anomalous behavior and consistently model appropriate response behavior. The field of incident response management has evolved around these sorts of functions: define an incident, develop means for detecting and reporting it, have set procedures for responding, and so on. Education, training, and awareness then work hand-in-hand with these formal response programs to help provide the social proof necessary to ensure as secure an environment as is feasible.
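As a rough sketch of the "define, detect, report, respond" cycle described above, here's what the reporting and routing piece might look like in Python. All names and severity tiers here are hypothetical, chosen only to show how mapping every report to one unambiguous next step removes the uncertainty that lets pluralistic ignorance take hold.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    ANOMALY = 1    # odd but not clearly harmful
    INCIDENT = 2   # meets the organization's definition of a security incident
    EMERGENCY = 3  # active harm in progress

@dataclass
class Report:
    reporter: str
    description: str
    severity: Severity
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route(report: Report) -> str:
    """Map every report to one unambiguous next step, so no reporter is
    left uncertain (and tempted to assume somebody else will act)."""
    if report.severity is Severity.EMERGENCY:
        return "page the on-call responder immediately"
    if report.severity is Severity.INCIDENT:
        return "open a ticket and start the documented response procedure"
    return "log for trend analysis and thank the reporter"

# Example: an employee reports a suspicious email instead of treating it
# as somebody else's problem.
print(route(Report("j.doe", "email claiming to be from IT, asks for password",
                   Severity.INCIDENT)))
```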
Now, for the normal home user, where this kind of social proof does not exist, I wonder about the potential benefits of creating an online social network through which social proof can be manufactured. If someone could develop a "Web 2.0" style site that allowed consumers to network socially and share experiences, then perhaps it would be possible to use social proof to promote safe computing. Perhaps these social networks could be linked to local face-to-face events to likewise spread the word. Some of this exists already today. How one builds this into a centralized site that isn't just a geek fest is another question. Something to think about...
"...the enemy is not some unmanageable societal condition...but is, instead, the simple state of uncertainty..." (p.136-137)Even more fascinating about this chapter on social proof is Cialdini's contention that people oftentimes know the right thing to do, but still choose not to do it because of their surrounding social context. I believe that, from an information security standpoint, this then ties back into the previous technique of commitment and consistency, in that you need committed people in the field modeling appropriate behavior, so that the social context exists to reinforce this good behavior, freeing people to then follow it, instead of falling through to pluralistic ignorance and doing the wrong thing.
Lastly, Cialdini notes that social proof works best when an individual sees the behavior modeled by someone similar to themselves. I view this as a matter of people being better able to relate to the observed behavior and assimilate it as their own. While this point might seem to reinforce an undercurrent of discrimination, it is something that needs to be considered, and it is in fact something I've seen frequently in information security. Business leaders are more likely to follow security guidance when it is delivered and modeled by other business leaders. Programmers are far more likely to listen to other programmers than to accept the presumed-unqualified guidance of an information security professional, no matter how knowledgeable or experienced that infosec pro might be. This pattern is reinforced in the next technique (Liking), but don't worry -- the technique after that (Authority) provides a way around the similarity prejudice.
Liking
So, this technique and the next one (Authority) are interesting in that they seem to be naturally prevalent in a lot of compliance situations. Mainly, it goes like this: the more you like me, the more likely you are to buy what I'm selling or do what I'm asking. Liking, or likability, seems to generally be based on superficial qualities. Cialdini discusses physical attractiveness, compliments, conditioning & association, similarity, and contact & cooperation as the most common ways of achieving likability. I'm not going to spend much time on this technique, as I think it's generally self-evident. However, let's look at one common use from the infosec sector.
Social engineering is the age-old practice of manipulating people into doing things that they probably should not do. In the business world, it is oftentimes used by attackers to manipulate employees into disclosing information or granting access that should not be granted. For example, it's not uncommon to hear stories of an attacker calling customer service and getting information about other customers, getting competing bad guys' accounts suspended, getting their own accounts restored, and so on. While these attempts almost always start with pretending to be an authority figure, they almost always feature the Liking technique as well. Specifically, it's very common for attackers to be very nice and polite. They'll often compliment the customer support rep (CSR) on the good job they're doing. They'll be very understanding that the CSR may not have a lot of time to spend on the task, or they might offer to call back later. All of this is intended to make the attacker seem non-threatening. Interestingly, it's the newbie who is most often targeted by these attackers because, unsurprisingly, newbies lack confidence in what they're doing. A more confident CSR is far more likely to detect the BS, to not feel the need for external reinforcement, or to simply say no to a potential customer when the request is inappropriate.
Authority
Another fairly obvious technique is using authority to create compliance. According to the book, studies have shown that we grossly underestimate the effect of an authority figure on our judgment -- and we don't consider how easy it is to fake authority. This is played out on the Internet on a regular basis, and it has been leveraged by con men for probably as long as the con game has been around. Stephen Colbert has even invented a word that kind of goes with this tactic: truthiness. In this case, humans are likely to give in to looks (suits, uniforms, etc.) and assume that a person is authoritative when they may not be.
As mentioned above, this technique is often employed by attackers, particularly when executing social engineering attacks. It's not uncommon for attackers to claim to be from the security department in order to establish an immediate position of authority, with the hope that further compliance will be forthcoming. It is incumbent upon people to challenge unproven authority, demanding proof before agreeing to do something that they would not otherwise agree to do. While this shortcut oftentimes works in our favor in the case of police, fire department, EMS, or other medical personnel, it can also work against us very effectively when someone is trying to trick us. If you go to a hospital, you expect to find a doctor. When someone comes up to you (or appears on television) purporting to be a doctor, you should question the validity of their asserted position of authority.
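One simple proof-demanding control, sketched below in Python under assumed names: never act on contact details supplied by the requester; instead, look the claimed role up in a directory you already trust and call back that number.

```python
# Hypothetical internal phone directory; in practice this would be the
# organization's authoritative employee/department directory.
TRUSTED_DIRECTORY = {
    "security-dept": "+1-555-0100",
    "help-desk": "+1-555-0101",
}

def callback_number(claimed_role: str) -> str:
    """Return the independently known number for a claimed role, or
    refuse outright if the role isn't in the trusted directory."""
    number = TRUSTED_DIRECTORY.get(claimed_role)
    if number is None:
        raise ValueError(f"unknown role {claimed_role!r}: refuse the request")
    return number

# The employee hangs up and dials the directory number -- never the number
# the self-proclaimed "security department" caller offered.
print(callback_number("security-dept"))
```

The point isn't the code, of course, but the procedure: the proof of authority comes from a channel the attacker doesn't control.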
Incidentally, you'll recognize this method in advertising combined with social proof. "9 out of 10 doctors agree, [product X] is the best for [condition/treatment/symptom]." It works even better when you see doctors (or actors dressed like doctors) giving their opinion while in their white lab coats.
Scarcity
The last "weapon of influence" discussed by Cialdini is that of scarcity. The scarcity principle says that the less available something is, the more desirable it is likely to be. We see this tactic played out effectively in the retail world, such as when we go to purchase a vehicle or major appliance and are told something like "it's the last one we have" or "someone else is also looking at this specific unit." What's most interesting about this method is that it appears to be the hardest one to defend against because it evokes a strong physical-emotional response.
This chapter was perhaps one of the more entertaining of the book as Cialdini talks about how his brother used scarcity to put himself through school (I won't ruin it for you). I wrote down the following comments while reading the book:
The scarcity principle is tightly coupled with psychological reactance, which can have a couple of interesting effects. First, if you offer people more freedom and choice and then try to remove it, they will revolt. Second, if people believe that their choices are limited, they will value the decision more than if they had many choices. This perspective was also discussed by Daniel Gilbert in Stumbling on Happiness.
From an information security perspective, there seem to be two major lessons to learn from this method:
1) If you grant people freedom, be prepared for staunch resistance when you try to revoke it. For example, when implementing policies that restrict freedoms that once existed, plan to deliver those policies in a manner that preempts the resistance that will be forthcoming. If possible, leverage the other weapons of influence to your advantage and, even better, try to find a way to get your customers to see the new policies as their own decisions. Failing to do this could have the effect of inciting outright resistance to and defiance of the policies.
2) It's not useful to offer people too many choices. Instead, it is better to narrow the choices (preferably to win-wins that are equally desirable to you, the requester). This is actually a trick that the Supernanny has advocated on her show for dealing with defiant children: offer them a choice between options that are equally acceptable to you, giving the appearance of choice without losing the battle.
While the topic of reactance gets away from scarcity directly, Cialdini ties it into his discussion by talking about how scarcity can cause us to make bad decisions because we feel pressured (by ourselves, it turns out) to not apply logic to the thought process.
My primary take-away from this chapter was that we can get trapped into making decisions when the scarcity principle is applied. Here are a couple interesting quotes from the chapter:
"The joy is not in experiencing a scarce commodity but in possessing it. It is important that we not confuse the two." (p.267)
"...it is vital to remember that scarce things do not taste or feel or sound or ride or work any better because of their limited availability." (p.268)
It seems to me that the greatest challenge with scarcity is that it creates an emotional response, resulting in the subjugation of logic. As such, Cialdini coaches us to detect the emotional response and force ourselves back to logic by asking ourselves if we want something for looks or use. If the latter, then we must keep in mind that desirability does not equal functionality.
"With the sophisticated mental apparatus we have used to build world eminence as a species, we have created an environment so complex, fast-paced, and information-laden that we must increasingly deal with it in the fashion of the animals we long ago transcended." (p.275)
No wonder Internet-based threats are so effective against the average user. We in the security industry deal with these threats on a daily basis, becoming attuned to the signs and developing mechanisms to respond accordingly. The average person does not have that degree of cognizant exposure and thus never gets the chance to adapt. Of course, it's no wonder this is the case. I mean, who has the time?
"Every day in every way, I'm getting busier." -Cialdini (p.274)
Related links of potential interest:
http://www.fripp.com/art.of_influence.html
http://en.wikipedia.org/wiki/Robert_Cialdini
http://happening-here.blogspot.com/2006/01/surrounded-by-weapons-of-influence.html