One of the great difficulties in understanding persuasion is the number of variables that do, or could, affect the outcome. One way to analyze the elements is to try to separate the degree to which one likes the speaker from the quality of the argument the speaker presented.
In the Spring 1998 volume of Social Cognition, we find an example of one such experiment. It begins like most psychological experiments: by deceiving college undergraduates. College undergraduates are used because they have free time and are available to professors. Deception is used because if the subjects know they are being tested for X, they may distort their responses.
In the first test, the students were asked to evaluate the quality of an argument given by someone whom the students would find "likable" or "unlikable." The speaker advocated for an examination policy. There were two versions of the argument: in one the speaker gave "strong" arguments; in the other, "weak" arguments.
All of the students were asked, prior to evaluating the quality of the argument, to indicate their personal opinion about the exam policy. A sub-group was also told to make sure they did not let their opinion about the likability of the speaker influence their conclusion.
This is what happened. The students who heard the argument from a likable source were more likely to accept the argument. But if they were told not to let their opinion of the speaker's likability influence them, they found the argument weaker. The mirror image happened with the unlikable speaker. Without the instruction about likability, the students did not find the arguments persuasive. But when they were told not to let their personal opinion about the speaker influence their decision, they found the speaker more persuasive.
They repeated the experiment with this twist: some of the students were told that the exam policy would not be implemented for 10 years, and thus the policy was not relevant to them personally. They were also asked to memorize a seven-digit code as part of completing their questionnaire.
A second group was told the policy might take place the next year, and they were asked to memorize only a single digit.
For the students who were told the exam wouldn't apply to them, their preference was markedly based upon the likability of the speaker. But when they were told not to let likability affect their decision, the influence of likability significantly diminished.
It was the last group that was the most interesting to the researchers: if the policy applied to the students personally, would that factor alone overcome the pull of likability? Would someone "automatically" filter out likability and get to the strength of the argument standing alone?
The answer is "no." The students disliked the argument from the dislikable speaker. But when they were told not to let likability affect their decision, the students were able to factor that out and raised their valuation of the argument itself.
And so the researchers were able to demonstrate that people can, when prompted, distinguish between these variables.
The psychologists were busy considering variables and testing theories. But the trouble with psychological theories is that they can obscure as easily as they illuminate.
Let's think about this test in more practical terms. We all tend to ignore or disregard people we dislike or find unattractive. It's easier to do. How many people have simply tuned out a speaker they disliked?
If you don’t pay much attention, the argument will not be very persuasive.
But there is more: we give more grace to people we like. We give them the benefit of the doubt. Liking someone makes us feel better about giving them more of our attention.
The students were students: people who are good at doing what they are told. This is merely an instance of a general tendency to want to do what is "right" according to whatever rubric one considers salient. When an authority told them to be careful not to let likability affect their decision, they "knew" that they were probably doing just that, and they discounted that aspect.
What is interesting is that in both experiments, the degree of "correction" was greatest for those students who had a dislikable speaker and who were told they should not let that influence them. One might even see this as an overcorrection for their bias.
There is a practical takeaway here. For instance, when a lawyer with a dislikable client stands before a jury, he should tell the jury that the law considers the truth alone: we cannot allow personal feelings about a party to influence a decision. Done well, this tactic should lead the jury, if these experiments are any indication, to overcompensate to avoid their bias.
A caveat: this experiment only concerned a rather weak bias. How much could you irrationally dislike a speaker whom you had to bear with for only a short time? What would have happened if the students had to deal with this dislikable speaker for weeks? What if the bias were based upon something which evokes greater passion in the subject, particularly if that bias were informed and supported by some considerable matrix of culture, education, or a peer group?
Richard E. Petty, Duane T. Wegener, and Paul H. White, "Flexible Correction Processes in Social Judgment: Implications for Persuasion," Social Cognition, Spring 1998.