Tuesday, July 21, 2009

Helping Virtual Teams Succeed

Reference: Nunamaker, J.F., Jr., Reinig, B.A., and Briggs, R.O. (2009), Principles for effective virtual teamwork, Communications of the ACM 52:4, pp. 113-117.

This article is not so much a research article as a research-based guide to practice. Nevertheless, it resonates strongly with me for reasons I’ll explain as I highlight the principles that the authors propose.

Principle 1: Realign reward structures for virtual teams. The theory is that in the absence of physical proximity among members of a virtual team, non-verbal cues for appreciation and enthusiasm are lost and must be replaced with explicit rewards. Your virtual teammates cannot easily observe your level of commitment to your team and your project, reducing both their motivation to contribute and the praise that they might otherwise have offered, praise that would serve to keep you excited and involved. Also, a virtual teammate does not need to worry about being embarrassed by running into you in the hallway after being late on a deliverable or a promise. In a face-to-face collaboration, you could motivate a teammate who doesn’t seem to be involved by walking into his or her office and probing with simple technical or process questions. With a virtual teammate, an email reminder or question is more likely to engender resentment than encouragement. In the virtual environment, both the carrot and the stick are harder to apply.

As a knowledge worker rather than manager, I have little opportunity to modify the reward structures of my virtual teams. But, I have learned to form my teams in such a way as to maximize the rewards for collaboration. One such approach is to include a non-tenured faculty member on each team. These teammates have the greatest incentive to work hard, but they also engender hard work in the rest of the team, as nobody wants to be responsible for their failure to publish.

Among Web 2.0 advocates, the wiki has been held up as an ideal medium for collaborative writing. A prime example is Wikipedia, a collaboratively written encyclopedia. My own experience with wikis has been mixed. I’ve found that my students will not use a wiki for collaborative writing unless there’s a specific penalty for failing to do so, or, somewhat less successfully, a reward for contributing to it. One of my colleagues has observed the same thing in his classes. Why does Wikipedia work, then, when there is no reward offered? The answer seems to be that some people feel an intrinsic pleasure in contributing. They enjoy seeing their words “in print” or feel great displeasure at seeing errors left uncorrected. The proportion of such people is quite small, but enough people are exposed to Wikipedia that it succeeds despite the low percentage of contributors for whom the reward is intrinsic.

A colleague and I recently attempted to write a teaching case by wiki with an organization that was highly committed to the case. We thought that this novel approach would be ideal because it would convey the “voice” of the case subject rather than that of the case writer. Additionally, it would be a living case, in the sense that students could contribute to it and the case subjects could respond to the students. Ultimately, this effort failed. There were probably several reasons, including a less-than-friendly wiki interface; but the major reason for failure, in my opinion, was that we never created any incentives for the case subjects to participate.

This post would be too long if I elaborated on each of the other principles for effective virtual work to the same degree as I elaborated on the first. For now, I will just list them. Hopefully, I’ll get a chance to address them in a future post:
2. Find new ways to focus attention on task
3. Design activities that cause people to get to know each other
4. Build a virtual presence
5. Agree on standards and terminology
6. Leverage anonymity when appropriate
7. Be more explicit
8. Train teams to self-facilitate
9. Embed collaboration technology into everyday work

Friday, July 10, 2009

An Argument for Case-Based Research

Reference: Kim, D.J., Ferrin, D.L., and Rao, H.R. (2009), Trust and satisfaction, two stepping stones for successful e-commerce relationships: A longitudinal exploration, Information Systems Research 20:2, pp. 237-257.

This study is the first, so the authors claim (and I have no reason to suspect otherwise), to test "whether a consumer's prepurchase trust impacts post-purchase satisfaction through a combined model of consumer trust and satisfaction developed from a longitudinal viewpoint." It is one of the few studies that observe all three phases of the purchase process -- pre-purchase, decision to purchase, and post-purchase. Finally, it is one of very few studies to collect data both from consumers who decided to buy and from those who decided not to.

The model is beautiful, if one can use that term to describe a model:

Customer trust affects willingness to purchase directly and indirectly through perceived risk and perceived benefit. That is, increasing trust reduces the customer's perceived risk and increases the customer's perceived benefit, and trust, risk, expectations, and benefit combine to determine willingness to purchase. The willingness to purchase affects the decision to purchase. After the purchase, confirmation of expectations is affected by the expectations themselves (the greater the expectation, the less likely it will be confirmed) and the perceived performance of the website in effecting the sale. Confirmation, expectation, and trust all affect satisfaction, which in turn affects loyalty. All relationships are statistically significant!
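The chain of relationships can be sketched as a set of structural equations. This is my own illustrative rendering, not the authors' notation: the β's stand for path coefficients (all taken as positive, with minus signs marking the negative paths described above), and the ε's are error terms.

```latex
\begin{align*}
\text{Risk}         &= -\beta_1\,\text{Trust} + \varepsilon_1 \\
\text{Benefit}      &= \beta_2\,\text{Trust} + \varepsilon_2 \\
\text{Willingness}  &= \beta_3\,\text{Trust} - \beta_4\,\text{Risk}
                       + \beta_5\,\text{Benefit} + \beta_6\,\text{Expectation} + \varepsilon_3 \\
\text{Purchase}     &= \beta_7\,\text{Willingness} + \varepsilon_4 \\
\text{Confirmation} &= -\beta_8\,\text{Expectation} + \beta_9\,\text{Performance} + \varepsilon_5 \\
\text{Satisfaction} &= \beta_{10}\,\text{Confirmation} + \beta_{11}\,\text{Expectation}
                       + \beta_{12}\,\text{Trust} + \varepsilon_6 \\
\text{Loyalty}      &= \beta_{13}\,\text{Satisfaction} + \varepsilon_7
\end{align*}
```

Laid out this way, the two-stage structure is easy to see: the first four equations govern the pre-purchase and purchase phases, and the last three govern the post-purchase phase, with trust and expectation carrying over from one stage to the other.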

While the model is beautiful, one has to question its value. None of these relationships is unexpected, or even interesting. Every seller and website designer understands the need to increase customer trust, reduce risk to the extent possible, offer the greatest benefit possible, and set high expectations. Interestingly, these variables explain less than 50% of the variance in willingness to purchase. Readers should certainly be interested in knowing what other factors affect willingness to purchase. Furthermore, willingness to purchase explains only 21% of the variance in the decision to purchase. Readers should ask, why did consumers who had high willingness to purchase fail to do so; and why did consumers who had low willingness to purchase actually decide to purchase? Readers should also want to understand why one site engendered trust while other sites did not. These are the types of questions that case studies, rather than statistical studies, can answer. It is only through a deeper understanding of the independent variables affecting the purchase decision that sellers and website designers can extract value from such a study.

At this point I have to disclose a personal bias. Those who know me know that I have a strong belief in case study research as opposed to statistical research and am somewhat of a crusader for applying case study methodologies. Also, I am Editor-in-Chief of a journal that accepts only case study research: JITCAR, the Journal of Information Technology Case and Application Research (http://www.jitcar.org). So, I am, perhaps, on a soapbox here, expounding on my favorite topic, using an information systems study as a case in point (a case study, if you will).

Of course, a case study would have to be designed differently. This study asked student consumers to visit at least two B2C retailers to comparison shop for an item of their choice. There was no control over what sites they visited or the item they chose to buy. A case study design would most likely have to limit the sites and/or the item purchased. But, by asking more open-ended questions and conducting interviews, it would yield a much more nuanced understanding of what factors created or destroyed trust and how they entered into the purchase decision. Admittedly, the results might not be generalizable to sites selling different products or, perhaps, retailers of different size (or other characteristics) than those used for the case study. But, sellers reading the study could determine whether or not their particular application was sufficiently represented by the case study to be of value in their design decisions. Case studies suffer from a lack of generalizability, but they have value for at least some readers, while statistical studies leave readers without knowledge about where they stand in relation to the norm.