Monday, August 17, 2009

How to Achieve IS Strategic Alignment

Reference: Preston, D.S. and Karahanna, E. (2009), Antecedents of IS Strategic Alignment: A Nomological Network, Information Systems Research 20:2, pp. 159-179.

One would expect that a shared understanding between the CIO and the top management team about the role of information systems in the organization is necessary for an organization to achieve alignment between its IS strategy and its business strategy. It’s hard to imagine how alignment could be achieved without such a shared understanding. But the referenced research, while it finds a statistically significant relationship between these constructs, concludes that shared understanding explains less than half (48.3%) of the variance among companies in their IS strategic alignment. In other words, presumably, some companies have relatively low shared understanding and high alignment, and some have relatively high shared understanding and low alignment. Of course, one cannot expect a perfect relationship, but leaving more than half the variance unexplained is indeed baffling.

One explanation might be that a shared understanding is necessary but not sufficient for alignment. In other words, low understanding inevitably leads to poor alignment but high understanding is no guarantee of high alignment. Unfortunately, I have no access to the authors’ data and cannot test this assumption, but I imagine that it’s true. If so, it provokes the question, “what is needed in addition to a shared understanding to achieve alignment?” To answer this question, it is useful first to examine how the constructs of “Strategic Alignment” and “Shared Understanding” are defined by the authors:

Strategic alignment: The congruence of business strategy and IS strategy. This is based on three items measured on a five-point scale ranging from “strongly agree” to “strongly disagree”:
  • The IS strategy is congruent with the corporate business strategy in your organization
  • Decisions in IS planning are tightly linked to the organization’s strategic plan
  • Our business strategy and IS strategy are closely aligned
Shared understanding: The degree to which the CIO and top management team have a shared view and understanding about the role of IS within the organization. This is based on four items measuring, on the same five-point scale, the degree to which the CIO and top management team members have:
  • Shared understanding of the role of IS in our organization
  • Shared view of the role of IS as a competitive weapon for our organization
  • Shared understanding of how IS can be used to increase productivity of our organization’s operations
  • Common view about the prioritization of IS investments
So why doesn’t a shared understanding automatically lead to strategic alignment? What else is needed? One possibility is resources. If the resources (financial or human) available to the CIO are insufficient to convert understanding into action, the CIO might be unable to define a realistic strategy that is aligned with the business. However, I’m skeptical of this explanation because the definition of a shared understanding seems to require that the top management team understands the resources that the IS function needs to implement an aligned strategy.

A second possibility is that the top management team has decided that information systems and technology are relatively unimportant to the organization’s success. In this case, there is no need for alignment. Another way of thinking about this situation is that IS strategic alignment is simply achieved by limiting the role of the IS function to a support role at the lowest possible level. Ideally, survey respondents would recognize that this role is aligned with the business, but I would guess that CIOs filling out the survey would answer otherwise.

Another possibility is that the IS and business planning processes are not tightly linked. As a result, there would be no negotiation, no give-and-take, between IS and business functions, as the IS strategy evolves. No amount of understanding can substitute for the values that are revealed in bargaining for resources and in joint planning on linked decisions. Process might be as important as understanding, but this factor is not part of the authors’ study.

Another study has identified prior IS success as an antecedent to IS strategic alignment (Chan, Y.E., Sabherwal, R., Thatcher, J.B. (2006), Antecedents and outcomes of strategic IS alignment: an empirical investigation, IEEE Transactions on Engineering Management 53:1, pp. 27-47). I don’t really see why prior IS success would affect strategic alignment, although I can understand that IS failure could easily derail it, even in the presence of shared understanding.

This research is important in confirming what factors are important in achieving a shared understanding – factors such as CIO business knowledge and top management IS knowledge. But, if I were a member of the top management team, I would feel quite uncomfortable knowing that shared understanding accounts for only 48% of the variance in aligning IS and business strategies. I’d certainly want to know what’s in that other 52 percent.
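
To make the “necessary but not sufficient” conjecture from earlier concrete, here is a minimal simulation sketch in Python. The data are entirely made up (I don’t have the authors’ data); the point is only that even a strict necessity relationship can leave roughly half the variance unexplained.

```python
# Purely illustrative simulation -- not the authors' data. Shared understanding
# is drawn on a 1-5 scale; alignment can never exceed understanding (necessity)
# but can fall anywhere below it (insufficiency).
import numpy as np

rng = np.random.default_rng(42)
n = 1000

understanding = rng.uniform(1, 5, n)          # shared understanding, 1-5
alignment = rng.uniform(1, understanding)     # capped by understanding

r = np.corrcoef(understanding, alignment)[0, 1]
print(f"correlation r = {r:.2f}, variance explained R^2 = {r**2:.2f}")
# For this toy setup the R^2 lands around 0.4-0.5, i.e., a strict
# "necessary but not sufficient" structure still leaves about half the
# variance of alignment unexplained by understanding alone.
```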

Tuesday, July 21, 2009

Helping Virtual Teams Succeed

Reference: Nunamaker, J.F., Jr., Reinig, B.A., and Briggs, R.O. (2009), Principles for effective virtual teamwork, Communications of the ACM 52:4, pp. 113-117.

This article is not so much a research article as a research-based guide to practice. Nevertheless, it resonates strongly with me, for reasons I’ll explain as I highlight the principles that the authors propose.

Principle 1: Realign reward structures for virtual teams. The theory is that in the absence of physical proximity among members of a virtual team, non-verbal cues for appreciation and enthusiasm are lost and must be replaced with explicit rewards. Your virtual teammates cannot easily observe your level of commitment to your team and your project, which reduces both their felt need to contribute and the praise they might otherwise have offered to keep you excited and involved. Also, a virtual teammate does not need to worry about being embarrassed by running into you in the hallway after being late on a deliverable or a promise. In a face-to-face collaboration, you could motivate a teammate who doesn’t seem to be involved by walking into his or her office and probing with simple technical or process questions. With a virtual teammate, an email reminder or question is more likely to engender resentment than encouragement. In the virtual environment, both the carrot and the stick are harder to apply.

As a knowledge worker rather than a manager, I have little opportunity to modify the reward structures of my virtual teams. But I have learned to form my teams in such a way as to maximize the rewards for collaboration. One such approach is to include a non-tenured faculty member on each team. These teammates have the greatest incentive to work hard, but they also engender hard work in the rest of the team, as nobody wants to be responsible for their failure to publish.

Among Web 2.0 advocates, the wiki has been held up as an ideal medium for collaborative writing. A prime example is Wikipedia, a collaboratively written encyclopedia. My own experience with wikis has been mixed. I’ve found that my students will not use a wiki for collaborative writing unless there’s a specific penalty for failing to do so or, somewhat less successfully, a reward for contributing to it. One of my colleagues has observed the same thing in his classes. Why does Wikipedia work, then, when there is no reward offered? The answer seems to be that some people feel an intrinsic pleasure in contributing. They enjoy seeing their words “in print” or feel great displeasure at seeing errors left uncorrected. The proportion of such people is quite small, but enough people are exposed to Wikipedia that it succeeds despite the low percentage of those for whom the reward is intrinsic.

A colleague and I recently attempted to write a teaching case by wiki with an organization that was highly committed to the case. We thought that this novel approach would be ideal because it would convey the “voice” of the case subject rather than that of the case writer. Additionally, it would be a living case, in the sense that students could contribute to it and the case subjects could respond to the students. Ultimately, this effort failed. There were probably several reasons, including a less-than-friendly wiki interface; but the major reason for failure, in my opinion, was that we never created any incentives for the case subjects to participate.

This post would be too long if I elaborated on each of the other principles for effective virtual work to the same degree as I elaborated on the first. For now, I will just list them; hopefully, I’ll get a chance to address them in a future post:
2. Find new ways to focus attention on task
3. Design activities that cause people to get to know each other
4. Build a virtual presence
5. Agree on standards and terminology
6. Leverage anonymity when appropriate
7. Be more explicit
8. Train teams to self-facilitate
9. Embed collaboration technology into everyday work

Friday, July 10, 2009

An Argument for Case-Based Research

Reference: Kim, D.J., Ferrin, D.L., and Rao, H.R. (2009) Trust and satisfaction, Two stepping stones for successful e-commerce relationships: A longitudinal exploration, Information Systems Research 20:2, pp. 237-257.

This study is the first, so the authors claim (and I have no reason to suspect otherwise), to test "whether a consumer's prepurchase trust impacts post-purchase satisfaction through a combined model of consumer trust and satisfaction developed from a longitudinal viewpoint." It is one of the few studies that observe all three phases of the purchase process -- pre-purchase, decision to purchase, and post-purchase. Finally, it is unusual in collecting data both from those who decided to buy and from those who decided not to buy.

The model is beautiful, if one can use that term to describe a model:

Customer trust affects willingness to purchase directly and indirectly through perceived risk and perceived benefit. That is, increasing trust reduces the customer's perceived risk and increases the customer's perceived benefit, and trust, risk, expectations, and benefit combine to increase willingness to purchase. The willingness to purchase affects the decision to purchase. After the purchase, confirmation of expectations is affected by the expectations themselves (the greater the expectation, the less likely it will be confirmed) and the perceived performance of the website in effecting the sale. Confirmation, expectation, and trust all affect satisfaction, which in turn affects loyalty. All relationships are statistically significant!
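
For my own benefit, here is the path structure as I read it from the article, written out in a short Python sketch. The variable names and signs are my paraphrase of the prose above, not the authors' exact specification or their estimated coefficients.

```python
from collections import defaultdict

# (predictor, outcome, expected sign) -- my reading of the model described above
paths = [
    ("trust", "perceived_risk", "-"),
    ("trust", "perceived_benefit", "+"),
    ("trust", "willingness_to_purchase", "+"),
    ("perceived_risk", "willingness_to_purchase", "-"),
    ("perceived_benefit", "willingness_to_purchase", "+"),
    ("expectations", "willingness_to_purchase", "+"),
    ("willingness_to_purchase", "purchase_decision", "+"),
    ("expectations", "confirmation", "-"),
    ("perceived_performance", "confirmation", "+"),
    ("confirmation", "satisfaction", "+"),
    ("expectations", "satisfaction", "+"),
    ("trust", "satisfaction", "+"),
    ("satisfaction", "loyalty", "+"),
]

# Group predictors by outcome to see each structural equation at a glance.
equations = defaultdict(list)
for predictor, outcome, sign in paths:
    equations[outcome].append(f"({sign}) {predictor}")

for outcome, terms in equations.items():
    print(f"{outcome} <- {', '.join(terms)}")
```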

While the model is beautiful, one has to question its value. None of these relationships is unexpected, or even interesting. Every seller and website designer understands the need to increase customer trust, reduce risk to the extent possible, offer the greatest benefit possible, and set high expectations. Interestingly, these variables explain less than 50% of the variance in willingness to purchase. Readers should certainly be interested in knowing what other factors affect willingness to purchase. Furthermore, willingness to purchase explains only 21% of the variance in the decision to purchase. Readers should ask why consumers who had high willingness to purchase failed to do so, and why consumers who had low willingness to purchase actually decided to purchase. Readers should also want to understand why one site engendered trust while other sites did not. These are the types of questions that case studies, rather than statistical studies, can answer. It is only through a deeper understanding of the independent variables affecting the purchase decision that sellers and website designers can extract value from such a study.

At this point I have to disclose a personal bias. Those who know me know that I have a strong belief in case study research as opposed to statistical research and am somewhat of a crusader for applying case study methodologies. Also, I am Editor-in-Chief of a journal that accepts only case study research: JITCAR, the Journal of Information Technology Case and Application Research (http://www.jitcar.org). So, I am, perhaps, on a soapbox here, expounding on my favorite topic, using an information systems study as a case in point (a case study, if you will).

Of course, a case study would have to be designed differently. This study asked student consumers to visit at least two B2C retailers to comparison shop for an item of their choice. There was no control over what sites they visited or the item they chose to buy. A case study design would most likely have to limit the sites and/or the item purchased. But, by asking more open-ended questions and conducting interviews, it would yield a much more nuanced understanding of what factors created or destroyed trust and how they entered into the purchase decision. Admittedly, the results might not be generalizable to sites selling different products or, perhaps, retailers of different sizes (or other characteristics) than those used for the case study. But, sellers reading the study could determine whether or not their particular application was sufficiently represented by the case study to be of value in their design decisions. Case studies suffer from a lack of generalizability, but they have value for at least some readers, while statistical studies leave readers without knowledge about where they stand in relation to the norm.

Friday, June 19, 2009

How Does Online Participation Affect Your Self-Concept?

Reference: Fang, Y. and Neufeld, D. (2009), Understanding Sustained Participation in Open Source Software Projects, Journal of Management Information Systems 25:4, pp. 9–50.

The title for this blog entry appears to have nothing to do with this article, but bear with me.

The article examines why people remain involved in open source software projects. It turns out that a great deal has been written about why people get involved in the first place, but not a lot about why they remain for any length of time. My going-in assumption was that people work on open source projects for the same reasons that they get involved with charitable work -- that they feel a connection of some type to the principle and they want to give back to society. So, it would surprise me if the reasons that they joined were substantially different from the reasons that they continue to participate over time. But, according to the authors, that's not the case with open source software. They get involved generally because they have or see a need for a particular product or function. Once that need is met, they don't necessarily remain involved with the project.

So, what keeps them involved? The answer, according to this research, is a combination of "situated learning" and "identity construction". The theory behind this is the theory of "Legitimate Peripheral Participation," formulated by Lave and Wenger*. "Situated learning," as I understand it, is learning by doing and learning in context, emphasizing the social and problem solving aspects of learning. It makes perfect sense that people will remain involved with an open source project if it feeds their learning, so I'm not surprised at that conclusion. I can see a parallel as well with participation in charitable ventures. If you're only given boring tasks with no opportunity to use and develop your skills, you might opt out after a short while.

That brings us to identity construction. The authors seem to define identity construction largely from an external perspective. Construction of a community member's identity is the "process of understanding who one is, what one can do, and to what extent one becomes more or less legitimized and valued by the other members." This is not an entirely external perspective because it acknowledges the development of a self-understanding, but it is in relation to how the person is perceived by others. The theory is that positive identity construction reinforces a positive self-image, leading to the desire to continue to participate or even increase participation.

It is the relationship between identity construction and participation that interests me, because identity construction is clearly a driving force for participation in social networks. I see it all the time, or at least I think I do. I believe that people often feel obliged to tweet on Twitter because they know it affects how others perceive them, it increases the number of their followers, and it feeds their self-worth. I feel that they participate on Facebook in large part to build an identity for those who might not know them well. Of course, in both cases, these are not the only reasons for participating, but I'm curious as to how important identity construction is to participation in these networks. It would be a good research project.

The Fang and Neufeld article confirms the hypothesis that identity construction affects participation, at least in the open source software community of interest. But, I'd also be interested to learn if the reverse is true. Does participation affect identity construction, as I believe it does in social networking? The evidence the authors present in their tables seems to confirm this as well, but they never make this feedback loop explicit.

* Lave, J., and Wenger, E. Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press, 1990.

Tuesday, May 19, 2009

Improvisation as a Dynamic Business Capability

Reference: El Sawy, O.A. and Pavlou, P.A. (2008) IT-enabled business capabilities for turbulent environments, MIS Quarterly Executive 7:3, 139-150.

El Sawy and Pavlou, who have written several articles on the subject of innovation, conclude that strategic advantage requires a “trifecta” of operational, dynamic, and improvisational business capabilities. It’s easy to see why companies need to achieve operational excellence to succeed in almost any environment. El Sawy and Pavlou find, however, that dynamic and improvisational capabilities are necessary for success in turbulent environments, and that in the most turbulent environment, improvisational capabilities are most important. Furthermore, a company’s information technology capabilities need to be aligned and structured properly to support the desired mix of operational vs. dynamic and improvisational capabilities.

Monideepa Tarafdar and I have observed that innovative companies require an ability to achieve and balance operational excellence with strategic vision (Tarafdar, M. and Gordon, S., 2007), a competency we call “ambidexterity,” after O’Reilly and Tushman (2004) and Vinekar et al (2006). In our model, strategic vision relies upon what El Sawy and Pavlou term dynamic and improvisational capabilities, those that allow an organization to respond to the external environment. To this extent, our findings support El Sawy and Pavlou and vice versa.

What I found most interesting in this article is the authors’ division of capabilities for responding to the dynamic environment into two parts – dynamic and improvisational. They define “dynamic capabilities” as those needed to “effectively reconfigure existing operational capabilities to match the changing business environment.” They define “improvisational capabilities” as “the learned ability to spontaneously reconfigure existing resources in real time to build new operational capabilities that better match novel environmental situations.” From this definition, it seems that improvisational capabilities are simply a subset of dynamic capabilities. So, I’m struggling to understand if these really are substantially different capabilities, and if so, whether they require or build upon different information technology capabilities.

El Sawy and Pavlou’s model of “dynamic capabilities” includes four dimensions: environment-sensing, learning, knowledge integrating, and coordinating. Three of these dimensions (competencies?) clearly contribute to a firm’s ability to respond to a dynamic environment. Specifically, a firm cannot possibly respond to changes it cannot sense, so environment sensing is critical. Also, it cannot respond if it cannot learn the skills and capabilities it might need in a changed environment. I’m not sure that coordination is a critical dynamic capability. Although I might not eliminate it as a dynamic capability, it seems more important as an operational capability. That said, operational capabilities, such as those for leadership, flexibility, and governance, could be equally important as dynamic capabilities.

The difficulty in deciding where coordination belongs in a model of business capabilities highlights the problems inherent in building such a model. It also raises the question of whether improvisation is better understood as one of the elements of the business capabilities trifecta or is more appropriately classified as a dimension of dynamic capability. If the other dimensions of dynamic capability are implicitly non-improvisational (that is, planned in some sense), then improvisation is orthogonal to them and would be better classified as a dimension of dynamic capability. Alternatively, if the other dimensions of dynamic capability can be achieved in both a planned and an improvisational way, then it makes more sense to treat them as both dynamic and improvisational capabilities, in which case the improvisational classification is needed to complete the trifecta.
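
Just to keep the two alternatives straight, here is how I would sketch them as data structures. The labels are my own shorthand, not the authors' notation.

```python
# Option A: the trifecta as presented -- improvisational capability sits
# alongside operational and dynamic capability as a third category.
trifecta = {
    "operational": ["executing day-to-day processes"],
    "dynamic": ["environment-sensing", "learning", "knowledge integrating", "coordinating"],
    "improvisational": ["spontaneously reconfiguring resources in real time"],
}

# Option B: improvisation folded in as one more dimension of dynamic capability.
two_part_model = {
    "operational": ["executing day-to-day processes"],
    "dynamic": ["environment-sensing", "learning", "knowledge integrating",
                "coordinating", "improvising"],
}

print(sorted(trifecta))        # ['dynamic', 'improvisational', 'operational']
print(sorted(two_part_model))  # ['dynamic', 'operational']
```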

While either model can work, I think it makes more sense to treat improvisation as a dimension of dynamic capability. For the most part, environment sensing is a planned activity, and the capability is not subject to a great deal of improvisation. Learning and integrating offer more opportunities for improvisation, but, especially at the organizational level, they too are capabilities that are more organized than improvisational. So I would argue that the ability to improvise is just one more dynamic capability.

What IT capabilities and infrastructure are necessary to support an improvisational capability? The authors provide some answers – I won’t go into them here – but clearly, more research is needed in this area.

O’Reilly, C.A., Tushman, M.L., 2004. Ambidextrous organization. Harvard Business Review 82 (4), 71–81.

Tarafdar, M. and Gordon, S., 2007. Understanding the influence of information systems competencies on process innovation: A resource-based view, Journal of Strategic Information Systems 16, 353-392.

Vinekar, V., Slinkman, C.W., Nerur, S., 2006. Can agile and traditional systems development approaches coexist? An ambidextrous view. Information Systems Management 23 (3), 31–42.

Sunday, April 19, 2009

Responding to Disruptive Technology

Reference: Lucas, H.C. and Goh, J.M. (2009) Disruptive technology: How Kodak missed the digital photography revolution, Journal of Strategic Information Systems 18(1), 46-55.

I was drawn to this paper because I study innovation. My research concerns how information technology can be used to improve the innovation process, but I am also interested in understanding how companies can and should respond to innovations in information technologies that could affect the value of their products and services and ultimately their financial health.

The authors propose two extensions to Christensen’s well-known treatises on disruptive technologies. The first is the notion that a firm’s response to disruptive technologies is a “struggle between employees who seek to use dynamic capabilities to bring about change, and employees for whom core capabilities have become core rigidities.” In examining Kodak, the authors focus on middle management as most problematic and resistant to change, dependent on core competencies that have become core rigidities. The concept of core rigidities is rooted in Christensen’s work and could hardly be considered an extension. The role of dynamic capabilities, however, does not come directly into play in Christensen’s work. Interestingly, Christensen refers to dynamic capabilities in “The Innovator’s Solution” (Christensen & Raynor, 2003, p. 206), but dismisses the concept as an over-broad categorization of organizational processes. Nevertheless, the suggestion that dynamic capabilities can help companies respond to disruptive technologies is not entirely new. For example, the March issue of the Journal of Engineering and Technology Management is devoted to this principle, as reflected in the introductory article, “Research on corporate radical innovation systems - A dynamic capabilities perspective: An introduction,” by Salomo, Gemünden, and Leifer.

The second “extension” proposed by the authors is consideration of the role of organizational culture. The authors argue that if organizational culture promotes hierarchy and maintenance of the status quo, it would impede the change required to react to disruptive technologies. It’s not clear that this is really an extension of Christensen’s work, specifically because Christensen acknowledges the role of culture in creating core rigidities (see, for example, HBS Note 9-399-104, “What is an Organization Culture,” Rev August 2, 2006).

The Kodak case is an interesting one. Implicit in the analysis is that Kodak failed to respond adequately to the digital revolution. But, it’s not clear that Kodak could have done anything more than it did. Prior to the digital revolution, Kodak’s multi-billion dollar revenue stream depended primarily on sales of its film, developer chemicals, and halide paper used to make photographic prints. These sources of revenue were destined to disappear. In the digital world, other sources of revenue exist, but they are largely commoditized, with limited revenue generating capability. What is surprising is that Kodak has, nevertheless, managed to emerge as a viable business, unlike Polaroid, for example. Although it doesn’t have a dominant position, as it did when photography was based solely on film, it reacted quickly to the digital revolution, with early patents on digital photography and acquisition of companies such as Ofoto and Scitex.

Wednesday, April 8, 2009

Does Task-Technology Fit Matter?

Reference: Fuller, R.M. and Dennis, A.R. (2009), Does fit matter? The impact of task-technology fit and appropriation on team performance in repeated tasks, Information Systems Research 20(1), 2-17.

This is an important piece of research. It’s not often that new research dispels or substantially modifies well-accepted models of how things work. This is one such example. Prior research has supported the intuitive belief that the fit between a technology and the task to which it is applied significantly affects success in performing the task. Presumably, the poorer the fit between a task and the technology used to perform it, the worse the performance of the task. This “Task-Technology Fit” theory was first formalized in MISQ in 1995 (Goodhue) and has been extensively analyzed, developed, and verified by subsequent research. However, as Fuller and Dennis show, the TTF theory is incorrect when applied to a repeated task. It turns out that by the third time the task is repeated, users will have figured out how to adapt the technology and the way the task is accomplished so that there is no significant difference in their performance or their perception of the technology. While poor-fit teams initially failed to match the performance of well-fit teams, the differences between them melted away over time, becoming indistinguishable by the third repetition of the task!

This finding, while surprising and counter-intuitive, has some theoretical grounding. It is rooted in Adaptive Structuration Theory (AST) (DeSanctis & Poole, 1994). AST holds that in performing a task, people adapt the elements of the tools that they use, the features they select, the rules they apply, and the way that they apply them. This process, called appropriation, allows them to improve their performance over time. Prior research has, indeed, recognized the role of appropriation in explaining the performance of teams using information technology. The Fit-Appropriation Model (FAM) (Dennis et al., 2001) holds that performance is affected by both technology fit and appropriation. But, until now, there was no recognition of the possibility, much less the likelihood, that appropriation would ever dominate fit, and certainly not in such a short period of time.
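
To see how appropriation could swamp fit so quickly, here is a toy numerical sketch. The numbers are invented for illustration; this is not the authors' model or their data.

```python
# Toy illustration: a poor-fit team closes a fixed fraction of the remaining
# performance gap each time the task is repeated, as it appropriates the tool.
good_fit = 0.80          # assumed performance of the well-fitting teams
poor_fit = 0.55          # assumed initial performance of the poor-fit teams
closure_rate = 0.5       # assumed fraction of the gap closed per repetition

for repetition in range(1, 5):
    gap = good_fit - poor_fit
    print(f"repetition {repetition}: poor-fit performance {poor_fit:.2f}, gap {gap:.3f}")
    poor_fit += closure_rate * gap   # appropriation narrows the gap

# By the third or fourth repetition the gap is small enough to be lost in the
# noise of any realistic measurement, which is the pattern Fuller and Dennis
# report.
```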

This research is limited to a single context, task, and technology, and used students as research subjects. Generalizability still needs to be established. However, assuming that the findings stand up to further scrutiny, they are ground breaking.

Thursday, March 26, 2009

One More Benefit of Good Web Design

Reference: Parboteeah, D.V., Valacich, J.S., and Wells, J.D. (2008) The influence of website characteristics on a consumer's urge to buy impulsively, Information Systems Research 20 (1), 60-78.

It's nice to know that a website's task-relevant cues (i.e., appropriate information and good navigation) and mood-relevant cues (i.e., visual appeal) increase consumers' urge to buy impulsively. Not that any designer would ever want to create a website deficient in either of these attributes.

While I have some doubts about the relevance of this study, I have little reason to doubt that improving website quality improves outcomes, including impulsive purchasing. The predicted magnitudes are somewhat suspect due to the nature of the experiments. All experiments were performed with students in a classroom setting and in no case were they actually buying anything -- they were simply reporting on their urge to buy. While students might represent the typical demographic of a web purchaser, the setting is anything but typical.

According to the first experiment, which used structural equation modeling, increasing visual appeal by one standard deviation increases urge to buy by .42 standard deviations, and increasing information fit to task by one standard deviation increases urge to buy by .29 standard deviations. According to the second study, which used MANOVA to compare sites that had poor task-relevant and mood-relevant cues to sites that had good cues of task-relevance, mood-relevance, or both, impulsiveness increased from 61.1% of participants, to 72.2% with mood-relevant cues only, to 74.1% with task-relevant cues only, to 98% with both. Likewise, the magnitude of intended impulse buying increased from $33.89 to $49.17 to $55.56 to $66.39. Of course, this result is heavily dependent on what's being sold and what the potential impulse buys might be. In the experiment, the intended purchase was a $15 cell phone holster and the possible impulse buys were a $60 bag and several $15 accessories.
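
As a quick sanity check on how large these effects are, here is a short script that restates the second study's condition means as lifts over the no-cue baseline. The numbers are copied from the figures quoted above; nothing is recomputed from raw data.

```python
# Condition means as reported above: share of participants with an urge to
# buy impulsively, and mean intended impulse spend (USD).
urge_share = {
    "neither cue": 0.611,
    "mood-relevant only": 0.722,
    "task-relevant only": 0.741,
    "both cues": 0.980,
}
intended_spend = {
    "neither cue": 33.89,
    "mood-relevant only": 49.17,
    "task-relevant only": 55.56,
    "both cues": 66.39,
}

base_share = urge_share["neither cue"]
base_spend = intended_spend["neither cue"]

for condition in urge_share:
    share_lift = urge_share[condition] - base_share
    spend_lift = intended_spend[condition] - base_spend
    print(f"{condition:18s}: urge +{share_lift:5.1%}, spend +${spend_lift:.2f}")
```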

Sunday, March 15, 2009

Yet Another Adoption Model

Reference: Dong, L., Neufeld, D.J., and Higgins, C. (2008). Testing Klein and Sorra's innovation implementation model: An empirical examination, Journal of Engineering and Technology Management 25(4), 237-255.

Despite its title, this article is about the adoption of new information systems, not innovation (except to the extent that new systems can be called innovations). I had previously been unaware of Klein and Sorra’s model (Klein, K.J., Sorra, J.S., 1996, Academy of Management Review), which on its face seems similar to, but not as robust as, the UTAUT (Venkatesh et al., 2003, MISQ). The main model finds “implementation effectiveness” to be dependent on “user affective commitment” and “implementation climate.” Implementation climate is, in turn, composed of skills, incentives, and the absence of obstacles. User affective commitment is dependent only on “innovation-values fit.”

To motivate their study, the authors cite the oft-reported research that documents how few companies complete their implementations on time, within budget, and with the promised features and functions. Unfortunately, their model does not address many of the common causes of these failures, such as poor estimates of development costs and time, lack of communication between developers and users, inexperience with the technologies employed, etc. Furthermore, their dependent variable, termed implementation effectiveness, is really just a measure of intention to adopt, as it includes five items that address only the following components: 1) Avoidance, “If I can avoid using the system, I do”; and 2) Endorsement, “I think the system is a waste of time and money for our organization (reverse coded)”.

Contributions of the study include scales to measure implementation climate (5 components, 17 items), innovation-values fit (3 components, 13 items), skills (6 items), incentives (2 items), absence of obstacles (3 items), and commitment (4 items). Some of these scales are adaptations from other sources. It is worth noting that the variable “innovation-values fit” is similar to the construct of “perceived usefulness” in the TAM and TAM2 models, and to elements of “performance expectancy” and “effort expectancy” in the UTAUT model. Its components are Fit re Quality, such as “The system maintains data I need to carry out my task”; Fit re Locatability, such as “The system helps me locate corporate or department data very easily”; and Fit re Flexibility and Cooperation, such as “The system supports the repetitive and predictable work processes.”

The study concludes that “when implementation climate is strong and innovation-values fit is present, an implementation was more likely to succeed than when either climate or fit were weak”.

Friday, March 6, 2009

Adaptation to IT-Induced Change

Reference: Bruque, S., Moyano, J., Eisenberg, J. (2008) Individual Adaptation to IT-Induced Change: The Role of Social Networks. Journal of Management Information Systems 25(3), 177-206.

I was interested in this paper because my current research addresses the role of Web-based social networking in innovation and other organizational outcomes. Existing research on Web-based social networking is quite sparse, probably because the technology is so new. So, when I saw the title of this article, I hoped that it would be relevant to my work. It turns out that this research concerns traditional social networks, not Web-based ones, so its relevance to my own research is not direct. Nevertheless, it seems reasonable to extend the authors' conclusions to individuals' extended (Web-based) networks. Thus, it provides some interesting hypotheses for future research.

Even had there been no relevance to my research, I found this article interesting and refreshing. The highlight for me is to see "adaptation to IT-induced change" as the dependent variable rather than the common "adoption of technology." There is a significant difference between adoption and adaptation, which the authors describe in some depth. Adoption is a binary variable -- either you adopt or you do not. Although one can measure the extent of adoption by counting the number of people in an organization who adopt or fail to adopt, adoption remains binary at the individual level. In practice, many changes force employees to adapt to whatever technology is installed, so the real question is how they adapt to these changes. The authors argue, and provide references to support the claim, that IT-induced changes are harder to adapt to than most other types of change.

The conclusions of the study are not surprising. Adaptation improves the larger the size of the support network and the greater the strength and density of the informational network. The authors defined these networks to include people outside as well as inside the company. This is a significant departure from most studies and makes me optimistic that the results will extend to Web-based social networks. One disconcerting methodological issue is that subjects were allowed to list only five members of their support network and five members of their informational network. I don't believe that there was any measure of the extent to which these networks overlapped. Of course, Web-based social networks are much larger, although they are probably less "strong" or intense.
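
For readers (like me) who think in code, here is a minimal sketch of the kind of network measures involved, using the networkx library. The names and ties are hypothetical, and, as noted above, the study itself capped each respondent at five contacts per network.

```python
# Hypothetical informational ego network: the focal employee ("ego"), up to
# five contacts, and the ties among those contacts.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ego", "ana"), ("ego", "bo"), ("ego", "carla"), ("ego", "dev"), ("ego", "eli"),
    ("ana", "bo"), ("carla", "dev"),   # ties among the contacts themselves
])

size = G.number_of_nodes() - 1   # network size, excluding the ego
density = nx.density(G)          # existing ties / possible ties, ego included
print(f"informational network: size = {size}, density = {density:.2f}")
```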

A significant contribution of the study is the creation of an instrument to measure individual adaptation to IT-induced change.

Sunday, February 22, 2009

Predicting 50 Years of Compiler Research -- They Can't Be Serious

Reference: Hall, M., Padua, D., and Pingali, K. (2009). Compiler Research: The Next 50 Years, Communications of the ACM 52:2, 60-67.

I was amused to read the title of the CACM article referenced above. One can't quibble with the tag line -- "research and education in compiler technology is [sic] more important than ever." The article starts out well enough, recounting the past 50 years of compiler advances and noting that in the coming decade, compiling for multi-core processing, security, and reliability will be major research challenges. And it's hard to critique the authors' agenda for the compiler community, except that it's rather vanilla and based on current conditions and those easily foreseeable for the near future, such as the need to address parallel architectures. But it's completely unreasonable to expect that anyone can predict now what our needs will be in 50 years. For example, it seems likely that we will need compilers for quantum computing, yet this possibility is not raised. It's also quite likely, especially if you believe Ray Kurzweil, that by then computers rather than people will be building software, implying an entirely different model for the role of people, if any, in compiler creation.

Thursday, February 12, 2009

Yet Another TAM Article

Reference: Chin, W.W., Johnson, N, Schwartz, A. (2008), A fast form approach to measuring technology acceptance and other constructs, MIS Quarterly 32:4, 687-703.

I'm sure I'm not the only one who's tired of reading articles about the Technology Acceptance Model (TAM). As noted by the authors, there were 698 citations of TAM by 2003 in the Science Citation Index, and fully 10% of the total publications in the IS field prior to 2003 could be classified as TAM studies. There may have been a bit of a drop off in the percentage of publications addressing TAM since 2003, but it always surprises me that TAM articles continue to be published (often in top journals). How can there be anything new to say about it after all this time?

But, there are always exceptions. Don't let TAM fatigue dissuade you from reading this article. It is less about TAM and more about using semantic differential scales instead of Likert scales for IS research. The authors demonstrate that, at least in this case, semantic differential scales are easier and quicker to use, provide an equal degree of construct validity, and produce similar to identical relationships among the constructs measured. This is inspiring. I've always used Likert scales before, but I will seriously consider semantic differential scales in the future. So, for example, instead of asking users to agree or disagree on a 7-point scale with the statement, "Using the system enhances my effectiveness," I will ask users to place the system on a 7-point scale anchored by "effective" at one end and "ineffective" at the other.
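
To make the contrast concrete, here is how I would represent the two item formats. The wording extends the example above; the exact anchors are my paraphrase, not the instrument from the article.

```python
# A Likert-style item: the respondent rates agreement with a statement.
likert_item = {
    "statement": "Using the system enhances my effectiveness.",
    "scale": ["strongly disagree", "disagree", "somewhat disagree", "neutral",
              "somewhat agree", "agree", "strongly agree"],
}

# A semantic differential item: the respondent places the object somewhere
# between two bipolar adjectives.
semantic_differential_item = {
    "prompt": "The system is:",
    "left_anchor": "ineffective",
    "right_anchor": "effective",
    "points": 7,
}

print(likert_item["statement"], likert_item["scale"])
print(semantic_differential_item["left_anchor"],
      "_ " * semantic_differential_item["points"],
      semantic_differential_item["right_anchor"])
```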

Monday, February 2, 2009

RFID vs. Bar Coding

Reference: Hozak, K. and Collier, D.A. (2008) RFID as an enabler of improved manufacturing performance, Decision Sciences 39:4, 859-881.

I always enjoy reading articles with counter-intuitive conclusions or conclusions that attempt to dispel commonly accepted truths about an issue. This one concludes that unless processes are changed, RFID fails to provide much of an operational benefit, if any, over bar coding. Attempts to improve mean flow time and the proportion of transactions that are tardy by reducing lot size, a practice enabled by RFID, could actually have the reverse effect. Very interesting! These conclusions, and several related ones, are based solely on a simulation, which may be suspect, as simplifying conditions are always assumed. Nevertheless, anyone considering RFID adoption should read this. The other caveat, and perhaps the more important one, is that the benefits of improved information and reliability are not considered. I've always thought that the information benefits of RFID outweighed all production metric benefits, so I'm not terribly disturbed by the conclusions. But for those who are considering adopting RFID for the production benefits alone, these conclusions should be kept in mind.
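
The lot-size result is easier to believe with a back-of-the-envelope model. The sketch below is my own toy calculation, not the paper's simulation: if each lot still pays a fixed setup time, cutting lot size multiplies the number of lots, total setup time grows, and utilization (and with it queueing delay and tardiness) can climb sharply.

```python
# Toy, deterministic lot-sizing arithmetic with made-up numbers.
demand_units = 1000          # units to produce in the period
unit_time = 1.0              # processing minutes per unit
setup_time = 30.0            # setup minutes per lot (unchanged by RFID)
available_minutes = 1600.0   # capacity available in the period

for lot_size in (200, 100, 50, 25):
    lots = demand_units / lot_size
    workload = demand_units * unit_time + lots * setup_time
    utilization = workload / available_minutes
    print(f"lot size {lot_size:3d}: {lots:4.0f} lots, "
          f"workload {workload:6.0f} min, utilization {utilization:.0%}")
# Once utilization approaches or exceeds 100%, flow times and tardiness blow
# up -- smaller lots alone, without process changes, make things worse.
```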

Wednesday, January 28, 2009

Thoughts on Technostress

Reference: Ragu-Nathan, T.S., Tarafdar, M., Ragu-Nathan, B.S. and Tu, Q. (2008) The consequences of technostress for end users in organizations: Conceptual development and empirical validation, Information Systems Research 19:4, 417-433.

It seems that "technostress" has entered the technical (and perhaps, everyday) lexicon. I'm glad to know there's a word to express what I've been feeling for years. There are times when I feel chained to my computer, when there are not enough hours in the day simply to answer my email. The need for triage always makes me anxious, as my to-do list gets longer and longer. But, I really don't know if it would be any different without electronic communication. The electronic communication simply adds to the immediacy and makes the stress more evident.

The article is interesting from a management perspective because it identifies factors that create and inhibit technostress. Although managers in many organizations can manipulate these factors to some degree, I wish I could control any of them myself. Unfortunately, that doesn't seem to be the case. For the record, the authors find that technostress is created by techno-overload, techno-insecurity, techno-invasion (how technology intrudes into your personal life), techno-uncertainty, and techno-complexity. Technostress is diminished by the provision of technical support, literacy facilitation, and involvement facilitation (the involvement of end users in the technological choices made by their organization).

I was really amazed by the finding that technostress decreased with age. This seems totally counterintuitive. Younger people are supposed to be completely comfortable with technology, and it should not cause them much stress. The authors argue that their finding could be due to the fact that older people are just more comfortable in their jobs, with their tenure softening any stress that they might otherwise feel. I don't buy it, but I have no other explanation.