Friday, July 23, 2010
Retiring this Blog
It is time to retire the "Thoughts on IT Research" blog. It has been inactive for some time and will remain so. I have begun a new blog devoted to academic research on social media at http://socmediaacademicresearch.blogspot.com/ (short URL: http://ow.ly/2fGTI). The purpose of that blog is to summarize new academic research on social media and explore its implications for business managers. I'll be tracking the major academic journals in general management, marketing, information systems, psychology, organizational behavior, and innovation. I intend to post weekly.
Monday, August 17, 2009
How to Achieve IS Strategic Alignment
Reference: Preston, D.S. and Karahanna, E. (2009), Antecedents of IS Strategic Alignment: A Nomological Network, Information Systems Research 20:2, pp. 159-179.
One would expect that a shared understanding between the CIO and the top management team about the role of information systems in the organization is necessary for an organization to achieve alignment between its IS and business strategies. It's hard to imagine how alignment could be achieved without such a shared understanding. But the referenced research, while it finds a statistically significant relationship between these constructs, concludes that shared understanding explains less than half (48.3%) of the variance among companies in their IS strategic alignment. In other words, presumably, some companies have relatively low shared understanding and high alignment, and some have relatively high shared understanding and low alignment. Of course, one cannot expect a perfect relationship, but that much unexplained variance is indeed baffling.
One explanation might be that a shared understanding is necessary but not sufficient for alignment. In other words, low understanding inevitably leads to poor alignment, but high understanding is no guarantee of high alignment. Unfortunately, I have no access to the authors' data and cannot test this conjecture, but I suspect it's true. If so, it raises the question, "What is needed, in addition to a shared understanding, to achieve alignment?" To answer this question, it is useful first to examine how the authors define the constructs of "strategic alignment" and "shared understanding":
Strategic alignment: the congruence of business strategy and IS strategy. This is based on three factors measured on a five-point scale ranging from "strongly agree" to "strongly disagree":
- The IS strategy is congruent with the corporate business strategy in your organization
- Decisions in IS planning are tightly linked to the organization’s strategic plan
- Our business strategy and IS strategy are closely aligned
Shared understanding: the degree to which the CIO and the top management team hold a common view of IS, measured on the same scale by four items:
- Shared understanding of the role of IS in our organization
- Shared view of the role of IS as a competitive weapon for our organization
- Shared understanding of how IS can be used to increase productivity of our organization’s operations
- Common view about the prioritization of IS investments
A second possibility is that the top management team has decided that information systems and technology are relatively unimportant to the organization’s success. In this case, there is no need for alignment. Another way of thinking about this situation is that IS strategic alignment is simply achieved by limiting the role of the IS function to a support role at the lowest possible level. Ideally, survey respondents would recognize that this role is aligned with the business, but I would guess that CIOs filling out the survey would answer otherwise.
Another possibility is that the IS and business planning processes are not tightly linked. As a result, there would be no negotiation, no give-and-take, between IS and business functions, as the IS strategy evolves. No amount of understanding can substitute for the values that are revealed in bargaining for resources and in joint planning on linked decisions. Process might be as important as understanding, but this factor is not part of the authors’ study.
Another study has identified prior IS success as an antecedent of IS strategic alignment (Chan, Y.E., Sabherwal, R., and Thatcher, J.B. (2006), Antecedents and outcomes of strategic IS alignment: an empirical investigation, IEEE Transactions on Engineering Management 53:1, pp. 27-47). I don't really see why prior IS success would promote strategic alignment, although I can see how IS failure could easily derail it, even in the presence of shared understanding.
This research is valuable in confirming which factors contribute to a shared understanding – factors such as CIO business knowledge and top management IS knowledge. But if I were a member of the top management team, I would feel quite uncomfortable knowing that shared understanding accounts for only 48% of the variance in aligning IS and business strategies. I'd certainly want to know what's in the other 52%.
Tuesday, July 21, 2009
Helping Virtual Teams Succeed
Reference: Nunamaker, J.F., Jr., Reinig, B.A., and Briggs, R.O. (2009), Principles for effective virtual teamwork, Communications of the ACM 52:4, pp. 113-117.
This article is not so much a research article as a research-based guide to practice. Nevertheless, it resonates strongly with me, for reasons I'll explain as I highlight the principles the authors propose.
Principle 1: Realign reward structures for virtual teams. The theory is that in the absence of physical proximity among members of a virtual team, non-verbal cues of appreciation and enthusiasm are lost and must be replaced with explicit rewards. Your virtual teammates cannot easily observe your level of commitment to the team and the project, which reduces both their motivation to contribute and the praise they might otherwise offer, praise that would keep you excited and involved. Also, a virtual teammate need not worry about the embarrassment of running into you in the hallway after missing a deliverable or a promise. In a face-to-face collaboration, you could motivate a teammate who doesn't seem to be involved by walking into his or her office and probing with simple technical or process questions. With a virtual teammate, an email reminder or question is more likely to engender resentment than encouragement. In the virtual environment, both the carrot and the stick are harder to apply.
As a knowledge worker rather than manager, I have little opportunity to modify the reward structures of my virtual teams. But, I have learned to form my teams in such a way as to maximize the rewards for collaboration. One such approach is to include a non-tenured faculty member on each team. These teammates have the greatest incentive to work hard, but they also engender hard work in the rest of the team, as nobody wants to be responsible for their failure to publish.
Among Web 2.0 advocates, the wiki has been held up as an ideal medium for collaborative writing. A prime example is Wikipedia, a collaboratively written encyclopedia. My own experience with wikis has been mixed. I've found that my students will not use a wiki for collaborative writing unless there's a specific penalty for failing to do so or, somewhat less successfully, a reward for contributing. One of my colleagues has observed the same thing in his classes. Why, then, does Wikipedia work when no reward is offered? The answer seems to be that some people take intrinsic pleasure in contributing: they enjoy seeing their words "in print" or feel great displeasure at seeing errors left uncorrected. This proportion is quite small, but enough people are exposed to Wikipedia that it succeeds despite the low percentage of contributors for whom the reward is intrinsic.
A colleague and I recently attempted to write a teaching case by wiki with an organization that was highly committed to the case. We thought that this novel approach would be ideal because it would convey the “voice” of the case subject rather than that of the case writer. Additionally, it would be a living case, in the sense that students could contribute to it and the case subjects could respond to the students. Ultimately, this effort failed. There were probably several reasons, including a less-than-friendly wiki interface; but the major reason for failure, in my opinion, was that we never created any incentives for the case subjects to participate.
This post would be too long if I elaborated on each of the other principles for effective virtual work to the same degree as the first. For now, I will just list them; I hope to address them in a future post:
2. Find new ways to focus attention on task
3. Design activities that cause people to get to know each other
4. Build a virtual presence
5. Agree on standards and terminology
6. Leverage anonymity when appropriate
7. Be more explicit
8. Train teams to self-facilitate
9. Embed collaboration technology into everyday work
Friday, July 10, 2009
An Argument for Case-Based Research
Reference: Kim, D.J., Ferrin, D.L., and Rao, H.R. (2009), Trust and satisfaction, two stepping stones for successful e-commerce relationships: A longitudinal exploration, Information Systems Research 20:2, pp. 237-257.
This study is the first, so the authors claim (and I have no reason to suspect otherwise), to test "whether a consumer's prepurchase trust impacts post-purchase satisfaction through a combined model of consumer trust and satisfaction developed from a longitudinal viewpoint." It is one of the few studies to observe all three phases of the purchase process -- pre-purchase, decision to purchase, and post-purchase. Finally, it is one of the few to collect data both from those who decided to buy and those who decided not to buy.
The model is beautiful, if one can use that term to describe a model:
Customer trust affects willingness to purchase directly and indirectly through perceived risk and perceived benefit. That is, increasing trust reduces the customer's perceived risk and increases the customer's perceived benefit; trust, risk, expectations, and benefit then combine to increase willingness to purchase, which in turn affects the decision to purchase. After the purchase, confirmation of expectations is affected by the expectations themselves (the greater the expectation, the less likely it is to be confirmed) and by the perceived performance of the website in effecting the sale. Confirmation, expectation, and trust all affect satisfaction, which in turn affects loyalty. All relationships are statistically significant!
While the model is beautiful, one has to question its value. None of these relationships is unexpected, or even interesting. Every seller and website designer understands the need to increase customer trust, reduce risk to the extent possible, offer the greatest possible benefit, and set high expectations. Interestingly, these variables explain less than 50% of the variance in willingness to purchase. Readers should certainly want to know what other factors affect willingness to purchase. Furthermore, willingness to purchase explains only 21% of the variance in the decision to purchase. Readers should ask why consumers with high willingness to purchase failed to purchase, and why consumers with low willingness to purchase went ahead and bought. Readers should also want to understand why one site engendered trust while others did not. These are the types of questions that case studies, rather than statistical studies, can answer. It is only through a deeper understanding of the independent variables affecting the purchase decision that sellers and website designers can extract value from such a study.
At this point I have to disclose a personal bias. Those who know me know that I have a strong belief in case study research as opposed to statistical research and am somewhat of a crusader for applying case study methodologies. Also, I am Editor-in-Chief of a journal that accepts only case study research: JITCAR, the Journal of Information Technology Case and Application Research (http://www.jitcar.org). So, I am, perhaps, on a soapbox here, expounding on my favorite topic, using an information systems study as a case in point (a case study, if you will).
Of course, a case study would have to be designed differently. This study asked student consumers to visit at least two B2C retailers to comparison shop for an item of their choice. There was no control over which sites they visited or the item they chose to buy. A case study design would most likely have to limit the sites and/or the item purchased. But by asking open-ended questions and conducting interviews, it would yield a much more nuanced understanding of what factors created or destroyed trust and how those factors entered into the purchase decision. Admittedly, the results might not generalize to sites selling different products or, perhaps, to retailers of a different size (or other characteristics) than those in the case study. But sellers reading such a study could determine whether their particular situation was sufficiently represented by the case to inform their design decisions. Case studies suffer from limited generalizability, but they have clear value for at least some readers, while statistical studies leave readers without knowing where they stand in relation to the norm.
Friday, June 19, 2009
How Does Online Participation Affect Your Self-Concept?
Reference: Fang, Y. and Neufeld, D. (2009), Understanding Sustained Participation in Open Source Software Projects, Journal of Management Information Systems 25:4, pp. 9–50.
The title for this blog entry appears to have nothing to do with this article, but bear with me.
The article examines why people remain involved in open source software projects. It turns out that a great deal has been written about why people get involved in the first place, but not a lot about why they remain for any length of time. My going-in assumption was that people work on open source projects for the same reasons that they get involved with charitable work -- that they feel a connection of some type to the principle and they want to give back to society. So, it would surprise me if the reasons that they joined were substantially different from the reasons that they continue to participate over time. But, according to the authors, that's not the case with open source software. They get involved generally because they have or see a need for a particular product or function. Once that need is met, they don't necessarily remain involved with the project.
So, what keeps them involved? The answer, according to this research, is a combination of "situated learning" and "identity construction." The theory behind this is the theory of "legitimate peripheral participation," formulated by Lave and Wenger*. "Situated learning," as I understand it, is learning by doing and learning in context, emphasizing the social and problem-solving aspects of learning. It makes perfect sense that people will remain involved with an open source project if it feeds their learning, so I'm not surprised at that conclusion. I can see a parallel with participation in charitable ventures: if you're only given boring tasks with no opportunity to use and develop your skills, you might opt out after a short while.
That brings us to identity construction. The authors seem to define identity construction largely from an external perspective. Construction of a community member's identity is the "process of understanding who one is, what one can do, and to what extent one becomes more or less legitimized and valued by the other members." This is not an entirely external perspective because it acknowledges the development of a self-understanding, but it is in relation to how the person is perceived by others. The theory is that positive identity construction reinforces a positive self-image, leading to the desire to continue to participate or even increase participation.
It is the relationship between identity construction and participation that interests me, because it is clearly a driving force for participation in social networks. I see it all the time, or at least I think I do. I believe that people often feel obliged to tweet on Twitter because they know it affects how others perceive them, increases the number of their followers, and feeds their self-worth. I suspect they participate on Facebook in large part to build an identity for those who might not know them well. Of course, in both cases these are not the only reasons for participating, but I'm curious how important identity construction is to participation in these networks. It would be a good research project.
The Fang and Neufeld article confirms the hypothesis that identity construction affects participation, at least in the open software community of interest. But, I'd also be interested to learn if the reverse is true. Does participation affect identity construction, as I believe it does in social networking? The evidence the authors present in their tables seems to confirm this as well, but they never make this feedback loop explicit.
* Lave, J., and Wenger, E. (1991), Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press.
The title for this blog entry appears to have nothing to do with this article, but bear with me.
The article examines why people remain involved in open source software projects. It turns out that a great deal has been written about why people get involved in the first place, but not a lot about why they remain for any length of time. My going-in assumption was that people work on open source projects for the same reasons that they get involved with charitable work -- that they feel a connection of some type to the principle and they want to give back to society. So, it would surprise me if the reasons that they joined were substantially different from the reasons that they continue to participate over time. But, according to the authors, that's not the case with open source software. They get involved generally because they have or see a need for a particular product or function. Once that need is met, they don't necessarily remain involved with the project.
So, what keeps them involved? The answer, according to this research, is a combination of "situated learning" and "identity construction". The theory behind this is the theory of "Legitimate Peripheral Participation," formulated by Lave and Wenger*. "Situated learning," as I understand it, is learning by doing and learning in context, emphasizing the social and problem solving aspects of learning. It makes perfect sense that people will remain involved with an open source project if it feeds their learning, so I'm not surprised at that conclusion. I can see a parallel as well with participation in charitable ventures. If you're only given boring tasks with no opportunity to use and develop your skills, you might opt out after a short while.
That brings us to identity construction. The authors seem to define identity construction largely from an external perspective. Construction of a community member's identity is the "process of understanding who one is, what one can do, and to what extent one becomes more or less legitimized and valued by the other members." This is not an entirely external perspective because it acknowledges the development of a self-understanding, but it is in relation to how the person is perceived by others. The theory is that positive identity construction reinforces a positive self-image, leading to the desire to continue to participate or even increase participation.
It is the relationship between identity construction and participation that interests me because it is clearly a driving force for participation in social networks. I see it all the time, or at least I think I do. I believe that people often feel obliged to tweet on Twitter because they know it affects how others perceive them, it increases the number of their followers, and it feeds their self worth. I feel that they participate on Facebook in large part to build an identity for those who might not know them well. Of course, in both cases, these are not the only reasons for participating, but I'm curious as to how important identity construction is to participation in these networks. It would be a good research project.
The Fang and Neufeld article confirms the hypothesis that identity construction affects participation, at least in the open software community of interest. But, I'd also be interested to learn if the reverse is true. Does participation affect identity construction, as I believe it does in social networking? The evidence the authors present in their tables seems to confirm this as well, but they never make this feedback loop explicit.
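Purely as an illustration of the feedback loop I'm describing (this is not a model from the Fang and Neufeld paper, and all of the coefficients are invented), the hypothesized mutual reinforcement between identity construction and participation can be sketched as a toy simulation:

```python
# Toy simulation of a hypothesized feedback loop between identity
# construction (how valued a member feels) and participation.
# All coefficients and starting values are invented for illustration;
# this is a sketch of the idea, not an empirical model.

def simulate(steps=10, participation=1.0, identity=0.5,
             id_gain=0.3, part_gain=0.4, decay=0.1):
    """Each period, participation strengthens identity, and a
    stronger identity feeds back into more participation."""
    history = []
    for _ in range(steps):
        identity = identity + id_gain * participation - decay * identity
        participation = participation + part_gain * identity - decay * participation
        history.append((round(participation, 2), round(identity, 2)))
    return history

if __name__ == "__main__":
    for p, i in simulate():
        print(f"participation={p:5.2f}  identity={i:5.2f}")
```

With positive gains on both links, the two quantities ratchet each other upward, which is the intuition behind the loop; with either gain set to zero, the reinforcement disappears.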
* Lave, J., and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Tuesday, May 19, 2009
Improvisation as a Dynamic Business Capability
Reference: El Sawy, O.A. and Pavlou, P.A. (2008) IT-enabled business capabilities for turbulent environments, MIS Quarterly Executive 7:3, 139-150.
El Sawy and Pavlou, who have written several articles on the subject of innovation, conclude that strategic advantage requires a “trifecta” of operational, dynamic, and improvisational business capabilities. It’s easy to see why companies need to achieve operational excellence to succeed in almost any environment. El Sawy and Pavlou find, however, that dynamic and improvisational capabilities are necessary for success in turbulent environments, and that in the most turbulent environment, improvisational capabilities are most important. Furthermore, a company’s information technology capabilities need to be aligned and structured properly to support the desired mix of operational vs. dynamic and improvisational capabilities.
Monideepa Tarafdar and I have observed that innovative companies require an ability to achieve and balance operational excellence with strategic vision (Tarafdar, M. and Gordon, S., 2007), a competency we call “ambidexterity,” after O’Reilly and Tushman (2004) and Vinekar et al. (2006). In our model, strategic vision relies upon what El Sawy and Pavlou term dynamic and improvisational capabilities, those that allow an organization to respond to the external environment. To this extent, our findings support El Sawy and Pavlou and vice versa.
What I found most interesting in this article is the authors’ division of capabilities for responding to the dynamic environment into two parts – dynamic and improvisational. They define “dynamic capabilities” as those needed to “effectively reconfigure existing operational capabilities to match the changing business environment.” They define “improvisational capabilities” as “the learned ability to spontaneously reconfigure existing resources in real time to build new operational capabilities that better match novel environmental situations.” From this definition, it seems that improvisational capabilities are simply a subset of dynamic capabilities. So, I’m struggling to understand if these really are substantially different capabilities, and if so, whether they require or build upon different information technology capabilities.
El Sawy and Pavlou’s model of “dynamic capabilities” includes four dimensions: environment-sensing, learning, knowledge integrating, and coordinating. Three of these dimensions (competencies?) clearly contribute to a firm’s ability to respond to a dynamic environment. Specifically, a firm cannot possibly respond to changes it cannot sense. So, environment sensing is critical. Also, it cannot respond if it cannot learn the skills and capabilities it might need in a changed environment. I’m not sure that coordination is a critical dynamic capability. Although I might not eliminate it as a dynamic capability, it seems more important as an operational capability. That said, operational capabilities, such as leadership, flexibility, and governance, could be equally important as dynamic capabilities.
The difficulty in deciding where coordination belongs in a model of business capabilities highlights the problems inherent in building such a model. And, it motivates the question of whether improvisation is better understood as one of the elements of the business capabilities trifecta or as a dimension of dynamic capability. If the other dimensions of dynamic capability are, implicitly, non-improvisational – that is, if they are, in some sense, planned – then improvisation is orthogonal to them and would be better classified as a dimension of dynamic capability. Alternatively, if the other dimensions of dynamic capability can be achieved in both a planned and an improvisational way, then it makes more sense to treat them as both dynamic and improvisational capabilities, in which case the improvisational classification is needed to complete the trifecta.
While either model can work, I think it makes more sense to treat improvisation as a dimension of dynamic capability. For the most part, environment sensing is a planned activity, and the capability is not subject to a great deal of improvisation. Learning and integrating offer more opportunities for improvisation, but, especially at the organizational level, they too are capabilities more organized than improvisational. So I would argue that the ability to improvise is just one more dynamic capability.
What IT capabilities and infrastructure are necessary to support an improvisational capability? The authors provide some answers – I won’t go into them here – but clearly, more research is needed in this area.
O’Reilly, C.A., Tushman, M.L., 2004. Ambidextrous organization. Harvard Business Review 82 (4), 71–81.
Tarafdar, M. and Gordon, S., 2007. Understanding the influence of information systems competencies on process innovation: A resource-based view, Journal of Strategic Information Systems 16, 353-392.
Vinekar, V., Slinkman, C.W., Nerur, S., 2006. Can agile and traditional systems development approaches coexist? An ambidextrous view. Information Systems Management 23 (3), 31–42.
Sunday, April 19, 2009
Responding to Disruptive Technology
Reference: Lucas, H.C. and Goh, J.M. (2009) Disruptive technology: How Kodak missed the digital photography revolution, Journal of Strategic Information Systems 18(1), 46-55.
I was drawn to this paper because I study innovation. My research concerns how information technology can be used to improve the innovation process, but I am also interested in understanding how companies can and should respond to innovations in information technologies that could affect the value of their products and services and ultimately their financial health.
The authors propose two extensions to Christensen’s well-known treatises on disruptive technologies. The first is the notion that a firm’s response to disruptive technologies is a “struggle between employees who seek to use dynamic capabilities to bring about change, and employees for whom core capabilities have become core rigidities.” In examining Kodak, the authors focus on middle management as most problematic and resistant to change, dependent on core competencies that have become core rigidities. The concept of core rigidities is rooted in Christensen’s work and could hardly be considered an extension. The role of dynamic capabilities, however, does not come directly into play in Christensen’s work. Interestingly, Christensen refers to dynamic capabilities in “The Innovator’s Solution” (Christensen & Raynor, 2003, p. 206), but dismisses the concept as an over-broad categorization of organizational processes. Nevertheless, the suggestion that dynamic capabilities can help companies respond to disruptive technologies is not entirely new. For example, the March issue of the Journal of Engineering and Technology Management is devoted to this principle, as reflected in the introductory article, “Research on corporate radical innovation systems - A dynamic capabilities perspective: An introduction,” by Salomo, Gemünden, and Leifer.
The second “extension” proposed by the authors is consideration of the role of organizational culture. The authors argue that if organizational culture promotes hierarchy and maintenance of the status quo, it will impede the change required to react to disruptive technologies. It’s not clear that this is really an extension of Christensen’s work, specifically because Christensen acknowledges the role of culture in creating core rigidities (see, for example, HBS Note 9-399-104, “What Is an Organization’s Culture?,” Rev August 2, 2006).
The Kodak case is an interesting one. Implicit in the analysis is that Kodak failed to respond adequately to the digital revolution. But, it’s not clear that Kodak could have done anything more than it did. Prior to the digital revolution, Kodak’s multi-billion dollar revenue stream depended primarily on sales of its film, developer chemicals, and halide paper used to make photographic prints. These sources of revenue were destined to disappear. In the digital world, other sources of revenue exist, but they are largely commoditized, with limited revenue-generating capability. What is surprising is that Kodak has, nevertheless, managed to emerge as a viable business, unlike Polaroid, for example. Although it doesn’t have a dominant position, as it did when photography was based solely on film, it reacted quickly to the digital revolution, with early patents on digital photography and acquisition of companies such as Ofoto and Scitex.