
Value chain

[Figure: a popular visualization of the value chain]

The value chain is a concept from business management that was first described and popularized by Michael Porter in his 1985 best-seller, Competitive Advantage: Creating and Sustaining Superior Performance.[1]

Firm Level

A value chain is a chain of activities for a firm operating in a specific industry. The business unit is the appropriate level for construction of a value chain, not the divisional level or corporate level. Products pass through all activities of the chain in order, and at each activity the product gains some value. The chain of activities gives the products more added value than the sum of the added values of the independent activities. It is important not to confuse the concept of the value chain with the costs incurred throughout the activities.

A diamond cutter can be used to illustrate the difference between cost and value. The cutting activity may have a low cost, but it adds much of the value to the end product, since a rough diamond is significantly less valuable than a cut diamond. Typically, the described value chain and the documentation of processes, assessment and auditing of adherence to the process routines are at the core of the quality certification of the business, e.g. ISO 9001.

Requirements of value chain

- Coordination and collaboration;
- Investment in information technology;
- Changes in organizational processes;
- Committed leadership;
- Flexible jobs and adaptable, capable employees;
- A supportive organizational culture and attitudes.

A Flintstones example: without his dinosaur, Fred could not complete his daily tasks quickly. The dinosaur's greater strength made the process more efficient, which added value to the final result.

Activities

The value chain categorizes the generic value-adding activities of an organization. The "primary activities" include: inbound logistics, operations (production), outbound logistics, marketing and sales (demand), and services (maintenance). The "support activities" include: administrative infrastructure management, human resource management, technology (R&D), and procurement. The costs and value drivers are identified for each value activity.
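As an illustration only (the data structure and the figures in it are hypothetical, not part of Porter's published framework), the generic activities above can each be recorded with a cost and a value-added amount, so that the overall margin of the chain can be computed:

# A minimal sketch with invented figures: Porter's generic value-adding
# activities, each carrying a hypothetical cost and value-added amount.
PRIMARY_ACTIVITIES = ["inbound logistics", "operations", "outbound logistics",
                      "marketing and sales", "services"]
SUPPORT_ACTIVITIES = ["administrative infrastructure management",
                      "human resource management", "technology (R&D)", "procurement"]

# Hypothetical per-activity figures (currency units) for one product line.
figures = {
    "inbound logistics":   {"cost": 4.0,  "value_added": 6.0},
    "operations":          {"cost": 12.0, "value_added": 25.0},
    "outbound logistics":  {"cost": 3.0,  "value_added": 4.0},
    "marketing and sales": {"cost": 5.0,  "value_added": 9.0},
    "services":            {"cost": 2.0,  "value_added": 5.0},
}

total_cost  = sum(f["cost"] for f in figures.values())
total_value = sum(f["value_added"] for f in figures.values())
print(f"total cost = {total_cost}, total value = {total_value}, "
      f"margin = {total_value - total_cost}")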

Industry Level

An industry value chain is a physical representation of the various processes that are involved in producing goods (and services), starting with raw materials and ending with the delivered product (also known as the supply chain). It is based on the notion of value-added at the link (read: stage of production) level. The sum total of link-level value-added yields total value. The French Physiocrats' Tableau économique is one of the earliest examples of a value chain. Wassily Leontief's Input-Output tables, published in the 1950s, provide estimates of the relative importance of each individual link in industry-level value chains for the U.S. economy.
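A minimal worked sketch with invented numbers of the link-level accounting described above: each stage adds value on top of the inputs it buys, and the sum of the link-level value added equals the value of the delivered product.

# Hypothetical industry value chain for a manufactured good (invented figures).
# Each link buys the output of the previous link and sells at a higher price;
# value added at a link = its selling price minus the price of its inputs.
links = [
    ("raw material extraction", 0.0,  10.0),   # (stage, input price, output price)
    ("processing",              10.0, 25.0),
    ("manufacturing",           25.0, 60.0),
    ("distribution and retail", 60.0, 80.0),
]

value_added = {stage: out - inp for stage, inp, out in links}
total_value = sum(value_added.values())

for stage, va in value_added.items():
    print(f"{stage:25s} value added = {va:5.1f}")
print(f"{'total (delivered product)':25s} value       = {total_value:5.1f}")  # equals 80.0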

Significance

The value chain framework quickly made its way to the forefront of management thought as a powerful analysis tool for strategic planning. The simpler concept of value streams, a cross-functional process which was developed over the next decade,[2] had some success in the early 1990s.[3]

The value-chain concept has been extended beyond individual firms. It can apply to whole supply chains and distribution networks. The delivery of a mix of products and services to the end customer will mobilize different economic actors, each managing its own value chain. The industry-wide synchronized interactions of those local value chains create an extended value chain, sometimes global in extent. Porter terms this larger interconnected system of value chains the "value system." A value system includes the value chains of a firm's suppliers (and their suppliers all the way back), the firm itself, the firm's distribution channels, and the firm's buyers (and presumably extended to the buyers of their products, and so on).

Capturing the value generated along the chain is the new approach taken by many management strategists. For example, a manufacturer might require its parts suppliers to be located near its assembly plant to minimize the cost of transportation. By exploiting the upstream and downstream information flowing along the value chain, firms may try to bypass intermediaries, creating new business models, or otherwise create improvements in their value systems.

Value chain analysis has also been successfully used in large petrochemical plant maintenance organizations to show how work selection, work planning, work scheduling and finally work execution can (when considered as elements of chains) help drive lean approaches to maintenance. The maintenance value chain approach is particularly successful when used as a tool for helping change management, as it is seen as more user-friendly than other business process tools.

Value chain analysis has also been employed in the development sector as a means of identifying poverty reduction strategies by upgrading along the value chain.[4] Although commonly associated with export-oriented trade, development practitioners have begun to highlight the importance of developing national and intra-regional chains in addition to international ones.[5]

SCOR

The Supply-Chain Council, a global trade consortium with over 700 member companies, governmental, academic, and consulting groups participating over the last 10 years, manages the Supply-Chain Operations Reference (SCOR), the de facto universal reference model for the supply chain, spanning Planning, Procurement, Manufacturing, Order Management, Logistics, Returns, and Retail; Product and Service Design, including Design Planning, Research, Prototyping, Integration, Launch and Revision; and Sales, including CRM, Service Support, Sales, and Contract Management, all of which are congruent to the Porter framework. The SCOR framework has been adopted by hundreds of companies as well as national entities as a standard for business excellence, and the US DOD has adopted the newly launched Design-Chain Operations Reference (DCOR) framework for product design as a standard for managing its development processes. In addition to process elements, these reference frameworks also maintain a vast database of standard process metrics aligned to the Porter model, as well as a large and constantly researched database of prescriptive universal best practices for process execution.

Value Reference Model

[Figure: VRM Quick Reference Guide V3R0]

A Value Reference Model (VRM) developed by the trade consortium Value Chain Group offers an open source semantic dictionary for value chain management encompassing one unified reference framework representing the process domains of product development, customer relations and supply networks.

The integrated process framework guides the modeling, design, and measurement of business performance by uniquely encompassing the plan, govern and execute requirements for the design, product, and customer aspects of business.

The Value Chain Group claims VRM to be next generation Business Process Management that enables value reference modeling of all business processes and provides product excellence, operations excellence, and customer excellence.

Six business functions of the Value Chain:

- Research and Development
- Design of Products, Services, or Processes
- Production
- Marketing & Sales
- Distribution
- Customer Service

This guide provides the levels 1-3 basic building blocks for value chain configurations. All Level 3 processes in VRM have input/output dependencies, metrics and practices. The VRM can be extended to levels 4-6 via the Extensible Reference Model schema.

Beneficiation

In mining, beneficiation is a variety of processes whereby extracted ore from mining is separated into mineral and gangue, the former suitable for further processing or direct use.

Based on this definition, the term has metaphorically come to be used within a context of economic development and corporate social responsibility to describe the proportion of the value derived from asset exploitation which stays 'in country' and benefits local communities.[1]

For example, in the diamond industry, the beneficiation imperative argues that cutting and polishing processes within the diamond value chain should be conducted in-country to maximise the local economic contribution.

Strategic business unit

In essence, the SBU is a profit-making area that focuses on a combination of product offer and market segment, requiring its own marketing plan, competitor analysis, and marketing campaign. A Strategic Business Unit emerges at the cross-over between a product offering that the company could make and a reachable market segment that has a high-value profit potential.

That is to say, if there's a big enough market niche for a product we can supply, then we may want to create an SBU that focuses on that opportunity. A Strategic Business Unit, or SBU, is understood as a business unit within the overall corporate identity which is distinguishable from other businesses because it serves a defined external market where management can conduct strategic planning in relation to products and markets. It provides the unique small-business-unit benefits that a firm aggressively promotes in a consistent manner. When companies become really large, they are best thought of as being composed of a number of businesses (or SBUs).

In the broader domain of strategic management, the phrase "Strategic Business Unit" came into use in the 1960s, largely as a result of General Electric's many units.

These organizational entities are large enough and homogeneous enough to exercise control over most strategic factors affecting their performance. They are managed as self-contained planning units for which discrete business strategies can be developed. A Strategic Business Unit can encompass an entire company, or can simply be a smaller part of a company set up to perform a specific task. The SBU has its own business strategy, objectives and competitors, and these will often be different from those of the parent company. Research conducted in this area includes the BCG Matrix.

This approach entails the creation of business units to address each market in which the company is operating. The organization of the business unit is determined by the needs of the market.

An SBU is a single operating unit or planning focus that groups a distinct set of products or services, which are sold to a uniform set of customers, facing a well-defined set of competitors. The external (market) dimension of a business is the relevant perspective for the proper identification of an SBU. (See industry information and Porter five forces analysis.) Therefore, an SBU should have a set of external customers and not just an internal supplier.[1]

Companies today often use the word Segmentation or Division when referring to SBUs, or an aggregation of SBUs that share such commonalities.

Commonalities

An SBU is generally defined by what it has in common, as well as the traditional aspects of separate competitors and a profitability bottom line. The commonalities are five in number.

Success factors

There are three factors that are generally seen as determining the success of an SBU:[citation needed] the degree of autonomy given to each SBU manager, the degree to which an SBU shares functional programs and facilities with other SBUs, and the manner in which the corporation responds to new changes in the market.

BCG matrix

When using the Boston Consulting Group Matrix, SBUs can be shown within any of the four quadrants (Star, Question Mark, Cash Cow, Dog) as a circle whose area represents their size. With different colors, competitors may also be shown. The precise location is determined by the two axes: Industry Growth as the Y axis and Market Share as the X axis. Alternatively, changes over one or two years can be shown by shading or other differences in design.[2]
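A minimal plotting sketch of this layout with invented figures, assuming matplotlib is available (any charting library would do): each SBU is drawn as a circle positioned by market share and industry growth, with area proportional to its size.

# Hypothetical SBU data: (name, market share, industry growth %, revenue).
import matplotlib.pyplot as plt

sbus = [
    ("Unit A", 0.35, 12.0, 400),   # high share, high growth  -> Star
    ("Unit B", 0.10, 14.0, 120),   # low share, high growth   -> Question Mark
    ("Unit C", 0.40,  2.0, 600),   # high share, low growth   -> Cash Cow
    ("Unit D", 0.08,  1.0,  80),   # low share, low growth    -> Dog
]

fig, ax = plt.subplots()
for name, share, growth, revenue in sbus:
    ax.scatter(share, growth, s=revenue, alpha=0.5)   # circle area ~ SBU size
    ax.annotate(name, (share, growth))

ax.axvline(0.2, linestyle="--")   # arbitrary dividing lines between quadrants
ax.axhline(7.0, linestyle="--")
ax.set_xlabel("Market share")
ax.set_ylabel("Industry growth (%)")
ax.set_title("BCG matrix (illustrative data)")
plt.show()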

Buzzword

A buzzword (also fashion word and vogue word) is a term of art, salesmanship, politics, or technical jargon[1] that has begun to see use in the wider society outside of its originally narrow technical context by nonspecialists who use the term vaguely or imprecisely. Labelling a term a "buzzword" pejoratively implies that it is now used pretentiously and inappropriately by individuals with little understanding of its actual meaning who are most interested in impressing others by making their discourse sound more esoteric, obscure, and technical than it otherwise would be.

Buzzwords differ from jargon in that jargon is esoteric but precisely defined terminology used for ease of communication between specialists in a given field, whereas a buzzword (which often develops from the appropriation of technical jargon) is often used imprecisely among non-specialists.[1]

Misuses of buzzwords

- Thought-control via intentional vagueness. In management, by stating organization goals with opaque words of unclear meaning; their positive connotations prevent questioning of intent, especially when many buzzwords are used.[2] (See newspeak.)
- To inflate the trivial to importance and stature.
- To impress a judge or an examiner by seeming to know a legal or psychological theory or a quantum physics principle by name-dropping it, e.g. "cognitive dissonance" or the "Heisenberg Uncertainty Principle".

- To camouflage chit-chat that says nothing.

Individual examples

Below are a few examples of words that are commonly used as buzzwords. For a more complete list, see list of buzzwords.

- Going Forward
- Leverage
- Long Tail[3]
- Next Generation[4]
- Paradigm[5]
- Paradigm shift[6]
- Incentivize

Paradigm

The word paradigm (/ˈpærədaɪm/) has been used in science to describe distinct concepts. It comes from Greek παράδειγμα (paradeigma), "pattern, example, sample",[1] from the verb παραδείκνυμι (paradeiknumi), "exhibit, represent, expose",[2] and that from παρά (para), "beside, beyond",[3] + δείκνυμι (deiknumi), "to show, to point out".[4]

The original Greek term παράδειγμα (paradeigma) was used in Greek texts such as Plato's Timaeus (28A) as the model or the pattern that the Demiurge (god) used to create the cosmos. The term had a technical meaning in the field of grammar: the 1900 Merriam-Webster dictionary defines its technical use only in the context of grammar or, in rhetoric, as a term for an illustrative parable or fable. In linguistics, Ferdinand de Saussure used paradigm to refer to a class of elements with similarities.

The word has now come to refer very often to a thought pattern in any scientific discipline or other epistemological context. The Merriam-Webster Online dictionary defines this usage as "a philosophical and theoretical framework of a scientific school or discipline within which theories, laws, and

generalizations and the experiments performed in support of them are formulated; broadly: a philosophical or theoretical framework of any kind."[5]

Scientific paradigm

Main articles: Paradigm shift, Sociology of knowledge, Systemics, Commensurability (philosophy of science), and Confirmation holism
See also: Paradigm (experimental) and Scientific consensus

The historian of science Thomas Kuhn gave paradigm its contemporary meaning when he adopted the word to refer to the set of practices that define a scientific discipline at any particular period of time. Kuhn himself came to prefer the terms exemplar and normal science, which have more precise philosophical meanings. However, in his book The Structure of Scientific Revolutions Kuhn defines a scientific paradigm as "universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of researchers", i.e.:

- what is to be observed and scrutinized
- the kind of questions that are supposed to be asked and probed for answers in relation to this subject
- how these questions are to be structured
- how the results of scientific investigations should be interpreted

Alternatively, the Oxford English Dictionary defines paradigm as "a pattern or model, an exemplar." Thus an additional component of Kuhn's definition of paradigm is how an experiment is to be conducted and what equipment is available to conduct it.

Thus, within normal science, the paradigm is the set of exemplary experiments that are likely to be copied or emulated. In this scientific context, the prevailing paradigm often represents a more specific way of viewing reality, or limitations on acceptable programs for future research, than the more general scientific method.

A currently accepted paradigm would be the standard model of physics. The scientific method would allow for orthodox scientific investigations into phenomena which might contradict or disprove the standard model; however grant funding would be proportionately more difficult to obtain for such experiments, depending on the degree of deviation from the accepted standard model theory which the experiment would be expected to test for. To illustrate the point, an experiment to test for the mass of neutrinos or the decay of protons (small departures from the model) would be more likely to receive money than experiments to look for the violation of the conservation of momentum, or ways to engineer reverse time travel.

One important aspect of Kuhn's paradigms is that the paradigms are incommensurable, meaning two paradigms cannot be reconciled with each other because they cannot be subjected to the same common standard of comparison. That is, no meaningful comparison between them is possible without fundamental modification of the concepts that are an intrinsic part of the paradigms being compared. This way of looking at the concept of "paradigm" creates a paradox of sorts, since competing paradigms are in fact constantly being measured against each other. (Nonetheless, competing paradigms are not fully intelligible solely within the context of their own conceptual frameworks.) For this reason, paradigm as a concept in the philosophy of science might more meaningfully be defined as a self-reliant explanatory model or conceptual framework. This definition makes it clear that the real barrier to comparison is not necessarily the absence of common units of measurement, but an absence of mutually compatible or mutually intelligible concepts. Under this system, a new paradigm which replaces an old paradigm is not necessarily better, because the criteria of judgment are controlled by the paradigm itself, and by the conceptual framework which defines the paradigm and gives it its explanatory value.[6][7]

The more disparaging term groupthink and the term mindset have somewhat similar meanings that apply to smaller- and larger-scale examples of disciplined thought. Michel Foucault used the terms episteme and discourse, mathesis and taxinomia, for aspects of a "paradigm" in Kuhn's original sense.

Simple common analogy: a simplified analogy for paradigm is a habit of reasoning, or "the box" in the commonly used phrase "thinking outside the box". Thinking inside the box is analogous with normal science. The box encompasses the thinking of normal science and thus the box is analogous with the paradigm. "Thinking outside the box" would be what Kuhn calls revolutionary science. Revolutionary science is usually unsuccessful and only rarely leads to new paradigms. When it is successful, however, it leads to large-scale changes in the scientific worldview, and once such a shift is implemented and accepted by the majority, it becomes the new "box" and science progresses within it.

Paradigm shifts

Main article: Paradigm shift

In The Structure of Scientific Revolutions, Kuhn wrote that "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12)

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, a statement generally attributed to physicist Lord Kelvin famously claimed, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."[8] Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years. In this case, the new paradigm reduces the old to a special case in the sense that Newtonian mechanics is still a good model for approximation for speeds that are slow compared to the speed of light. Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited.

Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognize such a paradigm shift. Being in the social sciences, people can still use earlier ideas to discuss the history of science.

The concept of Paradigm and the social sciences

Kuhn himself did not consider the concept of paradigm as appropriate for the social sciences. He explains in his preface to The Structure of Scientific Revolutions that he concocted the concept of paradigm precisely in order to distinguish the social from the natural sciences (p. x). He wrote this book at the Palo Alto Center for Scholars, surrounded by social scientists, when he observed that they were never in agreement on theories or concepts. He explains that he wrote this book precisely to show that there are no, nor can there be any, paradigms in the social sciences. Mattei Dogan, a French sociologist, in his article "Paradigms in the Social Sciences," develops Kuhn's original thesis that there are no paradigms at all in the social sciences, since the concepts are polysemic, scholars practice deliberate mutual ignorance of one another, and schools of thought proliferate in these disciplines. Dogan provides many examples of the non-existence of paradigms in the social sciences in his essay, particularly in sociology, political science and political anthropology.

Paradigm paralysis

Perhaps the greatest barrier to a paradigm shift, in some cases, is the reality of paradigm paralysis: the inability or refusal to see beyond the current models of thinking.[9] This is similar to what psychologists term Confirmation bias.

Examples include the rejection of Galileo's theory of a heliocentric universe, the discovery of electrostatic photography, xerography and the quartz clock.

Other uses

Handa, M.L. (1986) introduced the idea of "social paradigm" in the context of social sciences. He identified the basic components of a social paradigm. Like Kuhn, Handa addressed the issue of changing paradigm; the process popularly known as "paradigm shift". In this respect, he focused on social circumstances that precipitate such a shift and the effects of the shift on social institutions, including the institution of education. This broad shift in the social arena, in turn, changes the way the individual perceives reality.

Another use of the word paradigm is in the sense of Weltanschauung (German for world view). For example, in social science, the term is used to describe the set of experiences, beliefs and values that

affect the way an individual perceives reality and responds to that perception. Social scientists have adopted the Kuhnian phrase "paradigm shift" to denote a change in how a given society goes about organizing and understanding reality. A dominant paradigm refers to the values, or system of thought, in a society that are most standard and widely held at a given time. Dominant paradigms are shaped both by the community's cultural background and by the context of the historical moment. The following are conditions that facilitate a system of thought becoming an accepted dominant paradigm:

- Professional organizations that give legitimacy to the paradigm
- Dynamic leaders who introduce and purport the paradigm
- Journals and editors who write about the system of thought, both disseminating the information essential to the paradigm and giving the paradigm legitimacy
- Government agencies that give credence to the paradigm
- Educators who propagate the paradigm's ideas by teaching it to students
- Conferences devoted to discussing ideas central to the paradigm
- Media coverage
- Lay groups, or groups based around the concerns of lay persons, that embrace the beliefs central to the paradigm
- Sources of funding to further research on the paradigm

The word paradigm is also still used to indicate a pattern or model or an outstandingly clear or typical example or archetype. The term is frequently used in this sense in the design professions. Design Paradigms or archetypes comprise functional precedents for design solutions. The best known references on design paradigms are Design Paradigms: A Sourcebook for Creative Visualization, by Wake, and Design Paradigms by Petroski.

This term is also used in cybernetics. Here it means (in a very wide sense) a (conceptual) protoprogramme for reducing the chaotic mass to some form of order. Note the similarities to the concept of entropy in chemistry and physics. A paradigm there would be a sort of prohibition to proceed with any action that would increase the total entropy of the system. In order to create a paradigm, a closed system that accepts changes is required. Thus a paradigm can only be applied to a system that is not in its final stage.

Paradigm shift


A Paradigm shift (or revolutionary science) is, according to Thomas Kuhn in his influential book The Structure of Scientific Revolutions (1962), a change in the basic assumptions, or paradigms, within the ruling theory of science. It is in contrast to his idea of normal science.

According to Kuhn, "A paradigm is what members of a scientific community, and they alone, share." (The Essential Tension, 1977). Unlike a normal scientist, Kuhn held, "a student in the humanities has constantly before him a number of competing and incommensurable solutions to these problems, solutions that he must ultimately examine for himself." (The Structure of Scientific Revolutions). Once a paradigm shift is complete, a scientist cannot, for example, reject the germ theory of disease to posit the possibility that miasma causes disease or reject modern physics and optics to posit that ether carries light. In contrast, a critic in the Humanities can choose to adopt an array of stances (e.g., Marxist criticism, Freudian criticism, Deconstruction, 19th-century-style literary criticism), which may be more or less fashionable during any given period but which are all regarded as legitimate.

Since the 1960s, the term has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events, even though Kuhn himself restricted the use of the term to the hard sciences. Compare as a structured form of Zeitgeist.


Kuhnian paradigm shifts

Kuhn used the duck-rabbit optical illusion to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.

An epistemological paradigm shift was called a scientific revolution by epistemologist and historian of science Thomas Kuhn in his book The Structure of Scientific Revolutions.

A scientific revolution occurs, according to Kuhn, when scientists encounter anomalies which cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and all of the implications which come with it. It is based on features of the landscape of knowledge that scientists can identify around them. There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored and not dealt with (a principal argument Kuhn uses to reject Karl Popper's model of falsifiability as the key force involved in scientific change). Rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. To put it in the context of early 20th century physics, some scientists found the problems with calculating Mercury's perihelion more troubling than the Michelson-Morley experiment results, and some the other way around. Kuhn's model of scientific change differs here, and in many places, from that of the logical positivists in that it puts an enhanced emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.

When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis, according to Kuhn. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual "battle" takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Again, for early 20th century physics, the transition between the Maxwellian electromagnetic worldview and the Einsteinian Relativistic worldview was neither instantaneous nor calm, and instead involved a protracted set of "attacks," both with empirical data as well as rhetorical or philosophical arguments, by both sides, with the Einsteinian theory winning out in the long-run. Again, the weighing of evidence and importance of new data was fit through the human sieve: some scientists found the simplicity of Einstein's equations to be most compelling, while some found them more complicated than the notion of Maxwell's aether which they banished. Some found Eddington's photographs of light bending around the sun to be compelling, some questioned their accuracy and meaning. Sometimes the convincing force is just time itself and the human toll it takes, Kuhn said, using a quote from Max Planck:

"a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."[1]

After a given discipline has changed from one paradigm to another, this is called, in Kuhn's terminology, a scientific revolution or a paradigm shift. It is often this final conclusion, the result of the long process, that is meant when the term paradigm shift is used colloquially: simply the (often radical) change of worldview, without reference to the specificities of Kuhn's historical argument.

Science and paradigm shift

A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) is a case for relativism:[2] the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.

These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another; that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes, so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published a highly regarded essay in 1974, "On the Very Idea of a Conceptual Scheme," arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, Sociology and Organisation Theory: Positivism, Paradigm and Postmodernity, Cambridge University Press, 1993).

Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously stated, "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement." Five years later, Albert Einstein published his paper on special relativity, which challenged the very simple set of rules

laid down by Newtonian mechanics, which had been used to describe force and motion for over two hundred years.

In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science." (p. 12) Kuhn's idea was itself revolutionary in its time, as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. Being in the social sciences, people can still use earlier ideas to discuss the history of science.

Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it. Kuhn's original model is now generally seen as too limited.

Examples of paradigm shifts in the natural sciences

Some of the "classical cases" of Kuhnian paradigm shifts in science are: The transition in cosmology from a Ptolemaic cosmology to a Copernican one. The transition in optics from geometrical optics to physical optics. The transition in mechanics from Aristotelian mechanics to classical mechanics. The acceptance of the theory of biogenesis, that all life comes from life, as opposed to the theory of spontaneous generation, which began in the 17th century and was not complete until the 19th century with Pasteur. The acceptance of the work of Andreas Vesalius, whose work De Humani Corporis Fabrica corrected the numerous errors in the previously-held system created by Galen. The transition between the Maxwellian Electromagnetic worldview and the Einsteinian Relativistic worldview. The transition between the worldview of Newtonian physics and the Einsteinian Relativistic worldview. The development of Quantum mechanics, which redefined Classical mechanics. The acceptance of Plate tectonics as the explanation for large-scale geologic changes.

The development of absolute dating The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the Chemical Revolution. The acceptance of Mendelian inheritance, as opposed to pangenesis in the early 20th century [edit] Examples of paradigm shifts in the social sciences

In Kuhn's view, the existence of a single reigning paradigm is characteristic of the sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals."[3] Others have applied Kuhn's concept of paradigm shift to the social sciences:

- The movement known as the Cognitive revolution, away from Behaviourist approaches to psychological study, and the acceptance of cognition as central to studying human behaviour.
- The Keynesian Revolution is typically viewed as a major shift in macroeconomics.[4] According to John Kenneth Galbraith, Say's Law dominated economic thought prior to Keynes for over a century, and the shift to Keynesianism was difficult. Economists who contradicted the law, which implied that underemployment and underinvestment (coupled with oversaving) were virtually impossible, risked losing their careers.[5] In his magnum opus, Keynes cited one of his predecessors, J. A. Hobson,[6] who was repeatedly denied positions at universities for his heretical theory.
- Later, the movement for Monetarism over Keynesianism marked a second divisive shift. Monetarists held that fiscal policy was not effective for stabilizing inflation, which was solely a monetary phenomenon, in contrast to the Keynesian view of the time that both fiscal and monetary policy were important. Keynesians later adopted much of the Monetarists' view of the quantity theory of money and the shifting Phillips curve, theories they initially rejected.[7]
- Fritjof Capra describes a paradigm shift presently happening in science from physics to the life sciences. This shift in perception accompanies a shift in values and is characterized by ecological literacy.[8]

As marketing speak

In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication.[9] In his book, Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains

the phrase. It is referred to in several articles and books[10][11] as abused and overused to the point of becoming meaningless.

Other uses

The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought-pattern a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing: Handa, M. L., a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift." In this respect, he focuses on the social circumstances which precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.[citation needed] The concept has been developed for technology and economics in the identification of new technoeconomic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Schumpeter's idea of creative destruction. Examples include the move to mass production, and the introduction of microelectronics.[citation needed] In the arena of political science, the concept has been applied to the ethos of war. Evolutionary biologist Judith Hand, in a paper entitled "To Abolish War," argued that that a paradigm shift is possible from a global ethos that operates on the assumption that war is an inevitable aspect of human nature to a global ethos that rejects war under any circumstances.[12] Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement which gained great prominence in the years immediately following distribution of those images.[13][14] Calculating demand forecast accuracy From Wikipedia, the free encyclopedia (Redirected from Calculating Demand Forecast Accuracy)

Calculating demand forecast accuracy is the process of determining the accuracy of forecasts made regarding customer demand for a product.

Importance of forecasts

Understanding and predicting customer demand is vital to manufacturers and distributors to avoid stock-outs and maintain adequate inventory levels. While forecasts are never perfect, they are necessary to prepare for actual demand. In order to maintain an optimized inventory and effective supply chain, accurate demand forecasts are imperative.

Calculating the accuracy of supply chain forecasts

Forecast accuracy in the supply chain is typically measured using the Mean Absolute Percent Error, or MAPE. Statistically, MAPE is defined as the average of absolute percentage errors. Most practitioners, however, define and use the MAPE as the Mean Absolute Deviation divided by average sales. This is in effect a volume-weighted MAPE, also referred to as the MAD/Mean ratio.

A simpler and more elegant method to calculate MAPE across all the products forecasted is to divide the sum of the absolute deviations by the total sales of all products.

Calculating forecast error

The forecast error needs to be calculated using actual sales as a base. There are several forecast error calculation methods in use, namely Mean Percent Error, Root Mean Squared Error, Tracking Signal and Forecast Bias.
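A minimal sketch in plain Python, with invented demand figures, of the measures discussed in this article: textbook MAPE, the practitioners' MAD/Mean (volume-weighted) variant, and simple Mean Percent Error, RMSE and bias calculations. The function names are illustrative, not a standard API.

# Illustrative only: invented actuals and forecasts for three products.
actuals   = [120.0,  80.0, 200.0]
forecasts = [100.0,  90.0, 210.0]

def mape(actual, forecast):
    # Mean Absolute Percent Error: average of |error| / actual.
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mad_over_mean(actual, forecast):
    # Practitioners' "MAPE": Mean Absolute Deviation divided by average sales
    # (a volume-weighted MAPE, also called the MAD/Mean ratio).
    mad = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mad / (sum(actual) / len(actual))

def mean_percent_error(actual, forecast):
    # Signed percentage errors cancel, so this measures bias rather than accuracy.
    return sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root Mean Squared Error penalises large misses more heavily.
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)) ** 0.5

def forecast_bias(actual, forecast):
    # Total forecast minus total actual: positive means over-forecasting.
    return sum(forecast) - sum(actual)

print(f"MAPE          {mape(actuals, forecasts):.3f}")
print(f"MAD/Mean      {mad_over_mean(actuals, forecasts):.3f}")
print(f"Mean % error  {mean_percent_error(actuals, forecasts):.3f}")
print(f"RMSE          {rmse(actuals, forecasts):.2f}")
print(f"Bias          {forecast_bias(actuals, forecasts):.1f}")

Note that the MAD/Mean ratio computed this way is algebraically identical to dividing the sum of the absolute deviations by total sales, the simpler method mentioned above.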

Delta Model

The Delta Model is a customer-based approach to strategic management.[1][2][3] Compared to a philosophical focus on the characteristics of a product (product economics), the model is based on customer economics. The customer-centric model was developed by Dean Wilde and Arnoldo Hax.

Haxioms

Haxioms are a set of principles, proposed by Arnoldo Hax, which serve as a framework for the conceptualization of the Delta Model and which somewhat challenge the conventional wisdom regarding strategic thinking:

- The center of the strategy is the customer. This is the center of the Delta Model, the customer being the driving force for all actions undertaken by the company. Thus, the effort organizations have to make is to configure high value-added propositions to customers which are both creative and unique.
- You don't win by beating the competition; you win by achieving customer bonding. Just as the central focus of management is the customer, the central focus of the strategy should be customer bonding. This stage is recognizable by a relationship based on transparency and fairness, and which produces long-term benefits for all involved.

- Strategy is not war; it is love. When we define the essence of strategy as a competitive advantage, we are at the same time casting conflict as the way to think about business. If instead we reject this notion, our mind opens up to new alternatives and, since we are no longer in confrontation with our partners, other forms of cooperation can be considered. The extreme form of non-conflict is indeed love.
- A product-centric mentality is constraining; open your mindset to include the customers, the suppliers and the complementors as your key constituencies. Since all businesses are related to and dependent on other members of the supply chain, a wider view is needed to see this expanded enterprise, which is the entity of real importance in strategic analysis. In this way we can better propose high-value propositions to our customers.
- Try to understand your customer deeply; strategy is done one customer at a time. Granular customer analysis is fundamental to a sensible customer segmentation; the extreme is in fact the consideration of each single customer individually, with his or her own needs and wants.

Porter's Four Corners Model

Porter's four corners model is a predictive tool designed by Michael Porter that helps in determining a competitor's course of action. Unlike other predictive models, which predominantly rely on a firm's current strategy and capabilities to determine future strategy, Porter's model additionally calls for an understanding of what motivates the competitor. This added dimension of understanding a competitor's internal culture, value system, mindset and assumptions helps in determining a much more accurate and realistic reading of a competitor's possible reactions in a given situation.


The Four Corners

Motivation: Drivers

This corner helps in determining a competitor's actions by understanding its goals (both strategic and tactical) and its current position vis-à-vis those goals. A wide gap between the two could mean the competitor is highly likely to react to any external threat that comes in its way, whereas a narrower gap is likely to produce a defensive strategy. The question to be answered here is: what is it that drives the competitor? These drivers can be at various levels and dimensions and can provide insights into future goals.

Motivation: Management assumptions

The perceptions and assumptions the competitor has about itself and its industry shape its strategy. This corner includes determining the competitor's perception of its strengths and weaknesses, its organizational culture and its beliefs about its competitors' goals. If the competitor thinks highly of its competition and has a fair sense of industry forces, it is likely to be ready with plans to counter any threats to its position. On the other hand, a competitor with a misplaced understanding of industry forces is not very likely to respond to a potential attack. The question to be answered here is: what are the competitor's assumptions about the industry, the competition and its own capabilities?

Actions: Strategy

A competitor's strategy determines how it competes in the market. However, there could be a difference between the company's intended strategy (as stated in its annual report and interviews) and its realized strategy (as is evident in its acquisitions, new product development, etc.). It is therefore important to determine the competitor's realized strategy and how it is actually performing. If the current strategy is yielding satisfactory results, it is safe to assume that the competitor is likely to continue to operate in the same way. The questions to be answered here are: what is the competitor actually doing, and how successful is it in implementing its current strategy?

Actions: Capabilities

This looks at a competitor's inherent ability to initiate or respond to external forces. Though it might have the motivation and the drive to initiate a strategic action, its effectiveness is dependent on its capabilities. Its strengths will also determine how the competitor is likely to respond to an external threat. An organization with an extensive distribution network is likely to initiate an attack through its channel, whereas a company with strong financials is likely to counter attack through price drops. The

questions to be answered here are: what are the strengths and weaknesses of the competitor, and in which areas is the competitor strong?
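As an organising aid only (the mapping and the competitor profile below are hypothetical, not part of Porter's published model), the four corners and their guiding questions can be held in a simple structure so that an analyst records an answer under each corner:

# Illustrative sketch: the four corners as a checklist, plus one invented
# competitor profile filled in corner by corner.
FOUR_CORNERS = {
    "drivers":                "What is it that drives the competitor?",
    "management assumptions": "What are the competitor's assumptions about the "
                              "industry, the competition and its own capabilities?",
    "strategy":               "What is the competitor actually doing, and how "
                              "successful is it in implementing its current strategy?",
    "capabilities":           "What are the competitor's strengths and weaknesses?",
}

# Hypothetical answers for a fictional rival.
competitor_profile = {
    "drivers":                "Aggressive market-share targets in Region X",
    "management assumptions": "Believes its brand is unassailable in the premium segment",
    "strategy":               "Realized strategy: rapid acquisitions despite stated organic growth",
    "capabilities":           "Strong distribution network; weak balance sheet",
}

for corner, question in FOUR_CORNERS.items():
    print(f"{corner.upper()}\n  Q: {question}\n  A: {competitor_profile[corner]}\n")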

Strengths

Considers implicit aspects of competitive behavior

Firms are more often than not aware of their rivals and generally have a good understanding of their strategies and capabilities. However, motivational factors are often overlooked. Sufficiently motivated competitors can often prove to be more competitive than bigger but less motivated rivals. What sets this model apart from others is its insistence on accounting for "implicit" factors, such as culture, history, the backgrounds of executives, consultants and boards, goals, values and commitments, and the inclusion of management's deep beliefs and assumptions about what works or does not work in the market.[1]

Predictive in nature

Porter's four corners model provides a framework that ties a competitor's capabilities to its assumptions about the competitive environment and its underlying motivations. Looking at both a firm's capabilities (what the firm can do) and the underlying implicit factors (its motivations to follow a course of action) can help predict a competitor's actions with a relatively higher level of confidence. The underlying assumption here is that decision makers in firms are essentially human and hence subject to the influences of affective and automatic processes described by neuroscientists.[1] Hence, by considering these factors along with a firm's capabilities, this model is a better predictor of competitive behavior.

Use in competitive intelligence and strategy

Despite its strengths, Porter's four corners model is not widely used in strategy and competitive intelligence. In a 2005 survey of frequently used analytical tools by the Society of Competitive Intelligence Professionals (SCIP), Porter's four corners did not even figure in the top ten.[2]

However this model can be used in competitive analysis and strategy as follows:

- Strategy development and testing: the model can be used to determine likely actions by competitors in response to the firm's strategy. This can be done when developing a strategy (such as for a new product launch) or to test that strategy using simulation techniques such as a business war game.

- Early warning: the predictive nature of this tool can also alert firms to possible threats due to competitive action.

Porter's four corners model also works well with other analytical models. For instance, it complements Porter's five forces model well. Competitive Cluster Analysis of industry products in turn complements Four Corners Analysis.[3] Using such models that complement each other can help create a more complete analysis.

Competitive intelligence

A broad definition of competitive intelligence is the action of defining, gathering, analyzing, and distributing intelligence about products, customers, competitors and any aspect of the environment needed to support executives and managers in making strategic decisions for an organization.

Key points of this definition:

- Competitive intelligence is an ethical and legal business practice, as opposed to industrial espionage, which is illegal.
- The focus is on the external business environment.[1]
- There is a process involved in gathering information, converting it into intelligence and then utilizing this in business decision making. CI professionals erroneously emphasize that if the intelligence gathered is not usable (or actionable) then it is not intelligence.

A more focused definition of CI regards it as the organizational function responsible for the early identification of risks and opportunities in the market before they become obvious. Experts also call this process the early signal analysis. This definition focuses attention on the difference between dissemination of widely available factual information (such as market statistics, financial reports,

newspaper clippings) performed by functions such as libraries and information centers, and competitive intelligence which is a perspective on developments and events aimed at yielding a competitive edge.[2]

The term CI is often viewed as synonymous with competitor analysis, but competitive intelligence is more than analyzing competitors; it is about making the organization more competitive relative to its entire environment and stakeholders: customers, competitors, distributors, technologies, macroeconomic data, etc.

Historic development

The literature associated with the field of competitive intelligence is best exemplified by the detailed bibliographies that were published in the Society of Competitive Intelligence Professionals' refereed academic journal, The Journal of Competitive Intelligence and Management.[3][4][5][6] Although elements of organizational intelligence collection have been a part of business for many years, the history of competitive intelligence arguably began in the U.S. in the 1970s, although the literature on the field pre-dates this time by at least several decades.[6] In 1980, Michael Porter published the study Competitive Strategy: Techniques for Analyzing Industries and Competitors, which is widely viewed as the foundation of modern competitive intelligence. This has since been extended, most notably by the pair of Craig Fleisher and Babette Bensoussan, who through several popular books on competitive analysis have added 48 commonly applied competitive intelligence analysis techniques to the practitioner's tool box.[7][8] In 1985, Leonard Fuld published his best-selling book dedicated to competitor intelligence.[9] However, the institutionalization of CI as a formal activity among American corporations can be traced to 1988, when Ben and Tamar Gilad published the first organizational model of a formal corporate CI function, which was then adopted widely by US companies.[10] The first professional certification program (CIP) was created in 1996 with the establishment of The Fuld-Gilad-

Herring Academy of Competitive Intelligence in Cambridge, MA, followed in 2004 by the Institute for Competitive Intelligence.

In 1986 the Society of Competitive Intelligence Professionals (SCIP) was founded in the U.S. and grew in the late 1990s to around 6000 members worldwide, mainly in the U.S. and Canada, but with large numbers especially in UK and Germany. Due to financial difficulties in 2009, the organization merged with Frost & Sullivan under the Frost & Sullivan Institute. SCIP has since been renamed "Strategic & Competitive Intelligence Professionals" to emphasise the strategic nature of the subject, and also to refocus the organisation's general approach, while keeping the existing SCIP brandname and logo. A number of efforts have been made to discuss the field's advances in post-secondary (university) education, covered by several authors including Blenkhorn & Fleisher,[11] Fleisher,[12] Fuld,[13] Prescott,[14] and McGonagle,[15] among others. Although the general view would be that competitive intelligence concepts can be readily found and taught in many business schools around the globe, there are still relatively few dedicated academic programs, majors, or degrees in the field, a concern to academics in the field who would like to see it further researched.[12] These issues were widely discussed by over a dozen knowledgeable individuals in a special edition of the Competitive Intelligence Magazine that was dedicated to this topic.[16] On the other hand, practitioners regard professional accreditation as more important.[17] In 2011, SCIP recognized the Fuld-Gilad-Herring Academy of Competitive Intelligence's CIP certification process as its global, dual-level (CIP-I and CIP-II) certification program.

Global developments in competitive intelligence have also been uneven.[18] Several academic journals, particularly the Journal of Competitive Intelligence and Management in its third volume, provided coverage of the field's global development.[19] For example, in 1997 the Ecole de Guerre Economique (School of Economic Warfare) was founded in Paris, France. It was the first European institution to teach the tactics of economic warfare within a globalizing world. In Germany, competitive intelligence received little attention until the early 1990s. The term "competitive intelligence" first appeared in German literature in 1997. In 1995 a German SCIP chapter was founded, which is now the second largest in Europe by membership. In summer 2004 the Institute for Competitive Intelligence was founded, which provides a post-graduate certification program for competitive intelligence professionals. Japan is currently the only country that officially maintains an economic intelligence agency (JETRO). It was founded by the Ministry of International Trade and Industry (MITI) in 1958.

Accepting the importance of competitive intelligence, major multinational corporations, such as ExxonMobil, Procter & Gamble, and Johnson & Johnson, have created formal CI units.[citation needed] Importantly, organizations execute competitive intelligence activities not only as a safeguard to protect against market threats and changes, but also as a method for finding new opportunities and trends.

[edit] Principles

Organizations use competitive intelligence to compare themselves to other organizations ("competitive benchmarking"), to identify risks and opportunities in their markets, and to pressure-test their plans against market response (war gaming), all of which enables them to make informed decisions. Most firms today realize the importance of knowing what their competitors are doing and how the industry is changing, and the information gathered allows organizations to understand their strengths and weaknesses.

The actual importance of these categories of information to an organization depends on the contestability of its markets, the organizational culture, the personality and biases of its top decision makers, and the reporting structure of competitive intelligence within the company.

Strategic Intelligence (SI): the focus is on the longer term, looking at issues affecting a company's competitiveness over the course of a couple of years. The actual time horizon for SI ultimately depends on the industry and how quickly it is changing. The general questions that SI answers are 'Where should we as a company be in X years?' and 'What are the strategic risks and opportunities facing us?' This type of intelligence work involves, among other things, the identification of weak signals and the application of a methodology and process called Strategic Early Warning (SEW), first introduced by Gilad,[20][21][22] followed by Steven Shaker and Victor Richardson,[23] Alessandro Comai and Joaquin Tena,[24][25] and others. According to Gilad, 20% of the work of competitive intelligence practitioners should be dedicated to strategic early identification of weak signals within a SEW framework.

Tactical Intelligence: the focus is on providing information designed to improve shorter-term decisions, most often aimed at growing market share or revenues. It is generally the type of information needed to support the sales process in an organization, and it investigates various aspects of a product's or product line's marketing:
Product - what are people selling?
Price - what price are they charging?
Promotion - what activities are they conducting to promote this product?
Place - where are they selling this product?
Other - sales force structure, clinical trial design, technical issues, etc.

With the right amount of information, organizations can avoid unpleasant surprises by anticipating competitors' moves and decreasing response time. Examples of competitive intelligence research are evident in daily newspapers, such as the Wall Street Journal, Business Week and Fortune. Major airlines change hundreds of fares daily in response to competitors' tactics. They use information to plan their own marketing, pricing, and production strategies.

Resources such as the Internet have made gathering information on competitors easy. With a click of a button, analysts can discover future trends and market requirements. However, competitive intelligence is much more than this, as the ultimate aim is to lead to competitive advantage. As the Internet is mostly public domain material, information gathered there is less likely to result in insights that are unique to the company. In fact, there is a risk that information gathered from the Internet will be misinformation and mislead users, so competitive intelligence researchers are often wary of using such information.

As a result, although the Internet is viewed as a key source, most CI professionals spend much of their time and budget gathering intelligence using primary research: networking with industry experts, attending trade shows and conferences, and drawing on their own customers and suppliers, and so on. Where the Internet is used, it is to gather sources for primary research as well as information on what the company says about itself and its online presence (in the form of links to other companies, its strategy regarding search engines and online advertising, mentions in discussion forums and on blogs, etc.). Also important are online subscription databases and news aggregation sources, which have simplified the secondary source collection process. Social media sources are also becoming important, providing potential interviewee names, as well as opinions and attitudes, and sometimes breaking news (e.g. via Twitter).

Organizations must be careful not to spend too much time and effort on old competitors while overlooking the emergence of new ones. Knowing more about competitors allows a business to grow and succeed. The practice of competitive intelligence is growing every year, and most companies and business students now realize the importance of knowing their competitors.

According to Arjan Singh and Andrew Beurschgens in their 2006 article in the Competitive Intelligence Review, there are four stages in the development of a competitive intelligence capability within a firm. It starts at "stick fetching", where a CI department is very reactive, and ends at "world class", where CI is completely integrated in the decision-making process.

[edit] Distinguishing competitive intelligence from similar fields

Competitive intelligence depends on the intelligence cycle, which is the basic principle of national intelligence activity. The website of the CIA[26] provides a comprehensive explanation of this key principle. It is a five-step process aimed at creating value from the intelligence activity, mainly for decision-makers. It took CI a few years to recognize that operating in the business field, in order to provide the corporation with a better understanding of external threats and opportunities,[27] involves numerous constraints, mainly ethical and legal, which are obviously less relevant when operating for governments. This process of CI emerging since the 1980s and building up its strengths is described by Prescott.[28] Competitive intelligence is often confused with, or viewed as having overlapping elements with, related fields like market research, environmental scanning, business intelligence, and marketing research, just to name a few.[29] Some have questioned whether the name "competitive intelligence" is even a satisfactory one to apply to the field.[29] In a 2003 book chapter, Fleisher compares and contrasts competitive intelligence to business intelligence, competitor intelligence, knowledge management, market intelligence, marketing research, and strategic intelligence.[30]

The argument put forth by former SCIP President and CI author Craig Fleisher[30][verification needed] suggests that business intelligence has two forms. In its narrower (contemporary) form it has more of an information technology and internal focus than competitive intelligence, while its broader (historical) definition is actually more encompassing than the contemporary practice of CI. Knowledge management (KM), when it is not properly implemented (it needs an appropriate taxonomy to reach the best standards in the domain), is also viewed as a heavily information-technology-driven organizational practice that relies on data mining, corporate intranets, and the mapping of organizational assets, among other things, in order to make knowledge accessible to organizational members for decision making. CI shares some aspects with well-practised KM, which is ideally grounded in human intelligence and experience and supports more sophisticated qualitative analysis, creativity, and forward-looking views. KM is essential for effective innovation.

Market intelligence (MI) is industry-targeted intelligence that is developed on real-time (i.e., dynamic) aspects of competitive events taking place among the 4Ps of the marketing mix (i.e., pricing, place, promotion, and product) in the product or service marketplace in order to better understand the attractiveness of the market.[31] As a time-based competitive tactic, MI insights are used by marketing and sales managers to hone their marketing efforts so as to respond more quickly to consumers in a fast-moving, vertical (i.e., industry) marketplace. Craig Fleisher suggests it is not distributed as widely as some forms of CI, which are distributed to other (non-marketing) decision-makers as well.[30][verification needed] Market intelligence also has a shorter-term time horizon than many other intelligence areas and is usually measured in days, weeks, or, in some slower-moving industries, a handful of months.

Marketing research is a tactical, methods-driven field that consists mainly of neutral primary research that draws on customer data in the form of beliefs and perceptions as gathered through surveys or focus groups, and is analyzed through the application of statistical research techniques.[32] In contrast, CI typically draws on a wider variety (i.e., both primary and secondary) of sources, from a wider range of stakeholders (e.g., suppliers, competitors, distributors, substitutes, media, and so on), and seeks not just to answer existing questions but also to raise new ones and to guide action.[30][verification needed]

In the 2001 article by Ben Gilad and Jan Herring, the authors lay down a set of basic prerequisites that define the unique nature of CI and distinguish it from other information-rich disciplines such as market research or business development. They show that a common body of knowledge and a unique set of applied tools (Key Intelligence Topics, Business War Games, Blindspots analysis) make CI clearly different, and that while other sensory activities in the commercial firm focus on one category of players in the market (customers or suppliers or acquisition targets), CI is the only integrative discipline calling for a synthesis of the data on all High Impact Players (HIP).[17]

In a later article,[2] Gilad focuses his delineation of CI more forcefully on the difference between information and intelligence. According to Gilad, the commonality among many organizational sensory functions, whether called market research, business intelligence or market intelligence, is that in practice they deliver facts and information, not intelligence. Intelligence, in Gilad's view, is a perspective on facts, not the facts themselves. Uniquely among corporate functions, competitive intelligence has a specific perspective on external risks and opportunities to the firm's overall performance, and as such it is part of an organization's risk management activity, not its information activities.

[edit] Ethics

Ethics has been a long-held issue of discussion amongst CI practitioners.[29] Essentially, the questions revolve around what is and is not allowable in terms of CI practitioners' activity. A number of excellent scholarly treatments have been generated on this topic, most prominently addressed through Society of Competitive Intelligence Professionals publications.[33] The book Competitive Intelligence Ethics: Navigating the Gray Zone provides nearly twenty separate views about ethics in CI, as well as another ten codes used by various individuals or organizations.[33] Combining that with the over two dozen scholarly articles or studies found within the various CI bibliographic entries,[34][verification needed][5][6][35] it is clear that no shortage of study has gone into better classifying, understanding and addressing CI ethics.

Competitive information may be obtained from public or subscription sources, from networking with competitor staff or customers, or from field research interviews. Competitive intelligence research is distinguishable from industrial espionage, as CI practitioners generally abide by local legal guidelines and ethical business norms.[36]

Six Forces Model
From Wikipedia, the free encyclopedia

The Six Forces Model is a market opportunities analysis model that extends Porter's Five Forces Model and is considered more robust than a standard SWOT analysis.

The following forces are identified: competition, new entrants, end users/buyers, suppliers, substitutes, and complementary products / the government / the public.

[edit] Criticisms of the five forces model

Porter's framework has been challenged by other academics and strategists, such as Stewart Neill. Kevin P. Coyne and Somu Subramaniam have stated that three dubious assumptions underlie the five forces: that buyers, competitors, and suppliers are unrelated and do not interact or collude; that the source of value is structural advantage (creating barriers to entry); and that uncertainty is low, allowing participants in a market to plan for and respond to competitive behavior.

An important extension to Porter was found in the work of Brandenburger and Nalebuff in the mid-1990s. Using game theory, they added the concept of complementors (also called "the sixth force"), helping to explain the reasoning behind strategic alliances. The idea that complementors are the sixth force has often been credited to Andrew Grove, former CEO of Intel Corporation. According to most references, the sixth force is the government or the public. Martyn Richard Jones, whilst consulting at Groupe Bull, developed an augmented five forces model in Scotland in 1993. It is based on Porter's model and includes Government (national and regional) as well as Pressure Groups as the notional sixth force. This model was the result of work carried out as part of Groupe Bull's Knowledge Asset Management Organisation initiative.

It is also perhaps not feasible to evaluate the attractiveness of an industry independently of the resources a firm brings to that industry. It is thus argued that this theory be coupled with the resource-based view (RBV) in order for the firm to develop a much sounder strategy.

Context analysis
From Wikipedia, the free encyclopedia

Context analysis is a method to analyze the environment in which a business operates. Environmental scanning mainly focuses on the macro environment of a business. But context analysis considers the entire environment of a business, its internal and external environment. This is an important aspect of business planning. One kind of context analysis, called SWOT analysis, allows the business to gain an insight into their strengths and weaknesses and also the opportunities and threats posed by the market within which they operate. The main goal of a context analysis, SWOT or otherwise, is to analyze the environment in order to develop a strategic plan of action for the business.

Context analysis also refers to a method of sociological analysis associated with Scheflen (1963), which holds that 'a given act, be it a glance at [another] person, a shift in posture, or a remark about the weather, has no intrinsic meaning. Such acts can only be understood when taken in relation to one another.' (Kendon, 1990: 16). This is not discussed here; only context analysis in the business sense is.

Contents [hide] 1 Method 2 Define market or subject 3 Trend Analysis 4 Competitor Analysis 4.1 Competition levels 4.2 Competitive forces

4.3 Competitor behavior 4.4 Competitor strategy 5 Opportunities and Threats 6 Organization Analysis 6.1 Internal analysis 6.2 Competence analysis 7 SWOT-i matrix 8 Strategic Plan 9 Example 9.1 Define market 9.2 Trend Analysis 9.3 Competitor Analysis 9.4 Opportunities and Threats 9.5 Organization analysis 9.6 SWOT-i matrix 9.7 Strategic Plan 10 See also 11 References 12 External links

[edit] Method

There is a sequence of activities involved in conducting context analysis. The process (activities) of the method mainly consist of three analyses on different organizational levels: trend analysis (macro environment), competitor analysis (meso environment) and organization analysis (micro environment). These activities are described in the table below and are further elaborated in the next section.

Activities

Activity: Define market
Description: This refers to having a concrete description of which market you are going to analyze.

Activity: Trend Analysis
Sub-activity: Political trend analysis - Determine those political factors/changes that can have an impact on the organization.
Sub-activity: Economical trend analysis - Identify economical factors/trends that can have an impact on the organization.
Sub-activity: Social trend analysis - Identify social factors/trends that can have an impact on the organization.
Sub-activity: Technological trend analysis - Identify technological factors/trends that have an impact on the organization.
Sub-activity: Demographic trend analysis - Identify those demographic factors/trends that have an impact on the organization.

Activity: Competitor Analysis
Sub-activity: Determine competition levels - Determine, for each of the four competition levels, how the organization competes against its competition.
Sub-activity: Analyze competitive forces - For each competitive force, determine the level of competition within the industry.
Sub-activity: Analyze competitor behavior - Analyze the competition's offensive and defensive tactics.
Sub-activity: Determine competitor strategies - Determine, out of the two strategies (low-cost and differentiation), how to compete with the competition.

Activity: Define opportunities and threats
Description: Based on the trend analysis and the competitor analysis, determine the opportunities and threats the organization faces with regard to the market.

Activity: Organization Analysis
Sub-activity: Conduct internal analysis - Analyze the internal environment of the organization; identify the organization's strengths and weaknesses.
Sub-activity: Conduct competence analysis - Analyze the organization and identify its competences.

Activity: Create SWOT-i Matrix
Description: Create a matrix which depicts the strengths, weaknesses, opportunities and threats identified previously.

Activity: Develop strategic plan
Description: Based on the SWOT-i matrix, compare the strengths, weaknesses, opportunities and threats identified with the identified competences and determine a strategic plan.

As previously mentioned, the ultimate goal of this method is to devise a strategic plan. The strategic plan is composed of three sets of output data resulting from the three main analysis activities: trend analysis, competitor analysis and organization analysis. These are further subdivided into individual data outputs corresponding with the activity steps of the method. The following table provides a description of all the data resulting from the method.

Data and definitions (source)

TREND ANALYSIS: Analysis of the trends that can be of influence to an organization. This analysis aids organizations in making timely decisions about their activities and organization. Trend analysis consists of political, economical, social, technological and demographic trend analysis. (Van der Meer, 2005)
POLITICAL TREND: Political trends are short and long term changes in governmental policies. (Van der Meer, 2005)
ECONOMICAL TREND: Economical trends are changes in, for example, the rise or fall of prosperity and spending of consumers, globalization, etc. (Van der Meer, 2005)
TECHNOLOGICAL TREND: Technological trends are changes due to ever-changing technological developments. The main issue here is for the organization to determine how it can take advantage of them to make money. (Van der Meer, 2005)
SOCIAL TREND: Social trends are changes in what is or is not important to people within the society. (Van der Meer, 2005)
DEMOGRAPHIC TREND: Demographic trends are changes in the population, for example its size, age groups, religion, salaries, etc. An organization needs to pay attention to these changes because they can affect demand. (Van der Meer, 2005)
COMPETITOR ANALYSIS: It is important for an organization to know who the competition is, how they operate and how powerful they are in order to survive in a particular market. (Van der Meer, 2005)
COMPETITION LEVEL: Companies compete on several levels, such as on the basis of the needs of consumers, general competition, product competition and brand competition. The organization should concentrate on all four levels to be able to understand the demand. (Van der Meer, 2005)
CONSUMER NEEDS: This level of competition refers to the needs and desires of consumers. A company should ask: what are the desires of the consumers?
GENERAL COMPETITION: This level of competition refers to the kind of demand consumers have (for example: do consumers prefer shaving with an electric razor or a razor blade?).
PRODUCT: This level refers to the type of demand. Thus, what types of products do consumers prefer?
BRAND: This level refers to brand competition. Which brands are preferable to a consumer?
COMPETITIVE FORCE: Forces that determine the organization's level of competition within a particular market. There are six forces that have to be taken into consideration: power of the competition, threat of new entrants, bargaining power of buyers and suppliers, threat of substitute products and the importance of complementary products. (Van der Meer, 2005)
COMPETITION POWER: Competition power refers to identifying who your direct competitors are.
NEW ENTRANTS: Regarding this competitive force, an organization should ask itself: how easy is it for a newcomer to enter the market, and are there already newcomers who have entered?
BARGAINING POWER OF BUYERS: The bargaining power of buyers refers to how much influence the company has on the buyers. Can it persuade the buyers to do business with it?
BARGAINING POWER OF SUPPLIERS: The bargaining power of suppliers is how much influence a supplier has over a company.
COMPLEMENTARY PRODUCTS: Complementary products are products or services that can diminish the demand for a company's products and services.
SUBSTITUTE PRODUCTS: A company should answer the following: which products can potentially be used instead of ours?
COMPETITOR BEHAVIOR: Refers to the defensive and offensive actions of the competition. (Van der Meer, 2005)
COMPETITOR STRATEGY: These strategies refer to how an organization competes with other organizations. There are only two: low price strategy and product differentiation. (Van der Meer, 2005)
OPPORTUNITIES AND THREATS: These refer to the opportunities and threats of the organization with regard to the market.
ORGANIZATION ANALYSIS: This analysis refers to which knowledge and skills are present within an organization. (Van der Meer, 2005)
STRENGTH: Factors within an organization that result in a market advantage. (Van der Meer, 2005)
WEAKNESS: Aspects that are needed in the market but with which the organization is unable to comply. (Van der Meer, 2005)
COMPETENCE: The combination of knowledge, skills and technology that an organization has or still has to achieve. (Van der Meer, 2005)
SWOT-i MATRIX: A description of the strengths, weaknesses, opportunities and threats of an organization. The matrix can be used to determine how to use the organization's strengths to exploit the opportunities in the market, and how to address its weaknesses and defend itself against threats in the market. (Ward & Peppard, 2002)
STRATEGIC PLAN: This is a strategic plan of action for the organization as a result of conducting context analysis. The trend and competitor analysis give insight into the opportunities and threats in the market, and the internal analysis gives insight into the competences of the organization. By combining these competences with the opportunities and threats, a strategic plan can be developed. (Van der Meer, 2005)
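As a rough illustration only (not part of the method itself), the data outputs above could be captured as simple data structures; the class and field names in the sketch below are invented for this example.

# Minimal sketch, assuming invented names, of the data outputs of a context analysis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrendAnalysis:
    political: List[str] = field(default_factory=list)
    economical: List[str] = field(default_factory=list)
    social: List[str] = field(default_factory=list)
    technological: List[str] = field(default_factory=list)
    demographic: List[str] = field(default_factory=list)

@dataclass
class CompetitorAnalysis:
    competition_levels: List[str] = field(default_factory=list)  # consumer needs, general, product, brand
    competitive_forces: List[str] = field(default_factory=list)  # the six forces described above
    behavior: str = ""                                           # defensive and offensive actions
    strategy: str = ""                                           # "low price" or "product differentiation"

@dataclass
class OrganizationAnalysis:
    strengths: List[str] = field(default_factory=list)
    weaknesses: List[str] = field(default_factory=list)
    competences: List[str] = field(default_factory=list)

@dataclass
class ContextAnalysis:
    market: str
    trends: TrendAnalysis
    competitors: CompetitorAnalysis
    organization: OrganizationAnalysis
    opportunities: List[str] = field(default_factory=list)
    threats: List[str] = field(default_factory=list)
    strategic_plan: List[str] = field(default_factory=list)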

[edit] Define market or subject

The first step of the method is to define a particular market (or subject) one wishes to analyze and focus all analysis techniques on what was defined. A subject, for example, can be a newly proposed product idea.

[edit] Trend Analysis

The next step of the method is to conduct a trend analysis. Trend analysis is an analysis of macro-environmental factors in the external environment of a business, also called PEST analysis. It consists of analyzing political, economical, social, technological and demographic trends. This can be done by first determining which factors, on each level, are relevant for the chosen subject, and by scoring each item to specify its importance. This allows the business to identify those factors that can influence it. The business cannot control these factors, but it can try to cope with them by adapting itself. The trends (factors) that are addressed in PEST analysis are political, economical, social and technological; for context analysis, demographic trends are also of importance. Demographic trends are those factors that have to do with the population, for example average age, religion, education, etc. Demographic information is of importance if, for example during market research, a business wants to determine a particular market segment to target. The other trends are described in environmental scanning and PEST analysis. Trend analysis only covers part of the external environment. Another important aspect of the external environment that a business should consider is its competition. This is the next step of the method, competitor analysis.
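A minimal sketch of the scoring step described above, assuming a simple 1-5 importance scale; the factors, scores and threshold are illustrative and not prescribed by the method.

# Illustrative only: macro-environmental factors scored for importance (1-5);
# the factor names and scores below are invented for this sketch.
pestd_factors = {
    "political":     {"intellectual property rights": 3},
    "economical":    {"economic growth": 4},
    "social":        {"pressure to reduce operational costs": 5},
    "technological": {"rise of web applications": 4},
    "demographic":   {"increase in IT graduates": 2},
}

# Keep only the factors scored at or above a chosen importance threshold.
THRESHOLD = 3
relevant_trends = {
    level: {name: score for name, score in factors.items() if score >= THRESHOLD}
    for level, factors in pestd_factors.items()
}
print(relevant_trends)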

[edit] Competitor Analysis

As one can imagine, it is important for a business to know who its competition is, how they do their business and how powerful they are, so that the business can act on both defense and offense. In competitor analysis, a couple of techniques for conducting such an analysis are introduced. Another technique, described here, involves conducting four sub-analyses: determination of competition levels, competitive forces, competitor behavior and competitor strategy.

[edit] Competition levels

Businesses compete on several levels, and it is important for them to analyze these levels so that they can understand the demand. Competition is identified on four levels:
Consumer needs: the level of competition that refers to the needs and desires of consumers. A business should ask: what are the desires of the consumers?
General competition: the kind of consumer demand. For example: do consumers prefer shaving with an electric razor or a razor blade?
Brand: this level refers to brand competition. Which brands are preferable to a consumer?
Product: this level refers to the type of demand. Thus, what types of products do consumers prefer?

Another important aspect of a competition analysis is to increase consumer insight. For example, Ducati has, by interviewing many of its customers, concluded that its main competitor is not another motorcycle, but sports cars like Porsche or GM. This will of course influence the competition level within this business.

[edit] Competitive forces

These are forces that determine the level of competition within a particular market. There are six forces that have to be taken into consideration: power of the competition, threat of new entrants, bargaining power of buyers and suppliers, threat of substitute products and the importance of complementary products. This analysis is described in Porter's five forces analysis.

[edit] Competitor behavior

Competitor behaviors are the defensive and offensive actions of the competition.

[edit] Competitor strategy

These strategies refer to how an organization competes with other organizations. There are only two: low price strategy and product differentiation strategy.

[edit] Opportunities and Threats

The next step, after the trend analysis and competitor analysis are conducted, is to determine the threats and opportunities posed by the market. The trend analysis revealed a set of trends that can influence the business in either a positive or a negative manner, and these can thus be classified as either opportunities or threats. Likewise, the competitor analysis revealed positive and negative competition issues that can be classified as opportunities or threats.

[edit] Organization Analysis

The last phase of the method is an analysis of the internal environment of the organization, thus the organization itself. The aim is to determine which skills, knowledge and technological fortes the business possesses. This entails conducting an internal analysis and a competence analysis.

[edit] Internal analysis

The internal analysis, also called SWOT analysis, involves identifying the organization's strengths and weaknesses. The strengths refer to factors that can result in a market advantage, and weaknesses to factors that give a disadvantage because the business is unable to comply with the market's needs.

[edit] Competence analysis

Competences are the combination of a business's knowledge, skills and technology that can give it an edge over the competition. Conducting such an analysis involves identifying market-related competences, integrity-related competences and functional-related competences.

[edit] SWOT-i matrix

The previous sections described the major steps involved in context analysis. All these steps resulted in data that can be used for developing a strategy. These are summarized in a SWOT-i matrix. The trend and competitor analysis revealed the opportunities and threats posed by the market. The organization analysis revealed the competences of the organization and also its strengths and weaknesses. These strengths, weaknesses, opportunities and threats summarize the entire context analysis. A SWOT-i matrix is used to depict these and to help visualize the strategies that are to be devised. SWOT-i stands for Strengths, Weaknesses, Opportunities, Threats and Issues. The issues refer to strategic issues that will be used to devise a strategic plan. In the matrix, Opportunities (O1, O2, ..., On) and Threats (T1, T2, ..., Tn) form the columns, while Strengths (S1, S2, ..., Sn) and Weaknesses (W1, W2, ..., Wn) form the rows; the cells contain the combinations S1O1 ... SnOn, S1T1 ... SnTn, W1O1 ... WnOn and W1T1 ... WnTn.
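As a rough illustration only (not part of the original method description), the cross-combinations in such a matrix can be generated mechanically; the placeholder labels in the sketch below stand in for real strengths, weaknesses, opportunities and threats.

# Minimal sketch of assembling SWOT-i combinations; the labels are placeholders.
from itertools import product

strengths     = ["S1", "S2"]
weaknesses    = ["W1", "W2"]
opportunities = ["O1", "O2"]
threats       = ["T1", "T2"]

# Each internal factor is paired with each external factor, yielding the four
# clusters discussed next; every pair is a candidate strategic issue.
swot_i = {
    "strengths-opportunities":  list(product(strengths, opportunities)),
    "strengths-threats":        list(product(strengths, threats)),
    "weaknesses-opportunities": list(product(weaknesses, opportunities)),
    "weaknesses-threats":       list(product(weaknesses, threats)),
}

for cluster, pairs in swot_i.items():
    print(cluster, pairs)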

This matrix combines the strengths with the opportunities and threats, and the weaknesses with the opportunities and threats that were identified during the analysis. Thus the matrix reveals four clusters:
Cluster strengths and opportunities: use strengths to take advantage of opportunities.
Cluster strengths and threats: use strengths to overcome the threats.
Cluster weaknesses and opportunities: certain weaknesses hamper the organization from taking advantage of opportunities, so it has to look for a way to turn those weaknesses around.
Cluster weaknesses and threats: there is no way that the organization can overcome the threats without having to make major changes.

[edit] Strategic Plan

The ultimate goal of context analysis is to develop a strategic plan. The previous sections described all the steps that form the stepping stones to developing a strategic plan of action for the organization. The trend and competitor analysis give insight into the opportunities and threats in the market, and the internal analysis gives insight into the competences of the organization. These were combined in the SWOT-i matrix. The SWOT-i matrix helps identify issues that need to be dealt with. These issues need to be resolved by formulating an objective and a plan to reach that objective: a strategy.

[edit] Example

Joe Arden is in the process of writing a business plan for his business idea, Arden Systems. Arden Systems will be a software business that focuses on the development of software for small businesses. Joe realizes that this is a tough market, because there are many software companies that develop business software. Therefore, he conducts a context analysis to gain insight into the environment of the business in order to develop a strategic plan of action to achieve competitive advantage within the market.

[edit] Define market

First step is to define a market for analysis. Joe decides that he wants to focus on small businesses consisting of at most 20 employees.

[edit] Trend Analysis

The next step is to conduct a trend analysis. The macro-environmental factors that Joe should take into consideration are as follows:
Political trend: intellectual property rights
Economical trend: economic growth
Social trend: pressure to reduce operational costs; ease of conducting business administration
Technological trend: software suites; web applications
Demographic trend: increase in the number of graduates of IT-related studies

[edit] Competitor Analysis

Following the trend analysis is the competitor analysis. Joe analyzes the competition on four levels to gain insight into how they operate and where advantages lie.
Competition level:
Consumer need: Arden Systems will be competing on the fact that consumers want efficient and effective conduct of their business.
Brand: there are software businesses that have been making business software for a while and thus have become very popular in the market. Competing based on brand will be difficult.
Product: Arden's products will be packaged software like those of the major competition.
Competitive forces (those that can affect Arden Systems in particular):
The bargaining power of buyers: the extent to which they can switch from one product to the other.
Threat of new entrants: it is very easy for someone to develop a new software product that can be better than Arden's.
Power of competition: the market leaders have most of the cash and customers; they have the power to mold the market.
Competitor behavior: the focus of the competition is to take over the position of the market leader.

Competitor strategy: Joe intends to compete based on product differentiation.

[edit] Opportunities and Threats

Now that Joe has analyzed the competition and the trends in the market, he can define opportunities and threats.
Opportunities:
Because the competitors focus on taking over the leadership position, Arden can focus on those segments of the market that the market leader ignores. This allows it to take over where the market leader shows weakness.
Because there are new IT graduates, Arden can employ or partner with someone who may have a brilliant idea.
Threats:
IT graduates with fresh ideas can start their own software businesses and form major competition for Arden Systems.

[edit] Organization analysis

After Joe has identified the opportunities and threats of the market, he can try to figure out what Arden Systems' strengths and weaknesses are by doing an organization analysis.
Internal analysis:
Strength: product differentiation
Weakness: lacks innovative people within the organization
Competence analysis:
Functional-related competence: Arden Systems provides system functionalities that fit small businesses.
Market-related competence: Arden Systems has the opportunity to focus on a part of the market which is ignored.

[edit] SWOT-i matrix

After the previous analyses, Joe can create a SWOT-i matrix to perform a SWOT analysis. In this example only two cells of the matrix are filled:
Strengths and opportunities: product differentiation; the market leader ignores a market segment.
Weaknesses and threats: lack of innovation; increase in IT graduates.

[edit] Strategic Plan

After creating the SWOT-i matrix, Joe is now able to devise a strategic plan:
Focus all software development efforts on that part of the market which is ignored by the market leaders: small businesses.
Employ recent, innovative IT graduates to stimulate innovation within Arden Systems.

SWOT analysis
From Wikipedia, the free encyclopedia
For other uses, see SWOT.

SWOT analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture. It involves specifying the objective of the business venture or project and identifying the internal and external factors that are favorable and unfavorable to achieve that objective. The technique is credited to Albert Humphrey, who led a convention at Stanford University in the 1960s and 1970s using data from Fortune 500 companies.

A SWOT analysis must first start with defining a desired end state or objective. A SWOT analysis may be incorporated into the strategic planning model. Strategic planning has been the subject of much research.[citation needed]
Strengths: characteristics of the business or team that give it an advantage over others in the industry.
Weaknesses: characteristics that place the firm at a disadvantage relative to others.

Opportunities: external chances to make greater sales or profits in the environment.
Threats: external elements in the environment that could cause trouble for the business.

Identification of SWOTs is essential because subsequent steps in the process of planning for achievement of the selected objective may be derived from the SWOTs.

First, the decision makers have to determine whether the objective is attainable, given the SWOTs. If the objective is NOT attainable a different objective must be selected and the process repeated.

The SWOT analysis is often used in academia to highlight and identify strengths, weaknesses, opportunities and threats.[citation needed] It is particularly helpful in identifying areas for development.[citation needed]

Contents [hide] 1 Matching and converting 1.1 Evidence on the use of SWOT 2 Internal and external factors 3 Use of SWOT analysis 4 SWOT - landscape analysis 5 Corporate planning 5.1 Marketing 6 See also 7 References 8 External links

[edit] Matching and converting

Another way of utilizing SWOT is matching and converting.

Matching is used to find competitive advantages by matching the strengths to opportunities.

Converting is to apply conversion strategies to convert weaknesses or threats into strengths or opportunities.

An example of conversion strategy is to find new markets.

If the threats or weaknesses cannot be converted, a company should try to minimize or avoid them.[1]

[edit] Evidence on the use of SWOT

SWOT analysis may limit the strategies considered in the evaluation. J. Scott Armstrong notes that "people who use SWOT might conclude that they have done an adequate job of planning and ignore such sensible things as defining the firm's objectives or calculating ROI for alternate strategies."[2] Findings from Menon et al. (1999)[3] and Hill and Westbrook (1997)[4] have shown that SWOT may harm performance. As an alternative to SWOT, Armstrong describes a five-step approach that leads to better corporate performance.[5]

[edit] Internal and external factors

The aim of any SWOT analysis is to identify the key internal and external factors that are important to achieving the objective. These come from within the company's unique value chain. SWOT analysis groups key pieces of information into two main categories:
Internal factors: the strengths and weaknesses internal to the organization.
External factors: the opportunities and threats presented by the external environment to the organization.

The internal factors may be viewed as strengths or weaknesses depending upon their impact on the organization's objectives. What may represent strengths with respect to one objective may be weaknesses for another objective. The factors may include all of the 4P's; as well as personnel, finance, manufacturing capabilities, and so on. The external factors may include macroeconomic matters, technological change, legislation, and socio-cultural changes, as well as changes in the marketplace or competitive position. The results are often presented in the form of a matrix.

SWOT analysis is just one method of categorization and has its own weaknesses. For example, it may tend to persuade companies to compile lists rather than think about what is actually important in achieving objectives. It also presents the resulting lists uncritically and without clear prioritization so that, for example, weak opportunities may appear to balance strong threats.

It is prudent not to eliminate any candidate SWOT entry too quickly. The importance of individual SWOTs will be revealed by the value of the strategies they generate. A SWOT item that produces valuable strategies is important. A SWOT item that generates no strategies is not important.

[edit] Use of SWOT analysis

The usefulness of SWOT analysis is not limited to profit-seeking organizations. SWOT analysis may be used in any decision-making situation when a desired end-state (objective) has been defined. Examples include: non-profit organizations, governmental units, and individuals. SWOT analysis may also be used in pre-crisis planning and preventive crisis management. SWOT analysis may also be used in creating a recommendation during a viability study/survey.

[edit] SWOT - landscape analysis

The SWOT-landscape systematically deploys the relationships between overall objective and underlying SWOT-factors and provides an interactive, query-able 3D landscape.

The SWOT-landscape captures different managerial situations by visualizing and foreseeing the dynamic performance of comparable objects, according to findings by Brendan Kitts, Leif Edvinsson and Tord Beding (2000).[6]

Changes in relative performance are continually identified. Projects (or other units of measurements) that could be potential risk or opportunity objects are highlighted.

SWOT-landscape also indicates which underlying strength/weakness factors have had, or are likely to have, the highest influence in the context of value in use (for example, capital value fluctuations).

[edit] Corporate planning

As part of the development of strategies and plans to enable the organization to achieve its objectives, the organization will use a systematic/rigorous process known as corporate planning. SWOT, alongside PEST/PESTLE, can be used as a basis for the analysis of business and environmental factors.[7]
Set objectives: defining what the organization is going to do.
Environmental scanning: internal appraisals of the organization's SWOT; this needs to include an assessment of the present situation as well as a portfolio of products/services and an analysis of the product/service life cycle.
Analysis of existing strategies: this should determine relevance from the results of an internal/external appraisal. This may include gap analysis, which will look at environmental factors.
Strategic issues defined: key factors in the development of a corporate plan which need to be addressed by the organization.
Develop new/revised strategies: revised analysis of strategic issues may mean the objectives need to change.
Establish critical success factors: the achievement of objectives and strategy implementation.
Preparation of operational, resource and project plans for strategy implementation.
Monitoring results: mapping against plans, taking corrective action, which may mean amending objectives/strategies.[8]

[edit] Marketing
Main article: Marketing management

In many competitor analyses, marketers build detailed profiles of each competitor in the market, focusing especially on their relative competitive strengths and weaknesses using SWOT analysis. Marketing managers will examine each competitor's cost structure, sources of profits, resources and competencies, competitive positioning and product differentiation, degree of vertical integration, historical responses to industry developments, and other factors.

Marketing management often finds it necessary to invest in research to collect the data required to perform accurate marketing analysis. Accordingly, management often conducts market research (alternately marketing research) to obtain this information. Marketers employ a variety of techniques to conduct market research, but some of the more common include:
Qualitative marketing research, such as focus groups
Quantitative marketing research, such as statistical surveys
Experimental techniques, such as test markets
Observational techniques, such as ethnographic (on-site) observation
Marketing managers may also design and oversee various environmental scanning and competitive intelligence processes to help identify trends and inform the company's marketing analysis.

Using SWOT to analyse the market position of a small management consultancy with a specialism in HRM:[8]
Strengths: reputation in the marketplace; expertise at partner level in HRM consultancy.
Weaknesses: shortage of consultants at operating level rather than partner level; unable to deal with multi-disciplinary assignments because of size or lack of ability.
Opportunities: well established position with a well defined market niche; identified market for consultancy in areas other than HRM.
Threats: large consultancies operating at a minor level; other small consultancies looking to invade the marketplace.

PEST analysis
From Wikipedia, the free encyclopedia
(Redirected from PESTLE analysis)

PEST analysis stands for "Political, Economic, Social, and Technological analysis" and describes a framework of macro-environmental factors used in the environmental scanning component of strategic management. Some analysts added Legal and rearranged the mnemonic to SLEPT;[1] inserting Environmental factors expanded it to PESTEL or PESTLE, which is popular in the United Kingdom.[2] The model has recently been further extended to STEEPLE and STEEPLED, adding Ethics and Demographic factors. It is a part of the external analysis when conducting a strategic analysis or doing market research, and gives an overview of the different macro-environmental factors that the company has to take into consideration. It is a useful strategic tool for understanding market growth or decline, business position, potential and direction for operations. The growing importance of environmental or ecological factors in the first decade of the 21st century has given rise to green business and encouraged widespread use of an updated version of the PEST framework. STEER analysis systematically considers Socio-cultural, Technological, Economic, Ecological, and Regulatory factors.

Contents [hide] 1 Composition 2 Applicability of the Factors 3 Use of PEST analysis with other models 4 See also 5 References 6 External links

[edit] Composition
Political factors are how and to what degree a government intervenes in the economy. Specifically, political factors include areas such as tax policy, labour law, environmental law, trade restrictions, tariffs, and political stability. Political factors may also include goods and services which the government wants to provide or be provided (merit goods) and those that the government does not want to be provided (demerit goods or merit bads). Furthermore, governments have great influence on the health, education, and infrastructure of a nation.
Economic factors include economic growth, interest rates, exchange rates and the inflation rate. These factors have major impacts on how businesses operate and make decisions. For example, interest rates affect a firm's cost of capital and therefore to what extent a business grows and expands. Exchange rates affect the costs of exporting goods and the supply and price of imported goods in an economy.
Social factors include the cultural aspects, such as health consciousness, population growth rate, age distribution, career attitudes and emphasis on safety. Trends in social factors affect the demand for a company's products and how that company operates. For example, an aging population may imply a smaller and less-willing workforce (thus increasing the cost of labor). Furthermore, companies may change various management strategies to adapt to these social trends (such as recruiting older workers).
Technological factors include technological aspects such as R&D activity, automation, technology incentives and the rate of technological change. They can determine barriers to entry and minimum efficient production level, and influence outsourcing decisions. Furthermore, technological shifts can affect costs and quality, and lead to innovation.
Environmental factors include ecological and environmental aspects such as weather, climate, and climate change, which may especially affect industries such as tourism, farming, and insurance. Furthermore, growing awareness of the potential impacts of climate change is affecting how companies operate and the products they offer, both creating new markets and diminishing or destroying existing ones.
Legal factors include discrimination law, consumer law, antitrust law, employment law, and health and safety law. These factors can affect how a company operates, its costs, and the demand for its products.

[edit] Applicability of the Factors

The model's factors will vary in importance to a given company based on its industry and the goods it produces. For example, consumer and B2B companies tend to be more affected by the social factors, while a global defense contractor would tend to be more affected by political factors.[3] Additionally, factors that are more likely to change in the future or more relevant to a given company will carry greater importance. For example, a company which has borrowed heavily will need to focus more on the economic factors (especially interest rates).[4]

Furthermore, conglomerate companies that produce a wide range of products (such as Sony, Disney, or BP) may find it more useful to analyze one department of the company at a time with the PESTEL model, thus focusing on the specific factors relevant to that one department. A company may also wish to divide factors by geographical relevance, such as local, national, and global (also known as LoNGPESTEL).

[edit] Use of PEST analysis with other models

The PEST factors, combined with external micro-environmental factors and internal drivers, can be classified as opportunities and threats in a SWOT analysis.

Gap analysis
From Wikipedia, the free encyclopedia

In business and economics, gap analysis is a tool that helps companies compare actual performance with potential performance. At its core are two questions: "Where are we?" and "Where do we want to be?" If a company or organization does not make the best use of current resources, or forgoes investment in capital or technology, it may produce or perform below its potential. This concept is similar to the base case of being below the production possibilities frontier.

Gap analysis identifies gaps between the optimized allocation and integration of the inputs (resources), and the current allocation level. This reveals areas that can be improved. Gap analysis involves determining, documenting, and approving the variance between business requirements and current capabilities. Gap analysis naturally flows from benchmarking and other assessments. Once the general expectation of performance in the industry is understood, it is possible to compare that expectation with the company's current level of performance. This comparison becomes the gap analysis. Such analysis can be performed at the strategic or operational level of an organization.

Gap analysis is a formal study of what a business is doing currently and where it wants to go in the future. It can be conducted, from different perspectives, as follows:
Organization (e.g., human resources)
Business direction
Business processes
Information technology

Gap analysis provides a foundation for measuring the investment of time, money and human resources required to achieve a particular outcome (e.g. to turn the salary payment process from paper-based to paperless with the use of a system). Note that 'GAP analysis' has also been used as a means of classifying how well a product or solution meets a targeted need or set of requirements. In this case, 'GAP' can be used as a ranking of 'Good', 'Average' or 'Poor'. This terminology appears in the PRINCE2 project management publication from the OGC (Office of Government Commerce).

The need for new products or additions to existing lines may emerge from portfolio analysis, in particular from the use of the Boston Consulting Group Growth-share matrix, or the need may emerge from the regular process of following trends in the requirements of consumers. At some point, a gap emerges between what existing products offer and what the consumer demands. The organization must fill that gap to survive and grow.

Gap analysis can identify gaps in the market. Thus, comparing forecast profits to desired profits reveals the planning gap. This represents a goal for new activities in general, and new products in particular. The planning gap can be divided into three main elements:

Contents [hide] 1 Usage gap 2 Market potential 3 Existing usage 4 Product gap 5 Gap analyses to develop a better process 6 See also 7 Market gap analysis 8 References 9 External Links

[edit] Usage gap

This is the gap between the total potential for the market and actual current usage by all consumers in the market. Data for this calculation includes:

Market potential
Existing usage
Current industrial potential

[edit] Market potential

The maximum number of consumers available is usually determined by market research, but it may sometimes be calculated from demographic data or government statistics. Ultimately there are, of course, limitations on the number of consumers. For guidance one can look to the numbers who use similar products. Alternatively, one can look to what happened in other countries, allowing for the time lag before similar developments reach the local market.[citation needed] Increased affluence[citation needed] in all major Western economies means such a lag can now be much shorter.

[edit] Existing usage

Existing consumer usage makes up the total current market, from which market shares, for example, are calculated. It usually derives from marketing research, most accurately from panel research such as that conducted by the Nielsen Company, but also from ad hoc work. Sometimes it may be available from figures that governments or industries have collected. However, these are often based on categories that make bureaucratic sense but are less helpful in marketing terms. The 'usage gap' is thus:
usage gap = market potential - existing usage
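A small numerical sketch of this calculation, with figures invented purely for illustration:

# Worked example with invented figures: if research estimates a market
# potential of 1,000,000 consumers and current usage by 650,000 of them, then:
market_potential = 1_000_000
existing_usage = 650_000
usage_gap = market_potential - existing_usage
print(usage_gap)  # 350000 potential consumers not yet reached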

This is an important calculation. Many, if not most, marketers accept the existing market size, suitably projected over their forecast timescales, as the boundary for expansion plans. Though this is often the most realistic assumption, it may impose an unnecessary limit on horizons. For example: the original market for video recorders was limited to professional users who could afford the high prices. Only after some time did the technology extend to the mass market.

In the public sector, where service providers usually enjoy a monopoly, the usage gap is probably the most important factor in activity development. However, persuading more consumers to take up family benefits, for example, is probably more important to the relevant government department than opening more local offices.

Usage gap is most important for brand leaders. If a company has a significant share of the whole market, they may find it worthwhile to invest in making the market bigger. This option is not generally open to minor players, though they may still profit by targeting specific offerings as market extensions.

All other gaps relate to the difference between existing sales (market share) and the total sales of the market as a whole. The difference is the competitors' share. These gaps, therefore, relate to competitive activity.

[edit] Product gap

The product gap, also called the segment or positioning gap, is that part of the market a particular organization is excluded from because of product or service characteristics. This may be because the market is segmented and the organization does not have offerings in some segments, or because the organization positions its offerings in a way that effectively excludes certain potential consumers, because competitive offerings are much better placed for these consumers.

This segmentation may result from deliberate policy. Segmentation and positioning are powerful marketing techniques, but the trade-off, against better focus, is that some market segments may effectively be put beyond reach. On the other hand, a product gap can occur by default: the organization has not thought out its positioning, and its offerings have drifted to a particular market segment.

The product gap may be the main element of the planning gap where an organization can have productive input; hence the emphasis on the importance of correct positioning. [edit] Gap analyses to develop a better process

Gap analysis can also be used to analyse gaps in processes and the gap between the existing outcome and the desired outcome. The process can be summarised as follows:

Identify the existing process
Identify the existing outcome
Identify the desired outcome
Identify the process required to achieve the desired outcome
Document the gap
Develop the means to fill the gap

[edit] See also Capability (systems engineering) Gap analysis (conservation) [edit] Market gap analysis

In the type of analysis described above, gaps in the product range are looked for. Another perspective (essentially taking the "product gap" to its logical conclusion) is to look for gaps in the market (a variation on product positioning, using multidimensional mapping) which the company could profitably address, regardless of where its current products stand.

Many marketers would question the worth of the theoretical gap analysis described earlier. Instead, they would immediately and proactively pursue a search for competitive advantage.

Segmenting and positioning From Wikipedia, the free encyclopedia

A marketing strategy is based on expected customer behavior in a certain market. In order to know the customers and their expected buying behavior, the processes of segmenting and positioning are needed. These processes are chronological steps which depend on each other. The processes of market segmentation and positioning are described elsewhere in Wikipedia; this topic elaborates on the dependency and relationship between them.


[edit] The process-data model

Below a generic process-data model is given for the whole process of segmenting and positioning as a basis of deciding on the most effective marketing strategy and marketing mix.

This model consists of three main activities: segmenting, targeting and positioning. It shows the chronological dependency of the different activities. On the right side of the model, the concepts resulting from the activities are shown. The arrows show that a concept results from one or more previous concepts; a concept cannot be produced when the previous activities have not taken place. Below, the three main activities are briefly described, together with their role as a basis for the next step or their dependency on the previous step.

[edit] Segmenting

Segmenting is the process of dividing the market into segments based on customer characteristics and needs.

The main activity segmenting consists of four sub activities. These are:

1. determining who the actual and potential customers are

2. identifying segments

3. analyzing the intensity of competitors in the market

4. selecting the attractive customer segments.

The first, second and fourth steps are described as market segmentation; the third step, analyzing the intensity of competitors, is added to the segmenting process in this description. When different segments are identified, it does not follow that all of them are attractive to target. A company is almost never alone in a market -- competitors have a great influence on the attractiveness of entering a certain market. When the intensity of competition is high, it is hard to obtain a profitable market share and a company may decide not to enter a certain market. The third step of segmenting is the first part of the topic of competitor analysis.
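As an illustrative sketch of sub-activity 2 (identifying segments), and not a method prescribed by this article, customer records can be clustered on a few characteristics; the data and variable names below are invented, and the typical segmentation variables are discussed in the next paragraph.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# invented columns: age, annual purchases, price sensitivity
customers = rng.normal(loc=[35, 12, 0.5], scale=[10, 6, 0.2], size=(500, 3))

X = StandardScaler().fit_transform(customers)              # put variables on a common scale
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for s in range(3):
    members = customers[segments == s]
    print(f"segment {s}: n={len(members)}, mean profile={members.mean(axis=0).round(2)}")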

The need for segmenting a market is based on the fact that no market is homogeneous. For a single product the market can be divided into different customer groups. The variables used for this segmentation are usually geographic, psychographic, behavioral and demographic. The result is segments that are homogeneous within themselves and heterogeneous between each other. When these segments are known, it is important to decide which markets to target. Not every market is attractive enough to enter. Some filtering has been done in this activity, but there are more factors to take into account before targeting a certain market segment. This process is called targeting.

[edit] Targeting

After the most attractive segments are selected, a company should not directly start targeting all these segments -- other important factors come into play in defining a target market. Four sub activities form the basis for deciding on which segments will actually be targeted.

The four sub activities within targeting are:

1. defining the abilities of the company and resources needed to enter a market

2. analyzing competitors on their resources and skills

3. considering the company's abilities compared to the competitors' abilities

4. deciding on the actual target markets.

The first three sub-activities are described under the topic of competitor analysis. The last sub-activity, deciding on the actual target markets, compares the company's abilities with those of its competitors. The results of this analysis lead to a list of segments which are most attractive to target and which have a good chance of yielding a profitable market share.

Obviously, targeting can only be done when segments have been defined, as these segments allow firms to analyze the competitors in their market. When the targeting process is complete, the markets to target have been selected, but the way to use marketing in these markets is not yet defined. To decide on the actual marketing strategy, knowledge of the differential advantages of each segment is needed.

[edit] Positioning

When the list of target markets is made, a company might want to move directly to deciding on a good marketing mix. But an important step before developing the marketing mix is deciding on how to create an identity or image of the product in the mind of the customer. Every segment is different from the others, so different customers have different ideas of what to expect from the product. In the process of positioning the company:

1. identifies the differential advantages in each segment

2. decides on a different positioning concept for each of these segments. This process is described under the topic positioning, where different concepts of positioning are given.

The process-data model shows the concepts resulting from the different activities before and within positioning. The model shows how the predefined concepts are the basis for the positioning statement.

The analyses done of the market, competitors and abilities of the company are necessary to create a good positioning statement.

When the positioning statement is created, one can start on creating the marketing mix. [edit] B2C and B2B

The process described above can be used for both business-to-consumer and business-to-business marketing. Although most variables used in segmenting the market are based on customer characteristics, business characteristics can be described using those variables which do not depend on the type of buyer. There are, however, methods for creating a positioning statement for both B2C and B2B segments. One of these methods is MIPS: a method for managing industrial positioning strategies by Muhlbacher, Dreher and Gabriel-Ritter (1994).

Market research From Wikipedia, the free encyclopedia


Market research is any organized effort to gather information about markets or customers. It is a very important component of business strategy.[1] The term is commonly interchanged with marketing research; however, expert practitioners may wish to draw a distinction, in that marketing research is concerned specifically with marketing processes, while market research is concerned specifically with markets.[2]

Market research is a key factor in gaining an advantage over competitors. It provides important information to identify and analyze market needs, market size and competition.

Market research, as defined by the ICC/ESOMAR International Code on Market and Social Research, includes social and opinion research, [and] is the systematic gathering and interpretation of information about individuals or organizations using statistical and analytical methods and techniques of the applied social sciences to gain insight or support decision making.[3]

[edit] History

Market research began to be conceptualized and put into formal practice during the 1920s, as an offshoot of the advertising boom of the Golden Age of radio in the United States. Advertisers began to realize the significance of demographics revealed by sponsorship of different radio programs. [edit] Market research for business/planning

Market research is for discovering what people want, need, or believe. It can also involve discovering how they act. Once that research is completed, it can be used to determine how to market your product.

Questionnaires and focus group discussion surveys are some of the instruments for market research.

For starting up a business, some important kinds of information are the following.

Market information

Through market information one can know the prices of different commodities in the market, as well as the supply and demand situation. Information about markets can be obtained in different formats and from different sources and varieties, and the business needs to determine which of these it must obtain to make its plans work.

Market segmentation

Market segmentation is the division of the market or population into subgroups with similar motivations. It is widely used for segmenting on geographic differences, personality differences, demographic differences, technographic differences, use of product differences, psychographic differences and gender differences. For B2B segmentation firmographics is commonly used. Market trends

Market trends are the upward or downward movement of a market, during a period of time. The market size is more difficult to estimate if one is starting with something completely new. In this case, you will have to derive the figures from the number of potential customers, or customer segments. [Ilar 1998]

Besides information about the target market, one also needs information about one's competitors, customers, products, etc. Lastly, one needs to measure marketing effectiveness. A few techniques are:

Customer analysis
Choice modelling
Competitor analysis
Risk analysis
Product research
Advertising research
Marketing mix modeling

Choice modelling From Wikipedia, the free encyclopedia

Choice modelling attempts to model the decision process of an individual or segment in a particular context. Choice modelling may also be used to estimate non-market environmental benefits and costs[1].

Well specified choice models are sometimes able to predict with some accuracy how individuals would react in a particular situation. Unlike a poll or a survey, predictions are able to be made over large numbers of scenarios within a context, to the order of many trillions of possible scenarios.

Choice modelling is believed by some to be the most accurate and general-purpose tool currently available for making probabilistic predictions about certain human decision-making behavior.[citation needed] Many alternatives exist in econometrics, marketing, sociometrics and other fields, including utility maximization, optimization applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions.[2] The Nobel Prize in economics was awarded to a principal exponent of choice modelling theory, Daniel McFadden.[3]

[edit] Related terms for choice modelling

A number of terms exist that are either subsets of, part of the process or definition of, or overlap with other areas of econometrics that may be broadly termed Choice Modelling. As with any emerging technology, there are varying claims as to the correct lexicon.

These include:

Stated preference discrete choice modelling
Discrete choice
Choice experiment
Choice set
Conjoint analysis
Controlled experiments

[edit] Theoretical background

Modelling was developed in parallel by economists and cognitive psychologists. The origins of choice modeling can be traced to Thurstone's research into food preferences in the 1920s and to random utility theory.

To some degree, all decisions involve choice. Individuals choose among different alternatives; commuters choose between alternative routes and methods of transport, shoppers choose between competing products for their attributes such as price, quality and quantity.

Choice modelling posits that there is an underlying rational decision process behind human choice and that this process has a functional form. Depending on the behavioural context, a specific functional form may be selected as a candidate to model that behaviour. The multinomial logit (MNL) model form is commonly used, as it is a good approximation to the economic principle of utility maximisation: human beings strive to maximise their total utility. The multinomial logit form describes total utility as a linear addition (or subtraction) of the component utilities in a context. Once the functional form of the decision process has been established, the parameters of a specific model may be estimated from available data, using multiple regression in the case of MNL. Other functional forms may be used or combined, such as binary logit, probit or EBA, with appropriate statistical tests to determine the goodness of fit of the model to a hold-out data set.
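As a minimal sketch of the MNL form just described (not taken from the article itself; attribute values and part-worth coefficients are invented), total utility is a linear combination of attributes, and choice probabilities follow the logit form:

import numpy as np

# three hypothetical cars: [price in 10k units, performance index, is_german]
alternatives = np.array([
    [6.0, 0.9, 1],
    [2.5, 0.4, 0],
    [4.0, 0.7, 1],
])
beta = np.array([-0.8, 3.0, 0.5])               # assumed marginal utilities (part-worths)

utility = alternatives @ beta                   # linear-additive total utility per alternative
prob = np.exp(utility) / np.exp(utility).sum()  # MNL choice probabilities
print(prob.round(3))                            # the three probabilities sum to 1

[edit] Methods used in choice modeling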

Choice modeling comprises a number of specific techniques that contribute to its power. Some or all of these may be used in the construction of a Choice Model. [edit] Orthogonality

For model convergence, and therefore parameter estimation, it is often necessary that the data have little or no collinearity. The reasons for this have more to do with information theory than anything else. To understand why this is, take the following example:

Imagine a car dealership that sells both luxury cars and used low-end vehicles. Using the utility maximisation principle and an MNL model form, we hypothesise that the decision to buy a car from this dealership is the sum of the individual contribution of each of the following to the total utility:

Price
Marque (BMW, Chrysler, Mitsubishi)
Origin (German, American)
Performance

Using multinomial regression on the sales data, however, will not tell us what we want to know. The reason is that much of the data is collinear, since cars at this dealership are either:

high performance, expensive German cars, or
low performance, cheap American cars

There is not enough information, nor will there ever be enough, to tell us whether people are buying cars because they are European, because they are BMWs, or because they are high performance. The reason is that these three attributes always co-occur and, in this case, are perfectly correlated. That is: all the BMWs are made in Germany and are of high performance. These three attributes -- origin, marque and performance -- are said to be collinear or non-orthogonal.

These types of data -- the sales figures -- are known as revealed preference data, or RP data, because the data 'reveals' the underlying preference for cars: we can infer someone's preference through their actions, i.e. the car they actually bought. All data mining uses RP data. RP data is vulnerable to collinearity, since the data is effectively drawn from the wild world of reality. The presence of collinearity implies that there is missing information, as one or more of the collinear factors is redundant and adds no new information. The weakness of data mining is that the critical missing data that might explain choices is simply never observed.
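A quick way to see the problem in revealed preference data is to inspect the correlation between attributes before any model is fitted; the sketch below uses invented sales records that mirror the dealership example above.

import numpy as np

# invented rows: [price, is_bmw, is_german, performance] for cars actually sold
rp_data = np.array([
    [60, 1, 1, 0.90],
    [65, 1, 1, 0.95],
    [20, 0, 0, 0.30],
    [22, 0, 0, 0.35],
    [18, 0, 0, 0.25],
])

corr = np.corrcoef(rp_data, rowvar=False)
print(corr.round(2))
# is_bmw and is_german are perfectly correlated (1.0), so their separate effects
# cannot be identified from this data, however many rows are added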

We can ensure that attributes of interest are orthogonal by filtering the RP data to remove correlations. This may not always be possible; using stated preference methods, however, orthogonality can be ensured through appropriate construction of an experimental design.

[edit] Experimental design

In order to maximise the information collected in Stated Preference Experiments, an experimental design (below) is employed. An experimental design in a Choice Experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise.

It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near optimal experiments to be performed.

For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 (approximately 295 followed by eighteen zeros) configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents.

Below is an example of a much smaller design. This is a 3^4 main effects design:

0 0 0 0
0 1 1 2
0 2 2 1
1 0 1 1
1 1 2 0
1 2 0 2
2 0 2 2
2 1 0 1
2 2 1 0

This design would allow the estimation of main effects utilities from 81 (3^4) possible product configurations. A sample of around 20 respondents could model the main effects of all 81 possible product configurations with statistically significant results.
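The balance property that makes such a design useful can be checked directly; the sketch below verifies that, in the 3^4 design shown above, every level of every attribute appears equally often across the nine runs.

import numpy as np

design = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 2],
    [0, 2, 2, 1],
    [1, 0, 1, 1],
    [1, 1, 2, 0],
    [1, 2, 0, 2],
    [2, 0, 2, 2],
    [2, 1, 0, 1],
    [2, 2, 1, 0],
])

for attribute in range(design.shape[1]):
    levels, counts = np.unique(design[:, attribute], return_counts=True)
    print(f"attribute {attribute}: levels {levels.tolist()} appear {counts.tolist()} times")
# every level of every attribute appears exactly three times in the nine runs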

Some examples of other experimental designs commonly used:

Balanced incomplete block designs (BIBD)
Random designs
Main effects
Two-way effects
Full factorial

More information may be found in the literature on the design of experiments.

[edit] Stated preference

A major advance in choice modelling has been the use of stated preference (SP) data. With RP data we are at the whim of the interrelated nature of the real world. With SP data, since we are directly asking people about their preferences for products and services, we are at liberty to construct the very products that we wish them to evaluate.

This allows great freedom in the creative construction of many improbable but plausible hypothetical products. It also allows complete mitigation of collinearity through experimental design.

If, instead of using the RP sales data as in the previous example, we were to show respondents various cars and ask "Would you buy this car?", we could model the same decision. By allowing ourselves the freedom to create hypothetical cars, rather than simply using the cars actually sold, we can escape the problems of collinearity and discover the true utilities for the attributes of marque, origin and performance. This is known as a choice experiment.

For example, one could create the following unlikely but plausible scenarios: a low performance BMW manufactured in the US ("Would you buy this car?"), or a high performance Mitsubishi manufactured in Germany ("How about this car?").

Information theory tells us that a data set generated from this exercise would at least allow 'origin' to be separated out as a factor in choice.

A more formal derivation of an appropriate experimental design would consequently ensure that no attributes were collinear and would therefore guarantee that there was enough information in the collected data for all attribute effects to be identified.
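One simple way to see where such hypothetical profiles come from is to enumerate the full factorial of attribute levels and then, in a real study, select a balanced fraction of it; the attribute levels below are invented for illustration.

from itertools import product

marques = ["BMW", "Chrysler", "Mitsubishi"]
origins = ["Germany", "USA"]
performance = ["low", "high"]
prices = ["20k", "60k"]

profiles = list(product(marques, origins, performance, prices))
print(len(profiles), "full-factorial profiles, e.g.", profiles[0])
# a fractional (orthogonal) design would select a balanced subset of these
# profiles to present to respondents as choice sets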

Because individuals do not have to back up their choices with real commitments when they answer the survey, they may to some extent behave inconsistently when the situation actually arises; this is a common problem with all SP methods.

However, because Choice Models are Scale Invariant this effect is equivalent for all estimates and no individual estimate is biased with respect to another.

SP models may therefore be accurately scaled with the introduction of Scale Parameters from real world observations, yielding extremely accurate predictive models. [edit] Preferences as choice trade-offs

It has long been known that simply asking human beings to rate or choose their preferred item from a scalar list will generally yield no more information than the fact that human beings want all the benefits and none of the costs. The above exercise, if executed as a conventional quantitative survey, would tell us that people prefer high performance cars at no cost. Again, information theory tells us that there is no context-specific information here.

Instead, a choice experiment requires that individuals be forced to make a trade-off between two or more options, sometimes also allowing 'none or neither' as a valid response. This presentation of alternatives requires that at least some respondents compare, say, the cheaper, lower performance car against the more expensive, higher performance car. This datum provides the key missing information necessary to separate and independently measure the utility of performance and price.

[edit] Sampling and block allocation

Stated preference data must be collected in a highly specific fashion to avoid temporal, learning and segment biases. Techniques include:

random without-replacement block allocation, to ensure balanced sampling of scenarios
in-block order randomisation, to avoid temporal and learning biases
independent segment-based allocation, to ensure balanced scenarios across segments of interest
block allocation balancing, to ensure that non-completes do not affect overall sample balance

[edit] Model generation

The typical outputs from a choice model are:

a model equation
a set of estimates of the marginal utilities for each of the attributes of interest; in the above example these would be marque, origin, price and performance. In the case of an MNL model form, the marginal utilities have a specific quantitative meaning and are directly related to the marginal probability that the attribute causes an effect on the dependent variable, which in the above example would be the propensity to buy.
variance statistics for each of the utilities estimated

[edit] Choice modeling in practice

Superficially, a Choice Experiment resembles a market research survey: respondents are recruited to fill out a survey, data is collected and the data is analysed. However, two critical steps differentiate a Choice Experiment from a Questionnaire:

An experimental design must be constructed. This is a non-trivial task.
Data must be analysed with a model form: MNL, Mixed Logit, EBA, Probit, etc.

The Choice Experiment itself may be performed on hard copy with pen and paper; increasingly, however, the online medium is used, as it has many advantages over the manual process, including cost, speed, accuracy and the ability to perform more complex studies such as those involving multimedia or dynamic feedback.

Despite the power and general applicability of Choice Modeling, the practical execution is far more complex than running a general survey. The model itself is a delicate tool, and potential sources of bias that are ignored in general market research surveys need to be controlled for in choice models.

[edit] Strengths of choice modelling [4]

Forces respondents to consider trade-offs between attributes;
Makes the frame of reference explicit to respondents via the inclusion of an array of attributes and product alternatives;
Enables implicit prices to be estimated for attributes;
Enables welfare impacts to be estimated for multiple scenarios;
Can be used to estimate the level of customer demand for alternative 'service product' in non-monetary terms; and
Potentially reduces the incentive for respondents to behave strategically.

[edit] Choice modelling versus traditional quantitative market research

Choice Experiments may be used in nearly every case where a hard estimate of current and future human preferences needs to be determined.

Many other market research techniques attempt to use ratings and ranking scales to elicit preference information. [edit] Ratings

Major problems with ratings questions that do not occur with choice models are:

No trade-off information. A risk with ratings is that respondents tend not to differentiate between perceived 'good' attributes and rate them all as attractive.
Variant personal scales. Different individuals value a '2' on a scale of 1 to 5 differently, and aggregating the frequencies of each of the scale measures has no theoretical basis.
No relative measure. How does an analyst compare something rated a 1 to something rated a 2? Is one twice as good as the other? Again, there is no theoretical way of aggregating the data.

[edit] Ranking

Rankings do introduce an element of trade-off in the response as no two items may occupy the same ranking position. Order preference is captured; however, relative importance is not.

Choice Models however do not suffer from these problems and furthermore are able to provide direct numerical predictions about the probability an individual will make a particular choice. [edit] Maximum difference scaling

Maximum Difference Preference Scaling (or MaxDiff, as it is commonly known) is a well-regarded alternative to ratings and ranking. It asks people to choose their most and least preferred options from a range of alternatives. By integrating across the choice probabilities, utility scores for each alternative can be estimated on a ratio scale.
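As an illustrative sketch only (a real MaxDiff study would fit a choice model to the responses), a simple count-based score is the number of times an item is chosen as best, minus the times chosen as worst, divided by the times it was shown; the responses below are invented.

from collections import Counter

# each response: (items shown, item picked as best, item picked as worst)
responses = [
    (("A", "B", "C", "D"), "A", "D"),
    (("A", "C", "E", "F"), "C", "F"),
    (("B", "D", "E", "F"), "B", "F"),
    (("A", "B", "E", "F"), "A", "E"),
]

shown, best, worst = Counter(), Counter(), Counter()
for items, b, w in responses:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

for item in sorted(shown):
    score = (best[item] - worst[item]) / shown[item]
    print(f"{item}: shown {shown[item]} times, best-worst score {score:+.2f}")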

[edit] Uses of choice modelling

Choice modelling is particularly useful for:

Predicting uptake and refining new product development
Estimating the implied willingness to pay (WTP) for goods and services
Product or service viability testing
Variations of product attributes
Understanding brand value and preference
Demand estimates and optimum pricing

Choice modeling is a standard technique in travel demand modeling. Classical references are Ben-Akiva and Lerman (1989)[5] and Cascetta (2009)[6]; more recent methodological developments are described in Train (2003).[7]

Early applications of discrete choice theory to marketing are described in Anderson et al. (1992).[8]

Recent developments include a Bayesian approach to discrete choice modeling, as set out in Rossi, Allenby, and McCulloch (2009).[9]

Competitor analysis From Wikipedia, the free encyclopedia

Competitor analysis in marketing and strategic management is an assessment of the strengths and weaknesses of current and potential competitors. This analysis provides both an offensive and defensive strategic context through which to identify opportunities and threats. Competitor profiling coalesces all of the relevant sources of competitor analysis into one framework in the support of efficient and effective strategy formulation, implementation, monitoring and adjustment.[1]

Competitor analysis is an essential component of corporate strategy. It is argued that most firms do not conduct this type of analysis systematically enough. Instead, many enterprises operate on informal impressions, conjectures, and intuition gained through the tidbits of information about competitors that every manager continually receives. As a result, traditional environmental scanning places many firms at risk of dangerous competitive blindspots due to a lack of robust competitor analysis.[2]

[edit] Competitor array

One common and useful technique is constructing a competitor array. The steps include:

Define your industry - the scope and nature of the industry
Determine who your competitors are
Determine who your customers are and what benefits they expect
Determine the key success factors in your industry
Rank the key success factors by giving each one a weighting - the sum of all the weightings must add up to one
Rate each competitor on each of the key success factors
Multiply each cell in the matrix by the factor weighting

This can best be displayed on a two-dimensional matrix - competitors along the top and key success factors down the side. An example of a competitor array follows:[3]

Key Industry Success Factors    Weighting   Competitor #1 rating   Competitor #1 weighted   Competitor #2 rating   Competitor #2 weighted
1 - Extensive distribution         .4               6                     2.4                      3                     1.2
2 - Customer focus                 .3               4                     1.2                      5                     1.5
3 - Economies of scale             .2               3                      .6                      3                      .6
4 - Product innovation             .1               7                      .7                      4                      .4
Totals                            1.0              20                     4.9                     15                     3.7

In this example competitor #1 is rated higher than competitor #2 on product innovation ability (7 out of 10, compared to 4 out of 10) and distribution networks (6 out of 10), but competitor #2 is rated higher on customer focus (5 out of 10). Overall, competitor #1 is rated slightly higher than competitor #2 (20 out of 40 compared to 15 out of 40). When the success factors are weighted according to their importance, competitor #1 gets a far better rating (4.9 compared to 3.7).
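The weighted calculation behind the array can be reproduced in a few lines; the weights and ratings below are the figures from the example table.

factors = ["Extensive distribution", "Customer focus", "Economies of scale", "Product innovation"]
weights = [0.4, 0.3, 0.2, 0.1]                    # must sum to 1.0
ratings = {"Competitor #1": [6, 4, 3, 7],
           "Competitor #2": [3, 5, 3, 4]}

for name, r in ratings.items():
    raw_total = sum(r)
    weighted = sum(w * x for w, x in zip(weights, r))
    print(f"{name}: raw total {raw_total}, weighted score {weighted:.1f}")
# Competitor #1: raw total 20, weighted score 4.9
# Competitor #2: raw total 15, weighted score 3.7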

Two additional columns can be added. In one column you can rate your own company on each of the key success factors (try to be objective and honest). In another column you can list benchmarks: the ideal standards of comparison on each of the factors, reflecting the workings of a company using all the industry's best practices.

[edit] Competitor profiling

The strategic rationale of competitor profiling is powerfully simple. Superior knowledge of rivals offers a legitimate source of competitive advantage. The raw material of competitive advantage consists of offering superior customer value in the firm's chosen market. The definitive characteristic of customer value is the adjective 'superior'. Customer value is defined relative to rival offerings, making competitor knowledge an intrinsic component of corporate strategy. Profiling facilitates this strategic objective in three important ways. First, profiling can reveal strategic weaknesses in rivals that the firm may exploit. Second, the proactive stance of competitor profiling allows the firm to anticipate the strategic response of its rivals to the firm's planned strategies, to the strategies of other competing firms, and to changes in the environment. Third, this proactive knowledge gives the firm strategic agility. Offensive strategy can be implemented more quickly in order to exploit opportunities and capitalize on strengths. Similarly, defensive strategy can be employed more deftly in order to counter the threat of rival firms exploiting the firm's own weaknesses.[2]

Clearly, those firms practicing systematic and advanced competitor profiling have a significant advantage. As such, a comprehensive profiling capability is rapidly becoming a core competence required for successful competition. An appropriate analogy is to consider this advantage as akin to having a good idea of the next move that your opponent in a chess match will make. By staying one move ahead, checkmate is one step closer. Indeed, as in chess, a good offense is the best defense in the game of business as well.[2]

A common technique is to create detailed profiles on each of your major competitors. These profiles give an in-depth description of the competitor's background, finances, products, markets, facilities, personnel, and strategies. This involves:

Background: location of offices, plants, and online presences; history - key personalities, dates, events, and trends; ownership, corporate governance, and organizational structure
Financials: P-E ratios, dividend policy, and profitability; various financial ratios, liquidity, and cash flow; profit growth profile; method of growth (organic or acquisitive)
Products: products offered, depth and breadth of product line, and product portfolio balance; new products developed, new product success rate, and R&D strengths; brands, strength of brand portfolio, brand loyalty and brand awareness; patents and licenses; quality control conformance; reverse engineering
Marketing: segments served, market shares, customer base, growth rate, and customer loyalty; promotional mix, promotional budgets, advertising themes, ad agency used, sales force success rate, online promotional strategy; distribution channels used (direct and indirect), exclusivity agreements, alliances, and geographical coverage; pricing, discounts, and allowances
Facilities: plant capacity, capacity utilization rate, age of plant, plant efficiency, capital investment; location, shipping logistics, and product mix by plant
Personnel: number of employees, key employees, and skill sets; strength of management, and management style; compensation, benefits, and employee morale and retention rates
Corporate and marketing strategies: objectives, mission statement, growth plans, acquisitions, and divestitures; marketing strategies

[edit] Media scanning

Scanning competitors' ads can reveal much about what those competitors believe about marketing and their target market. Changes in a competitor's advertising message can reveal new product offerings, new production processes, a new branding strategy, a new positioning strategy, a new segmentation strategy, line extensions and contractions, problems with previous positions, insights from recent marketing or product research, a new strategic direction, a new source of sustainable competitive advantage, or value migrations within the industry. It might also indicate a new pricing strategy such as penetration, price discrimination, price skimming, product bundling, joint product pricing, discounts, or loss leaders. It may also indicate a new promotion strategy such as push, pull, balanced, short term sales generation, long term image creation, informational, comparative, affective, reminder, new creative objectives, a new unique selling proposition, new creative concepts, appeals, tone, and themes, or a new advertising agency. It might also indicate a new distribution strategy, new distribution partners, more extensive distribution, more intensive distribution, a change in geographical focus, or exclusive distribution. Little of this intelligence is definitive: additional information is needed before conclusions should be drawn.

A competitor's media strategy reveals budget allocation, segmentation and targeting strategy, and selectivity and focus. From a tactical perspective, it can also be used to help a manager implement his own media plan. By knowing the competitor's media buy, media selection, frequency, reach, continuity, schedules, and flights, the manager can arrange his own media plan so that they do not coincide.

Other sources of corporate intelligence include trade shows, patent filings, mutual customers, annual reports, and trade associations.

Some firms hire competitor intelligence professionals to obtain this information. The Society of Competitive Intelligence Professionals [1] maintains a listing of individuals who provide these services. [edit] New competitors

In addition to analyzing current competitors, it is necessary to estimate future competitive threats. The most common sources of new competitors are:

Companies competing in a related product/market
Companies using related technologies
Companies already targeting your prime market segment but with unrelated products
Companies from other geographical areas with similar products
New start-up companies organized by former employees and/or managers of existing companies

The entrance of new competitors is likely when:

There are high profit margins in the industry
There is unmet demand (insufficient supply) in the industry
There are no major barriers to entry
There is future growth potential
Competitive rivalry is not intense
Gaining a competitive advantage over existing firms is feasible

Porter five forces analysis From Wikipedia, the free encyclopedia

A graphical representation of Porter's Five Forces

Porter's Five Forces is a framework for industry analysis and business strategy development formed by Michael E. Porter of Harvard Business School in 1979. It draws upon Industrial Organization (IO) economics to derive five forces that determine the competitive intensity and therefore attractiveness of a market. Attractiveness in this context refers to the overall industry profitability. An "unattractive" industry is one in which the combination of these five forces acts to drive down overall profitability. A very unattractive industry would be one approaching "pure competition", in which available profits for all firms are driven down to zero.

Three of Porter's five forces refer to competition from external sources. The remainder are internal threats.

Porter referred to these forces as the micro environment, to contrast it with the more general term macro environment. They consist of those forces close to a company that affect its ability to serve its customers and make a profit. A change in any of the forces normally requires a business unit to reassess the marketplace given the overall change in industry information. Overall industry attractiveness does not imply that every firm in the industry will return the same profitability. Firms are able to apply their core competencies, business model or network to achieve a profit above the industry average. A clear example of this is the airline industry: as an industry, profitability is low, and yet individual companies, by applying unique business models, have been able to make a return in excess of the industry average.

Porter's five forces include three forces from 'horizontal' competition -- the threat of substitute products, the threat of established rivals, and the threat of new entrants -- and two forces from 'vertical' competition: the bargaining power of suppliers and the bargaining power of customers.
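Although Porter's framework is qualitative, practitioners often summarise an assessment informally by scoring each force. The sketch below is such an informal worksheet, not part of Porter's model itself, and the scores are invented.

# score each force from 1 (weak) to 5 (strong); higher total intensity
# suggests a less attractive industry
forces = {
    "Threat of new entrants": 4,
    "Threat of substitute products": 3,
    "Bargaining power of buyers": 5,
    "Bargaining power of suppliers": 2,
    "Rivalry among existing competitors": 5,
}

total = sum(forces.values())
print(f"Aggregate competitive intensity: {total} / {5 * len(forces)}")
for force, score in sorted(forces.items(), key=lambda kv: -kv[1]):
    print(f"  {force}: {score}")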

This five forces analysis is just one part of the complete set of Porter's strategic models; the other elements are the value chain and the generic strategies.[citation needed]

Porter developed his Five Forces analysis in reaction to the then-popular SWOT analysis, which he found unrigorous and ad hoc.[1]


[edit] The five forces

[edit] The threat of the entry of new competitors

Profitable markets that yield high returns will attract new firms. This results in many new entrants, which eventually will decrease profitability for all firms in the industry. Unless the entry of new firms can be blocked by incumbents, the abnormal profit rate will tend towards zero (perfect competition). Factors include:

The existence of barriers to entry (patents, rights, etc.). The most attractive segment is one in which entry barriers are high and exit barriers are low: few new firms can enter and non-performing firms can exit easily.
Economies of product differences
Brand equity
Switching costs or sunk costs
Capital requirements
Access to distribution
Customer loyalty to established brands
Absolute cost
Industry profitability; the more profitable the industry, the more attractive it will be to new competitors

[edit] The threat of substitute products or services

The existence of products outside the realm of the common product boundaries increases the propensity of customers to switch to alternatives:

Buyer propensity to substitute
Relative price performance of substitutes
Buyer switching costs
Perceived level of product differentiation
Number of substitute products available in the market
Ease of substitution (information-based products are more prone to substitution, as an online product can easily replace a material product)
Substandard products
Quality depreciation

[edit] The bargaining power of customers (buyers)

The bargaining power of customers is also described as the market of outputs: the ability of customers to put the firm under pressure, which also affects the customer's sensitivity to price changes. Factors include:

Buyer concentration to firm concentration ratio
Degree of dependency upon existing channels of distribution
Bargaining leverage, particularly in industries with high fixed costs
Buyer volume
Buyer switching costs relative to firm switching costs
Buyer information availability
Ability to backward integrate
Availability of existing substitute products
Buyer price sensitivity
Differential advantage (uniqueness) of industry products
RFM analysis

[edit] The bargaining power of suppliers

The bargaining power of suppliers is also described as the market of inputs. Suppliers of raw materials, components, labor, and services (such as expertise) to the firm can be a source of power over the firm when there are few substitutes: suppliers may refuse to work with the firm or charge excessively high prices for unique resources. Factors include:

Supplier switching costs relative to firm switching costs
Degree of differentiation of inputs
Impact of inputs on cost or differentiation
Presence of substitute inputs
Strength of distribution channel
Supplier concentration to firm concentration ratio
Employee solidarity (e.g. labor unions)
Supplier competition - the ability to forward vertically integrate and cut out the buyer

For example, if you are making biscuits and there is only one person who sells flour, you have no alternative but to buy it from them.

[edit] The intensity of competitive rivalry

For most industries, the intensity of competitive rivalry is the major determinant of the competitiveness of the industry. Factors include:

Sustainable competitive advantage through innovation
Competition between online and offline companies
Level of advertising expense
Powerful competitive strategy
The visibility of proprietary items on the Web,[2] which can intensify competitive pressures on rivals

How will competition react to a certain behavior by another firm? Competitive rivalry is likely to be based on dimensions such as price, quality, and innovation. Technological advances can protect companies from competition; this applies to both products and services. Companies that are successful in introducing new technology are able to charge higher prices and achieve higher profits, until competitors imitate them. Examples of recent technology advantages have been MP3 players and mobile telephones. Vertical integration is a strategy to reduce a business's own costs and thereby intensify pressure on its rivals.

[edit] Usage

Strategy consultants occasionally use Porter's five forces framework when making a qualitative evaluation of a firm's strategic position. However, for most consultants the framework is only a starting point or checklist; they might use value chain analysis afterward. Like all general frameworks, an analysis that uses it to the exclusion of specifics about a particular situation is considered naïve.

According to Porter, the five forces model should be used at the line-of-business industry level; it is not designed to be used at the industry group or industry sector level. An industry is defined at a lower, more basic level: a market in which similar or closely related products and/or services are sold to buyers. (See industry information.) A firm that competes in a single industry should develop, at a minimum, one five forces analysis for its industry. Porter makes clear that for diversified companies, the first fundamental issue in corporate strategy is the selection of industries (lines of business) in which the company should compete; and each line of business should develop its own, industry-specific, five forces analysis. The average Global 1,000 company competes in approximately 52 industries (lines of business). [edit] Criticisms

Porter's framework has been challenged by other academics and strategists such as Stewart Neill. Similarly, the likes of Kevin P. Coyne [1] and Somu Subramaniam have stated that three dubious assumptions underlie the five forces:

That buyers, competitors, and suppliers are unrelated and do not interact and collude.
That the source of value is structural advantage (creating barriers to entry).
That uncertainty is low, allowing participants in a market to plan for and respond to competitive behavior.[3]

An important extension to Porter was found in the work of Adam Brandenburger and Barry Nalebuff in the mid-1990s. Using game theory, they added the concept of complementors (also called "the 6th force"), helping to explain the reasoning behind strategic alliances. The idea that complementors are the sixth force has often been credited to Andrew Grove, former CEO of Intel Corporation. According to most references, the sixth force is government or the public. Martyn Richard Jones, whilst consulting at Groupe Bull, developed an augmented 5 forces model in Scotland in 1993. It is based on Porter's model and includes Government (national and regional) as well as Pressure Groups as the notional 6th force. This model was the result of work carried out as part of Groupe Bull's Knowledge Asset Management Organisation initiative.

Porter indirectly rebutted the assertions of other forces, by referring to innovation, government, and complementary products and services as "factors" that affect the five forces.[4]

It is also perhaps not feasible to evaluate the attractiveness of an industry independent of the resources a firm brings to that industry. It is thus argued[citation needed] that this theory be coupled with the Resource-Based View (RBV) in order for the firm to develop a much more sound strategy. Risk analysis (business) From Wikipedia, the free encyclopedia This article is about business. For other uses, see Risk analysis.

Risk analysis is a technique used to identify and assess factors that may jeopardize the success of a project or the achievement of a goal. It also helps to define preventive measures to reduce the probability of these factors occurring, and to identify countermeasures to deal successfully with these constraints when they develop, in order to avert possible negative effects on the competitiveness of the company. Reference class forecasting was developed to increase accuracy in risk analysis.[1]

One of the more popular methods to perform a risk analysis in the computer field is called facilitated risk analysis process (FRAP).

[edit] Facilitated risk analysis process

FRAP analyzes one system, application or segment of business processes at a time.

FRAP assumes that additional efforts to develop precisely quantified risks are not cost-effective because:

such estimates are time consuming
risk documentation becomes too voluminous for practical use
specific loss estimates are generally not needed to determine if controls are needed

After identifying and categorizing risks, a team identifies the controls that could mitigate them. The decision about which controls are needed lies with the business manager. The team's conclusions as to what risks exist and what controls are needed are documented, along with a related action plan for control implementation.

Three of the most important risks a software company faces are: unexpected changes in revenue, unexpected changes in costs from those budgeted and the amount of specialization of the software planned. Risks that affect revenues can be: unanticipated competition, privacy, intellectual property right problems, and unit sales that are less than forecast. Unexpected development costs also create risk that can be in the form of more rework than anticipated, security holes, and privacy invasions. [2]

Narrow specialization of software with a large amount of research and development expenditures can lead to both business and technological risks since specialization does not necessarily lead to lower unit costs of software.[3] Combined with the decrease in the potential customer base, specialization risk can be significant for a software firm. After probabilities of scenarios have been calculated with risk analysis, the process of risk management can be applied to help manage the risk.

Methods like applied information economics add to and improve on risk analysis methods by introducing procedures to adjust subjective probabilities, compute the value of additional information and to use the results in part of a larger portfolio management problem. Risk management tools From Wikipedia, the free encyclopedia

Risk management is a non-intuitive field of study, where the simplest of models consists of a probability multiplied by an impact. Even understanding individual risks is difficult, as multiple probabilities can contribute to a risk's total probability, and impacts can be expressed in units of cost, time, events (for example, a catastrophe), market states, etc. This is further complicated by there being no straightforward approach for considering how multiple risks will influence one another or increase the overall risk of the subject of analysis.
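A minimal sketch of the probability-times-impact model, together with a crude Monte Carlo aggregation across risks that are assumed independent, is shown below; all names and figures are invented.

import random

risks = [
    # (name, probability of occurring, impact in cost units if it occurs)
    ("Key supplier fails",      0.10, 500_000),
    ("Schedule slips 3 months", 0.30, 200_000),
    ("Regulatory change",       0.05, 800_000),
]

expected = sum(p * impact for _, p, impact in risks)
print(f"Sum of expected impacts: {expected:,.0f}")

random.seed(0)
trials = 10_000
totals = sorted(sum(impact for _, p, impact in risks if random.random() < p)
                for _ in range(trials))
print(f"Simulated mean loss: {sum(totals) / trials:,.0f}")
print(f"90th percentile loss: {totals[int(0.9 * trials)]:,.0f}")
# treating the risks as independent is itself a modelling assumption, as noted above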

Risk management tools allow planners to address uncertainty explicitly by identifying and generating metrics, parameterizing, prioritizing, and developing mitigations, and by tracking risk. These capabilities are very difficult to support without some form of documentation or, with the advent of information technology, a software application. Simple risk management tools allow documentation. More sophisticated tools provide a visual display of risks, while the most cutting-edge, such as those developed by Air Force Research Laboratory Headquarters, are able to aggregate risks into a coherent picture. A few tools have predictive capability which, through collaboration between partners, allows fair partition of risks and improvement of business relations.[1]

The following is a list of risk management tools.

@Risk - performs risk analysis using Monte Carlo simulation to show many possible outcomes in a Microsoft Excel spreadsheet and predicts how likely they are to occur.

Active Risk Manager - (ARM), addresses enterprise-wide risk management (ERM) and governance, risk and compliance (GRC) requirements, enabling the identification, communication, analysis and mitigation of risks and opportunities in both quantitative and qualitative formats.

The Aggregate Risk Tool - (ART), generates predictive financial data from any probability-impact model.[2]

Bow tie diagrams - a fault identifying visual tool.

Capital asset pricing model - (CAPM) is used to determine the appropriate required rate of return of an asset, if that asset be added to an already well diversified portfolio, based on non-diversifiable risk.[3]

Control Estratgico de Riesgo (CERO) - Software tool with specific tools for each activity of the risk management process. With clients mostly in Latin America. [4]

Cost/Risk Identification & Management System (CRIMS) - Integrated Probabilistic risk assessment model with cost and other variables.[5]

Crystal Ball - performs risk analysis using Monte Carlo simulation, analyzes time series and creates statistical forecasts, and determines the best values of decision variables based on stochastic optimization, all in a Microsoft Excel spreadsheet.

Cura Enterprise - Cura's GRC platform is a highly configurable solution that meets organizational requirements, and provides a balance between qualified and quantified data, all of which can be normalized and reported on across the entire organization.

Cura Quants - Is a quantitative modeling solution designed to integrate with the existing Enterprise GRC Platform. Cura Quants enables customers to quickly and easily quantify the impact of capital and project related risks as well as the effects of accompanying treatment strategies.

Dymonda - Dymonda is a software tool that enables Dynamic Flowgraph Methodology (DFM) modelling and analysis. The model explicitly identifies the cause-and-effect and timing relationships between parameters and states that are best suited to describe a particular system behavior.

Resolve by RPM - Cloud software toolbox to manage, track and audit processes associated with risk and safety areas within corporations.

IBM OpenPages GRC Platform - Integrated enterprise governance, risk and compliance solution that includes modules for operational risk management, policy and compliance management, financial controls management, IT governance, and internal audit management

Methodware - Methodware's ERA is a GRC solution: a scalable and flexible tool to automate, identify and track risk across departments, regions, and business units effectively.

Operational risk management - The continual cyclic process which includes risk assessment, risk decision making, and implementation of risk controls, which results in acceptance, mitigation, or avoidance of risk.[6]

PIMS Risk - A complete risk framework for identifying, analysing and evaluating threats and opportunities. Created for and used by major companies in the oil and energy sector.

Probabilistic risk assessment (PRA), Probability Consequence (P/C) or Probability Impact Model - Simple model where estimates of probability of occurrence are multiplied by the consequence (cost, schedule delay, etc.). This is the most common tool, examples are RiskNav and RiskMatrix.

Reference class forecasting - Predicts the outcome of a planned, risky action based on actual outcomes in a reference class of similar actions to that being forecast.[7]

RiskAid products - collaborative web/intranet-based risk management software environments for projects, operations and Enterprise Risk Management (ERM), developed by Risk Reasoning.

RiskAoA - A predictive tool used to discriminate between proposals, choices or alternatives by expressing the risk of each as a single number, so that a proposal's trade-space between cost, schedule and risk relative to its desired characteristics can be compared instantly.[8] RiskAoA and variations of PRA are the only approved tools for United States Department of Defense military acquisition.

RiskComplete - Tracks project risk from planning approached to measuring tasks, from concept to manufacture.

RiskIssue.com - An online risk management tool for business, projects, teams and processes. [9]

RiskLike'Con - A free probabilistic risk assessment tool. Displays risks in the industry-standard matrix; Probability vs. Consequence.

Risk register - A project planning and organizational risk assessment tool. It is often referred to as a Risk Log.

RiskPath - An improvement of RiskAoA, available to the public, where forecasts are quantified for each alternative.

Safety case - An assessment of the potential risks in a project and of the measures to be taken to minimize them.

SAPHIRE - A probabilistic risk and reliability assessment software tool. SAPHIRE stands for Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.

SCHRAM - The Schedule Risk Assessment Manager; allows the generation of risk-adjusted schedules, showing the time of least risk and the consequences of rushed or broken schedules. Allows realistic planning based on operational realities.

TRIMS - Provides insight as a knowledge-based tool that measures technical risk management rather than cost and schedule.[10]

Unified Risk Assessment and Regulatory Compliance - A standards based end-to-end comprehensive cloud service from Unisys that performs model based risk assessment and also provides a real time dashboard for Regulatory & Policy Compliance traceability and transparency.

Xero Risk - Web-based enterprise risk governance tool to identify, track and balance risks across an organization using user-definable assessment and impact criteria.

Food safety risk analysis From Wikipedia, the free encyclopedia Contents [hide] 1 Risk analysis 1.1 Risk management 1.2 Risk assessment 1.3 Risk communication 1.4 Codex Alimentarius Commission 2 References 3 Further reading 4 External links

[edit] Risk analysis

Risk analysis is defined for the purposes of the Codex Alimentarius Commission as "A process consisting of three components: risk management, risk assessment, and risk communication." [1][2]

[edit] Risk management


Risk management is defined for the purposes of the Codex Alimentarius Commission as "The process, distinct from risk assessment, of weighing policy alternatives, in consultation with all interested parties, considering risk assessment and other factors relevant for the health protection of consumers and for the promotion of fair trade practices, and, if needed, selecting appropriate prevention and control options." [1][3] [edit] Risk assessment

Risk assessment is defined for the purposes of the Codex Alimentarius Commission as "A scientifically based process consisting of the following steps: (i) hazard identification, (ii) hazard characterization, (iii) exposure assessment, and (iv) risk characterization."

Hazard identification is "The identification of biological, chemical, and physical agents capable of causing adverse health effects and which may be present in a particular food or group of foods."

Hazard characterization is "The qualitative and/or quantitative evaluation of the nature of the adverse health effects associated with biological, chemical and physical agents which may be present in food. For chemical agents, a dose-response assessment should be performed. For biological or physical agents, a dose-response assessment should be performed if the data are obtainable."

Exposure assessment is "The qualitative and/or quantitative evaluation of the likely intake of biological, chemical, and physical agents via food as well as exposures from other sources if relevant."

Risk characterization is "The qualitative and/or quantitative estimation, including attendant uncertainties, of the probability of occurrence and severity of known or potential adverse health effects in a given population based on hazard identification, hazard characterization and exposure assessment." [1] [edit] Risk communication

Risk communication is defined for the purposes of the Codex Alimentarius Commission as "The interactive exchange of information and opinions throughout the risk analysis process concerning hazards and risks, risk-related factors and risk perceptions, among risk assessors, risk managers, consumers, industry, the academic community and other interested parties, including the explanation of risk assessment findings and the basis of risk management decisions." [1] [edit] Codex Alimentarius Commission

The Codex Alimentarius Commission "...was created in 1963 by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) to develop food standards, guidelines and related texts such as codes of practice under the Joint FAO/WHO Food Standards Programme. The main purposes of this Programme are protecting health of the consumers and ensuring fair trade practices in the food trade, and promoting coordination of all food standards work undertaken by international governmental and non-governmental organizations." [4][5]

Probabilistic risk assessment From Wikipedia, the free encyclopedia

(Redirected from Quantitative risk analysis)

Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity (such as an airliner or a nuclear power plant).

Risk in a PRA is defined as a feasible detrimental outcome of an activity or action. In a PRA, risk is characterized by two quantities: the magnitude (severity) of the possible adverse consequence(s), and the likelihood (probability) of occurrence of each consequence.

Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.
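To make the expected-loss definition concrete, here is a minimal sketch in Python; the scenario names, frequencies and consequences are invented for illustration and are not taken from any actual PRA study.

```python
# Minimal sketch of "total risk = expected loss" for a set of scenarios.
# All scenario data below are invented for illustration.
scenarios = [
    # (description, frequency per year, consequence, e.g. fatalities)
    ("minor incident",   1e-2,     0.01),
    ("major incident",   1e-4,    10.0),
    ("severe accident",  1e-6,  1000.0),
]

total_risk = sum(freq * consequence for _, freq, consequence in scenarios)

for name, freq, consequence in scenarios:
    print(f"{name:>15}: contributes {freq * consequence:.2e} per year")
print(f"total risk (expected loss): {total_risk:.2e} per year")
```

Listing the per-scenario contributions alongside the total also shows how a rare but severe scenario can dominate the overall figure, which is the concern raised in the next paragraph.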

The spectrum of risks across classes of events is also of concern, and is usually controlled in licensing processes; it would be of concern if rare but high-consequence events were found to dominate the overall risk, particularly as such risk assessments are very sensitive to assumptions (how rare is a high-consequence event?).

Probabilistic Risk Assessment usually answers three basic questions: What can go wrong with the studied technological entity, or what are the initiators or initiating events (undesirable starting events) that lead to adverse consequence(s)? What and how severe are the potential detriments, or the adverse consequences that the technological entity may be eventually subjected to as a result of the occurrence of the initiator? How likely to occur are these undesirable consequences, or what are their probabilities or frequencies?

Two common methods of answering this last question are Event Tree Analysis and Fault Tree Analysis; for explanations of these, see safety engineering.

In addition to the above methods, PRA studies require special but often very important analysis tools like human reliability analysis (HRA) and common-cause-failure analysis (CCF). HRA deals with methods for modeling human error while CCF deals with methods for evaluating the effect of inter-system and intra-system dependencies which tend to cause simultaneous failures and thus significant increases in overall risk.

In 2007 France was criticised for failing to use a PRA approach to evaluate the seismic risks of French nuclear power plants.[1]Contents [hide] 1 Criticism 2 See also 3 External links 4 References

[edit] Criticism

Theoretically, the probabilistic risk assessment method suffers from several problems:[2]

Nancy Leveson of MIT and her collaborators have argued that the chain-of-event conception of accidents typically used for such risk assessments cannot account for the indirect, non-linear, and feedback relationships that characterize many accidents in complex systems. These risk assessments do a poor job of modeling human actions and their impact on known, let alone unknown, failure modes. Also, as a 1978 Risk Assessment Review Group Report to the NRC pointed out, it is "conceptually impossible to be complete in a mathematical sense in the construction of event-trees and fault-trees ... This inherent limitation means that any calculation using this methodology is always subject to revision and to doubt as to its completeness."[2]

In the case of many accidents, probabilistic risk assessment models do not account for unexpected failure modes:[2]

At Japan's Kashiwazaki Kariwa reactors, for example, after the 2007 Chuetsu earthquake some radioactive materials escaped into the sea when ground subsidence pulled underground electric cables downward and created an opening in the reactor's basement wall. As a Tokyo Electric Power Company official remarked then, "It was beyond our imagination that a space could be made in the hole on the outer wall for the electric cables."[2]

When it comes to future safety, nuclear designers and operators often assume that they know what is likely to happen, which is what allows them to assert that they have planned for all possible contingencies. Yet there is one weakness of the probabilistic risk assessment method that has been emphatically demonstrated with the Fukushima I nuclear accidents -- the difficulty of modeling common-cause or common-mode failures:[2]

From most reports it seems clear that a single event, the tsunami, resulted in a number of failures that set the stage for the accidents. These failures included the loss of offsite electrical power to the reactor complex, the loss of oil tanks and replacement fuel for diesel generators, the flooding of the electrical switchyard, and perhaps damage to the inlets that brought in cooling water from the ocean. As a result, even though there were multiple ways of removing heat from the core, all of them failed.[2] Benefit shortfall From Wikipedia, the free encyclopedia (Redirected from Benefit risk)

A benefit shortfall results from the actual benefits of a venture being lower than the projected, or estimated, benefits of that venture.[1] If, for instance, a company is launching a new product or service and projected sales are 40 million dollars per year, whereas actual annual sales turn out to be only 30 million dollars, then the benefit shortfall is said to be 25 percent. Sometimes the terms "demand shortfall" or "revenue shortfall" are used instead of benefit shortfall.
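The percentage calculation described above can be sketched as follows; a minimal illustration, with a function name of our choosing.

```python
def benefit_shortfall_percent(projected, actual):
    """Shortfall of actual benefits relative to projected benefits, in percent."""
    return (projected - actual) / projected * 100

# The example from the text: projected sales of 40 million per year,
# actual sales of 30 million per year.
print(benefit_shortfall_percent(40e6, 30e6))  # -> 25.0
```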

Public and private enterprises alike fall victim to benefit shortfalls. Prudent planning of new ventures will include the risk of benefit shortfalls in risk assessment and risk management.

If large benefit shortfalls coincide with large cost overruns in a venture - as happened for the Channel tunnel between the UK and France - then fiscal and other distress will be particularly pronounced for that venture.[2]

The root cause of benefit shortfalls is benefit overestimation during the planning phase of new ventures. Benefit overestimation (and cost underestimation) are main sources of error and bias in cost-benefit analysis. Reference class forecasting was developed to reduce the risk of benefit shortfalls and cost overruns.[3]

The discipline of Benefits Realisation Management seeks to identify any benefits shortfall as early as possible in a project or programme's delivery in order to allow corrective action to be taken, costs to be controlled and benefits realised. Common-cause and special-cause From Wikipedia, the free encyclopedia (Redirected from Common mode failure)

Synonyms by type of variation: common cause - chance cause, non-assignable cause, noise, natural pattern; special cause - assignable cause, signal, unnatural pattern.

Common- and special-causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common-cause" is the usual, historical, quantifiable variation in a system, while "special-causes" are unusual, not previously observed, non-quantifiable variation.

The distinction is fundamental in philosophy of statistics and philosophy of probability, with different treatment of these issues being a classic issue of probability interpretations, being recognised and discussed as early as 1703 by Gottfried Leibniz; various alternative names have been used over the years.

The distinction has been particularly important in the thinking of economists Frank Knight, John Maynard Keynes and G. L. S. Shackle.Contents [hide] 1 Origins and concepts 2 Definitions 2.1 Common-cause variation 2.2 Special-cause variation 3 Examples 3.1 Common causes 3.2 Special causes 4 Importance to economics 5 Importance to industrial and quality management 6 Importance to statistics 6.1 Deming and Shewhart 6.2 Keynes 7 In engineering 8 References 9 Bibliography 10 See also

[edit] Origins and concepts

In 1703, Jacob Bernoulli wrote to Gottfried Leibniz to discuss their shared interest in applying mathematics and probability to games of chance. Bernoulli speculated whether it would be possible to gather mortality data from gravestones and thereby calculate, by their existing practice, the probability of a man currently aged 20 years outliving a man aged 60 years. Leibniz replied that he doubted this was possible as:

Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary.

This captures the central idea that some variation is predictable, at least approximately in frequency. This common-cause variation is evident from the experience base. However, new, unanticipated, emergent or previously neglected phenomena (e.g. "new diseases") result in variation outside the historical experience base. Shewhart and Deming argued that such special-cause variation is fundamentally unpredictable in frequency of occurrence or in severity.

John Maynard Keynes emphasised the importance of special-cause variation when he wrote:

By uncertain knowledge ... I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty ... The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention ... About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know! [edit] Definitions [edit] Common-cause variation

Common-cause variation is characterised by: Phenomena constantly active within the system; Variation predictable probabilistically; Irregular variation within an historical experience base; and Lack of significance in individual high or low values.

The outcomes of a perfectly balanced roulette wheel are a good example of common-cause variation. Common-cause variation is the noise within the system.

Walter A. Shewhart originally used the term chance-cause.[1] The term common-cause was coined by Harry Alpert in 1947. The Western Electric Company used the term natural pattern.[2] Shewhart described a process that features only common-cause variation as being in statistical control. This term is deprecated by some modern statisticians, who prefer the phrase stable and predictable. [edit] Special-cause variation

Special-cause variation is characterised by: New, unanticipated, emergent or previously neglected phenomena within the system; Variation inherently unpredictable, even probabilistically; Variation outside the historical experience base; and Evidence of some inherent change in the system or our knowledge of it.

Special-cause variation always arrives as a surprise. It is the signal within a system.

Walter A. Shewhart originally used the term assignable-cause.[3] The term special-cause was coined by W. Edwards Deming. The Western Electric Company used the term unnatural pattern.[2] [edit] Examples [edit] Common causes

Inappropriate procedures; poor design; poor maintenance of machines; lack of clearly defined standing operating procedures; poor working conditions, e.g. lighting, noise, dirt, temperature, ventilation; substandard raw materials; measurement error; quality control error; vibration in industrial processes; ambient temperature and humidity; normal wear and tear; variability in settings; computer response time. [edit] Special causes Poor adjustment of equipment; operator falls asleep; faulty controllers; machine malfunction; computer crashes; poor batch of raw material; power surges; high healthcare demand from elderly people; abnormal traffic (click-fraud) on web ads;[4] extremely long lab testing turnover time due to switching to a new computer system; operator absent.[5]

[edit] Importance to economics For more details on this topic, see Knightian uncertainty.

In economics, this circle of ideas is referred to under the rubric of "Knightian uncertainty". John Maynard Keynes and Frank Knight both discussed the inherent unpredictability of economic systems in their work and used it to criticise the mathematical approach to economics, in terms of expected utility, developed by Ludwig von Mises and others. Keynes in particular argued that economic systems did not automatically tend to the equilibrium of full employment owing to their agents' inability to predict the future. As he remarked in The General Theory of Employment, Interest and Money:

... as living and moving beings, we are forced to act ... [even when] our existing knowledge does not provide a sufficient basis for a calculated mathematical expectation.

Keynes's thinking was at odds with the classical liberalism of the Austrian school of economists, but G. L. S. Shackle recognised the importance of Keynes's insight and sought to formalise it within a free-market philosophy.

In financial economics, the black swan theory of Nassim Nicholas Taleb is based on the significance and unpredictability of special-causes. [edit] Importance to industrial and quality management

A special-cause failure is a failure that can be corrected by changing a component or process, whereas a common-cause failure is equivalent to noise in the system, and specific actions cannot be taken to prevent the failure.

Harry Alpert observed: A riot occurs in a certain prison. Officials and sociologists turn out a detailed report about the prison, with a full explanation of why and how it happened here, ignoring the fact that the causes were common to a majority of prisons, and that the riot could have happened anywhere.

The quote recognises that there is a temptation to react to an extreme outcome and to see it as significant, even where its causes are common to many situations and the distinctive circumstances surrounding its occurrence are the result of mere chance. Such behaviour has many implications within management, often leading to interventions in processes that merely increase the level of variation and frequency of undesirable outcomes.

Deming and Shewhart both advocated the control chart as a means of managing a business process in an economically efficient manner. [edit] Importance to statistics [edit] Deming and Shewhart

Within the frequency probability framework, there is no process whereby a probability can be attached to the future occurrence of a special cause. However, the Bayesian approach does allow such a probability to be specified. The existence of special-cause variation led Keynes and Deming to an interest in Bayesian probability, but no formal synthesis has ever been forthcoming. Most statisticians of the Shewhart-Deming school take the view that special causes are not embedded in either experience or in current thinking (that's why they come as a surprise) so that any subjective probability is doomed to be hopelessly badly calibrated in practice.

It is immediately apparent from the Leibniz quote above that there are implications for sampling. Deming observed that in any forecasting activity, the population is that of future events while the sampling frame is, inevitably, some subset of historical events. Deming held that the disjoint nature of population and sampling frame was inherently problematic once the existence of special-cause variation was admitted, rejecting the general use of probability and conventional statistics in such situations. He articulated the difficulty as the distinction between analytic and enumerative statistical studies.

Shewhart argued that, as processes subject to special-cause variation were inherently unpredictable, the usual techniques of probability could not be used to separate special-cause from common-cause variation. He developed the control chart as a statistical heuristic to distinguish the two types of variation. Both Deming and Shewhart advocated the control chart as a means of assessing a process's state of statistical control and as a foundation for forecasting; a sketch of the basic calculation is given below.
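Below is a minimal, illustrative sketch of that heuristic, assuming limits set at three standard deviations around the mean of a baseline period. The measurements are invented, and real control-chart practice adds further detection rules (e.g. runs tests) not shown here.

```python
# Illustrative Shewhart-style individuals chart: control limits are placed
# three standard deviations either side of the mean of a baseline period,
# and later observations outside the limits are flagged as candidate
# special causes. All numbers are invented.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
new_observations = [10.1, 9.9, 13.5, 10.2]

n = len(baseline)
mean = sum(baseline) / n
stdev = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
upper, lower = mean + 3 * stdev, mean - 3 * stdev

for i, x in enumerate(new_observations):
    verdict = "common-cause (noise)" if lower <= x <= upper else "possible special cause"
    print(f"observation {i}: {x:5.1f} -> {verdict}")
```

[edit] Keynes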

Keynes identified three domains of probability: Frequency probability; Subjective or Bayesian probability; and Events lying outside the possibility of any description in terms of probability (special causes) - and sought to base a probability theory thereon. [edit] In engineering

Common mode, or common cause, failure has a more specific meaning in engineering. It refers to events which are not statistically independent. That is, failures in multiple parts of a system caused by a single fault, particularly random failures due to environmental conditions or aging. An example is when all of the pumps for a fire sprinkler system are located in one room. If the room becomes too hot for the pumps to operate, they will all fail at essentially the same time, from one cause (the heat in the room).

For example, in an electronic system, a fault in a power supply which injects noise onto a supply line may cause failures in multiple subsystems.

This is particularly important in safety-critical systems using multiple redundant channels. If the probability of a failure in one subsystem is p, then it would be expected that an N-channel system would have a probability of failure of p^N. However, in practice, the probability of failure is much higher because the channel failures are not statistically independent;[6] for example, ionizing radiation or electromagnetic interference (EMI) may affect both channels.

The principle of redundancy states that, when events of failure of a component are statistically independent, the probabilities of their joint occurrence multiply. Thus, for instance, if the probability of failure of a component of a system is one in one thousand per year, the probability of the joint failure of two of them is one in one million per year, provided that the two events are statistically independent. This principle favors the strategy of the redundancy of components. One place this strategy is implemented is in RAID 1, where two hard disks store a computer's data redundantly.

But even so there can be many common modes. Consider a RAID 1 array where two disks are purchased online and installed in a computer: The disks are likely to be from the same manufacturer and of the same model, therefore they share the same design flaws. The disks are likely to have similar serial numbers, thus they may share any manufacturing flaws affecting production of the same batch. The disks are likely to have been shipped at the same time, thus they are likely to have suffered from the same transportation damage. As installed, both disks are attached to the same power supply, making them vulnerable to the same power supply issues. As installed, both disks are in the same case, making them vulnerable to the same overheating events. They will both be attached to the same card or motherboard, and driven by the same software, which may have the same bugs. Because of the very nature of RAID 1, both disks will be subjected to the same workload and very closely similar access patterns, stressing them in the same way.

Also, if the events of failure of two components are maximally statistically dependent, the probability of the joint failure of both is identical to the probability of failure of them individually. In such a case, the advantages of redundancy are negated. Strategies for the avoidance of common mode failures include keeping redundant components physically isolated.
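A small numeric sketch of the two cases discussed above, using invented probabilities; the treatment of the common-mode term is a simple approximation rather than a full dependency model.

```python
# Invented numbers illustrating how a common-mode cause erodes redundancy.
# For two channels failing independently, the joint probability is p * p;
# a shared cause that disables both channels is (approximately) added on top.
p = 1e-3          # annual failure probability of one channel
p_common = 1e-4   # annual probability of an event that fails both channels

independent_joint = p * p
with_common_mode = 1 - (1 - independent_joint) * (1 - p_common)

print(f"independent channels only:  {independent_joint:.1e} per year")
print(f"including a common mode:    {with_common_mode:.1e} per year")
```

With these figures the common-mode term dominates the joint failure probability, which is why physical isolation of redundant components matters so much in practice.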

A prime example of redundancy with isolation is a nuclear power plant. The new ABWR has three divisions of Emergency Core Cooling Systems, each with its own generators and pumps and each isolated from the others. The new European Pressurized Reactor has two containment buildings, one inside the other. However, even here it is not impossible for a common mode failure to occur (for example, caused by a highly-unlikely Richter 9 earthquake).

Cost overrun From Wikipedia, the free encyclopedia (Redirected from Cost risk)

A cost overrun, also known as a cost increase or budget overrun, is an unexpected cost incurred in excess of a budgeted amount due to an under-estimation of the actual cost during budgeting. Cost overrun should be distinguished from cost escalation, which is used to express an anticipated growth in a budgeted cost due to factors such as inflation.

Cost overrun is common in infrastructure, building, and technology projects. A comprehensive study of cost overrun published in the Journal of the American Planning Association in 2002 found that nine out of ten construction projects had underestimated costs. Overruns of 50 to 100 percent were common. Cost underestimation was found in each of the 20 nations and five continents covered by the study, and cost underestimation had not decreased in the 70 years for which data were available.[1] For IT projects, an industry study by the Standish Group found that the average cost overrun was 43 percent; 71 percent of projects were over budget, exceeded time estimates, and had estimated too narrow a scope; and total waste was estimated at $55 billion per year in the US alone.[2]

Many major construction projects have incurred cost overruns. The Suez Canal cost 20 times as much as the earliest estimates; even the cost estimate produced the year before construction began underestimated the project's actual costs by a factor of three.[1] The Sydney Opera House cost 15 times more than was originally projected, and the Concorde supersonic aeroplane cost 12 times more than predicted.[1] When Boston's "Big Dig" tunnel construction project was completed, the project was 275 percent ($11 billion) over budget.[3] The Channel Tunnel between the UK and France had a construction cost overrun of 80 percent, and a 140-percent financing cost overrun.[3]Contents [hide] 1 Causes 2 List of projects with large cost overruns 3 See also 4 References 5 External links 6 Further reading

[edit] Causes

Three types of explanation for cost overrun exist: technical, psychological, and political-economic. Technical explanations account for cost overrun in terms of imperfect forecasting techniques, inadequate data, etc. Psychological explanations account for overrun in terms of optimism bias with forecasters. Finally, political-economic explanations see overrun as the result of strategic misrepresentation of scope or budgets.

All three explanations can be considered forms of risk. A project's budgeted costs should always include cost contingency funds to cover risks (other than scope changes imposed on the project). As has been shown in cost engineering research,[4] poor risk analysis and contingency estimating practices account for many project cost overruns. Numerous studies have found that the greatest cause of cost growth was poorly-defined scope at the time that the budget was established. The cost growth, or overrun of the budget before cost contingency is added, can be predicted by rating the extent of scope definition, even on complex projects with new technology.[5]

Professor Bent Flyvbjerg of Oxford University and Martin Wachs of University of California, Los Angeles have shown that big public-works projects often have cost overruns due to strategic misrepresentation, "that is, lying", as Flyvbjerg defines the term.[6]

Cost overrun is typically calculated in one of two ways: either as a percentage, namely actual cost minus budgeted cost, in percent of budgeted cost; or as a ratio of actual cost divided by budgeted cost. For example, if the budget for building a new bridge was $100 million, and the actual cost was $150 million, then the cost overrun may be expressed by the ratio 1.5, or as 50 percent.
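A minimal sketch of the two ways of expressing overrun, using the bridge example from the text; the function name is our own.

```python
def cost_overrun(budgeted, actual):
    """Return the overrun as a percentage of budget and as an actual/budget ratio."""
    percent = (actual - budgeted) / budgeted * 100
    ratio = actual / budgeted
    return percent, ratio

# The bridge example from the text: budgeted $100 million, actual $150 million.
print(cost_overrun(100e6, 150e6))  # -> (50.0, 1.5)
```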

Reference class forecasting was developed to eliminate or reduce cost overrun.[7] [edit] List of projects with large cost overruns

[edit] Australia Sydney Olympic Park Sydney Opera House[1] myki [edit] Brazil Brasília [edit] Canada Pickering Nuclear Generating Station Montreal Olympic Stadium Rogers Centre (formerly SkyDome) [edit] Denmark Great Belt railway tunnel [edit] Egypt Suez Canal[1] [edit] Japan Joetsu Shinkansen high-speed rail line [edit] Malaysia

Pergau Dam [edit] North Korea Ryugyong Hotel [edit] Panama Panama Canal [edit] Sweden Göta Canal Hallandsås Tunnel [edit] United Kingdom Humber Bridge Millennium Dome National Programme for IT Scottish Parliament Building TAURUS (share trading) [edit] United States Big Dig[3] Denver International Airport Eastern span replacement of the San Francisco-Oakland Bay Bridge F-22 Raptor Joint Strike Fighter Program

NPOESS Paw Paw Tunnel V-22 Osprey Hubble Space Telescope [edit] Multinational Airbus A380 Airbus A400M Channel Tunnel[3] Cologne Cathedral Concorde[3] Eurofighter Reference class forecasting From Wikipedia, the free encyclopedia

Reference class forecasting predicts the outcome of a planned action based on actual outcomes in a reference class of similar actions to that being forecast. The theories behind reference class forecasting were developed by Daniel Kahneman and Amos Tversky. The methodology and data needed for employing reference class forecasting in practice in policy, planning, and management were developed by Oxford professor Bent Flyvbjerg and the COWI consulting group in a joint effort. The theoretical work helped Kahneman win the Nobel Prize in Economics. Today, reference class forecasting has found widespread use in practice in both public and private sector policy and management.

Kahneman and Tversky (1979a, b) found that human judgment is generally optimistic due to overconfidence and insufficient consideration of distributional information about outcomes. Therefore, people tend to underestimate the costs, completion times, and risks of planned actions, whereas they tend to overestimate the benefits of those same actions. Such error is caused by actors taking an "inside view," where focus is on the constituents of the specific planned action instead of on the actual outcomes of similar ventures that have already been completed.

Kahneman and Tversky concluded that disregard of distributional information, that is, risk, is perhaps the major source of error in forecasting. On that basis they recommended that forecasters "should therefore make every effort to frame the forecasting problem so as to facilitate utilizing all the distributional information that is available" (Kahneman and Tversky 1979b, p. 316).

Using distributional information from previous ventures similar to the one being forecast is called taking an "outside view". Reference class forecasting is a method for taking an outside view on planned actions.

Reference class forecasting for a specific project involves the following three steps: Identify a reference class of past, similar projects. Establish a probability distribution for the selected reference class for the parameter that is being forecast. Compare the specific project with the reference class distribution, in order to establish the most likely outcome for the specific project.
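The three steps can be sketched as follows; the overrun ratios in the reference class are invented, and the choice of the 80th percentile is only an example of a level of certainty a decision-maker might require.

```python
import statistics

# Step 1: a reference class of past, similar projects, represented here by
# their cost-overrun ratios (actual cost / forecast cost). Values invented.
overrun_ratios = [1.05, 1.10, 1.20, 1.25, 1.40, 1.45, 1.60, 1.80, 2.00, 2.40]

# Step 2: the probability distribution of the parameter being forecast,
# summarised here by its deciles.
deciles = statistics.quantiles(overrun_ratios, n=10)

# Step 3: compare the specific project with that distribution. Here we apply
# the ratio not exceeded by roughly 80% of past projects to a base estimate.
base_estimate = 255e6                    # e.g. a promoter's base cost estimate
uplifted = base_estimate * deciles[7]    # 8th decile boundary (~80th percentile)
print(f"estimate with ~80% chance of not being exceeded: {uplifted:,.0f}")
```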

Whereas Kahneman and Tversky developed the theories of reference class forecasting, Flyvbjerg and COWI (2004) developed the method for its practical use in policy and planning. The first instance of reference class forecasting in practice is described in Flyvbjerg (2006). This was a forecast carried out in 2004 by the UK government of the projected capital costs for an extension of Edinburgh Trams. The promoter's forecast estimated a cost of £255 million. Taking all available distributional information into account, based on a reference class of comparable rail projects, the reference class forecast estimated a cost of £320 million. (In reality, although it is still not finished, projected costs are now expected to exceed £600 million.[1]) Since the Edinburgh forecast, reference class forecasting has been applied to numerous other projects in the UK, including the £15 (US$29) billion Crossrail project in London. After 2004, The Netherlands, Denmark, and Switzerland have also implemented various types of reference class forecasting.

In 2005, the American Planning Association (APA) endorsed reference class forecasting and recommended that planners should never rely solely on conventional forecasting techniques:

"APA encourages planners to use reference class forecasting in addition to traditional methods as a way to improve accuracy. The reference class forecasting method is beneficial for non-routine projects ... Planners should never rely solely on civil engineering technology as a way to generate project forecasts" (the American Planning Association 2005).

Before this, in 2001 (updated in 2006), AACE International (the Association for the Advancement of Cost Engineering) included Estimate Validation as a distinct step in the recommended practice of Cost Estimating (Estimate Validation is equivalent to Reference class forecasting in that it calls for separate empirical-based evaluations to benchmark the base estimate):

"The estimate should be benchmarked or validated against or compared to historical experience and/or past estimates of the enterprise and of competitive enterprises to check its appropriateness, competitiveness, and to identify improvement opportunities...Validation examines the estimate from a different perspective and using different metrics than are used in estimate preparation." (AACE International 2006)

In the process industries (e.g., oil and gas, chemicals, mining, energy, etc. which tend to dominate AACE's membership), benchmarking (i.e., "outside view") of project cost estimates against the historical costs of completed projects of similar types, including probabilistic information, has a long history (Merrow 1990). Risk assessment From Wikipedia, the free encyclopedia

Risk assessment is a step in a risk management procedure. Risk assessment is the determination of the quantitative or qualitative value of risk related to a concrete situation and a recognized threat (also called hazard). Quantitative risk assessment requires calculations of two components of risk (R): the magnitude of the potential loss (L), and the probability (p) that the loss will occur.

In all types of engineering of complex systems sophisticated risk assessments are often made within Safety engineering and Reliability engineering when it concerns threats to life, environment or machine functioning. The nuclear, aerospace, oil, rail and military industries have a long history of dealing with risk assessment. Also, medical, hospital, and food industries control risks and perform risk assessments on a continual basis. Methods for assessment of risk may differ between industries and whether it pertains to general financial decisions or environmental, ecological, or public health risk assessment.Contents [hide] 1 Explanation 2 Risk assessment in public health 2.1 How the risk is determined 2.2 Small subpopulations 2.3 Acceptable risk increase 3 Risk assessment in auditing 4 Risk assessment and human health 5 Risk assessment in information security 6 Risk assessment in project management 7 Risk assessment for megaprojects 8 Quantitative risk assessment 9 Risk assessment in software evolution 10 Criticisms of quantitative risk assessment 11 See also 12 References 12.1 Footnotes 12.2 General references 13 External links

Explanation

Risk assessment consists of an objective evaluation of risk in which assumptions and uncertainties are clearly considered and presented. Part of the difficulty in risk management is that both of the quantities with which risk assessment is concerned - potential loss and probability of occurrence - can be very difficult to measure, and the chance of error in measuring these two concepts is large. A risk with a large potential loss and a low probability of occurring is often treated differently from one with a low potential loss and a high likelihood of occurring. In theory, both are of nearly equal priority, but in practice it can be very difficult to manage when faced with the scarcity of resources, especially time, in which to conduct the risk management process. Expressed mathematically, risk is the product of these two components: R = p × L.


Financial decisions, such as insurance, express loss in terms of dollar amounts. When risk assessment is used for public health or environmental decisions, loss can be quantified in a common metric such as a country's currency or some numerical measure of a location's quality of life. For public health and environmental decisions, loss is simply a verbal description of the outcome, such as increased cancer incidence or incidence of birth defects. In that case, the "risk" is expressed in terms of the probability or expected incidence of the adverse outcome, as described below.

If the risk estimate takes into account information on the number of individuals exposed, it is termed a "population risk" and is in units of expected increased cases per a time period. If the risk estimate does not take into account the number of individuals exposed, it is termed an "individual risk" and is in units of incidence rate per a time period. Population risks are of more use for cost/benefit analysis; individual risks are of more use for evaluating whether risks to individuals are "acceptable". Risk assessment in public health

In the context of public health, risk assessment is the process of quantifying the probability of a harmful effect to individuals or populations from certain human activities. In most countries the use of specific chemicals or the operation of specific facilities (e.g. power plants, manufacturing plants) is not allowed unless it can be shown that they do not increase the risk of death or illness above a specific threshold. For example, the American Food and Drug Administration (FDA) regulates food safety through risk assessment.[1] The FDA required in 1973 that cancer-causing compounds must not be present in meat at concentrations that would cause a cancer risk greater than 1 in a million lifetimes. The US Environmental Protection Agency provides basic information about environmental risk assessments for the public via its risk assessment portal.[2] How the risk is determined

In the estimation of risks, three or more steps are involved that require the inputs of different disciplines. Hazard identification aims to determine the qualitative nature of the potential adverse consequences of the contaminant (chemical, radiation, noise, etc.) and the strength of the evidence that it can have that effect. This is done, for chemical hazards, by drawing from the results of the sciences of toxicology and epidemiology. For other kinds of hazard, engineering or other disciplines are involved. Dose-response analysis determines the relationship between dose and the probability or the incidence of effect (dose-response assessment). The complexity of this step in many contexts derives mainly from the need to extrapolate results from experimental animals (e.g. mouse, rat) to humans, and/or from high to lower doses. In addition, the differences between individuals due to genetics or other factors mean that the hazard may be higher for particular groups, called susceptible populations. An alternative to dose-response estimation is to determine a dose unlikely to yield observable effects, that is, a no-effect concentration. In developing such a dose, to account for the largely unknown effects of animal-to-human extrapolation, increased variability in humans, or missing data, a prudent approach is often adopted by including safety factors in the estimate of the "safe" dose, typically a factor of 10 for each unknown step (see the sketch below). Exposure quantification aims to determine the amount of a contaminant (dose) that individuals and populations will receive. This is done by examining the results of the discipline of exposure assessment. As different locations, lifestyles and other factors likely influence the amount of contaminant that is received, a range or distribution of possible values is generated in this step. Particular care is taken to determine the exposure of the susceptible population(s).
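As a small illustration of the safety-factor approach mentioned above (all numbers invented, and the two uncertainty factors shown are only examples of the "unknown steps" the text refers to):

```python
# Deriving a "safe" dose from a no-effect level by dividing by a factor of 10
# for each source of uncertainty, as described above. All values invented.
no_effect_level = 50.0   # mg/kg/day, hypothetical animal-study result

uncertainty_factors = {
    "animal-to-human extrapolation": 10,
    "variability among humans": 10,
}

safe_dose = no_effect_level
for reason, factor in uncertainty_factors.items():
    safe_dose /= factor

print(f"estimated 'safe' dose: {safe_dose} mg/kg/day")  # -> 0.5
```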

Finally, the results of the three steps above are then combined to produce an estimate of risk. Because of the different susceptibilities and exposures, this risk will vary within a population. Small subpopulations

When risks apply mainly to small subpopulations, there is uncertainty at which point intervention is necessary. What if a risk is very low for everyone but 0.1% of the population? A difference exists whether this 0.1% is represented by all infants younger than X days or by recreational users of a particular product. If the risk is higher for a particular sub-population because of abnormal exposure rather than susceptibility, there is a potential to consider strategies to further reduce the exposure of that subgroup. If an identifiable sub-population is more susceptible due to inherent genetic or other factors, there is a policy choice whether to set policies for protecting the general population that are protective of such groups (as is currently done for children when data exist, or is done under the Clean Air Act for populations such as asthmatics), or whether the group is too small, or the costs too high, to justify special protection.

The idea of not increasing lifetime risk by more than one in a million has become commonplace in public health discourse and policy. How consensus settled on this particular figure is unclear. In some respects this figure has the characteristics of a mythical number. In another sense the figure provides a numerical basis for what to consider a negligible increase in risk. Some current environmental decision making allows some discretion to deem individual risks potentially "acceptable" if below one in ten thousand increased lifetime risk. Low risk criteria such as these provide some protection for a case where individuals may be exposed to multiple chemicals (whether pollutants or food additives, or other chemicals). However, both of these benchmarks are clearly small relative to the typical one in four lifetime risk of death by cancer (due to all causes combined) in developed countries. On the other hand, adoption of a zero-risk policy could be motivated by the fact that the 1 in a million policy still would cause the death of hundreds or thousands of people in a large enough population. In practice however, a true zero-risk is possible only with the suppression of the risk-causing activity.
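As a rough illustration of that last point (the population figure is illustrative, not taken from the source): applying a one-in-a-million lifetime risk to a population of 300 million people gives an expected

\[ 10^{-6} \times 3 \times 10^{8} = 300 \]

additional cases over their lifetimes, which is how even such a small individual risk can translate into hundreds of deaths at population scale.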

More stringent requirements (even 1 in a million) may not be technologically feasible at a given time, or may be so prohibitively expensive as to render the risk-causing activity unsustainable, resulting in the optimal degree of intervention being a balance between risks vs. benefit. For example, it might well be that the emissions from hospital incinerators result in a certain number of deaths per year. However, this risk must be balanced against the available alternatives. In some unusual cases, there are significant public health risks, as well as economic costs, associated with all options. For example, there are risks associated with no incineration (with the potential risk for spread of infectious diseases) or even no hospitals. Further investigation often identifies more options such as separating noninfectious from infectious wastes, or air pollution controls on a medical incinerator that provide a broad range of options of acceptable risk - though with varying practical implications and varying economic costs.

Intelligent thought about a reasonably full set of options is essential. Thus, it is not unusual for there to be an iterative process between analysis, consideration of options, and follow up analysis. Risk assessment in auditing

In auditing, risk assessment is a very crucial stage before accepting an audit engagement. According to ISA 315, Understanding the Entity and its Environment and Assessing the Risks of Material Misstatement, "the auditor should perform risk assessment procedures to obtain an understanding of the entity and its environment, including its internal control." This provides evidence relating to the auditor's risk assessment of a material misstatement in the client's financial statements. The auditor then obtains initial evidence regarding the classes of transactions at the client and the operating effectiveness of the client's internal controls. In auditing, audit risk includes inherent risk, control risk and detection risk. Risk assessment and human health

There are many resources that provide health risk information. The National Library of Medicine provides risk assessment and regulation information tools for a varied audience.[3] These include TOXNET (databases on hazardous chemicals, environmental health, and toxic releases),[4] the Household Products Database (potential health effects of chemicals in over 10,000 common household products),[5] and TOXMAP (maps of US Environmental Protection Agency Superfund and Toxics Release Inventory data). The United States Environmental Protection Agency provides basic information about environmental risk assessments for the public.[6] Risk assessment in information security Main article: IT risk management#Risk assessment

IT risk assessment can be performed by a qualitative or quantitative approach, following different methodologies. Risk assessment in project management

In project management, risk assessment is an integral part of the risk management plan, studying the probability, the impact, and the effect of every known risk on the project, as well as the corrective action to take should that risk occur.[7] Risk assessment for megaprojects

Megaprojects (sometimes also called "major programs") are extremely large-scale investment projects, typically costing more than US$1 billion per project. Megaprojects include bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defence systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Risk assessment is therefore particularly pertinent for megaprojects and special methods and special education have been developed for such risk assessment.[8][9] Quantitative risk assessment Further information: Quantitative Risk Assessment software

Quantitative risk assessments include a calculation of the single loss expectancy (SLE) of an asset. The single loss expectancy can be defined as the loss of value to an asset based on a single security incident. The team then calculates the Annualized Rate of Occurrence (ARO) of the threat to the asset. The ARO is an estimate based on the data of how often a threat would be successful in exploiting a vulnerability. From this information, the Annualized Loss Expectancy (ALE) can be calculated. The annualized loss expectancy is a calculation of the single loss expectancy multiplied by the annual rate of occurrence, or how much an organization could estimate to lose from an asset based on the risks, threats, and vulnerabilities. It then becomes possible from a financial perspective to justify expenditures to implement countermeasures to protect the asset. A compact example of this calculation follows.
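The following is a minimal, illustrative sketch of that arithmetic; the asset value and rate are invented, and the function name is our own.

```python
def annualized_loss_expectancy(single_loss_expectancy, annualized_rate_of_occurrence):
    """ALE = SLE x ARO, as described above."""
    return single_loss_expectancy * annualized_rate_of_occurrence

# Hypothetical asset: each successful incident costs $500,000 (SLE), and the
# threat is estimated to succeed 0.2 times per year (ARO).
ale = annualized_loss_expectancy(500_000, 0.2)
print(f"ALE = ${ale:,.0f} per year")
# A countermeasure costing less than this per year may be financially justifiable.
```

Risk assessment in software evolution Further information: ACM A Formal Risk Assessment Model for Software Evolution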

Studies have shown that early parts of the system development cycle such as requirements and design specifications are especially prone to error. This effect is particularly notorious in projects involving multiple stakeholders with different points of view. Evolutionary software processes offer an iterative approach to requirement engineering to alleviate the problems of uncertainty, ambiguity and inconsistency inherent in software developments. Criticisms of quantitative risk assessment

Barry Commoner, Brian Wynne and other critics have expressed concerns that risk assessment tends to be overly quantitative and reductive. For example, they argue that risk assessments ignore qualitative differences among risks. Some charge that assessments may drop out important non-quantifiable or inaccessible information, such as variations among the classes of people exposed to hazards.

Furthermore, Commoner and O'Brien claim that quantitative approaches divert attention from precautionary or preventative measures.[10] Others, like Nassim Nicholas Taleb, consider risk managers little more than "blind users" of statistical tools and methods.[11]

Extreme risk From Wikipedia, the free encyclopedia

Extreme risks are risks of very bad outcomes or "high consequence", but of low probability. They include the risks of terrorist attack, biosecurity risks such as the invasion of pests, and extreme natural disasters such as major earthquakes.Contents [hide] 1 Introduction 2 Extreme value theory 3 Black swan theory 4 Bank operational risk 5 References 6 Further reading

[edit] Introduction

The estimation of the probability of extreme events is difficult because of the lack of data: they are events that have not yet happened or have happened only very rarely, so relevant data is scarce. Thus standard statistical methods are generally inapplicable. [edit] Extreme value theory Main article: Extreme value theory

If there is some relevant data, the probability of events at or beyond the range of the data may be estimated by the statistical methods of extreme value theory, developed for such purposes as predicting 100-year floods from a limited range of data of past floods. In such cases a mathematical function may be fitted to the data and extrapolated beyond the range of the data to estimate the probability of extreme events. The results need to be treated with caution because of the possibility that the largest values in the past are unrepresentative, and the possibility that the behavior of the system has changed. [edit] Black swan theory Main article: Black swan theory

In cases where the event of interest is very different from existing experience, there may be no relevant guide in the past data. Nassim Nicholas Taleb argues in his black swan theory that the frequency and impact of totally unexpected events is generally underestimated. With hindsight, they can be explained, but there is no prospect of predicting them. [edit] Bank operational risk Main article: Operational risk

Banks need to evaluate the risk of adverse events other than credit risks and market risks. These risks, called operational risks, include the major events most likely to cause bank failure, such as massive internal fraud. The international compliance regime for banks, Basel II, requires that such risks be quantified using a mixture of statistical theory, such as extreme value theory, and scenario analysis conducted by internal committees of experts. The result is overseen by the bank regulator, such as the Federal Reserve. Negotiations between the parties result in a system that combines quantitative methods with informed and scrutinized expert opinion. This gives the potential to avoid as far as possible the problems caused by the paucity of data and the bias of pure expert opinion.[1]

Similar methods combining quantitative methods with moderated expert opinion have been used to evaluate biosecurity risks such as risks of invasive species that will have massive impacts on a country's economy or ecology.[2] Cost-benefit analysis From Wikipedia, the free encyclopedia

Cost-benefit analysis (CBA), sometimes called benefit-cost analysis (BCA), is an economic decision-making approach, used particularly in government and business. CBA is used in the assessment of whether a proposed project, programme or policy is worth doing, or to choose between several alternative ones. It involves comparing the total expected costs of each option against the total expected benefits, to see whether the benefits outweigh the costs, and by how much.

In CBA, benefits and costs are expressed in money terms, and are adjusted for the time value of money, so that all flows of benefits and flows of project costs over time (which tend to occur at different points in time) are expressed on a common basis in terms of their "present value."

Closely related, but slightly different, formal techniques include cost-effectiveness analysis, cost-utility analysis, economic impact analysis, fiscal impact analysis and Social Return on Investment (SROI) analysis. Contents [hide] 1 Theory 1.1 Valuation 1.2 Time 1.3 Risk and uncertainty 2 Application and history 3 Accuracy problems 4 References 5 Further reading 6 External links

[edit] Theory

Cost-benefit analysis is often used by governments and others, e.g. businesses, to evaluate the desirability of a given intervention. It is an analysis of the cost effectiveness of different alternatives in order to see whether the benefits outweigh the costs (i.e. whether it is worth intervening at all), and by how much (i.e. which intervention to choose). The aim is to gauge the efficiency of the interventions relative to each other and the status quo. [edit] Valuation

The costs of an intervention are usually financial. The overall benefits of a government intervention are often evaluated in terms of the public's willingness to pay for them, minus their willingness to pay to avoid any adverse effects. The guiding principle of evaluating benefits is to list all parties affected by an intervention and place a value, usually monetary, on the (positive or negative) effect it has on their welfare as it would be valued by them. Putting actual values on these is often difficult; surveys or inferences from market behavior are often used.

One source of controversy is placing a monetary value on human life, e.g. when assessing road safety measures or life-saving medicines. However, this can sometimes be avoided by using the related technique of cost-utility analysis, in which benefits are expressed in non-monetary units such as quality-adjusted life years. For example, road safety can be measured in terms of 'cost per life saved', without placing a financial value on the life itself.

Another controversy is the value of the environment, which in the 21st century is sometimes assessed by valuing it as a provider of services to humans, such as water and pollination. Monetary values may also be assigned to other intangible effects such as loss of business reputation, market penetration, or long-term enterprise strategy alignments. [edit] Time

CBA usually tries to put all relevant costs and benefits on a common temporal footing using time value of money formulas. This is often done by converting the future expected streams of costs and benefits into a present value amount using a suitable discount rate. Empirical studies suggest that in reality, people do discount the future like this.
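In the usual discounting formulation (a standard present-value formula, stated here for concreteness rather than quoted from this article), a stream of expected costs or benefits X_t in year t = 0, ..., T is converted to a present value at discount rate r as

\[ \mathrm{PV} \;=\; \sum_{t=0}^{T} \frac{X_t}{(1+r)^{t}} \]

so that costs and benefits occurring in different years can be compared on the same basis.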

There is often no consensus on the appropriate discount rate to use - e.g. whether it should be small (thus putting a similar value on future generations as on ourselves) or larger (e.g. a real interest rate or market rate of return, on the basis that there is a theoretical alternative option of investing the cost in financial markets to get a monetary benefit). The rate chosen usually makes a large difference in the assessment of interventions with long-term effects, such as those affecting climate change, and thus is a source of controversy. One of the issues arising is the equity premium puzzle, that actual long-term financial returns on equities may be rather higher than they should be; if so then arguably these rates of return should not be used to determine a discount rate, as doing so would have the effect of largely ignoring the distant future. [edit] Risk and uncertainty

Risk associated with the outcome of projects is also usually taken into account using probability theory. This can be factored into the discount rate (to have uncertainty increasing over time), but is usually considered separately. Particular consideration is often given to risk aversion - that is, people usually consider a loss to have a larger impact than an equal gain, so a simple expected return may not take into account the detrimental effect of uncertainty.

Uncertainty in the CBA parameters (as opposed to risk of project failure etc.) is often evaluated using a sensitivity analysis, which shows how the results are affected by changes in the parameters.
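
A one-way sensitivity analysis can be sketched as follows (Python, hypothetical cash flows): the appraisal result is recomputed while a single parameter, here the discount rate, is varied over a plausible range.

# One-way sensitivity analysis: recompute the result while varying one parameter.
def npv(cash_flows, r):
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

flows = [-1000, 300, 300, 300, 300, 300]    # hypothetical project
for r in (0.01, 0.03, 0.05, 0.07, 0.10):
    print(f"discount rate {r:.0%}: NPV = {npv(flows, r):8.2f}")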

Application and history

The practice of cost-benefit analysis differs between countries and between sectors (e.g., transport, health) within countries. Some of the main differences include the types of impacts that are included as costs and benefits within appraisals, the extent to which impacts are expressed in monetary terms, and differences in the discount rate between countries. Agencies across the world rely on a basic set of key cost-benefit indicators, including the following: NPV (net present value); PVB (present value of benefits); PVC (present value of costs); BCR (benefit-cost ratio = PVB / PVC); net benefit (= PVB - PVC); and NPV/k (where k is the level of funds available).
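
These indicators can be computed directly from the discounted benefit and cost streams. The short Python sketch below uses hypothetical figures purely to show how the indicators relate to one another:

# Key indicators from discounted benefit and cost streams (hypothetical figures).
def pv(flows, r):
    return sum(f / (1 + r) ** t for t, f in enumerate(flows))

benefits = [0, 400, 400, 400, 400]
costs    = [1000, 100, 100, 100, 100]
r, k = 0.05, 1000   # discount rate and level of funds available

PVB = pv(benefits, r)
PVC = pv(costs, r)
NPV = PVB - PVC     # equal to the net benefit
BCR = PVB / PVC
print(f"PVB={PVB:.0f} PVC={PVC:.0f} NPV={NPV:.0f} BCR={BCR:.2f} NPV/k={NPV / k:.2f}")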

The concept of CBA dates back to an 1848 article by Jules Dupuit and was formalized in subsequent works by Alfred Marshall. The practical application of CBA was initiated in the US by the Corps of Engineers, after the Federal Navigation Act of 1936 effectively required cost-benefit analysis for proposed federal waterway infrastructure.[1] The Flood Control Act of 1939 was instrumental in establishing CBA as federal policy. It specified the standard that "the benefits to whomever they accrue [be] in excess of the estimated costs."[2]

Subsequently, cost-benefit techniques were applied to the development of highway and motorway investments in the US and UK in the 1950s and 1960s. An early, often-quoted and more fully developed application of the technique was the appraisal of London Underground's Victoria Line. Over the last 40 years, cost-benefit techniques have gradually developed to the extent that substantial guidance now exists on how transport projects should be appraised in many countries around the world.

In the UK, the New Approach to Appraisal (NATA) was introduced by the then Department of the Environment, Transport and the Regions. This brought together cost-benefit results with those from detailed environmental impact assessments and presented them in a balanced way. NATA was first applied to national road schemes in the 1998 Roads Review but was subsequently rolled out to all modes of transport. It is now a cornerstone of transport appraisal in the UK and is maintained and developed by the Department for Transport.[11]

The EU's 'Developing Harmonised European Approaches for Transport Costing and Project Assessment' (HEATCO) project, part of its Sixth Framework Programme, has reviewed transport appraisal guidance across EU member states and found that significant differences exist between countries. HEATCO's aim is to develop guidelines to harmonise transport appraisal practice across the EU.[12][13] [3]

Transport Canada has also promoted the use of CBA for major transport investments since the issuance of its Guidebook in 1994.[4]

More recent guidance has been provided by the United States Department of Transportation and several state transportation departments, with discussion of available software tools for application of CBA in transportation, including HERS, BCA.Net, StatBenCost, Cal-BC, and TREDIS. Available guides are provided by the Federal Highway Administration,[5][6] Federal Aviation Administration,[7] Minnesota Department of Transportation,[8] California Department of Transportation (Caltrans),[9] and the Transportation Research Board Transportation Economics Committee.[10]

In the early 1960s, CBA was also extended to assessment of the relative benefits and costs of healthcare and education in works by Burton Weisbrod.[11][12] Later, the United States Department of Health and Human Services issued its CBA Guidebook.[13][14]

Accuracy problems

The accuracy of the outcome of a cost-benefit analysis depends on how accurately costs and benefits have been estimated.

A peer-reviewed study [14] of the accuracy of cost estimates in transportation infrastructure planning found that for rail projects actual costs turned out to be on average 44.7 percent higher than estimated costs, and for roads 20.4 percent higher (Flyvbjerg, Holm, and Buhl, 2002). For benefits, another peer-reviewed study [15] found that actual rail ridership was on average 51.4 percent lower than estimated ridership; for roads it was found that for half of all projects estimated traffic was wrong by more than 20 percent (Flyvbjerg, Holm, and Buhl, 2005). Comparative studies indicate that similar inaccuracies apply to fields other than transportation. These studies indicate that the outcomes of cost-benefit analyses should be treated with caution because they may be highly inaccurate. Inaccurate cost-benefit analyses are likely to lead to inefficient decisions, as defined by Pareto and Kaldor-Hicks efficiency. These outcomes (almost always tending to underestimation unless significant new approaches are overlooked) are to be expected because such estimates: rely heavily on past like projects (often differing markedly in function or size and certainly in the skill levels of the team members); rely heavily on the project's members to identify (remember from their collective past experiences) the significant cost drivers; rely on very crude heuristics to estimate the money cost of the intangible elements; and are unable to completely dispel the usually unconscious biases of the team members (who often have a vested interest in a decision to go ahead) and the natural psychological tendency to "think positive" (whatever that involves).

Reference class forecasting was developed to increase accuracy in estimates of costs and benefits.[15]

The discipline of Benefits Realisation Management seeks both to increase the rigour of benefit estimation and to manage benefits throughout a project or programme, so that incorrect predictions of benefit are identified early and corrective action can be taken swiftly.

Another challenge to cost-benefit analysis comes from determining which costs should be included in an analysis (the significant cost drivers). This is often controversial because organizations or interest groups may think that some costs should be included in or excluded from a study.

In the case of the Ford Pinto (where, because of design flaws, the Pinto was liable to burst into flames in a rear-impact collision), the Ford company's decision was not to issue a recall. Ford's cost-benefit analysis had estimated that, based on the number of cars in use and the probable accident rate, deaths due to the design flaw would cost it about $49.5 million (the amount Ford would pay out of court to settle wrongful death lawsuits). This was estimated to be less than the cost of issuing a recall ($137.5 million) [16]. In the event, Ford overlooked (or considered insignificant) the costs of the negative publicity so engendered, which turned out to be quite significant (because it led to the recall anyway and to measurable losses in sales).
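
Using only the figures quoted above, the implied decision rule reduces to a one-line comparison (Python sketch; the publicity cost that turned out to be decisive is deliberately excluded, which is precisely the weakness the example illustrates):

# Decision rule implied by the quoted figures; negative publicity is omitted.
expected_settlements = 49_500_000   # estimated payouts for deaths due to the flaw
recall_cost          = 137_500_000  # estimated cost of issuing a recall
print("recall" if recall_cost < expected_settlements else "no recall")  # -> "no recall"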

In the field of health economics, some analysts think cost-benefit analysis can be an inadequate measure because willingness-to-pay methods of determining the value of human life can be subject to bias according to income inequity. They support use of variants such as cost-utility analysis and quality-adjusted life years to analyze the effects of health policies.

In the case of environmental and occupational health regulation, it has been argued that if modern cost-benefit analyses had been applied prospectively to proposed regulations such as removing lead from gasoline, not turning the Grand Canyon into a hydroelectric dam, and regulating workers' exposure to vinyl chloride, these regulations would not have been implemented even though they are considered to be highly successful in retrospect.[16] The Clean Air Act has been cited in retrospective studies as a case where benefits exceeded costs, but the knowledge of the benefits (attributable largely to the benefits of reducing particulate pollution) was not available until many years later.[16]

Cost-benefit analysis is a measurement of the relative costs and benefits associated with a particular project or task.

Demand chain
From Wikipedia, the free encyclopedia

The Demand chain is that part of the value chain which drives demand.

Concept

Analysing the firm's activities as a linked chain is a tried and tested way of revealing value creation opportunities. The business economist Michael Porter of Harvard Business School pioneered this value chain approach: "the value chain disaggregates the firm into its strategically relevant activities in order to understand the costs and existing potential sources of differentiation" [1]. It is the micro mechanism at the level of the firm that equalizes supply and demand at the macro market level.

Early applications in distribution, manufacturing and purchasing collectively gave rise to a subject known as the supply chain [2]. Old supply chains have been transformed into faster, cheaper and more reliable modern supply chains as a result of investment in information technology, cost analysis and process analysis.

Marketing, sales and service are the other half of the value-chain, which collectively drive and sustain demand, and are known as the Demand Chain. Progress in transforming the demand side of business is behind the supply side, but there is growing interest today in transforming demand chains.

Demand chain challenges

At present, there appear to be four main challenges to progress in transforming Demand Chains and making them faster, leaner and better: linking demand and supply chains; demand chain information systems; demand chain process re-engineering; and demand chain resource distribution and optimisation.

Linking demand and supply chains

The challenge of linking demand and supply chains has occupied many supply chain specialists in recent years; and concepts such as "demand-driven supply chains", customer-driven supply chains and sales and operations planning have attracted attention and become the subject of conferences and seminars.[3][4]

The core problem from the supply chain perspective is getting good demand plans and forecasts from the people driving demand: marketing, sales promotions, new product developments etc. The aim is to minimise out-of-stock (OOS) situations and the excessive cost of supply due to spiky demand. Much attention has been drawn to the bullwhip effect. This occurs when demand patterns are extremely volatile, usually as a result of sales promotions, and it has the unintended consequence of driving up supply chain costs and creating service issues, because supply capacity is unable to meet the spiky demand pattern and the entire chain becomes unstable as a consequence [5].
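
The amplification at the heart of the bullwhip effect can be caricatured in a few lines of Python. This is a deliberately simplified toy, not a model of any real supply chain: each stage simply orders its observed demand plus a proportional safety buffer, and the swing in order quantities grows at every step upstream. All numbers are hypothetical.

import random

# Toy illustration of order-variability amplification up a supply chain.
random.seed(1)
consumer_demand = [100 + random.randint(-10, 10) for _ in range(20)]

def upstream_orders(observed_demand, safety=0.2):
    # each stage covers what it observed plus a proportional safety buffer
    return [round(d * (1 + safety)) for d in observed_demand]

retailer = upstream_orders(consumer_demand)
wholesaler = upstream_orders(retailer)
factory = upstream_orders(wholesaler)

def swing(orders):
    return max(orders) - min(orders)

# the spread of order quantities grows at each stage upstream
print(swing(consumer_demand), swing(retailer), swing(wholesaler), swing(factory))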

While the aim of linking the chains is clearly sensible, most of the people involved so far come from the supply side, and there has been a noticeable lack of input to these debates from Marketing and Sales specialists. Progress in modernising marketing and sales processes and information systems has also been slow; these systems and processes have been an obstacle to linking the two chains.

Demand chain information systems

Information about activities and costs is an essential resource for improving value chain performance. Such information is nowadays readily available for the supply chain, due to the widespread implementation of ERP technology (systems such as SAP), and these systems have been instrumental in the transformation of supply chain performance.

Demand chain IT development has focused on database marketing and CRM systems [6]. Demand-driving activities and associated costs are still recorded in an inconsistent manner, mostly on spreadsheets, and even then the quality of the information tends to be incomplete and inaccurate [7][8].

Recently, however, Marketing Resource Management systems have become available to plan, track and measure activities and costs as an embedded part of marketing workflows. "MRM is a set of processes and capabilities that aim to enhance your ability to orchestrate and optimize the use of internal and external marketing resources...The desire to deal with increased marketing complexity, along with a mandate to do more with less, are the primary drivers behind the growth of MRM" [9]

Implementation of MRM systems often reveals process issues that must be tackled, as Gartner has observed:

"All too often, large enterprises lack documented or standardized marketing processes resulting in misalignments, inconsistencies and wasted effort. Marketing personnel frequently rotate job responsibilities. Along with thwarting progress toward best practices and processes, this disarray contributes to a loss of corporate memory and key lessons learned. The elongated learning curve affects new or transferred employees as they struggle to find information or have to relearn what the organization, in effect, already "knows." [10] [edit] Demand chain process improvement

Processes in the demand chain are often less well-organised and disciplined than their supply side equivalents. This arises partly from the absence of an agreed framework for analysing the demand chain process.

Professors Philip Kotler and Robert Shaw have recently proposed such a framework [11]. Describing it as the "Idea to Demand Chain", they say: "The I2D process can be pictured as shown in Exhibit 1; it is the mirror image of the supply chain, and contains all the activities that result in demand being stimulated. Yet unlike the supply chain, which has successfully delivered economies of scale through process simplification and process control, marketing's demand chain is primitive and inefficient. In many firms it is fragmented, obscured by departmental boundaries, invisible and unmanaged."

Demand chain budget segmentation, targeting and optimization

Demand chain budgets for marketing, sales and service expenditure are substantial. Maximising their impact on shareholder value has become an important financial goal for decision makers. Developing a shared language across marketing and finance is one of the challenges to achieving this goal.[12]

Segmentation is the initial thing to decide. From a strategic finance perspective "segments are responsibility centers for which a separate measure of revenues and costs is obtained" [13]. From a marketing perspective "segmentation is the act of dividing the market into distinct groups of buyers who might require separate products and/or marketing mixes" [14]. An important challenge for decision makers is how to align these two marketing and finance perspectives on segmentation.

Targeting of the budget is the final thing to decide. From the marketing perspective the challenge is how "to optimally allocate a given marketing budget to various target markets" [15]. From a finance perspective the problem is one of resource and budget allocation "determining the right quantity of resources to implement the value maximising strategy" [16].

Optimization provides the technical basis for targeting decisions. Whilst mathematical optimization theory has been in existence since the 1950s, its application to marketing only began in the 1970s [17], and lack of data and computer power were limiting factors until the 1990s.
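
A minimal sketch of what such optimization can look like is given below (Python). It assumes, purely for illustration, that each target segment has a diminishing-returns (square-root) response to spend; the segment names, coefficients and budget are hypothetical, and real applications would estimate response curves from data.

import math

# Greedy allocation of a fixed budget across segments with concave response curves.
segments = {"segment_a": 9.0, "segment_b": 6.0, "segment_c": 3.0}  # response coefficients
budget, step = 100.0, 1.0
spend = {s: 0.0 for s in segments}

def marginal_return(coeff, x, step):
    # extra response from spending one more increment in a segment
    return coeff * (math.sqrt(x + step) - math.sqrt(x))

remaining = budget
while remaining >= step:
    # put the next increment of spend where it buys the largest response gain
    best = max(segments, key=lambda s: marginal_return(segments[s], spend[s], step))
    spend[best] += step
    remaining -= step

print(spend)  # most spend goes to the most responsive segment, but not all of it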

Since 2000, applying maths to budget segmentation, targeting and optimization has become more commonplace. In the UK the IPA Awards have documented over 1,000 cases of modelling over 15 years as part of their award process. The judging criteria are rigorous and not a matter of taste or fashion; entrants must prove beyond all reasonable doubt that the marketing is profitable [18]. Such modelling enables marketing to be brought centre stage in four important ways [19]:

First, it translates the language of marketing and sales into the language of the boardroom. Finance and profits are the preferred language of the modern executive suite. Marketing and sales strategies have to be justified in terms of their ability to increase the financial value of the business. It provides a bridge between marketing and the other functions.

Second, it strengthens demand chain accountability. In Marketing Departments awareness, preference and satisfaction are often tracked as alternative objectives to shareholder value. In Sales Departments, sales promotion spending is often used to boost volumes, even when the result is unprofitable [20]. Optimization modelling can assess these practices and support more rigorous accountability methods.

Third, it provides a counter-argument to the arbitrary cutting of demand-chain budgets. Return on marketing investment models can help demonstrate where the financial impact of demand-driving activities is positive and where it is negative, and so help support fact-based budgeting.

Finally, demand-chain profitability modelling encourages a strategic debate. Because long-term cashflow and NPV calculations can show the shareholder value effect of marketing, sales and service, strong arguments can be made for putting the Demand Chain on an equal footing to the Supply Chain.

Marketing strategy
From Wikipedia, the free encyclopedia

Marketing strategy is a process that can allow an organization to concentrate its limited resources on the greatest opportunities to increase sales and achieve a sustainable competitive advantage.[1]

Developing a marketing strategy

Marketing strategies serve as the fundamental underpinning of marketing plans designed to fill market needs and reach marketing objectives.[2] Plans and objectives are generally tested for measurable results. Commonly, marketing strategies are developed as multi-year plans, with a tactical plan detailing specific actions to be accomplished in the current year. Time horizons covered by the marketing plan vary by company, by industry, and by nation; however, time horizons are becoming shorter as the speed of change in the environment increases.[3] Marketing strategies are dynamic and interactive. They are partially planned and partially unplanned. See strategy dynamics.

Marketing strategy involves careful scanning of the internal and external environments which are summarized in a SWOT analysis.[4] Internal environmental factors include the marketing mix, plus performance analysis and strategic constraints.[5] External environmental factors include customer analysis, competitor analysis, target market analysis, as well as evaluation of any elements of the technological, economic, cultural or political/legal environment likely to impact success.[3][6] A key component of marketing strategy is often to keep marketing in line with a company's overarching mission statement.[7] Besides SWOT analysis, portfolio analyses such as the GE/McKinsey matrix [8] or COPE analysis[9] can be performed to determine the strategic focus.

Once a thorough environmental scan is complete, a strategic plan can be constructed to identify business alternatives, establish challenging goals, determine the optimal marketing mix to attain these goals, and detail implementation.[3] A final step in developing a marketing strategy is to create a plan to monitor progress and a set of contingencies if problems arise in the implementation of the plan.

Types of strategies

Marketing strategies may differ depending on the unique situation of the individual business. However, there are a number of ways of categorizing some generic strategies. A brief description of the most common categorizing schemes is presented below: Strategies based on market dominance - In this scheme, firms are classified based on their market share or dominance of an industry. Typically there are four types of market dominance strategies: Leader, Challenger, Follower and Nicher.

Porter generic strategies - strategy on the dimensions of strategic scope and strategic strength. Strategic scope refers to the market penetration while strategic strength refers to the firm's sustainable competitive advantage. The generic strategy framework (Porter 1984) comprises two alternatives, differentiation and low-cost leadership, each with a dimension of focus (broad or narrow). These give product differentiation (broad), cost leadership (broad) and market segmentation (narrow). Innovation strategies - This deals with the firm's rate of new product development and business model innovation. It asks whether the company is on the cutting edge of technology and business innovation. There are three types: pioneers, close followers and late followers. Growth strategies - In this scheme we ask the question, "How should the firm grow?". There are a number of different ways of answering that question, but the most common gives four answers: horizontal integration, vertical integration, diversification and intensification.

A more detailed scheme uses the categories[10]: Prospector, Analyzer, Defender and Reactor. Marketing warfare strategies - This scheme draws parallels between marketing strategies and military strategies.

Strategic models

Marketing participants often employ strategic models and tools to analyze marketing decisions. When beginning a strategic analysis, the 3Cs can be employed to get a broad understanding of the strategic environment. An Ansoff Matrix is also often used to convey an organization's strategic positioning of their marketing mix. The 4Ps can then be utilized to form a marketing plan to pursue a defined strategy.

There are many companies, especially those in the Consumer Packaged Goods (CPG) market, that adopt the theory of running their business centered around Consumer, Shopper and Retailer needs. Their Marketing departments spend quality time looking for "Growth Opportunities" in their categories by identifying relevant insights (both mindsets and behaviors) on their target Consumers, Shoppers and retail partners. These Growth Opportunities emerge from changes in market trends, changing segment dynamics and also internal brand or operational business challenges. The Marketing team can then prioritize these Growth Opportunities and begin to develop strategies to exploit the opportunities, which could include new or adapted products and services as well as changes to the 7Ps.

Real-life marketing

Real-life marketing primarily revolves around the application of a great deal of common sense: dealing with a limited number of factors, in an environment of imperfect information and limited resources, complicated by uncertainty and tight timescales. Use of classical marketing techniques, in these circumstances, is inevitably partial and uneven.

Thus, for example, many new products will emerge from irrational processes and the rational development process may be used (if at all) to screen out the worst non-runners. The design of the advertising, and the packaging, will be the output of the creative minds employed; which management will then screen, often by 'gut-reaction', to ensure that it is reasonable.

For most of their time, marketing managers use intuition and experience to analyze and handle the complex, and unique, situations being faced; without easy reference to theory. This will often be 'flying by the seat of the pants', or 'gut-reaction'; where the overall strategy, coupled with the knowledge of the customer which has been absorbed almost by a process of osmosis, will determine the quality of the marketing employed. This, almost instinctive management, is what is sometimes called 'coarse marketing'; to distinguish it from the refined, aesthetically pleasing, form favored by the theorists.

Pricing strategies
From Wikipedia, the free encyclopedia

Pricing strategies for products or services include the following:

Competition-based pricing

Setting the price based upon the prices of similar competitor products.

Competitive pricing is based on three types of competitive product. Products that have lasting distinctiveness from a competitor's product: here we can assume that the product has low price elasticity, the product has low cross elasticity, and the demand for the product will rise. Products that have perishable distinctiveness from a competitor's product, assuming the product features are of medium distinctiveness. Products that have little distinctiveness from a competitor's product, assuming that the product has high price elasticity, the product has some cross elasticity, and there is no expectation that demand for the product will rise.

Cost-plus pricing
Main article: cost-plus pricing

Cost-plus pricing is the simplest pricing method. The firm calculates the cost of producing the product and adds on a percentage (profit) to that cost to give the selling price. This method, although simple, has two flaws: it takes no account of demand, and there is no way of determining whether potential customers will purchase the product at the calculated price.

This appears in two forms. Full cost pricing takes into consideration both variable and fixed costs and adds a percentage markup. Direct cost pricing is variable costs plus a percentage markup; the latter is only used in periods of high competition, as this method usually leads to a loss in the long run.
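
As a minimal sketch (Python, hypothetical figures), the two variants differ only in whether fixed costs are spread over the expected volume before the markup is applied:

# Cost-plus pricing: full-cost vs direct-cost variants (hypothetical figures).
variable_cost_per_unit = 12.00
fixed_costs            = 40_000.0
expected_volume        = 10_000
markup                 = 0.25   # 25% markup

full_cost_price   = (variable_cost_per_unit + fixed_costs / expected_volume) * (1 + markup)
direct_cost_price = variable_cost_per_unit * (1 + markup)

print(round(full_cost_price, 2), round(direct_cost_price, 2))  # 20.0 and 15.0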

Creaming or skimming

Selling a product at a high price, sacrificing high sales volume to gain a high profit margin, therefore skimming the market. This is usually employed to reimburse the cost of investment of the original research into the product: it is commonly used in electronic markets when a new range, such as DVD players, is first dispatched into the market at a high price. This strategy is often used to target "early adopters" of a product or service. These early adopters are relatively less price-sensitive because either their need for the product is greater than others' or they understand the value of the product better than others. In market skimming, goods are sold at higher prices so that fewer sales are needed to break even.

This strategy is employed only for a limited duration to recover most of the investment made to build the product. To gain further market share, a seller must use other pricing tactics such as economy or penetration. This method can have some setbacks, as it could leave the product at a high price relative to competitors.[1]

Limit pricing
Main article: Limit price

A limit price is the price set by a monopolist to discourage economic entry into a market, and is illegal in many countries. The limit price is the price that the entrant would face upon entering as long as the incumbent firm did not decrease output. The limit price is often lower than the average cost of production or just low enough to make entering not profitable. The quantity produced by the incumbent firm to act as a deterrent to entry is usually larger than would be optimal for a monopolist, but might still produce higher economic profits than would be earned under perfect competition.

The problem with limit pricing as strategic behavior is that once the entrant has entered the market, the quantity used as a threat to deter entry is no longer the incumbent firm's best response. This means that for limit pricing to be an effective deterrent to entry, the threat must in some way be made credible. A way to achieve this is for the incumbent firm to constrain itself to produce a certain quantity whether entry occurs or not. An example of this would be if the firm signed a union contract to employ a certain (high) level of labor for a long period of time.

Loss leader
Main article: loss leader

A loss leader or leader is a product sold at a low price (at cost or below cost) to stimulate other profitable sales.

Market-oriented pricing

Setting a price based upon analysis and research compiled from the targeted market.

Penetration pricing
Main article: penetration pricing

Setting the price low in order to attract customers and gain market share. The price will be raised later once this market share is gained.[2]

Price discrimination
Main article: price discrimination

Setting a different price for the same product in different segments of the market. For example, this can be for different ages or for different opening times, such as cinema tickets.

Premium pricing
Main article: Premium pricing

Premium pricing is the practice of keeping the price of a product or service artificially high in order to encourage favorable perceptions among buyers, based solely on the price. The practice is intended to exploit the (not necessarily justifiable) tendency for buyers to assume that expensive items enjoy an exceptional reputation or represent exceptional quality and distinction.

Predatory pricing
Main article: predatory pricing

Aggressive pricing intended to drive out competitors from a market. It is illegal in some places.

Contribution margin-based pricing
Main article: contribution margin-based pricing

Contribution margin-based pricing maximizes the profit derived from an individual product, based on the difference between the product's price and variable costs (the product's contribution margin per unit), and on one's assumptions regarding the relationship between the product's price and the number of units that can be sold at that price. The product's contribution to total firm profit (i.e., to operating income) is maximized when a price is chosen that maximizes (contribution margin per unit) x (number of units sold).
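
A minimal sketch of that maximization is given below (Python). The linear demand curve and all figures are hypothetical; in practice the price-volume relationship would come from market research or experimentation.

# Choose the price maximizing (contribution margin per unit) x (units sold),
# under an assumed linear demand curve: units = 1000 - 40 * price.
variable_cost = 10.0

def units_sold(price):
    return max(0.0, 1000 - 40 * price)

def total_contribution(price):
    return (price - variable_cost) * units_sold(price)

candidate_prices = [p / 2 for p in range(20, 51)]   # $10.00 to $25.00 in $0.50 steps
best = max(candidate_prices, key=total_contribution)
print(best, total_contribution(best))               # 17.5 maximizes contribution here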

Psychological pricing
Main article: psychological pricing

Pricing designed to have a positive psychological impact. For example, selling a product at $3.95 or $3.99, rather than $4.00.

Dynamic pricing
Main article: dynamic pricing

A flexible pricing mechanism made possible by advances in information technology, and employed mostly by Internet-based companies. By responding to market fluctuations or large amounts of data gathered from customers - ranging from where they live to what they buy to how much they have spent on past purchases - dynamic pricing allows online companies to adjust the prices of identical goods to correspond to a customer's willingness to pay. The airline industry is often cited as a dynamic pricing success story. In fact, it employs the technique so artfully that most of the passengers on any given airplane have paid different ticket prices for the same flight.

Price leadership
Main article: price leadership

An observation made of oligopolistic business behavior in which one company, usually the dominant competitor among several, leads the way in determining prices, the others soon following.

Target pricing

Pricing method whereby the selling price of a product is calculated to produce a particular rate of return on investment for a specific volume of production. The target pricing method is used most often by public utilities, like electric and gas companies, and companies whose capital investment is high, like automobile manufacturers.

Target pricing is not useful for companies whose capital investment is low because, according to this formula, the selling price will be understated. Also the target pricing method is not keyed to the demand for the product, and if the entire volume is not sold, a company might sustain an overall budgetary loss on the product.
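
The formula behind target pricing can be sketched in a few lines (Python, hypothetical figures): the selling price is the full unit cost plus the desired return on invested capital spread over the planned production volume.

# Target pricing: price set to yield a specified return on investment at a
# specific planned volume. All figures are hypothetical.
invested_capital   = 5_000_000.0
target_return_rate = 0.10          # desired 10% return on investment
unit_cost          = 40.0          # full cost per unit at the planned volume
planned_volume     = 100_000

target_price = unit_cost + (target_return_rate * invested_capital) / planned_volume
print(target_price)  # 45.0 -- if the full volume is not sold, the return falls short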

Absorption pricing

Method of pricing in which all costs are recovered. The price of the product includes the variable cost of each item plus a proportionate amount of the fixed costs. It is a form of cost-plus pricing.

High-low pricing

Method of pricing for an organization where the goods or services offered by the organization are regularly priced higher than competitors', but through promotions, advertisements and/or coupons, lower prices are offered on key items. The lower promotional prices are designed to bring customers to the organization, where the customer is offered the promotional product as well as the regular, higher-priced products.[3]

Premium decoy pricing

Method of pricing where an organization artificially sets one product's price high in order to boost sales of a lower-priced product.

Marginal-cost pricing

In business, the practice of setting the price of a product to equal the extra cost of producing an extra unit of output. By this policy, a producer charges, for each product unit sold, only the addition to total cost resulting from materials and direct labor. Businesses often set prices close to marginal cost during periods of poor sales. If, for example, an item has a marginal cost of $1.00 and its normal selling price is $2.00, the firm selling the item might wish to lower the price to $1.10 if demand has waned. The business would choose this approach because the incremental profit of 10 cents from the transaction is better than no sale at all.

Value Based pricing

Main article: Value-based pricing

Pricing a product based on the perceived value and not on any other factor. Pricing based on the demand for a specific product is therefore likely to change as the marketplace changes.

Nine Laws of Price Sensitivity & Consumer Psychology

In their book, The Strategy and Tactics of Pricing, Thomas Nagle and Reed Holden outline nine laws or factors that influence how a consumer perceives a given price and how price-sensitive he or she is likely to be with respect to different purchase decisions: [4][5]
Reference Price Effect - Buyers' price sensitivity for a given product increases the higher the product's price relative to perceived alternatives. Perceived alternatives can vary by buyer segment, by occasion, and other factors.
Difficult Comparison Effect - Buyers are less sensitive to the price of a known / more reputable product when they have difficulty comparing it to potential alternatives.
Switching Costs Effect - The higher the product-specific investment a buyer must make to switch suppliers, the less price sensitive that buyer is when choosing between alternatives.
Price-Quality Effect - Buyers are less sensitive to price the more that higher prices signal higher quality. Products for which this effect is particularly relevant include: image products, exclusive products, and products with minimal cues for quality.
Expenditure Effect - Buyers are more price sensitive when the expense accounts for a large percentage of the buyer's available income or budget.
End-Benefit Effect - The effect refers to the relationship a given purchase has to a larger overall benefit, and is divided into two parts. Derived demand: the more sensitive buyers are to the price of the end benefit, the more sensitive they will be to the prices of those products that contribute to that benefit. Price proportion cost: the price proportion cost refers to the percent of the total cost of the end benefit accounted for by a given component that helps to produce the end benefit (e.g., think CPU and PCs). The smaller the given component's share of the total cost of the end benefit, the less sensitive buyers will be to the component's price.
Shared-cost Effect - The smaller the portion of the purchase price buyers must pay for themselves, the less price sensitive they will be.
Fairness Effect - Buyers are more sensitive to the price of a product when the price is outside the range they perceive as fair or reasonable given the purchase context.

The Framing Effect - Buyers are more price sensitive when they perceive the price as a loss rather than a forgone gain, and they have greater price sensitivity when the price is paid separately rather than as part of a bundle.

Porter generic strategies
From Wikipedia, the free encyclopedia

Michael Porter has described a category scheme consisting of three general types of strategies that are commonly used by businesses to achieve and maintain competitive advantage. These three generic strategies are defined along two dimensions: strategic scope and strategic strength. Strategic scope is a demand-side dimension (Michael E. Porter was originally an engineer, then an economist before he specialized in strategy) and looks at the size and composition of the market you intend to target. Strategic strength is a supply-side dimension and looks at the strength or core competency of the firm. In particular he identified two competencies that he felt were most important: product differentiation and product cost (efficiency).

He originally ranked each of the three dimensions (level of differentiation, relative product cost, and scope of target market) as either low, medium, or high, and juxtaposed them in a three dimensional matrix. That is, the category scheme was displayed as a 3 by 3 by 3 cube. But most of the 27 combinations were not viable.

Porter's Generic Strategies

In his 1980 classic Competitive Strategy: Techniques for Analysing Industries and Competitors, Porter simplifies the scheme by reducing it down to the three best strategies. They are cost leadership, differentiation, and market segmentation (or focus). Market segmentation is narrow in scope while both cost leadership and differentiation are relatively broad in market scope.

Empirical research on the profit impact of marketing strategy indicated that firms with a high market share were often quite profitable, but so were many firms with low market share. The least profitable firms were those with moderate market share. This was sometimes referred to as the hole in the middle problem. Porter's explanation of this is that firms with high market share were successful because they pursued a cost leadership strategy and firms with low market share were successful because they used market segmentation to focus on a small but profitable market niche. Firms in the middle were less profitable because they did not have a viable generic strategy.

Porter suggested that combining multiple strategies is successful in only one case. Combining a market segmentation strategy with a product differentiation strategy was seen as an effective way of matching a firm's product strategy (supply side) to the characteristics of its target market segments (demand side). But combinations like cost leadership with product differentiation were seen as hard (but not impossible) to implement due to the potential for conflict between cost minimization and the additional cost of value-added differentiation.

Since that time, empirical research has indicated companies pursuing both differentiation and low-cost strategies may be more successful than companies pursuing only one strategy.[1]

Some commentators have made a distinction between cost leadership, that is, low cost strategies, and best cost strategies. They claim that a low cost strategy is rarely able to provide a sustainable competitive advantage. In most cases firms end up in price wars. Instead, they claim a best cost strategy is preferred. This involves providing the best value for a relatively low price.

Cost Leadership Strategy

This strategy involves the firm winning market share by appealing to cost-conscious or price-sensitive customers. This is achieved by having the lowest prices in the target market segment, or at least the lowest price to value ratio (price compared to what customers receive). To succeed at offering the lowest price while still achieving profitability and a high return on investment, the firm must be able to operate at a lower cost than its rivals. There are three main ways to achieve this.

The first approach is achieving a high asset turnover. In service industries, this may mean for example a restaurant that turns tables around very quickly, or an airline that turns around flights very fast. In manufacturing, it will involve production of high volumes of output. These approaches mean fixed costs are spread over a larger number of units of the product or service, resulting in a lower unit cost, i.e. the firm hopes to take advantage of economies of scale and experience curve effects. For industrial firms, mass production becomes both a strategy and an end in itself. Higher levels of output both require and result in high market share, and create an entry barrier to potential competitors, who may be unable to achieve the scale necessary to match the firm's low costs and prices.
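
The arithmetic of spreading fixed costs can be made concrete with a short sketch (Python, hypothetical figures): as volume rises, the fixed-cost share of each unit shrinks and total unit cost falls toward the variable cost.

# How spreading fixed costs over more units lowers unit cost (hypothetical figures).
fixed_costs   = 1_000_000.0
variable_cost = 5.0
for volume in (50_000, 100_000, 500_000):
    unit_cost = variable_cost + fixed_costs / volume
    print(f"{volume:>7} units -> unit cost {unit_cost:.2f}")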

The second dimension is achieving low direct and indirect operating costs. This is achieved by offering high volumes of standardized products, offering basic no-frills products and limiting customization and personalization of service. Production costs are kept low by using fewer components, using standard components, and limiting the number of models produced to ensure larger production runs. Overheads are kept low by paying low wages, locating premises in low rent areas, establishing a cost-conscious culture, etc. Maintaining this strategy requires a continuous search for cost reductions in all aspects of the business. This will include outsourcing, controlling production costs, increasing asset capacity utilization, and minimizing other costs including distribution, R&D and advertising. The associated distribution strategy is to obtain the most extensive distribution possible. Promotional strategy often involves trying to make a virtue out of low cost product features.

The third dimension is control over the supply/procurement chain to ensure low costs. This could be achieved by bulk buying to enjoy quantity discounts, squeezing suppliers on price, instituting competitive bidding for contracts, working with vendors to keep inventories low using methods such as Just-in-Time purchasing or Vendor-Managed Inventory. Wal-Mart is famous for squeezing its suppliers to ensure low prices for its goods. Dell Computer initially achieved market share by keeping inventories low and only building computers to order. Other procurement advantages could come from preferential access to raw materials, or backward integration.

Some writers posit that cost leadership strategies are only viable for large firms with the opportunity to enjoy economies of scale and large production volumes. However, this takes a limited industrial view of strategy. Small businesses can also be cost leaders if they enjoy any advantages conducive to low costs. For example, a local restaurant in a low rent location can attract price-sensitive customers if it offers a limited menu, rapid table turnover and employs staff on minimum wage. Innovation of products or processes may also enable a startup or small company to offer a cheaper product or service where incumbents' costs and prices have become too high. An example is the success of low-cost budget airlines who, despite having fewer planes than the major airlines, were able to achieve market share growth by offering cheap, no-frills services at prices much cheaper than those of the larger incumbents.

A cost leadership strategy may have the disadvantage of lower customer loyalty, as price-sensitive customers will switch once a lower-priced substitute is available. A reputation as a cost leader may also result in a reputation for low quality, which may make it difficult for a firm to rebrand itself or its products if it chooses to shift to a differentiation strategy in future.

Differentiation Strategy

A differentiation strategy involves differentiating the products in some way in order to compete successfully. Examples of the successful use of a differentiation strategy are Hero Honda, Asian Paints, HLL, Nike athletic shoes, Perstorp BioProducts, Apple Computer, and Mercedes-Benz automobiles.

A differentiation strategy is appropriate where the target customer segment is not price-sensitive, the market is competitive or saturated, customers have very specific needs which are possibly under-served, and the firm has unique resources and capabilities which enable it to satisfy these needs in ways that are difficult to copy. These could include patents or other Intellectual Property (IP), unique technical expertise (e.g. Apple's design skills or Pixar's animation prowess), talented personnel (e.g. a sports team's star players or a brokerage firm's star traders), or innovative processes. Successful brand management also results in perceived uniqueness even when the physical product is the same as competitors'. This way, Chiquita was able to brand bananas, Starbucks could brand coffee, and Nike could brand sneakers. Fashion brands rely heavily on this form of image differentiation.

Variants on the Differentiation Strategy

The shareholder value model holds that the timing of the use of specialized knowledge can create a differentiation advantage as long as the knowledge remains unique.[2] This model suggests that customers buy products or services from an organization to have access to its unique knowledge. The advantage is static, rather than dynamic, because the purchase is a one-time event.

The unlimited resources model utilizes a large base of resources that allows an organization to outlast competitors by practicing a differentiation strategy. An organization with greater resources can manage risk and sustain profits more easily than one with fewer resources. This deep-pocket strategy provides a short-term advantage only. If a firm lacks the capacity for continual innovation, it will not sustain its competitive position over time.

Focus or Strategic Scope

This dimension is not a separate strategy per se, but describes the scope over which the company should compete based on cost leadership or differentiation. The firm can choose to compete in the mass market (like Wal-Mart) with a broad scope, or in a defined, focused market segment with a narrow scope. In either case, the basis of competition will still be either cost leadership or differentiation.

In adopting a narrow focus, the company ideally focuses on a few target markets (also called a segmentation strategy or niche strategy). These should be distinct groups with specialized needs. The choice of offering low prices or differentiated products/services should depend on the needs of the selected segment and the resources and capabilities of the firm. It is hoped that by focusing its marketing efforts on one or two narrow market segments and tailoring its marketing mix to these specialized markets, the firm can better meet the needs of that target market. The firm typically looks to gain a competitive advantage through product innovation and/or brand marketing rather than efficiency. It is most suitable for relatively small firms but can be used by any company. A focused strategy should target market segments that are less vulnerable to substitutes or where competition is weakest, in order to earn an above-average return on investment.

Examples of firms using a focus strategy include Southwest Airlines, which provides short-haul point-to-point flights in contrast to the hub-and-spoke model of mainstream carriers, and Family Dollar.

In adopting a broad focus scope, the principle is the same: the firm must ascertain the needs and wants of the mass market, and compete either on price (low cost) or differentiation (quality, brand and customization) depending on its resources and capabilities. Wal-Mart has a broad scope and adopts a cost leadership strategy in the mass market. Pixar also targets the mass market with its movies, but adopts a differentiation strategy, using its unique capabilities in story-telling and animation to produce signature animated movies that are hard to copy, and for which customers are willing to pay to see and own. Apple also targets the mass market with its iPhone and iPod products, but combines this broad scope with a differentiation strategy based on design, branding and user experience that enables it to charge a price premium due to the perceived unavailability of close substitutes.

Recent developments

Michael Treacy and Fred Wiersema (1993) in their book The Discipline of Market Leaders have modified Porter's three strategies to describe three basic "value disciplines" that can create customer value and provide a competitive advantage. They are operational excellence, product leadership, and customer intimacy.

Criticisms of generic strategies

Several commentators have questioned the use of generic strategies claiming they lack specificity, lack flexibility, and are limiting.

In particular, Miller (1992) questions the notion of being "caught in the middle". He claims that there is a viable middle ground between strategies. Many companies, for example, have entered a market as a niche player and gradually expanded. According to Baden-Fuller and Stopford (1992) the most successful companies are the ones that can resolve what they call "the dilemma of opposites".

A popular post-Porter model was presented by W. Chan Kim and Renée Mauborgne in their 1999 Harvard Business Review article "Creating New Market Space". In this article they described a "value innovation" model in which companies must look outside their present paradigms to find new value propositions. Their approach fundamentally goes against Porter's concept that a firm must focus either on cost leadership or on differentiation. They later went on to publish their ideas in the book Blue Ocean Strategy.

An up-to-date critique of generic strategies and their limitations, including Porter's, appears in Bowman, C. (2008) Generic strategies: a substitute for thinking? [1]

See also

Bowman's Strategy Clock

References

From the three generic business strategies, Porter stresses the idea that only one strategy should be adopted by a firm and that failure to do so will result in a "stuck in the middle" scenario (Porter 1980 cited by Allen et al. 2006, Torgovicky et al. 2005). He discusses the idea that practising more than one strategy will lose the entire focus of the organisation, so that a clear direction for its future trajectory cannot be established. The argument is based on the fundamental premise that differentiation will incur costs to the firm, which clearly contradicts the basis of a low cost strategy, while on the other hand relatively standardised products with features acceptable to many customers will not carry any differentiation (Panayides 2003, p. 126); hence, cost leadership and differentiation strategy will be mutually exclusive (Porter 1980 cited by Trogovicky et al. 2005, p. 20). The two focal objectives of low cost leadership and differentiation clash with each other, resulting in no proper direction for a firm.

However, contrary to the rationalisation of Porter, contemporary research has shown evidence of firms practising such a hybrid strategy. Hambrick (1983 cited by Kim et al. 2004, p. 25) identified successful organisations that adopt a mixture of low cost and differentiation strategy (Kim et al. 2004, p. 25). Research writings of Davis (1984 cited by Prajogo 2007, p. 74) state that firms employing the hybrid business strategy (low cost and differentiation strategy) outperform the ones adopting one generic strategy. Sharing the same viewpoint, Hill (1988 cited by Akan et al. 2006, p. 49) challenged Porter's concept regarding the mutual exclusivity of low cost and differentiation strategy and further argued that a successful combination of those two strategies will result in sustainable competitive advantage. According to Wright and others (1990 cited by Akan et al. 2006, p. 50), multiple business strategies are required to respond effectively to any environmental condition. In the mid to late 1980s, when environments were relatively stable, there was no requirement for flexibility in business strategies, but survival in the rapidly changing, highly unpredictable present market contexts will require flexibility to face any contingency (Anderson 1997, Goldman et al. 1995, Pine 1993 cited by Radas 2005, p. 197). After eleven years Porter revised his thinking and accepted the fact that hybrid business strategy could exist (Porter cited by Prajogo 2007, p. 70), writing in the following manner:

"Competitive advantage can be divided into two basic types: lower costs than rivals, or the ability to differentiate and command a premium price that exceeds the extra costs of doing so. Any superior performing firm has achieved one type of advantage, the other or both" (1991, p. 101).

Though Porter had a fundamental rationalisation in his concept about the invalidity of hybrid business strategy, highly volatile and turbulent market conditions will not permit the survival of rigid business strategies, since long-term establishment will depend on agility and quick responsiveness towards market and environmental conditions. Market and environmental turbulence will have drastic implications for the root establishment of a firm. If a firm's business strategy cannot cope with environmental and market contingencies, long-term survival becomes unrealistic. Diverging the strategy into different avenues with a view to exploiting opportunities and avoiding threats created by market conditions will be a pragmatic approach for a firm.

Critical analysis done separately for the cost leadership strategy and the differentiation strategy identifies elementary value in both strategies in creating and sustaining a competitive advantage. Consistent performance superior to that of the competition can be reached with stronger foundations if a hybrid strategy is adopted. Depending on the market and competitive conditions, the hybrid strategy should be adjusted regarding the extent to which each generic strategy (cost leadership or differentiation) is given priority in practice.

Strategic management
From Wikipedia, the free encyclopedia

Strategic management is a field that deals with the major intended and emergent initiatives taken by general managers on behalf of owners, involving utilization of resources, to enhance the performance of firms in their external environments.[1] It entails specifying the organization's mission, vision and objectives, developing policies and plans, often in terms of projects and programs, which are designed to achieve these objectives, and then allocating resources to implement the policies and plans, projects and programs. A balanced scorecard is often used to evaluate the overall performance of the business and its progress towards objectives. Recent studies and leading management theorists have advocated that strategy needs to start with stakeholders' expectations and use a modified balanced scorecard which includes all stakeholders.

Strategic management is a level of managerial activity under setting goals and over Tactics. Strategic management provides overall direction to the enterprise and is closely related to the field of Organization Studies. In the field of business administration it is useful to talk about "strategic alignment" between the organization and its environment or "strategic consistency." According to Arieu (2007), "there is strategic consistency when the actions of an organization are consistent with the expectations of management, and these in turn are with the market and the context." Strategic management includes not only the management team but can also include the Board of Directors and other stakeholders of the organization. It depends on the organizational structure.

Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly [i.e. regularly] to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment, or a new social, financial, or political environment. (Lamb, 1984:ix)[2]
Contents [hide] 1 Concepts/approaches of strategic management 2 Strategy formation 3 Strategy evaluation and choice 3.1 The basis of competition 3.2 Mode of action 3.3 Suitability 3.4 Feasibility 3.5 Acceptability

3.6 The direction of action 4 Strategic implementation and control 4.1 Organizing 4.2 Resourcing 4.3 Change management 5 General approaches 6 The strategy hierarchy 7 Historical development of strategic management 7.1 Birth of strategic management 7.2 Growth and portfolio theory 7.3 The marketing revolution 7.4 The Japanese challenge 7.5 Competitive advantage 7.6 The military theorists 7.7 Strategic change 7.8 Information- and technology-driven strategy 7.9 Knowledge Adaptive Strategy 7.10 Strategic decision making processes 8 The psychology of strategic management 9 Reasons why strategic plans fail 10 Limitations of strategic management 10.1 The linearity trap 11 See also 12 References 13 External links

[edit] Concepts/approaches of strategic management

The specific approach to strategic management can depend upon the size of an organization and the proclivity to change of its business environment. These points are highlighted below:
A global/transnational organization may employ a more structured strategic management model, due to its size, scope of operations, and need to encompass stakeholder views and requirements.
An SME (Small and Medium Enterprise) may employ an entrepreneurial approach. This is due to its comparatively smaller size and scope of operations, as well as possessing fewer resources. An SME's CEO (or general top management) may simply outline a mission, and pursue all activities under that mission.
[edit] Strategy formation

The initial task in strategic management is typically the compilation and dissemination of a mission statement. This document outlines, in essence, the raison d'etre of an organization. Additionally, it specifies the scope of activities an organization wishes to undertake, coupled with the markets a firm wishes to serve.

Following the devising of a mission statement, a firm would then undertake an environmental scanning within the purview of the statement.

Strategy formation is a combination of three main processes, which are as follows:
Performing a situation analysis, self-evaluation and competitor analysis: both internal and external; both micro-environmental and macro-environmental.
Concurrent with this assessment, objectives are set. These objectives should be parallel to a time-line; some are short-term and others long-term.
This involves crafting vision statements (a long-term view of a possible future), mission statements (the role that the organization gives itself in society), overall corporate objectives (both financial and strategic), strategic business unit objectives (both financial and strategic), and tactical objectives.

[edit] Strategy evaluation and choice

An environmental scan will highlight all pertinent aspects that affect an organisation, whether external or sector/industry-based. Such a scan will also uncover areas to capitalise on, in addition to areas in which expansion may be unwise.

These options, once identified, have to be vetted and screened by an organisation. In addition to ascertaining the suitability, feasibility and acceptability of an option, the actual modes of progress have to be determined. These pertain to:
[edit] The basis of competition

The basis of competition relates to how an organization will produce its product offerings, together with how it will act within a market structure and relative to its competitors. Some of these options encompass:
A differentiation approach, in which a multitude of market segments are served on a mass scale. Examples include the array of products produced by Unilever or Procter & Gamble, as both forge many of the world's noted consumer brands serving a variety of market segments.
A cost-based approach, which often concerns economy pricing. An example would be dollar stores in the United States.
A focus (or niche) approach. In this paradigm, an organization would produce items for a niche market, as opposed to a mass market. An example is Aston Martin cars.
[edit] Mode of action
To measure the effectiveness of the organizational strategy, it is extremely important to conduct a SWOT analysis to figure out the internal strengths and weaknesses, and the external opportunities and threats, of the entity in business. This may require taking certain precautionary measures or even changing the entire strategy.
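As a minimal sketch of how the results of such a SWOT analysis might be recorded, the snippet below holds the four categories in a plain data structure and prints them. Every entry is a hypothetical example rather than anything drawn from the text above.

```python
# Minimal sketch of recording a SWOT analysis as a plain data structure.
# All entries are hypothetical examples.

swot = {
    "strengths":     ["strong brand", "low-cost supply chain"],
    "weaknesses":    ["limited online presence"],
    "opportunities": ["growing demand in emerging markets"],
    "threats":       ["new low-price entrant"],
}

def summarize(swot_dict):
    """Print each SWOT category with its items."""
    for category, items in swot_dict.items():
        print(category.capitalize() + ":")
        for item in items:
            print("  - " + item)

if __name__ == "__main__":
    summarize(swot)
```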

In corporate strategy, Johnson, Scholes and Whittington present a model in which strategic options are evaluated against three key success criteria:[3]

Suitability (would it work?)
Feasibility (can it be made to work?)
Acceptability (will they work it?)
[edit] Suitability

Suitability deals with the overall rationale of the strategy. The key point to consider is whether the strategy would address the key strategic issues underlined by the organisation's strategic position. Does it make economic sense? Would the organization obtain economies of scale or economies of scope? Would it be suitable in terms of environment and capabilities?

Tools that can be used to evaluate suitability include:
Ranking strategic options
Decision trees
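One simple way to rank strategic options is a weighted scoring model: each option is scored against the three criteria above and the weighted totals are sorted. The sketch below illustrates this with invented options, weights and scores; it is not a prescribed method from the sources cited here.

```python
# Illustrative sketch of ranking strategic options with a simple weighted
# scoring model. Options, criteria weights and scores are all hypothetical.

criteria_weights = {"suitability": 0.5, "feasibility": 0.3, "acceptability": 0.2}

options = {
    "market development":  {"suitability": 8, "feasibility": 6, "acceptability": 7},
    "product development": {"suitability": 7, "feasibility": 5, "acceptability": 8},
    "divestment":          {"suitability": 4, "feasibility": 9, "acceptability": 5},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores (0-10) into a single weighted total."""
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(options,
                 key=lambda o: weighted_score(options[o], criteria_weights),
                 reverse=True)
for option in ranking:
    print(option, round(weighted_score(options[option], criteria_weights), 2))
```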

[edit] Feasibility

Feasibility is concerned with whether the resources required to implement the strategy are available, or can be developed or obtained. Resources include funding, people, time, and information.

Tools that can be used to evaluate feasibility include:
cash flow analysis and forecasting
break-even analysis
resource deployment analysis
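Break-even analysis, listed above, reduces to a short calculation: divide fixed costs by the contribution earned on each unit sold. The sketch below shows the arithmetic with invented figures.

```python
# A minimal break-even calculation, one of the feasibility tools listed above.
# Fixed costs, unit price and unit variable cost are invented for illustration.

def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units that must be sold before contribution covers fixed costs."""
    contribution_per_unit = price_per_unit - variable_cost_per_unit
    if contribution_per_unit <= 0:
        raise ValueError("price must exceed variable cost for break-even to exist")
    return fixed_costs / contribution_per_unit

# Example: 500,000 in fixed costs, units sold at 25.00 with 15.00 variable cost each.
units = break_even_units(fixed_costs=500_000,
                         price_per_unit=25.0,
                         variable_cost_per_unit=15.0)
print(f"Break-even volume: {units:,.0f} units")   # 50,000 units
```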

[edit] Acceptability

Acceptability is concerned with the expectations of the identified stakeholders (mainly shareholders, employees and customers) regarding the expected performance outcomes, which can be return, risk and stakeholder reactions.
Return deals with the benefits expected by the stakeholders (financial and non-financial). For example, shareholders would expect an increase in their wealth, employees would expect improvement in their careers and customers would expect better value for money.
Risk deals with the probability and consequences of failure of a strategy (financial and non-financial).
Stakeholder reactions deal with anticipating the likely reaction of stakeholders. Shareholders could oppose the issuing of new shares, employees and unions could oppose outsourcing for fear of losing their jobs, and customers could have concerns over a merger with regard to quality and support.

Tools that can be used to evaluate acceptability include:
what-if analysis
stakeholder mapping
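A what-if analysis of acceptability can be as simple as weighting a few return scenarios by assumed probabilities and checking the expected return and the downside. The scenarios, probabilities and figures in the sketch below are invented for illustration.

```python
# Sketch of a simple what-if analysis for acceptability: expected return on a
# strategy under a few hypothetical scenarios with assumed probabilities.

scenarios = {
    # scenario name: (probability, projected return in currency units)
    "optimistic":  (0.25,  4_000_000),
    "base case":   (0.50,  1_500_000),
    "pessimistic": (0.25, -1_000_000),
}

expected_return = sum(p * r for p, r in scenarios.values())
downside_probability = sum(p for p, r in scenarios.values() if r < 0)

print(f"Expected return:     {expected_return:,.0f}")
print(f"Probability of loss: {downside_probability:.0%}")
```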

[edit] The direction of action

Strategic options may span a number of directions, including:
Growth-based (inspired by Igor Ansoff's matrix: market development, product development, market penetration, diversification)
Consolidation
Divestment
Harvesting

The exact option depends on the given resources of the firm, in addition to the nature of products' performance in given industries. A generally well-performing organisation may seek to harvest a product (i.e. let it die a natural death in the market) if portfolio analysis shows it performing poorly compared to others in the market.

Additionally, the exact means of implementing a strategy needs to be considered. These include:
Strategic alliances
CAPEX
Internal development (i.e. utilising one's own strategic capability in a given course of action)
M&A (Mergers and Acquisitions)

The chosen option in this context is dependent on the strategic capabilities of a firm. A company may opt for an acquisition (actually buying and absorbing a smaller firm) if it means speedy entry into a market or if it lacks the time for internal development. A strategic alliance (such as a network, consortium or joint venture) can leverage mutual skills between companies. Some countries, such as India and China, specifically state that FDI in their countries should be executed via a strategic alliance arrangement.
[edit] Strategic implementation and control

Once a strategy has been identified, it must then be put into practice. This may involve organising, resourcing and utilising change management procedures:
[edit] Organizing

Organizing relates to how the organisational design of a company fits with a chosen strategy. This concerns the nature of reporting relationships, spans of control, and any strategic business units (SBUs) that need to be formed. Typically, an SBU will be created (often with some degree of autonomous decision-making) if it exists in a market with unique conditions, or has or requires unique strategic capabilities (i.e. the skills needed to run and compete in the SBU are different).

[edit] Resourcing

Resourcing concerns the resources required to put the strategy into practice, ranging from human resources, to capital equipment, to ICT-based tools.
[edit] Change management

In the process of implementing strategic plans, an organisation must be wary of forces that may legitimately seek to obstruct such changes. It is important, then, that effectual change management practices are instituted. These encompass:
The appointment of a change agent, an individual who would champion the changes and seek to reassure and allay any fears arising.
Ascertaining the causes of resistance to organisational change (whether from employees, perceived loss of job security, etc.)
Via change agency, slowly limiting the negative effects that a change may uncover.
[edit] General approaches

In general terms, there are two main approaches to strategic management, which are opposite but complement each other in some ways:
The Industrial Organizational Approach: based on economic theory, it deals with issues like competitive rivalry, resource allocation and economies of scale. Its assumptions are rationality, self-disciplined behaviour and profit maximization.
The Sociological Approach: it deals primarily with human interactions. Its assumptions are bounded rationality, satisficing behaviour and profit sub-optimality. An example of a company that currently operates this way is Google. The stakeholder-focused approach is an example of this modern approach to strategy.

Strategic management techniques can be viewed as bottom-up, top-down, or collaborative processes. In the bottom-up approach, employees submit proposals to their managers who, in turn, funnel the best ideas further up the organization. This is often accomplished by a capital budgeting process. Proposals are assessed using financial criteria such as return on investment or cost-benefit analysis. Cost underestimation and benefit overestimation are major sources of error. The proposals that are approved form the substance of a new strategy, all of which is done without a grand strategic design or a strategic architect. The top-down approach is the most common by far. In it, the CEO, possibly with the assistance of a strategic planning team, decides on the overall direction the company should take. Some organizations are starting to experiment with collaborative strategic planning techniques that recognize the emergent nature of strategic decisions.
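As a rough sketch of the financial screening step described above, the snippet below computes a simple return-on-investment figure for a handful of hypothetical proposals and keeps only those clearing an assumed 20% hurdle rate. The proposals, figures and threshold are all invented.

```python
# Rough sketch of a bottom-up capital-budgeting screen: proposals are assessed
# on a simple financial criterion (here, return on investment) and the best
# ones are passed up the organization. All figures are hypothetical.

proposals = [
    {"name": "new packaging line", "cost": 2_000_000, "expected_gain": 2_600_000},
    {"name": "regional warehouse", "cost": 5_000_000, "expected_gain": 5_400_000},
    {"name": "loyalty programme",  "cost":   800_000, "expected_gain": 1_200_000},
]

def roi(p):
    """Simple return on investment: net gain divided by cost."""
    return (p["expected_gain"] - p["cost"]) / p["cost"]

# Keep proposals above an assumed 20% hurdle rate, best first.
approved = sorted((p for p in proposals if roi(p) > 0.2), key=roi, reverse=True)
for p in approved:
    print(f'{p["name"]}: ROI {roi(p):.0%}')
```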

Strategic decisions should focus on Outcome, Time remaining, and current Value/priority. The outcome comprises both the desired ending goal and the plan designed to reach that goal. Managing strategically requires paying attention to the time remaining to reach a particular level or goal and adjusting the pace and options accordingly. Value/priority relates to the shifting, relative concept of value-add. Strategic decisions should be based on the understanding that the value-add of whatever you are managing is a constantly changing reference point. An objective that begins with a high level of value-add may change due to influence of internal and external factors. Strategic management by definition, is managing with a heads-up approach to outcome, time and relative value, and actively making course corrections as needed.

Simulation strategies are also used by managers in an industry. The purpose of simulation gaming is to prepare managers to make well-rounded decisions. There are two main types of simulation games: generalized games and functional games. Generalized games are designed to help participants adapt to an unfamiliar environment and make business decisions under uncertainty. Functional games, on the other hand, are designed to make participants more aware of how to deal with situations that bring about one or more problems encountered in a corporate function within an industry.[4]
[edit] The strategy hierarchy

In most (large) corporations there are several levels of management. Corporate strategy is the highest of these levels in the sense that it is the broadest, applying to all parts of the firm, while also incorporating the longest time horizon. It gives direction to corporate values, corporate culture, corporate goals, and corporate missions. Under this broad corporate strategy there are typically business-level competitive strategies and functional unit strategies.

Corporate strategy refers to the overarching strategy of the diversified firm. Such a corporate strategy answers the questions of "which businesses should we be in?" and "how does being in these businesses create synergy and/or add to the competitive advantage of the corporation as a whole?" Business strategy refers to the aggregated strategies of a single business firm or a strategic business unit (SBU) in a diversified corporation. According to Michael Porter, a firm must formulate a business strategy that incorporates either cost leadership, differentiation, or focus to achieve a sustainable competitive advantage and long-term success. Alternatively, according to W. Chan Kim and Renée Mauborgne, an organization can achieve high growth and profits by creating a Blue Ocean Strategy that breaks the previous value-cost trade-off by simultaneously pursuing both differentiation and low cost.

Functional strategies include marketing strategies, new product development strategies, human resource strategies, financial strategies, legal strategies, supply-chain strategies, and information technology management strategies. The emphasis is on short- and medium-term plans and is limited to the domain of each department's functional responsibility. Each functional department attempts to do its part in meeting overall corporate objectives, and hence to some extent their strategies are derived from broader corporate strategies.

Many companies feel that a functional organizational structure is not an efficient way to organize activities so they have reengineered according to processes or SBUs. A strategic business unit is a semiautonomous unit that is usually responsible for its own budgeting, new product decisions, hiring decisions, and price setting. An SBU is treated as an internal profit centre by corporate headquarters. A technology strategy, for example, although it is focused on technology as a means of achieving an organization's overall objective(s), may include dimensions that are beyond the scope of a single business unit, engineering organization or IT department.

An additional level of strategy called operational strategy was encouraged by Peter Drucker in his theory of management by objectives (MBO). It is very narrow in focus and deals with day-to-day operational activities such as scheduling criteria. It must operate within a budget but is not at liberty to adjust or create that budget. Operational level strategies are informed by business level strategies which, in turn, are informed by corporate level strategies.

Since the turn of the millennium, some firms have reverted to a simpler strategic structure driven by advances in information technology. It is felt that knowledge management systems should be used to share information and create common goals. Strategic divisions are thought to hamper this process. This notion of strategy has been captured under the rubric of dynamic strategy, popularized by Carpenter and Sanders's textbook. This work builds on that of Brown and Eisenhardt as well as Christensen and portrays firm strategy, both business and corporate, as necessarily embracing ongoing strategic change, and the seamless integration of strategy formulation and implementation. Such change and implementation are usually built into the strategy through the staging and pacing facets.
[edit] Historical development of strategic management
[edit] Birth of strategic management

Strategic management as a discipline originated in the 1950s and 60s. Although there were numerous early contributors to the literature, the most influential pioneers were Alfred D. Chandler, Philip Selznick, Igor Ansoff, and Peter Drucker.

Alfred Chandler recognized the importance of coordinating the various aspects of management under one all-encompassing strategy. Prior to this time the various functions of management were separate with little overall coordination or strategy. Interactions between functions or between departments were typically handled by a boundary position, that is, there were one or two managers that relayed information back and forth between two departments. Chandler also stressed the importance of taking a long-term perspective when looking to the future. In his 1962 groundbreaking work Strategy and Structure, Chandler showed that a long-term coordinated strategy was necessary to give a company structure, direction, and focus. He says it concisely: "structure follows strategy."[5]

In 1957, Philip Selznick introduced the idea of matching the organization's internal factors with external environmental circumstances.[6] This core idea was developed into what we now call SWOT analysis by Learned, Andrews, and others at the Harvard Business School General Management Group. Strengths and weaknesses of the firm are assessed in light of the opportunities and threats from the business environment.

Igor Ansoff built on Chandler's work by adding a range of strategic concepts and inventing a whole new vocabulary. He developed a strategy grid that compared market penetration strategies, product development strategies, market development strategies, and horizontal and vertical integration and diversification strategies. He felt that management could use these strategies to systematically prepare for future opportunities and challenges. In his 1965 classic Corporate Strategy, he developed the gap analysis still used today, in which we must understand the gap between where we are currently and where we would like to be, then develop what he called "gap reducing actions".[7]

Peter Drucker was a prolific strategy theorist, author of dozens of management books, with a career spanning five decades. His contributions to strategic management were many but two are most important. Firstly, he stressed the importance of objectives. An organization without clear objectives is like a ship without a rudder. As early as 1954 he was developing a theory of management based on objectives.[8] This evolved into his theory of management by objectives (MBO). According to Drucker, the procedure of setting objectives and monitoring your progress towards them should permeate the entire organization, top to bottom. His other seminal contribution was in predicting the importance of what today we would call intellectual capital. He predicted the rise of what he called the knowledge worker and explained the consequences of this for management. He said that knowledge work is nonhierarchical. Work would be carried out in teams with the person most knowledgeable in the task at hand being the temporary leader.

In 1985, Ellen-Earle Chaffee summarized what she thought were the main elements of strategic management theory by the 1970s:[9]
Strategic management involves adapting the organization to its business environment.
Strategic management is fluid and complex. Change creates novel combinations of circumstances requiring unstructured non-repetitive responses.
Strategic management affects the entire organization by providing direction.
Strategic management involves both strategy formation (she called it content) and also strategy implementation (she called it process).
Strategic management is partially planned and partially unplanned.
Strategic management is done at several levels: overall corporate strategy, and individual business strategies.
Strategic management involves both conceptual and analytical thought processes.
[edit] Growth and portfolio theory

In the 1970s much of strategic management dealt with size, growth, and portfolio theory. The PIMS study was a long-term study, started in the 1960s and lasting 19 years, that attempted to understand the Profit Impact of Marketing Strategies (PIMS), particularly the effect of market share. Started at General Electric, moved to Harvard in the early 1970s, and then moved to the Strategic Planning Institute in the late 1970s, it now contains decades of information on the relationship between profitability and strategy. Its initial conclusion was unambiguous: the greater a company's market share, the greater its rate of profit. High market share provides volume and economies of scale. It also provides experience and learning curve advantages. The combined effect is increased profits.[10] The study's conclusions continue to be drawn on by academics and companies today: "PIMS provides compelling quantitative evidence as to which business strategies work and don't work" - Tom Peters.

The benefits of high market share naturally lead to an interest in growth strategies. The relative advantages of horizontal integration, vertical integration, diversification, franchises, mergers and acquisitions, joint ventures, and organic growth were discussed. The most appropriate market dominance strategies were assessed given the competitive and regulatory environment.

There was also research that indicated that a low market share strategy could also be very profitable. Schumacher (1973),[11] Woo and Cooper (1982),[12] Levenson (1984),[13] and later Traverso (2002)[14] showed how smaller niche players obtained very high returns.

By the early 1980s the paradoxical conclusion was that high market share and low market share companies were often very profitable but most of the companies in between were not. This was sometimes called the hole in the middle problem. This anomaly would be explained by Michael Porter in the 1980s.

The management of diversified organizations required new techniques and new ways of thinking. The first CEO to address the problem of a multi-divisional company was Alfred Sloan at General Motors. GM was decentralized into semi-autonomous strategic business units (SBU's), but with centralized support functions.

One of the most valuable concepts in the strategic management of multi-divisional companies was portfolio theory. In the previous decade Harry Markowitz and other financial theorists developed the theory of portfolio analysis. It was concluded that a broad portfolio of financial assets could reduce specific risk. In the 1970s marketers extended the theory to product portfolio decisions and managerial strategists extended it to operating division portfolios. Each of a company's operating divisions was seen as an element in the corporate portfolio. Each operating division (also called a strategic business unit) was treated as a semi-independent profit center with its own revenues, costs, objectives, and strategies. Several techniques were developed to analyze the relationships between elements in a portfolio. B.C.G. analysis, for example, was developed by the Boston Consulting Group in the early 1970s. This was the theory that gave us the wonderful image of a CEO sitting on a stool milking a cash cow. Shortly after that the G.E. multi-factor model was developed by General Electric. Companies continued to diversify until the 1980s, when it was realized that in many cases a portfolio of operating divisions was worth more as separate, completely independent companies.
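As an illustration of the kind of growth-share classification that the B.C.G. approach popularized, the sketch below places a few hypothetical divisions into the familiar quadrants using assumed cut-offs. It is a toy example, not the original consulting method.

```python
# Illustrative sketch of a growth-share (B.C.G.-style) classification: each
# division is placed in a quadrant by relative market share and market growth.
# Divisions, figures and thresholds are all hypothetical.

divisions = {
    # name: (relative market share, market growth rate %)
    "detergents":   (2.5,  3.0),
    "electronics":  (1.8, 14.0),
    "typewriters":  (0.4,  1.0),
    "solar panels": (0.3, 18.0),
}

def bcg_quadrant(relative_share, growth, share_cutoff=1.0, growth_cutoff=10.0):
    """Classic four-quadrant labelling used in portfolio analysis."""
    if relative_share >= share_cutoff:
        return "star" if growth >= growth_cutoff else "cash cow"
    return "question mark" if growth >= growth_cutoff else "dog"

for name, (share, growth) in divisions.items():
    print(f"{name}: {bcg_quadrant(share, growth)}")
```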

[edit] The marketing revolution

The 1970s also saw the rise of the marketing-oriented firm. From the beginnings of capitalism it was assumed that the key requirement of business success was a product of high technical quality. If you produced a product that worked well and was durable, it was assumed you would have no difficulty selling it at a profit. This was called the production orientation, and it was generally true that good products could be sold without effort, encapsulated in the saying "Build a better mousetrap and the world will beat a path to your door." This was largely due to the growing numbers of affluent and middle-class people that capitalism had created. But after the untapped demand caused by the Second World War was saturated in the 1950s, it became obvious that products were not selling as easily as they had been. The answer was to concentrate on selling. The 1950s and 1960s are known as the sales era and the guiding philosophy of business of the time is today called the sales orientation. In the early 1970s Theodore Levitt and others at Harvard argued that the sales orientation had things backward. They claimed that instead of producing products then trying to sell them to the customer, businesses should start with the customer, find out what they wanted, and then produce it for them. The customer became the driving force behind all strategic business decisions. This marketing orientation, in the decades since its introduction, has been reformulated and repackaged under numerous names including customer orientation, marketing philosophy, customer intimacy, customer focus, customer driven, and market focused.

[edit] The Japanese challenge

In 2009, industry consultants Mark Blaxill and Ralph Eckardt suggested that much of the Japanese business dominance that began in the mid-1970s was the direct result of competition enforcement efforts by the Federal Trade Commission (FTC) and U.S. Department of Justice (DOJ). In 1975 the FTC reached a settlement with Xerox Corporation in its anti-trust lawsuit. (At the time, the FTC was under the direction of Frederic M. Scherer.) The 1975 Xerox consent decree forced the licensing of the company's entire patent portfolio, mainly to Japanese competitors. (See "compulsory license.") This action marked the start of an activist approach to managing competition by the FTC and DOJ, which resulted in the compulsory licensing of tens of thousands of patents from some of America's leading companies, including IBM, AT&T, DuPont, Bausch & Lomb, and Eastman Kodak.

Within four years of the consent decree, Xerox's share of the U.S. copier market dropped from nearly 100% to less than 14%. Between 1950 and 1980 Japanese companies consummated more than 35,000 foreign licensing agreements, mostly with U.S. companies, for free or low-cost licenses made possible by the FTC and DOJ. The post-1975 era of anti-trust initiatives by Washington D.C. economists at the FTC corresponded directly with the rapid, unprecedented rise in Japanese competitiveness and a simultaneous stalling of the U.S. manufacturing economy.[15]
[edit] Competitive advantage

The Japanese challenge shook the confidence of the western business elite, but detailed comparisons of the two management styles and examinations of successful businesses convinced westerners that they could overcome the challenge. The 1980s and early 1990s saw a plethora of theories explaining exactly how this could be done. They cannot all be detailed here, but some of the more important strategic advances of the decade are explained below.

Gary Hamel and C. K. Prahalad declared that strategy needs to be more active and interactive; less armchair planning was needed. They introduced terms like strategic intent and strategic architecture.[16][17] Their most well known advance was the idea of core competency. They showed how important it was to know the one or two key things that your company does better than the competition.[18]

Active strategic management required active information gathering and active problem solving. In the early days of Hewlett-Packard (HP), Dave Packard and Bill Hewlett devised an active management style that they called management by walking around (MBWA). Senior HP managers were seldom at their desks. They spent most of their days visiting employees, customers, and suppliers. This direct contact with key people provided them with a solid grounding from which viable strategies could be crafted. The MBWA concept was popularized in 1985 by a book by Tom Peters and Nancy Austin.[19] Japanese managers employ a similar system, which originated at Honda, and is sometimes called the 3 G's (Genba, Genbutsu, and Genjitsu, which translate into actual place, actual thing, and actual situation).

Probably the most influential strategist of the decade was Michael Porter. He introduced many new concepts including; 5 forces analysis, generic strategies, the value chain, strategic groups, and clusters. In 5 forces analysis he identifies the forces that shape a firm's strategic environment. It is like a SWOT analysis with structure and purpose. It shows how a firm can use these forces to obtain a sustainable competitive advantage. Porter modifies Chandler's dictum about structure following strategy by introducing a second level of structure: Organizational structure follows strategy, which in turn follows industry structure. Porter's generic strategies detail the interaction between cost minimization strategies, product differentiation strategies, and market focus strategies. Although he did not introduce these terms, he showed the importance of choosing one of them rather than trying to position your company between them. He also challenged managers to see their industry in terms of a value chain. A firm will be successful only to the extent that it contributes to the industry's value chain. This forced management to look at its operations from the customer's point of view. Every operation should be examined in terms of what value it adds in the eyes of the final customer.

In 1993, John Kay took the idea of the value chain to a financial level, claiming "Adding value is the central purpose of business activity", where adding value is defined as the difference between the market value of outputs and the cost of inputs including capital, all divided by the firm's net output. Borrowing from Gary Hamel and Michael Porter, Kay claims that the role of strategic management is to identify your core competencies, and then assemble a collection of assets that will increase value added and provide a competitive advantage. He claims that there are three types of capabilities that can do this: innovation, reputation, and organizational structure.
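A small worked example may help with the arithmetic of this added-value measure. The figures below are invented, and net output is taken here as the market value of output minus bought-in materials; that reading of "net output" is an assumption for illustration rather than Kay's own wording.

```python
# Worked example of an added-value ratio: (market value of outputs minus the
# cost of all inputs, including a capital charge) divided by net output.
# All figures are invented purely to show the arithmetic.

market_value_of_output = 12_000_000   # revenue from goods and services sold
cost_of_materials      =  5_000_000
cost_of_labour         =  3_500_000
cost_of_capital        =  1_500_000   # charge for capital employed

total_input_cost = cost_of_materials + cost_of_labour + cost_of_capital
# Assumption: net output = output value less bought-in materials.
net_output = market_value_of_output - cost_of_materials

added_value = market_value_of_output - total_input_cost   # 2,000,000
added_value_ratio = added_value / net_output               # about 0.29

print(f"Added value:              {added_value:,.0f}")
print(f"Added value / net output: {added_value_ratio:.2f}")
```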

The 1980s also saw the widespread acceptance of positioning theory. Although the theory originated with Jack Trout in 1969, it didn't gain wide acceptance until Al Ries and Jack Trout wrote their classic book Positioning: The Battle For Your Mind (1979). The basic premise is that a strategy should not be judged by internal company factors but by the way customers see it relative to the competition. Crafting and implementing a strategy involves creating a position in the mind of the collective consumer. Several techniques were applied to positioning theory, some newly invented but most borrowed from other disciplines. Perceptual mapping, for example, creates visual displays of the relationships between positions. Multidimensional scaling, discriminant analysis, factor analysis, and conjoint analysis are mathematical techniques used to determine the most relevant characteristics (called dimensions or factors) upon which positions should be based. Preference regression can be used to determine vectors of ideal positions and cluster analysis can identify clusters of positions.

Others felt that internal company resources were the key. In 1992, Jay Barney, for example, saw strategy as assembling the optimum mix of resources, including human, technology, and suppliers, and then configuring them in unique and sustainable ways.[20]

Michael Hammer and James Champy felt that these resources needed to be restructured.[21] This process, that they labeled reengineering, involved organizing a firm's assets around whole processes rather than tasks. In this way a team of people saw a project through, from inception to completion. This avoided functional silos where isolated departments seldom talked to each other. It also eliminated waste due to functional overlap and interdepartmental communications.

In 1989 Richard Lester and the researchers at the MIT Industrial Performance Center identified seven best practices and concluded that firms must accelerate the shift away from the mass production of low-cost standardized products. The seven areas of best practice were:[22]
Simultaneous continuous improvement in cost, quality, service, and product innovation
Breaking down organizational barriers between departments
Eliminating layers of management, creating flatter organizational hierarchies
Closer relationships with customers and suppliers
Intelligent use of new technology
Global focus
Improving human resource skills

The search for best practices is also called benchmarking.[23] This involves determining where you need to improve, finding an organization that is exceptional in this area, then studying the company and applying its best practices in your firm.

A large group of theorists felt the area where western business was most lacking was product quality. People like W. Edwards Deming,[24] Joseph M. Juran,[25] A. Kearney,[26] Philip Crosby,[27] and Armand Feigenbaum[28] suggested quality improvement techniques like total quality management (TQM), continuous improvement (kaizen), lean manufacturing, Six Sigma, and return on quality (ROQ).

An equally large group of theorists felt that poor customer service was the problem. People like James Heskett (1988),[29] Earl Sasser (1995), William Davidow,[30] Len Schlesinger,[31] A. Parasuraman (1988), Len Berry,[32] Jane Kingman-Brundage,[33] Christopher Hart, and Christopher Lovelock (1994) gave us fishbone diagramming, service charting, Total Customer Service (TCS), the service profit chain, service gaps analysis, the service encounter, strategic service vision, service mapping, and service teams. Their underlying assumption was that there is no better source of competitive advantage than a continuous stream of delighted customers.

Process management uses some of the techniques from product quality management and some of the techniques from customer service management. It looks at an activity as a sequential process. The objective is to find inefficiencies and make the process more effective. Although the procedures have a long history, dating back to Taylorism, the scope of their applicability has been greatly widened, leaving no aspect of the firm free from potential process improvements. Because of the broad applicability of process management techniques, they can be used as a basis for competitive advantage.

Some realized that businesses were spending much more on acquiring new customers than on retaining current ones. Carl Sewell,[34] Frederick F. Reichheld,[35] C. Gronroos,[36] and Earl Sasser[37] showed us how a competitive advantage could be found in ensuring that customers returned again and again. This has come to be known as the loyalty effect after Reichheld's book of the same name, in which he broadens the concept to include employee loyalty, supplier loyalty, distributor loyalty, and shareholder loyalty. They also developed techniques for estimating the lifetime value of a loyal customer, called customer lifetime value (CLV). A significant movement started that attempted to recast selling and marketing techniques into a long-term endeavor that created a sustained relationship with customers (called relationship selling, relationship marketing, and customer relationship management). Customer relationship management (CRM) software (and its many variants) became an integral tool that sustained this trend.
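The snippet below sketches one common textbook approximation of customer lifetime value, treating a retained customer as a discounted stream of annual margins. The margin, retention rate, discount rate and the specific formula chosen are illustrative assumptions, not the particular techniques of the authors cited above.

```python
# Sketch of a simple customer lifetime value (CLV) estimate: the present value
# of the margin from a customer retained with a given probability each year.
# All inputs are assumed for illustration.

def customer_lifetime_value(annual_margin, retention_rate, discount_rate):
    """Common perpetuity-style approximation:
    CLV = margin * retention / (1 + discount - retention)."""
    return annual_margin * retention_rate / (1 + discount_rate - retention_rate)

clv = customer_lifetime_value(annual_margin=200.0,
                              retention_rate=0.80,
                              discount_rate=0.10)
print(f"Estimated lifetime value: {clv:.2f}")   # about 533.33
```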

James Gilmore and Joseph Pine found competitive advantage in mass customization.[38] Flexible manufacturing techniques allowed businesses to individualize products for each customer without losing economies of scale. This effectively turned the product into a service. They also realized that if a service is mass customized by creating a performance for each individual client, that service would be transformed into an experience. Their book, The Experience Economy,[39] along with the work of Bernd Schmitt, convinced many to see service provision as a form of theatre. This school of thought is sometimes referred to as customer experience management (CEM).

Like Peters and Waterman a decade earlier, James Collins and Jerry Porras spent years conducting empirical research on what makes great companies. Six years of research uncovered a key underlying principle behind the 19 successful companies that they studied: they all encourage and preserve a core ideology that nurtures the company. Even though strategy and tactics change daily, the companies, nevertheless, were able to maintain a core set of values. These core values encourage employees to build an organization that lasts. In Built To Last (1994) they claim that short-term profit goals, cost cutting, and restructuring will not stimulate dedicated employees to build a great company that will endure.[40] In 2000 Collins coined the term built to flip to describe the prevailing business attitudes in Silicon Valley. It describes a business culture where technological change inhibits a long-term focus. He also popularized the concept of the BHAG (Big Hairy Audacious Goal).

Arie de Geus (1997) undertook a similar study and obtained similar results. He identified four key traits of companies that had prospered for 50 years or more. They are:
Sensitivity to the business environment: the ability to learn and adjust
Cohesion and identity: the ability to build a community with personality, vision, and purpose
Tolerance and decentralization: the ability to build relationships
Conservative financing

A company with these key characteristics he called a living company because it is able to perpetuate itself. If a company emphasizes knowledge rather than finance, and sees itself as an ongoing community of human beings, it has the potential to become great and endure for decades. Such an organization is an organic entity capable of learning (he called it a learning organization) and capable of creating its own processes, goals, and persona.

There are numerous ways by which a firm can try to create a competitive advantage - some will work but many will not. To help firms avoid a hit and miss approach to the creation of competitive advantage, Will Mulcaster [41] suggests that firms engage in a dialogue that centres around the question "Will the proposed competitive advantage create Perceived Differential Value?" The dialogue should raise a series of other pertinent questions, including: "Will the proposed competitive advantage create something that is different from the competition?" "Will the difference add value in the eyes of potential customers?" - This question will entail a discussion of the combined effects of price, product features and consumer perceptions.

"Will the product add value for the firm?" - Answering this question will require an examination of cost effectiveness and the pricing strategy. [edit] The military theorists

In the 1980s some business strategists realized that there was a vast knowledge base stretching back thousands of years that they had barely examined. They turned to military strategy for guidance. Military strategy books such as The Art of War by Sun Tzu, On War by von Clausewitz, and The Little Red Book by Mao Zedong became instant business classics. From Sun Tzu, they learned the tactical side of military strategy and specific tactical prescriptions. From Von Clausewitz, they learned the dynamic and unpredictable nature of military strategy. From Mao Zedong, they learned the principles of guerrilla warfare. The main marketing warfare books were:
Business War Games by Barrie James, 1984
Marketing Warfare by Al Ries and Jack Trout, 1986
Leadership Secrets of Attila the Hun by Wess Roberts, 1987

Philip Kotler was a well-known proponent of marketing warfare strategy.

There were generally thought to be four types of business warfare theories. They are:
Offensive marketing warfare strategies
Defensive marketing warfare strategies
Flanking marketing warfare strategies
Guerrilla marketing warfare strategies

The marketing warfare literature also examined leadership and motivation, intelligence gathering, types of marketing weapons, logistics, and communications.

By the turn of the century marketing warfare strategies had gone out of favour. It was felt that they were limiting. There were many situations in which non-confrontational approaches were more appropriate. In 1989, Dudley Lynch and Paul L. Kordis published Strategy of the Dolphin: Scoring a Win in a Chaotic World. The Strategy of the Dolphin was developed to give guidance as to when to use aggressive strategies and when to use passive strategies. A variety of aggressiveness strategies were developed.

In 1993, J. Moore used a similar metaphor.[42] Instead of using military terms, he created an ecological theory of predators and prey (see ecological model of competition), a sort of Darwinian management strategy in which market interactions mimic long term ecological stability. [edit] Strategic change

Peter Drucker (1969) coined the phrase Age of Discontinuity to describe the way change forces disruptions into the continuity of our lives.[43] In an age of continuity, attempts to predict the future by extrapolating from the past can be somewhat accurate. But according to Drucker, we are now in an age of discontinuity and extrapolating from the past is hopelessly ineffective. We cannot assume that trends that exist today will continue into the future. He identified four sources of discontinuity: new technologies, globalization, cultural pluralism, and knowledge capital.

In 1970, Alvin Toffler in Future Shock described a trend towards accelerating rates of change.[44] He illustrated how social and technological norms had shorter lifespans with each generation, and he questioned society's ability to cope with the resulting turmoil and anxiety. In past generations periods of change were always punctuated with times of stability. This allowed society to assimilate the change and deal with it before the next change arrived. But these periods of stability are getting shorter and by the late 20th century had all but disappeared. In 1980 in The Third Wave, Toffler characterized this shift to relentless change as the defining feature of the third phase of civilization (the first two phases being the agricultural and industrial waves).[45] He claimed that the dawn of this new phase will cause great anxiety for those that grew up in the previous phases, and will cause much conflict and opportunity in the business world. Hundreds of authors, particularly since the early 1990s, have attempted to explain what this means for business strategy.

In 2000, Gary Hamel discussed strategic decay, the notion that the value of all strategies, no matter how brilliant, decays over time.[46]

In 1978, Derek Abell (Abell, D. 1978) described strategic windows and stressed the importance of the timing (both entrance and exit) of any given strategy. This has led some strategic planners to build planned obsolescence into their strategies.[47]

In 1989, Charles Handy identified two types of change.[48] Strategic drift is a gradual change that occurs so subtly that it is not noticed until it is too late. By contrast, transformational change is sudden and radical. It is typically caused by discontinuities (or exogenous shocks) in the business environment. The point where a new trend is initiated is called a strategic inflection point by Andy Grove. Inflection points can be subtle or radical.

In 2000, Malcolm Gladwell discussed the importance of the tipping point, that point where a trend or fad acquires critical mass and takes off.[49]

In 1983, Noel Tichy wrote that because we are all beings of habit we tend to repeat what we are comfortable with.[50] He wrote that this is a trap that constrains our creativity, prevents us from exploring new ideas, and hampers our dealing with the full complexity of new issues. He developed a systematic method of dealing with change that involved looking at any new issue from three angles: technical and production, political and resource allocation, and corporate culture.

In 1990, Richard Pascale (Pascale, R. 1990) wrote that relentless change requires that businesses continuously reinvent themselves.[51] His famous maxim is "Nothing fails like success", by which he means that what was a strength yesterday becomes the root of weakness today: we tend to depend on what worked yesterday and refuse to let go of what worked so well for us in the past. Prevailing strategies become self-confirming. To avoid this trap, businesses must stimulate a spirit of inquiry and healthy debate. They must encourage a creative process of self-renewal based on constructive conflict.

Peters and Austin (1985) stressed the importance of nurturing champions and heroes. They said we have a tendency to dismiss new ideas, so to overcome this, we should support those few people in the organization that have the courage to put their career and reputation on the line for an unproven idea.

In 1996, Adrian Slywotzky showed how changes in the business environment are reflected in value migrations between industries, between companies, and within companies.[52] He claimed that recognizing the patterns behind these value migrations is necessary if we wish to understand the world of chaotic change. In Profit Patterns (1999) he described businesses as being in a state of strategic anticipation as they try to spot emerging patterns. Slywotzky and his team identified 30 patterns that have transformed industry after industry.[53]

In 1997, Clayton Christensen (1997) took the position that great companies can fail precisely because they do everything right, since the capabilities of the organization also define its disabilities.[54] Christensen's thesis is that outstanding companies lose their market leadership when confronted with disruptive technology. He called the approach to discovering the emerging markets for disruptive technologies agnostic marketing, i.e., marketing under the implicit assumption that no one - not the company, not the customers - can know how or in what quantities a disruptive product can or will be used before they have experience using it.

A number of strategists use scenario planning techniques to deal with change. The way Peter Schwartz put it in 1991 is that strategic outcomes cannot be known in advance so the sources of competitive advantage cannot be predetermined.[55] The fast changing business environment is too uncertain for us to find sustainable value in formulas of excellence or competitive advantage. Instead, scenario planning is a technique in which multiple outcomes can be developed, their implications assessed, and their likeliness of occurrence evaluated. According to Pierre Wack, scenario planning is about insight, complexity, and subtlety, not about formal analysis and numbers.[56]

In 1988, Henry Mintzberg looked at the changing world around him and decided it was time to reexamine how strategic management was done.[57][58] He examined the strategic process and concluded it was much more fluid and unpredictable than people had thought. Because of this, he could not point to one process that could be called strategic planning. Instead Mintzberg concludes that there are five types of strategies:
Strategy as plan - a direction, guide, course of action - intention rather than actual
Strategy as ploy - a maneuver intended to outwit a competitor
Strategy as pattern - a consistent pattern of past behaviour - realized rather than intended
Strategy as position - locating of brands, products, or companies within the conceptual framework of consumers or other stakeholders - strategy determined primarily by factors outside the firm
Strategy as perspective - strategy determined primarily by a master strategist

In 1998, Mintzberg developed these five types of management strategy into 10 schools of thought. These 10 schools are grouped into three categories. The first group is prescriptive or normative. It consists of the informal design and conception school, the formal planning school, and the analytical positioning school. The second group, consisting of six schools, is more concerned with how strategic management is actually done, rather than prescribing optimal plans or positions. The six schools are the entrepreneurial, visionary, or great leader school, the cognitive or mental process school, the learning, adaptive, or emergent process school, the power or negotiation school, the corporate culture or collective process school, and the business environment or reactive school. The third and final group consists of one school, the configuration or transformation school, a hybrid of the other schools organized into stages, organizational life cycles, or episodes.[59]

In 1999, Constantinos Markides also wanted to reexamine the nature of strategic planning itself.[60] He describes strategy formation and implementation as an on-going, never-ending, integrated process requiring continuous reassessment and reformation. Strategic management is planned and emergent, dynamic, and interactive. J. Moncrieff (1999) also stresses strategy dynamics.[61] He recognized that strategy is partially deliberate and partially unplanned. The unplanned element comes from two sources: emergent strategies (result from the emergence of opportunities and threats in the environment) and Strategies in action (ad hoc actions by many people from all parts of the organization).

Some business planners are starting to use a complexity theory approach to strategy. Complexity can be thought of as chaos with a dash of order. Chaos theory deals with turbulent systems that rapidly become disordered. Complexity is not quite so unpredictable. It involves multiple agents interacting in such a way that a glimpse of structure may appear.
[edit] Information- and technology-driven strategy

Peter Drucker had theorized the rise of the knowledge worker back in the 1950s. He described how fewer workers would be doing physical labor, and more would be applying their minds. In 1984, John Naisbitt theorized that the future would be driven largely by information: companies that managed information well could obtain an advantage; however, the profitability of what he called the information float (information that the company had and others desired) would all but disappear as inexpensive computers made information more accessible.

Daniel Bell (1985) examined the sociological consequences of information technology, while Gloria Schuck and Shoshana Zuboff looked at psychological factors.[62] Zuboff, in her five-year study of eight pioneering corporations, made the important distinction between automating technologies and informating technologies. She studied the effect that both had on individual workers, managers, and organizational structures. She largely confirmed Peter Drucker's predictions three decades earlier, about the importance of flexible decentralized structure, work teams, knowledge sharing, and the central role of the knowledge worker. Zuboff also detected a new basis for managerial authority, based not on position or hierarchy, but on knowledge (also predicted by Drucker), which she called participative management.[63]

In 1990, Peter Senge, who had collaborated with Arie de Geus at Dutch Shell, borrowed de Geus' notion of the learning organization, expanded it, and popularized it. The underlying theory is that a company's ability to gather, analyze, and use information is a necessary requirement for business success in the information age. (See organizational learning.) To do this, Senge claimed that an organization would need to be structured such that:[64] People can continuously expand their capacity to learn and be productive, New patterns of thinking are nurtured, Collective aspirations are encouraged, and People are encouraged to see the whole picture together.

Senge identified five disciplines of a learning organization. They are:
Personal responsibility, self reliance, and mastery: We accept that we are the masters of our own destiny. We make decisions and live with the consequences of them. When a problem needs to be fixed, or an opportunity exploited, we take the initiative to learn the required skills to get it done.
Mental models: We need to explore our personal mental models to understand the subtle effect they have on our behaviour.
Shared vision: The vision of where we want to be in the future is discussed and communicated to all. It provides guidance and energy for the journey ahead.
Team learning: We learn together in teams. This involves a shift from a spirit of advocacy to a spirit of enquiry.
Systems thinking: We look at the whole rather than the parts. This is what Senge calls the Fifth discipline. It is the glue that integrates the other four into a coherent strategy. For an alternative approach to the learning organization, see Garratt, B. (1987).

Since 1990 many theorists have written on the strategic importance of information, including J.B. Quinn,[65] J. Carlos Jarillo,[66] D.L. Barton,[67] Manuel Castells,[68] J.P. Lieleskin,[69] Thomas Stewart,[70] K.E. Sveiby,[71] Gilbert J. Probst,[72] and Shapiro and Varian[73] to name just a few.

Thomas A. Stewart, for example, uses the term intellectual capital to describe the investment an organization makes in knowledge. It is composed of human capital (the knowledge inside the heads of employees), customer capital (the knowledge inside the heads of customers that decide to buy from you), and structural capital (the knowledge that resides in the company itself).

Manuel Castells describes a network society characterized by: globalization, organizations structured as networks, instability of employment, and a social divide between those with access to information technology and those without.

Geoffrey Moore (1991) and R. Frank and P. Cook[74] also detected a shift in the nature of competition. In industries with high technology content, technical standards become established and this gives the dominant firm a near monopoly. The same is true of networked industries in which interoperability requires compatibility between users. An example is word processor documents. Once a product has gained market dominance, other products, even far superior products, cannot compete. Moore showed how firms could attain this enviable position by using E.M. Rogers five stage adoption process and focusing on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries and pragmatists (See Crossing the Chasm). If successful a firm can create a bandwagon effect in which the momentum builds and your product becomes a de facto standard.

Evans and Wurster describe how industries with a high information component are being transformed.[75] They cite Encarta's demolition of the Encyclopedia Britannica (whose sales have plummeted 80% since their peak of $650 million in 1990). Encarta's own reign was speculated to be short-lived, eclipsed by collaborative encyclopedias like Wikipedia that can operate at very low marginal costs; Encarta was subsequently turned into an online service and discontinued at the end of 2009. Evans also mentions the music industry, which is desperately looking for a new business model. The upstart information-savvy firms, unburdened by cumbersome physical assets, are changing the competitive landscape, redefining market segments, and disintermediating some channels. One manifestation of this is personalized marketing: information technology allows marketers to treat each individual as a market of one. Traditional ideas of market segments will no longer be relevant if personalized marketing is successful.

The technology sector has also provided some strategies directly. For example, agile software development, which emerged from the software industry, provides a model for shared development processes.

Access to information systems has allowed senior managers to take a much more comprehensive view of strategic management than ever before. The most notable of the comprehensive systems is the balanced scorecard approach developed in the early 1990s by Drs. Robert S. Kaplan (Harvard Business School) and David Norton (Kaplan, R. and Norton, D. 1992). It measures several factors (financial, marketing, production, organizational development, and new product development) to achieve a 'balanced' perspective. [edit] Knowledge Adaptive Strategy

Most current approaches to business "strategy" focus on the mechanics of management (e.g., Drucker's operational "strategies") and as such are not true business strategy. In a post-industrial world these operationally focused business strategies hinge on conventional sources of advantage that have essentially been eliminated. Scale used to be very important, but now, with access to capital and a global marketplace, scale is achievable by multiple organizations simultaneously; in many cases, it can literally be rented. Process improvement or best practices were once a favored source of advantage, but they were at best temporary, as they could be copied and adapted by competitors. Owning the customer had always been thought of as an important form of competitive advantage; now, however, customer loyalty is far less important and more difficult to maintain as new brands and products emerge all the time.

In such a world, differentiation, as elucidated by Michael Porter, Botten and McManus, is the only way to maintain economic or market superiority (i.e., comparative advantage) over competitors. A company must own the thing that differentiates it from competitors. Without IP ownership and protection, any product, process or scale advantage can be compromised or entirely lost; competitors can copy it without fear of economic or legal consequences, thereby eliminating the advantage.

This principle is based on the idea of evolution: differentiation, selection, amplification and repetition. It is a form of strategy for dealing with complex adaptive systems, of which individuals, businesses, and the economy as a whole are all examples. The principle rests on survival of the "fittest": the strategy that proves fittest after trial, error, and recombination is the one then employed to run the company in its current market. Failed strategic plans are either discarded or used for another aspect of the business. The trade-off between risk and return is taken into account when deciding which strategy to pursue. The Cynefin framework and the adaptive cycles of businesses (see Panarchy) are both useful ways to develop a knowledge adaptive strategy, and analyzing the fitness landscape for a product, idea, or service helps in developing a more adaptive strategy.

(For an explanation and elucidation of the "post-industrial" worldview, see George Ritzer and Daniel Bell.) [edit] Strategic decision making processes

Will Mulcaster[76] argues that while much research and creative thought has been devoted to generating alternative strategies, too little work has been done on what influences the quality of strategic decision making and the effectiveness with which strategies are implemented. For instance, in retrospect it can be seen that the financial crisis of 2008-9 could have been avoided if the banks had paid more attention to the risks associated with their investments, but how should banks change the way they make decisions to improve the quality of their decisions in the future? Mulcaster's Managing Forces framework addresses this issue by identifying 11 forces that should be incorporated into the processes of decision making and strategic implementation. The 11 forces are: Time; Opposing forces; Politics; Perception; Holistic effects; Adding value; Incentives; Learning capabilities; Opportunity cost; Risk; and Style, which can be remembered by using the mnemonic 'TOPHAILORS'. [edit] The psychology of strategic management

Several psychologists have conducted studies to determine the psychological patterns involved in strategic management. Typically, senior managers have been asked how they go about making strategic decisions. A 1938 treatise by Chester Barnard, based on his own experience as a business executive, sees the process as informal, intuitive, non-routinized, and involving primarily oral, two-way communications. Barnard says, "The process is the sensing of the organization as a whole and the total situation relevant to it. It transcends the capacity of merely intellectual methods, and the techniques of discriminating the factors of the situation. The terms pertinent to it are feeling, judgement, sense, proportion, balance, appropriateness. It is a matter of art rather than science."[77]

In 1973, Henry Mintzberg found that senior managers typically deal with unpredictable situations, so they strategize in ad hoc, flexible, dynamic, and implicit ways. He says, "The job breeds adaptive information-manipulators who prefer the live concrete situation. The manager works in an environment of stimulus-response, and he develops in his work a clear preference for live action."[78]

In 1982, John Kotter studied the daily activities of 15 executives and concluded that they spent most of their time developing and working a network of relationships that provided general insights and specific details for strategic decisions. They tended to use mental road maps rather than systematic planning techniques.[79]

Daniel Isenberg's 1984 study of senior managers found that their decisions were highly intuitive. Executives often sensed what they were going to do before they could explain why.[80] He claimed in 1986 that one of the reasons for this is the complexity of strategic decisions and the resultant information uncertainty.[81]

Shoshana Zuboff (1988) claims that information technology is widening the divide between senior managers (who typically make strategic decisions) and operational level managers (who typically make routine decisions). She claims that prior to the widespread use of computer systems, managers, even at the most senior level, engaged in both strategic decisions and routine administration, but as computers facilitated, or in her words "deskilled", routine processes, these activities were moved further down the hierarchy, leaving senior management free for strategic decision making.

In 1977, Abraham Zaleznik identified a difference between leaders and managers. He describes leaders as visionaries who inspire and who care about substance, whereas managers are said to care about process, plans, and form.[82] He also claimed in 1989 that the rise of the manager was the main factor that caused the decline of American business in the 1970s and 80s. The main difference between a leader and a manager is that a leader has followers while a manager has subordinates; in a capitalist society leaders make decisions and managers usually follow or execute them.[83] Lack of leadership is most damaging at the level of strategic management, where it can paralyze an entire organization.[84]

According to Corner, Kinicki, and Keats,[85] strategic decision making in organizations occurs at two levels: individual and aggregate. They have developed a model of parallel strategic decision making. The model identifies two parallel processes that both involve getting attention, encoding information, storage and retrieval of information, strategic choice, strategic outcome, and feedback. The individual and organizational processes are not independent, however; they interact at each stage of the process.

[edit] Reasons why strategic plans fail

There are many reasons why strategic plans fail, especially: Failure to execute by overcoming the four key organizational hurdles[86] (the cognitive hurdle, the motivational hurdle, the resource hurdle, and the political hurdle). Failure to understand the customer (why do they buy; is there a real need for the product; inadequate or incorrect marketing research). Inability to predict environmental reaction (what will competitors do: fighting brands, price wars; will government intervene). Over-estimation of resource competence (can the staff, equipment, and processes handle the new strategy; failure to develop new employee and management skills). Failure to coordinate (reporting and control relationships not adequate; organizational structure not flexible enough).

Failure to obtain senior management commitment (failure to get management involved right from the start; failure to obtain sufficient company resources to accomplish the task). Failure to obtain employee commitment (new strategy not well explained to employees; no incentives given to workers to embrace the new strategy). Under-estimation of time requirements (no critical path analysis done). Failure to follow the plan (no follow-through after initial planning; no tracking of progress against plan; no consequences for the above). Failure to manage change (inadequate understanding of the internal resistance to change; lack of vision on the relationships between processes, technology and organization). Poor communications (insufficient information sharing among stakeholders; exclusion of stakeholders and delegates). [edit] Limitations of strategic management

Although a sense of direction is important, it can also stifle creativity, especially if it is rigidly enforced. In an uncertain and ambiguous world, fluidity can be more important than a finely tuned strategic compass. When a strategy becomes internalized into a corporate culture, it can lead to group think. It can also cause an organization to define itself too narrowly. An example of this is marketing myopia.

Many theories of strategic management tend to undergo only brief periods of popularity. A summary of these theories thus inevitably exhibits survivorship bias (itself an area of research in strategic management). Many theories tend either to be too narrow in focus to build a complete corporate strategy on, or too general and abstract to be applicable to specific situations. Populism or faddishness can have an impact on a particular theory's life cycle and may see application in inappropriate circumstances. See business philosophies and popular management theories for a more critical view of management theories.

In 2000, Gary Hamel coined the term strategic convergence to explain the limited scope of the strategies being used by rivals in greatly differing circumstances. He lamented that strategies converge more than they should, because the more successful ones are imitated by firms that do not understand that the strategic process involves designing a custom strategy for the specifics of each situation.[46]

Ram Charan, aligning with a popular marketing tagline, believes that strategic planning must not dominate action. "Just do it!", while not quite what he meant, is a phrase that nevertheless comes to mind when combatting analysis paralysis. [edit] The linearity trap

It is tempting to think that the elements of strategic management, (i) reaching consensus on corporate objectives, (ii) developing a plan for achieving the objectives, and (iii) marshalling and allocating the resources required to implement the plan, can be approached sequentially. It would be convenient, in other words, if one could deal first with the noble question of ends, and then address the mundane question of means.

But in the world where strategies must be implemented, the three elements are interdependent. Means are as likely to determine ends as ends are to determine means.[87] The objectives that an organization might wish to pursue are limited by the range of feasible approaches to implementation. (There will usually be only a small number of approaches that will not only be technically and administratively possible, but also satisfactory to the full range of organizational stakeholders.) In turn, the range of feasible implementation approaches is determined by the availability of resources.

And so, although participants in a typical strategy session may be asked to do "blue sky" thinking, where they pretend that the usual constraints (resources, acceptability to stakeholders, administrative feasibility) have been lifted, the fact is that it rarely makes sense to divorce oneself from the environment in which a strategy will have to be implemented. It is probably impossible to think in any meaningful way about strategy in an unconstrained environment: our brains cannot process boundless possibilities, and the very idea of strategy only has meaning in the context of challenges or obstacles to be overcome. It is at least as plausible to argue that acute awareness of constraints is the very thing that stimulates creativity, by forcing us to constantly reassess both means and ends in light of circumstances.

The key question, then, is, "How can individuals, organizations and societies cope as well as possible with ... issues too complex to be fully understood, given the fact that actions initiated on the basis of inadequate understanding may lead to significant regret?"[88]

The answer is that the process of developing organizational strategy must be iterative. Such an approach has been called the Strategic Incrementalisation Perspective.[89] It involves toggling back and forth between questions about objectives, implementation planning and resources. An initial idea about corporate objectives may have to be altered if there is no feasible implementation plan that will meet with a sufficient level of acceptance among the full range of stakeholders, or because the necessary resources are not available, or both.
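
The iterative toggling described here can be pictured as a simple feedback loop. The sketch below is only a hypothetical illustration (the figures, the plan-cost rule, and the acceptance rule are invented, not drawn from the Strategic Incrementalisation literature); it shows an objective being scaled back until a plan is both acceptable to stakeholders and affordable.

```python
# Hypothetical sketch of the iterative "toggle" between objectives, plans and
# resources. All figures and rules are invented for illustration only.

def refine_strategy(target_revenue, budget, stakeholder_threshold, max_rounds=10):
    """Revisit the objective until plan cost and stakeholder acceptance both fit."""
    objective = target_revenue
    for _ in range(max_rounds):
        plan_cost = objective * 0.4                           # stand-in for implementation planning
        acceptance = 1.0 - objective / (2 * target_revenue)   # stand-in for stakeholder support
        if plan_cost <= budget and acceptance >= stakeholder_threshold:
            return objective, plan_cost                       # feasible, acceptable, resourced
        objective *= 0.9                                      # feedback: scale the objective back
    return None                                               # no workable combination found

print(refine_strategy(target_revenue=100.0, budget=30.0, stakeholder_threshold=0.6))
```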

Even the most talented manager would no doubt agree that "comprehensive analysis is impossible" for complex problems.[90] Formulation and implementation of strategy must thus occur side-by-side rather than sequentially, because strategies are built on assumptions that, in the absence of perfect knowledge, are never perfectly correct. Strategic management is necessarily a "...repetitive learning cycle [rather than] a linear progression towards a clearly defined final destination."[91] While assumptions can and should be tested in advance, the ultimate test is implementation: corporate objectives, the approach to pursuing outcomes, and the assumptions about required resources will inevitably need to be adjusted. Thus a strategy gets remade during implementation because "humans rarely can proceed satisfactorily except by learning from experience; and modest probes, serially modified on the basis of feedback, usually are the best method for such learning."[92]

It serves little purpose (other than to provide a false aura of certainty sometimes demanded by corporate strategists and planners) to pretend to anticipate every possible consequence of a corporate decision, every possible constraining or enabling factor, and every possible point of view. At the end of the day, what matters for the purposes of strategic management is having a clear view, based on the best available evidence and on defensible assumptions, of what it seems possible to accomplish within the constraints of a given set of circumstances.[citation needed] As the situation changes, some opportunities for pursuing objectives will disappear and others arise. Some implementation approaches will become impossible, while others, previously impossible or unimagined, will become viable.[citation needed]

The essence of being strategic thus lies in a capacity for "intelligent trial and error"[93] rather than linear adherence to finely honed and detailed strategic plans. Strategic management will add little value (indeed, it may well do harm) if organizational strategies are designed to be used as detailed blueprints for managers. Strategy should be seen, rather, as laying out the general path, but not the precise steps, an organization will follow to create value.[94] Strategic management is a question of interpreting, and continuously reinterpreting, the possibilities presented by shifting circumstances for advancing an organization's objectives. Doing so requires strategists to think simultaneously about desired objectives, the best approach for achieving them, and the resources implied by the chosen approach. It requires a frame of mind that admits of no boundary between means and ends.

Strategic management may not be as limiting as "The linearity trap" above suggests. Strategic thinking and strategy identification take place within the ambit of organizational capacity and industry dynamics. The two common approaches to strategic analysis are value analysis and SWOT analysis. Strategic analysis does take place within the constraints of existing and potential organizational resources, but it would not be appropriate to call this a trap. For example, the SWOT tool involves analysis of the organization's internal environment (strengths and weaknesses) and its external environment (opportunities and threats). The organization's strategy is built using its strengths to exploit opportunities, while managing the risks arising from internal weaknesses and external threats. It further involves contrasting strengths and weaknesses to determine whether the organization has enough strengths to offset its weaknesses. Applying the same logic at the external level, the externally existing opportunities are contrasted with threats to determine whether the organization is capitalizing enough on opportunities to offset emerging threats.[citation needed]
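
As a rough sketch of how such a SWOT comparison might be organized, the following example groups items into the four quadrants and checks whether strengths offset weaknesses and opportunities offset threats; the entries, weights, and scoring rule are invented for illustration and are not part of any cited framework.

```python
# Hypothetical SWOT tally: each item carries an invented weight, and the net
# balance indicates whether positives offset negatives internally and externally.

swot = {
    "strengths":     {"strong brand": 3, "low-cost production": 2},
    "weaknesses":    {"ageing product line": 2, "high staff turnover": 1},
    "opportunities": {"emerging overseas market": 3, "new distribution channel": 1},
    "threats":       {"new entrant": 2, "tighter regulation": 1},
}

def balance(positive, negative):
    """Return the net weight of the positive quadrant over the negative one."""
    return sum(swot[positive].values()) - sum(swot[negative].values())

internal = balance("strengths", "weaknesses")    # > 0: strengths offset weaknesses
external = balance("opportunities", "threats")   # > 0: opportunities offset threats
print(f"internal balance: {internal:+d}, external balance: {external:+d}")
```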

Value grid From Wikipedia, the free encyclopedia The value grid model was proposed by Pil and Holweg as a means to show that the way firms compete has shifted away from the linear value chain through which management theory has traditionally viewed value chain management.

The advantage of a value-grid framework, as opposed to a value chain, is that it allows companies' managers to strategize and coordinate operations. Common non-linear value chain strategies include influencing demand, modifying information access, exploring multi-tier penetration, managing risk, seizing value, integrating value, creating new value propositions, exploiting value chains across tiers, pursuing pinch-point mapping, and defining demand enablers. Value theory From Wikipedia, the free encyclopedia

Value theory encompasses a range of approaches to understanding how, why, and to what degree people should value things, whether the thing is a person, idea, object, or anything else. This investigation began in ancient philosophy, where it is called axiology or ethics. Early philosophical investigations sought to understand good and evil, and the concept of "the good". Today much of value theory is scientifically empirical, recording what people do value and attempting to understand why they value it in the context of psychology, sociology, and economics.

At the general level, there is a difference between moral and natural goods. Moral goods are those that have to do with the conduct of persons, usually leading to praise or blame. Natural goods, on the other hand, have to do with objects, not persons. For example, to say that "Mary is a morally good person" might involve a different sense of "good" than the one used in the sentence "Wow, that was some good food".

Ethics tends to focus on moral goods rather than natural goods, while economics tends to be interested in the opposite. However, both moral and natural goods are equally relevant to goodness and value theory, which is more general in scope. Contents [hide] 1 Psychology 2 Sociology 3 Economics 4 Ethics and Axiology 4.1 Intrinsic and instrumental value

4.2 Pragmatism and contributory goodness 4.3 Kant: hypothetical and categorical goods 5 See also

[edit] Psychology Main article: Value (personal and cultural)

In psychology, value theory refers to the study of the manner in which human beings develop, assert and believe in certain values, and act or fail to act on them.

Attempts are made to explain experimentally why human beings prefer or choose some things over others, how personal behavior may be guided (or fail to be guided) by certain values and judgments, and how values emerge at different stages of human development (e.g. the work by Lawrence Kohlberg and Kohlberg's stages of moral development).

In psychotherapy and counseling, eliciting and clarifying the values of the patient can play an important role to help him/her orient or reorient himself or herself in social life. [edit] Sociology Main article: Value (personal and cultural)

In sociology, value theory is concerned with personal values which are popularly held by a community, and how those values might change under particular conditions. Different groups of people may hold or prioritize different kinds of values influencing social behavior.

Major Western theorists who stress the importance of values as an analytical independent variable include Max Weber, Émile Durkheim, Talcott Parsons and Jürgen Habermas. Classical examples of sociological traditions which deny or downplay the question of values are institutionalism, historical materialism (including Marxism), behaviorism, pragmatically oriented theories, postmodernism and various objectivist-oriented theories.

Methods of study range from questionnaire surveys to participant observation. [edit] Economics Main article: Theory of value (economics)

Economic analysis emphasizes goods sought in a market and tends to use the consumer's choices as evidence (revealed preference) that various products are of economic value. In this view, religious or political struggle over what "goods" are available in the marketplace is inevitable, and consensus on some core questions about the body, society, and the ecosystems affected by a transaction lies outside the market's goods so long as those things are unowned.

However, some natural goods seem also to be moral goods. For example, things that are owned by a person may be said to be natural goods, but ones over which a particular individual may have moral claims. So it is necessary to make another distinction: between moral and non-moral goods. A non-moral good is something that is desirable for someone or other; despite the name to the contrary, it may include moral goods. A moral good is anything which an actor is considered to be morally obligated to strive toward.

When discussing non-moral goods, one may make a useful distinction between the service and material goods offered in the marketplace (their exchange value) and the intrinsic and experiential goods as perceived by the buyer. A strict service economy model takes pains to distinguish between the goods and service guarantees made to the market and the service and experience delivered to the consumer.

Sometimes, moral and natural goods can conflict. The value of natural "goods" is challenged by such issues as addiction. The issue of addiction also brings up the distinction between economic and moral goods, where an economic good is whatever stimulates economic growth. For instance, some claim that cigarettes are a "good" in the economic sense, as their production can employ tobacco growers and doctors who treat lung cancer. Many people[who?] would agree that cigarette smoking is not morally "good", nor naturally "good", but still recognize that it is economically good, which means it has exchange value, even though it may have a negative public good or even be bad for a person's body (not the same as "bad for the person" necessarily; consider the issue of suicide). Most economists, however, consider policies which create make-work jobs to have a poor foundation economically.

In ecological economics, value theory is separated into two types: donor-type value and receiver-type value. Ecological economists tend to believe that 'real wealth' needs a donor-determined value as a measure of what things were needed to make an item or generate a service (H.T. Odum 1996). An example of receiver-type value is 'market value', or 'willingness to pay', the principal method of accounting used in neo-classical economics. In contrast, both Marx's labour theory of value and the 'emergy' concept are conceived as donor-type value. Emergy theorists believe that this conception of value has relevance to all of philosophy, economics, sociology and psychology, as well as environmental science. [edit] Ethics and Axiology Main article: Value (ethics)

Intuitively, theories of value must be important to ethics. A number of useful distinctions have been made by philosophers in the treatment of value. [edit] Intrinsic and instrumental value Main articles: Intrinsic value (ethics) and Instrumental value

Many people find it useful to distinguish between instrumental value and intrinsic value, first discussed by Plato in the "Republic". An instrumental value is worth having as a means towards getting something else that is good (e.g., a radio is instrumentally good in order to hear music). An intrinsically valuable thing is worth having for itself, not as a means to something else. This distinction gives value both intrinsic and extrinsic properties.

Intrinsic and instrumental goods are not mutually exclusive categories. Some things are both good in themselves, and also good for getting other things that are good. "Understanding science" may be such a good, being both worthwhile in and of itself, and as a means of achieving other goods.

A prominent argument in environmental ethics, made by writers like Aldo Leopold and Holmes Rolston III, is that wild nature and healthy ecosystems have intrinsic value, prior to and apart from their instrumental value as resources for humans, and should therefore be preserved. [edit] Pragmatism and contributory goodness Further information: Pragmatism

John Dewey (1859-1952), in his book Theory of Valuation, saw goodness as the outcome of ethical valuation, a continuous balancing of "ends in view". An end in view was said to be an objective potentially adopted, which may be refined or rejected based on its consistency with other objectives or as a means to objectives already held; it is roughly similar to an object with relative intrinsic value.

His empirical approach denied absolute intrinsic value, not accepting intrinsic value as an inherent or enduring property of things. He saw it as an illusory product of our continuous valuing activity as purposive beings. Dewey denied categorically that there was anything like intrinsic value, and he held the same position with regard to moral values: moral values were also based on a learning process, and were never "intrinsic", either absolutely or relatively. (Indeed, a "relative intrinsic value" is a self-contradictory term; intrinsic values are absolute, or else they are by definition not intrinsic.)

A further distinction is that of contributory goods, which have a contributory conditionality. These have the same qualities as the good thing, but need some emergent property of a whole state-of-affairs in order to be good. For example, salt is food on its own, and good as such, but is far better as part of a prepared meal. Providing a good outside this context is not delivery of what is expected. In other words, such goods are only good when certain conditions are met, in contrast to other goods, which may be considered "good" in a wider variety of situations. [edit] Kant: hypothetical and categorical goods

For more information, see the main article, Immanuel Kant.

The thinking of Immanuel Kant (1724-1804) greatly influenced moral philosophy. He thought of moral value as a unique and universally identifiable property, as an absolute value rather than a relative value.

He showed that many practical goods are good only in states-of-affairs described by a sentence containing an "if" clause, for example, in the sentence "Sunshine is only good if you do not live in the desert". Further, the "if" clause often described the category in which the judgment was made (art, science, etc.). Kant described these as "hypothetical goods", and tried to find a "categorical" good that would operate across all categories of judgment without depending on an "if-then" clause.

An influential result of Kant's search was the idea of a good will as the only intrinsic good. Moreover, Kant saw a good will as acting in accordance with a moral command, the "Categorical Imperative": "Act according to those maxims that you could will to be universal law," which resembles the Ethic of Reciprocity or Golden Rule, e.g. Mt. 7:12. From this, and a few other axioms, Kant developed a moral system that would apply to any "praiseworthy person". (See Groundwork of the Metaphysic of Morals, third section, 446-[447].)

Kantian philosophers believe that any general definition of goodness must define goods that are categorical in the sense that Kant intended. Value migration From Wikipedia, the free encyclopedia


In marketing, value migration is the shifting of value-creating forces. Value migrates from outmoded business models to business designs that are better able to satisfy customers' priorities.

Marketing strategy is the art of creating value for the customer. This can only be done by offering a product or service that corresponds to customer needs. In a fast changing business environment, the factors that determine value are constantly changing.

Adrian Slywotzky described value migration in his 1996 book. Contents [hide] 1 Three types 2 Three stages 3 See also 4 References

[edit] Three types: value flows between industries (example: from airlines to entertainment); value flows between companies (example: from Corel WordPerfect to Microsoft); and value flows between business designs within a company (example: from IBM mainframe computers to IBM PCs with system integration). [edit] Three stages: the value inflow stage, in which value is absorbed from other companies or industries; the value stability stage, a competitive equilibrium with stable market shares and stable profit margins; and the value outflow stage, in which companies lose value to other parts of the industry through reduced profit margins, loss of market share, and an outflow of talent and other resources.

The value chain is the sum of all activities that add utility to the customer. Parts of the value chain will be internal to the company, while others will come from suppliers, distributors, and other channel partners. A linkage occurs whenever one activity affects other activities in the chain. To optimize a value chain, the linkages must be well coordinated.

The calculation of value migration is more difficult than it might at first seem. Value is perceived by customers and, as such, is subjective. Because it is very difficult to measure directly, the relative market value of the firm is used as a proxy. Relative market value (defined as market capitalization divided by annual revenue) is used as an indication of the firm's success at creating value.
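
A small, hypothetical illustration of this proxy follows; the firms and figures are invented, and tracking the ratio over time would indicate value flowing toward or away from a business design.

```python
# Relative market value as defined above: market capitalization divided by
# annual revenue. All figures are invented for illustration (in $ millions).

firms = {
    "Firm A": {"market_cap": 120_000, "annual_revenue": 40_000},
    "Firm B": {"market_cap":  30_000, "annual_revenue": 60_000},
}

for name, f in firms.items():
    relative_market_value = f["market_cap"] / f["annual_revenue"]
    print(f"{name}: relative market value = {relative_market_value:.2f}")
# A ratio of 3.0 versus 0.5 would suggest value has migrated toward Firm A's business design.
```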

Value network From Wikipedia, the free encyclopedia A value network is a business analysis perspective that describes social and technical resources within and between businesses. The nodes in a value network represent people (or roles). The nodes are connected by interactions that represent tangible and intangible deliverables. These deliverables take the form of knowledge or other intangibles and/or financial value. Value networks exhibit interdependence. They account for the overall worth of products and services. Companies have both internal and external value networks.[1] Contents [hide] 1 External value networks 2 Internal value networks 3 Clayton Christensen's value networks 4 Fjeldstad and Stabell's value networks 5 Normann and Ramirez' value constellations 6 Verna Allee's value networks 7 Important terms and concepts 7.1 Tangible value 7.2 Intangible value 7.3 A non-linear approach 7.3.1 Relationship management 7.3.2 Business web and ecosystem development 7.3.3 Fast-track complex process redesign

7.3.4 Reconfiguring the organization 7.3.5 Supporting knowledge networks and communities of practice 7.3.6 Develop scorecards, conduct ROI and cost/benefit analyses, and drive decision making 8 See also 9 External links 10 References

[edit] External value networks

External-facing networks include customers or recipients, intermediaries, stakeholders, complementors, open innovation networks and suppliers. [edit] Internal value networks

Internal value networks focus on key activities, processes and relationships that cut across internal boundaries, such as order fulfillment, innovation, lead processing, or customer support. Value is created through exchange and the relationships between roles. Value networks operate in public agencies, civil society, in the enterprise, institutional settings, and all forms of organization. Value networks advance innovation, wealth, social good and environmental well-being. [edit] Clayton Christensen's value networks

Christensen defines value network as:

"The collection of upstream suppliers, downstream channels to market, and ancillary providers that support a common business model within an industry. When would-be disruptors enter into existing value networks, they must adapt their business models to conform to the value network and therefore fail that disruption because they become co-opted." [2]

[edit] Fjeldstad and Stabells value networks

Fjeldstad and Stabell[3] present a framework for "value configurations" in which a "value network" is one of two alternatives to Michael Porter's value chain (the other being the value shop configuration).

Fjeldstad and Stabell's value network consists of these components: a set of customers; a service that the customers all use and that enables interaction between them; an organization that provides the service; and a set of contracts that enables access to the service.

An obvious example of a value network is the network formed by phone users. The phone company provides a service; users enter into a contract with the phone company and immediately have access to the whole value network of the phone company's other customers.

Another, less obvious example is a car insurance company: the company provides car insurance; the customers gain access to the roads and can go about their business, interacting in various ways while being exposed to limited risk; the insurance policies represent the contracts; and the internal processes of the insurance company represent the service provisioning.

Unfortunately, Fjeldstad and Stabell's and Christensen's concepts both address the same issue, the conceptual understanding of how a company understands itself and its value creation process, but they are not identical. Christensen's value network addresses the relation between a company and its suppliers and the requirements posed by the customers, and how these interact when defining what represents value in the product that is produced.

Fjeldstad and Stabell's value network is a configuration which emphasizes that value is created between customers when they interact, facilitated by the value network. This represents a very different perspective from Christensen's but, confusingly, one that is applicable in many of the same situations.

[edit] Normann and Ramirez' value constellations

Normann and Ramirez argued[4] as early as 1993 that in today's environment, strategy is no longer a matter of positioning a fixed set of activities along a value chain. According to them, the focus today should be on the value-creating system itself, in which all stakeholders co-produce value. Successful companies conceive of strategy as systematic social innovation. With this article they laid a foundation for the value network to emerge as a mental model. [edit] Verna Allee's value networks

Verna Allee defines value networks [5] as any web of relationships that generates both tangible and intangible value through complex dynamic exchanges between two or more individuals, groups or organizations. Any organization or group of organizations engaged in both tangible and intangible exchanges can be viewed as a value network, whether private industry, government or public sector.

Allee developed value network analysis, a whole-systems mapping and analysis approach to understanding tangible and intangible value creation among participants in an enterprise system. Revealing the hidden network patterns behind business processes can provide predictive intelligence about when workflow performance is at risk. She believes value network analysis provides a standard way to define, map and analyse the participants, transactions and tangible and intangible deliverables that together form a value network. Allee says value network analysis can lead to profound shifts in the perception of problem situations and mobilise collective action to implement change.[6] [edit] Important terms and concepts [edit] Tangible value

All exchanges of goods, services or revenue, including all transactions involving contracts, invoices, return receipts of orders, requests for proposals, confirmations and payments, are considered to be tangible value. Products or services that generate revenue or are expected as part of a service are also included in the tangible flow of goods, services, and revenue (2). In government agencies these would be mandated activities; in civil society organizations these would be formal commitments to provide resources or services. [edit] Intangible value

Two primary subcategories are included in intangible value: knowledge and benefits. Intangible knowledge exchanges include strategic information, planning knowledge, process knowledge, technical know-how, collaborative design and policy development, which support the product and service tangible value network. Intangible benefits also include favors that can be offered from one person to another, for example offering political or emotional support to someone. Another example of intangible value is when a research organization asks someone to volunteer their time and expertise to a project in exchange for the intangible benefit of prestige by affiliation (3).
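
One way to picture how such a network might be recorded for analysis is as a small graph whose nodes are roles and whose edges are deliverables tagged as tangible or intangible. The sketch below is a hypothetical illustration in that spirit; the roles and exchanges are invented and are not taken from any published value network map.

```python
# Hypothetical value network map: each exchange links a providing role to a
# receiving role with a deliverable labeled tangible or intangible.

exchanges = [
    # (from_role, to_role, deliverable, kind)
    ("customer",   "supplier", "payment",            "tangible"),
    ("supplier",   "customer", "product",            "tangible"),
    ("supplier",   "customer", "technical know-how", "intangible"),
    ("customer",   "supplier", "loyalty / referral", "intangible"),
    ("researcher", "supplier", "planning knowledge", "intangible"),
]

def deliverables_by_kind(kind):
    """Group deliverables of one kind by the role that provides them."""
    out = {}
    for src, dst, item, k in exchanges:
        if k == kind:
            out.setdefault(src, []).append((item, dst))
    return out

print("tangible flows:  ", deliverables_by_kind("tangible"))
print("intangible flows:", deliverables_by_kind("intangible"))
```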

All biological organisms, including humans, function in a self-organizing mode internally and externally. That is, the elements in our bodies, down to individual cells and DNA molecules, work together in order to sustain us. However, there is no central boss to control this dynamic activity. Our relationships with other individuals also progress through the same circular, free-flowing process as we search for outcomes that are best for our well-being. Under the right conditions these social exchanges can be extraordinarily altruistic. Conversely, they can also be quite self-centered and even violent. It all depends on the context of the immediate environment and the people involved.[7] [edit] A non-linear approach Often value networks are considered to consist of groups of companies working together to produce and transport a product to the customer. Relationships among customers of a single company are examples of how value networks can be found in any organization. Companies can link their customers together by direct methods like the telephone or indirect methods like combining customers' resources together.

The purpose of value networks is to create the most benefit for the people involved in the network (5). The intangible value of knowledge within these networks is just as important as a monetary value. In order to succeed, knowledge must be shared to create the best situations or opportunities. Value networks are how ideas flow into the market and to the people who need to hear them.

Because value networks are instrumental in advancing business and institutional practices, a value network analysis can be useful in a wide variety of business situations. Some typical ones are listed below. [edit] Relationship management

Relationship management typically just focuses on managing information about customers, suppliers, and business partners. A value network approach considers relationships as two-way value-creating interactions, which focus on realizing value as well as providing value. [edit] Business web and ecosystem development

Resource deployment, delivery, market innovation, knowledge sharing, and time-to-market advantage are dependent on the quality, coherence, and vitality of the relevant value networks, business webs and business ecosystems[8]. [edit] Fast-track complex process redesign

Product and service offerings are constantly changing - and so are the processes to innovate, design, manufacture, and deliver them. Multiple, inter-dependent, and concurrent processes are too complex for traditional process mapping, but can be analyzed very quickly with the value network method. [edit] Reconfiguring the organization

Mergers, acquisitions, downsizing, expansion to new markets, new product groups, new partners, new roles and functions - anytime relationships change, value interactions and flows change too [9]. [edit] Supporting knowledge networks and communities of practice

Understanding the transactional dynamics is vital for purposeful networks of all kinds, including networks and communities focused on creating knowledge value. A value network analysis helps communities of practice negotiate for resources and demonstrate their value to different groups within the organization. [edit] Develop scorecards, conduct ROI and cost/benefit analyses, and drive decision making

Because the value network approach addresses both financial and non-financial assets and exchanges, it expands metrics and indexes beyond the lagging indicators of financial return and operational performance to also include leading indicators for strategic capability and system optimization... Value shop From Wikipedia, the free encyclopedia

The value shop was first conceptualized by Thompson in 1967. A value shop is an organization designed to solve customer or client problems rather than creating value by producing output from an input of raw materials.

Compared to Michael Porter's concept of the value chain, there is no sequential fixed set of activities or resources utilized to create value. Each problem is treated uniquely and activities and resources are allocated specifically to cater to the problem in question.

According to the research of Charles B. Stabell and Øystein D. Fjeldstad on value configuration analysis (1998), five main generic activities are carried out in the organization: problem finding and acquisition; problem solving; choice of problem solution; execution of the solution; and control and evaluation.

Value is created in the shop by several mechanisms that allow the organization to solve problems better or faster than the client, such as: the organization possesses more information about the problem than the client; the organization is specialized in dealing with the problem at hand, with specific methods of analysis; and strong expertise, in the form of expert professionals, is available.

Some of the classical examples of Value Shops include management consultancies such as Boston Consulting Group, Deloitte Touche Tohmatsu and McKinsey. The Value Shop concept has also been applied to a number of other activities including Norwegian police investigations (e.g. Gottschalk, 2007) and the knowledge-intensive energy exploration business (Woiceshyn and Falkenberg, 2008).
