
Computers in Human Behavior 29 (2013) 473–485


Assessing and governing IT-staff behavior by performance-based simulation


Vladimir Stantchev a,*, Konstantin Petruch b, Gerrit Tamm a

a SRH University Berlin, Ernst-Reuter-Platz 10, 10587 Berlin, Germany
b Telekom Deutschland GmbH, Bonn, Germany

* Corresponding author. Tel.: +49 (0)30 922 535 45x42; fax: +49 (0)30 922 535 55. E-mail addresses: vladimir.stantchev@srh-hochschule-berlin.de (V. Stantchev), konstantin.petruch@telekom.de (K. Petruch), gerrit.tamm@srh-hochschule-berlin.de (G. Tamm). URL: http://www.srh-hochschule-berlin.de/ (V. Stantchev).

Article history: Available online 4 July 2012

Keywords: Governance; IT-Staff; Simulation; Human aspects; Human behavior; System dynamics

Abstract

When optimizing IT operations, organizations typically aim to optimize resource usage. In general, there are two kinds of IT resources – IT infrastructures and IT staff. An optimized utilization of these resources requires both quantitative and qualitative analysis. While IT infrastructures can offer raw data for such analyses, data about IT staff often requires additional preparation and augmentation. One source for IT staff-related data can be provided by incident management and ticketing systems. While performance data from such systems is often stored in logfiles, it is rarely evaluated extensively. In this article we propose the usage of such data sources for IT staff behavior evaluation and also present the relevant augmentation techniques. We claim that our approach is able to provide more in-depth insights as compared to typical data visualization and dashboard techniques. Our modeling methodology is based on the approach of system dynamics. We also provide formal models and simulation results where we demonstrate the feasibility of the approach using real-life logfiles from an international telecommunication provider.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Today's enterprises and organizations conduct a wide range of operative processes electronically. This results not only in improved processing with respect to time and quality, but also in (more or less) automatically stored performance data about the processing (e.g., how many requests were processed, what was the duration of each processing task). Furthermore, data gathered by such operative systems often hides other nontrivial insights. Visualization is often a first step towards a more detailed data analysis. This is the application domain of business dashboards. One specific example is the usage of Google Analytics (www.google.com/analytics/) to visualize web server logfiles. More complex application scenarios involve the aggregation of multiple data sources and the subsequent analytical processing of data within a data warehouse.

Operative data can provide insights about two distinct types of measurements – key goal indicators (KGIs), which provide insights about the results of an operative task, and key performance indicators (KPIs), which define the way these results were achieved (e.g., speed, transaction rate). When we define such indicators with respect to specific business processes we can apply an approach known as process mining (Gerke & Tamm, 2009).

In this article (an extended version of a paper we presented at WSKS 2011) we extend this approach as follows: (1) we aim to assess utilization of IT staff (as a specific IT resource) based on analysis of its human behavior, and (2) we introduce and use more complex simulation techniques.

Our focus lies on KPIs and we address the question whether an extended data analysis using simulation methodologies can provide additional value as compared to standard data visualization. The main contribution of this work is the introduction of a straightforward approach for assessing the human behavior of IT specialists based on typically available IT KPIs.

The rest of this article is structured as follows: Section 2 presents the state of the art in the measurement of process indicators, the terminology we use, as well as related work in the area of computers in human behavior. In Section 3 we propose our hypothesis and give an overview of our assessment framework for indicators in the area of IT operations. In Section 4 we describe how we applied the system dynamics approach (Forrester, 1995) to verify our hypothesis. It includes results regarding a specific data transformation, model creation, and simulation process that we have assessed based on real-life data from an international telecommunications provider. Section 5 contains a discussion of our results and outlook on our future research activities.


2. Preliminaries

In this section we introduce the motivation for performance and output metrics. Furthermore, we discuss the motivation for data and logfile analysis in this context and also reflect on relevant human aspects.

2.1. Concepts of indicators

As stated above, indicators can be generally divided into two groups – key performance indicators (KPIs) and key goal indicators (KGIs). KPIs measure how well a process is performing and are expressed in precisely measurable terms. KGIs represent a description of the outcome of the process, often have a customer and financial focus, and can typically be measured after the fact has occurred (Grembergen, 2003). While KGIs specify what should be achieved, KPIs specify how it should be achieved.

2.2. Objectives of data and logfile analysis

Various algorithms (de Medeiros, Weijters, & van der Aalst, 2007; van der Aalst et al., 2003) have been proposed to discover different types of models based on a logfile. A special issue of Computers in Industry on process mining (van der Aalst & Weijters, 2004) offers more insights. In the context of process model verification, several notions for equivalence of process specifications have been developed, such as behavioral equivalence (Van der Aalst, de Medeiros, & Weijters, 2006; van Dongen, Dijkman, & Mendling, 2008), trace equivalence, and bisimulation (Van Glabbeek & Weijland, 1996). Traditional equivalence notions like bisimulation or trace equivalence are defined as a verification property which yields a yes-or-no boolean value, but no insights on the degree of equivalence. When comparing a reference model with a process model, it is not realistic to assume that their granularities are the same. Therefore, the equivalence analysis with classical equivalence notions will most likely not be conclusive. In the context of process mining we should apply notions searching for behavioral similarity. Examples include the causal footprint (van Dongen et al., 2008) and the fitness function (Van der Aalst et al., 2006). In van Dongen et al. (2008), the authors introduce an approach for determining the similarity between process models by comparing the footprint of such models. Thereby the footprint describes two relationships between activities – the so-called look-back and look-ahead links – and returns the degree of process similarity expressed in [0, 1]. This value is not conclusive and requires further explanation. It is not possible to trace the missing or differing activities. Since traceability is an important requirement of the organization, the approach is not suitable in general.
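As a toy illustration of such a graded similarity value, and of why it cannot be traced back to individual activities, consider the following sketch. It is our own simplified illustration, not the causal-footprint algorithm cited above: each model is assumed to be already reduced to a set of directed activity links, and a Jaccard-style coefficient yields a degree in [0, 1].

```python
# Toy illustration: degree of similarity in [0, 1] as the Jaccard coefficient over
# two models' directed activity links (hypothetical input, not the cited algorithm).
def link_similarity(model_a, model_b):
    """Each model is a set of (predecessor, successor) activity pairs."""
    if not model_a and not model_b:
        return 1.0
    return len(model_a & model_b) / len(model_a | model_b)

reference = {("register", "classify"), ("classify", "resolve"), ("resolve", "close")}
observed = {("register", "classify"), ("classify", "escalate"), ("escalate", "close")}
print(link_similarity(reference, observed))  # 0.2 - some overlap, but far from equivalent
```

The single number indicates a partial overlap, but it does not reveal which links are missing or differ, which is exactly the traceability limitation discussed above.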
In Van der Aalst et al. (2006), the authors introduce the behavioral and the structural precision and recall. The behavioral equivalence of the process models compares a process model with respect to some typical behavior recorded in log files. The structural precision and recall equate the term "structure" with all firing sequences of a Petri net that may occur in a process model. Other related works exist in the areas of pattern matching or semantic matching. Existing approaches (Ehrig, Koschmider, & Oberweis, 2007) assume that the correspondence of activities can be established automatically. Since they suppose that the same label implies the same function, they try to identify the content of an activity by using an automated semantic matching algorithm based on the labels of activities. One specific approach for quality improvement in compliance is IT-supported compliance evaluation (Sackmann & Kaehmer, 2008). The notion of compliance has also been discussed in the context of business alignment (van der Aalst, 2005).

2.3. Aspects of human behavior

A long-standing focus of computer-related research in the context of human behavior is the subject of computer anxiety. Mostly based on the Computer Anxiety Rating Scale (CARS), presented in Heinssen et al. (1987), studies have tried to evaluate anxiety in the usage and perception of computers in different demographics. Examples include studies of representative samples from the older population, as presented in Laguna and Babcock (1997), as well as East European (specifically Romanian) population groups, as presented in Durndell and Haag (2002). A meta-analysis published in Chua, Chen, and Wong (1999) provided the following insights: (1) female university undergraduates are in general more anxious than male undergraduates; (2) instruments measuring computer anxiety can be considered reliable, although not compatible with one another; and (3) computer anxiety is inversely related to computer experience, although the strength of this relationship remains inconclusive. Other works focused on gender differences (see Whitley et al. (1997) as an example), or on the creation of specific models for the evaluation of qualitative differences (see Todman & Monaghan (1995) as an example).

Another specific focus was the concept of online trust (see Wang & Emurian (2005) for an overview). Online trust is particularly challenging since the (first-time) user typically does not know the experience and trust properties of the online service. This is a particular challenge for online marketplaces, and a typical approach to address it is to provide information substitutes for the marketplace users. Marketplaces for tangible goods (such as eBay, http://www.ebay.com) typically need to provide such substitutes to establish trust in the seller of the goods. Examples include user ratings and comments. Marketplaces of non-tangible goods such as electronic services or software-as-a-service (SaaS) need to provide such substitutes both for the provider and the service itself (see Tamm & Günther (2005) for a framework for such substitutes and the Cloud marketplace Asperado, http://www.asperado.com, that was based on this framework).

Aspects that come closer to our hypotheses include the impact of computer anxiety on employees' performance and satisfaction (see Murrell & Sprinkle (1993)) and the impact of IT staff on the perceived quality of standard information systems (see Wu & Wang (2007) for an example that focuses on enterprise resource planning (ERP) systems). There are also works that look at ways to overcome hesitation and resistance during the adoption of new managerial approaches in IT (e.g., an approach to use a human-oriented maturity model in the context of ITIL is presented in Gama, Nunes da Silva, & Mira da Silva (2011)). In the area of software development, aspects of managerial decision making were analyzed quantitatively by Garcia-Crespo, Colomo-Palacios, Soto-Acosta, and Ruano-Mayoral (2010), and an approach to measure effects of emotions was presented in Colomo-Palacios, Casado-Lumbreras, Soto-Acosta, and Garcia-Crespo (2011).

3. Research hypothesis and assessment framework for IT governance

3.1. Research hypothesis

Analysis of log data can in general provide a clear picture of performance and utilization of IT infrastructure components. Such data can even be used to dynamically reconfigure systems for better performance at different architectural levels (the definition of these levels is presented in Stantchev (2008a) and Stantchev and Malek (2006), while details about the techniques are presented in Stantchev (2008b) and Stantchev and Malek (2008)).

Our hypothesis is that there are aspects of IT staff behavior that are hidden from the casual observer (or manager). Furthermore, we claim that such aspects can be derived from log data. This can also provide insights about IT staff behavior and thus offer valuable information regarding its performance and utilization. More specifically, we claim that we can build a causal model (Medsker, Williams, & Holahan, 1994) based on such data.

In order to obtain meaningful information and subsequent knowledge regarding IT staff behavior, we need to augment and further assess system log data. This augmentation and assessment should be considered within the context of IT management and IT governance. So, in short, we propose the augmentation and extended quantitative and qualitative analysis of log data to assess and govern IT staff behavior.

Our approach outline is as follows:

- Consider relevant IT objectives from IT Governance frameworks.
- Consider specific indicators in these frameworks.
- Define a sample that is relevant for IT staff behavior.
- Finally, use the concept of system dynamics to interpret, model, simulate, and analyze data.

3.2. IT Governance frameworks

IT governance frameworks aim to define standardized processes and control metrics for IT provision. Commonly applied frameworks in this area include the IT Infrastructure Library (ITIL) (Van Bon, 2008) and the Control Objectives for Information and Related Technology (CObIT) (Lainhart IV, 2000). They typically provide best practices for measurement and control of IT-specific indicators.

3.3. Consideration of ITIL

3.3.1. Service strategy
The Service Strategy domain of ITIL provides a long-term strategy for the service provider, which needs to consider the overall culture and vision of the organization, be aware of the competitive environment surrounding the service provider, and include strategies, tactics and plans that should leverage a competitive advantage and provide value for the organization. Besides, ITIL includes the areas of service portfolio management and financial management. Key performance indicators, however, which are critical measurement parameters indicating how well the business design is performing its business, are handled in the Service Design domain of the ITIL framework under the best practice Design Measurement Systems, Methods and Metrics.

3.3.2. Service design
ITIL's main goal of the service design stage is to conceptualize and design innovative IT services and the environment in which those services need to operate. It also includes the service level management process. Service catalogue management supports business IT alignment by providing a single reliable source for the business as an overview of the status and description of all services of the service providers. Furthermore, ITIL addresses availability, security and continuity issues in the service design stage with its processes.

3.3.3. Service transition
ITIL service transition supports continual change with its processes Change Management, Service Asset and Configuration Management, Validation and Testing Management and Release Management, creating a cycle of change within the overall service lifecycle. Furthermore, ITIL Service Transition includes practices to predict unexpected situations and incidents, which can possibly imply changes in the service design or strategy, thus integrating all lifecycle domains to gather feedback in order to improve the quality of decisions concerning change.

3.3.4. Service operation
Service operation includes the processes Incident Management, Problem Management and Event Management, which focus on detecting, avoiding and solving incidents, as well as applying and documenting solutions and changes. The experience gained through repeated exercise of operating these ITIL processes will mostly improve the organization's management ability to deliver SOA-enabled services. Finally, the Service Desk function, along with the Request Fulfillment and Access Management processes, provides a common interface for every user and uses the data gathered to organize and prioritize information across the organization according to the business design.

3.3.5. Relevance for the assessment framework
Within our assessment framework we consider the objectives of Service Design, Service Transition, Service Operation as well as the overall field of Continual Service Improvement.

3.4. Consideration of COBIT

3.4.1. Plan and organize
The Plan and Organize domain focuses on the strategy and tactics of how IT can best contribute to the accomplishment of business goals. The business requirements for IT need to be derived from the enterprise strategy and the optimal IT architecture needs to be planned, so that the IT remains aligned with business objectives. Control objectives such as IT value management, business IT alignment, IT strategic plan, IT tactical plan and IT portfolio management are delivered within the process Define a Strategic IT Plan. Control objectives concerning the IT architecture, technological direction, resource management and investment management are also delivered within this domain of the COBIT framework. COBIT also deals with the communication of the goals and objectives throughout the enterprise and with identifying risks threatening the achievement of business goals, thus providing a control mechanism and a high-level business perspective for the service provider.

3.4.2. Acquire and implement
The Acquire and Implement domain is about the realization of the IT strategy planned in the previous domain through concrete IT solutions. Automated solutions and applications need to be identified, assembled, improved and integrated with the existing system and business needs. New IT projects need to be developed and changes need to be applied in order to improve the existing system with due regard to the business strategy and time or economical constraints.

The Acquire and Implement domain also includes the actual deployment of the hosting infrastructure. Control objectives delivered within the process Identify Automated Solutions – such as Definition and Maintenance of Business Functional and Technical Requirements, Risk Analysis Report, and Feasibility Study and Formulation of Alternative Courses of Action – reflect the critical activities and best practices which need to be considered and controlled to achieve high quality of automated solutions. Application software, technology infrastructure development and changes applied to those can be controlled accordingly via the control objectives delivered by COBIT.

3.4.3. Deliver and support
The Deliver and Support domain is responsible for the successful delivery of IT services that have been planned and acquired in previous domains.

COBIT's objective in this domain is to fulfill quality-of-service concerns such as availability, reliability, integrity, efficiency and serviceability. COBIT processes supporting that objective can be summarized as Manage Performance and Capacity, Ensure Continuous Service, Ensure Systems Security, Manage Problems, Manage Service Desk and Incidents. Additionally, service level management, configuration management, educating and training users, and identifying and allocating costs are also covered in the Deliver and Support domain.

3.4.4. Monitor and evaluate
The Monitor and Evaluate domain is responsible for evaluating the performance of IT services, checking if the IT supports the business goals as planned and implying changes if not. COBIT delivers measurement metrics such as outcome and performance indicators, which will help the service provider to efficiently evaluate the performance of their services. Through the hierarchical goal relationship structure of COBIT these metrics can enable success predictions on higher-level goals and enable the management to take early actions. Furthermore, the Monitor and Evaluate domain ensures that the IT stays aligned with the business by controlling compliance issues with internal and external regulations and supports the continual improvement of the IT system.

3.4.5. Relevance for the assessment framework
Within our assessment framework we consider some control objectives of Plan and Organise, and the set of control objectives of Acquire and Implement, Deliver and Support as well as Monitor and Evaluate.

3.5. Consideration of IT-specific indicators

IT indicators should demonstrate the added value of IT to the business side. A well accepted view of business objectives is Porter's distinction between operational effectiveness (efficiency and effectiveness) and strategic positioning (reach and structure) (Porter, 1996). This view can be translated directly into corresponding goals and indicators for IT (Tallon, Kraemer, & Gurbaxani, 2000).

Organizations require well designed business processes to achieve excellence in a competitive environment. Here, not one-time optimized business processes play the essential role; rather, the ability to quickly react to new developments and to flexibly adapt the respective business processes is decisive (Borzo, 2005). It is important that these processes are effectively supported through IT. These requirements have consequently been catalyzing increased interest in reference modeling for IT process management. Reference models such as ITIL and CObIT represent proven best practices and provide key indicators for the design and control of IT services (Van Bon, 2008). On the one hand, utilization of reference models promises to enhance quality and facilitates better compliance with statutes and contractual agreements. On the other hand, IT processes have to correspond to the corporate strategy and its respective goals. Therefore, the question arises how best practices can be implemented in a particular corporate environment. Another challenge lurks in the checking of reference process execution as well as in assuring compliance to IT procedures with respect to new or altered business processes (Stantchev & Tamm, 2011).

The specific consideration of the objectives of the ITIL domains and the control objectives of the COBIT domains stated above is conducted in the context of feasible indicators that correspond to these objectives. As part of a related research project we assessed over 500 IT-specific indicators, based on ITIL and COBIT. Details about this assessment are given in Petruch, Stantchev, and Tamm (2011). From these indicators we derived subsets that we considered specifically relevant for IT staff behavior. Table 1 lists an excerpt of the relevant indicators in the area of ITIL Service Design, Table 2 lists an excerpt of the relevant indicators in the area of ITIL Service Transition, Table 3 lists an excerpt of the relevant indicators in the area of ITIL Service Operation, and Table 4 lists the relevant indicators in the area of ITIL Continual Service Improvement.

3.6. Aligning IT and business indicators

One way towards IT and business alignment can be the application of approaches such as CObIT and ITIL for the optimization of IT organizations. We recently introduced an approach for the continuous quality improvement of IT processes based on such models (Gerke & Tamm, 2009) and process mining. An organization can also try to assure the continuous provision of service levels, as demonstrated in our previous work with such reference models and our work in the area of service level assurance in SOA (Stantchev & Schröpfer, 2009; Stantchev & Malek, 2009; Stantchev & Schröpfer, 2009). Furthermore, in order to coordinate and govern IT production, we can assess operative data and try to analyze it more deeply with the help of simulation models.

3.7. Cloud governance aspects

Governance of cloud computing should regard different deployment models. Abstracting services at the level of infrastructure (IaaS) allows comparatively easy virtualization – the user organization can configure and customize the platform and the services within the virtual image that is then being deployed and operated. This includes the definition of performance parameters for specific services (e.g., parameters of a Web Service Container), the security aspects of service access, and the integration of services within the platform.

When using a standardized platform (the PaaS approach, Platform-as-a-Service (Petruch et al., 2011)), the user organization deploys the services in a virtualized operating environment. This operating environment is typically provided as a service – the virtualization technology and the operating environment are managed by the provider. Integration capabilities are always provider-specific and there are currently no commonly accepted industry standards for integration between services operated in different PaaS environments (two current standardization activities at the IEEE Standards Association are IEEE P2301, Draft Guide for Cloud Portability and Interoperability Profiles, and IEEE P2302, Draft Standard for Intercloud Interoperability and Federation, see http://standards.ieee.org/news/2011/cloud.html). The usage of software services itself (the SaaS approach) precludes fine-grained control and enforcement of non-functional aspects (e.g., QoS, response time) and security parameters of the infrastructure and the platform by the user organization.

These different levels of virtualization require different levels of security and abstraction. The grade of control and responsibility for security aspects declines with higher levels of abstraction – in IaaS the configuration is generally in the hands of the user organization, while in SaaS it is primarily a responsibility of the Cloud provider.

There are several emerging patterns for cloud usage. The first one is a natural consequence of the trend to outsource IT operations (aka IT-RUN functions) to external providers and results in demand for IaaS. IaaS is typically used for the implementation of test projects and as a way to overcome underprovisioning in on-premise infrastructures. The second one is coming from the SaaS area and focuses on the provision of Web 2.0 applications. Some well-known sites offer the user the chance to develop simple applications (in the mold of PaaS) and offer them in a SaaS-like manner later on. This usage pattern could also be called extension facilities. PaaS is an optimal environment for users seeking testing and development capabilities; these are two new emerging use patterns which are gaining popularity.

Table 1
Considered ITIL metrics in the area of service design (excerpt); source can be V2 (ITIL V2), V3 (ITIL V3), or metrics for IT Service Management (MC).

Source Indicator ITIL area


V3 Fraction of requirements definition defined on time (and in budget) Service design (general)
V3 Reduction of non-fulfilled SLA Targets Service Level Management (SLM)
MC Number of services that are not covered by an SLA Service Level Management (SLM)
MC Ratio of CIs that are assessed with respect to their performance Capacity management
V3 Reduction of risks and consequences of possible service disruptions IT Service Continuity Management (ITSCM)
V3 Decline of security breaches reported to service-desk Information Security Management (ISM)
MC Number of security-related incidents Information Security Management (ISM)

Table 2
Considered ITIL Metrics in the area of service transition (excerpt); source can be V2 (ITIL V2), V3 (ITIL V3), or metrics for IT Service Management (MC).

Source Indicator ITIL area


V3 Number or ratio of releases, that correspond to customer expectations with respect to costs, quality, Transition planning and support
functionality and deadline
V3 Ratio of changes that fulfill agreed upon customer requirements Change management
V3 Reduction in number of open change requests Change management
V3 Impact of incidents and faults concerning a specific CI category, e.g., specific suppliers or run teams Service Asset and Configuration Management
(SACM)
V3 Mean license costs per user Service Asset and Configuration Management
(SACM)

Table 3
Considered ITIL Metrics in the area of service operation (excerpt); source can be V2 (ITIL V2), V3 (ITIL V3), or metrics for IT Service Management (MC).

Source Indicator ITIL area


V3 Number of events (in categories) Event management
V3 Number of events (in importance) Event management
V3 Ratio of events that require human actions Event management
V3 Total number of incidents Incident management
V3 Number of open incidents Incident management
V3 Total number of service requests Request fulfillment
V3 Distribution of service requests according to processing status Request fulfillment
V3 Mean duration for processing a service request (in categories) Request fulfillment
V3 Total number of problems Problem management
V3 Ratio of problems solved within SLA time Problem management
V2 Number of repeated incidents/problems Problem management
MC Number of closed problems Problem management
MC Number of incidents solved through Known Errors Problem management

Probably, gaming will be one of the most remarkable usage patterns for Cloud technologies, due to an inherent scalability, endowing such applications with virtually unlimited graphical power and players. Also the rise of netbooks in the computer hardware industry triggered the development of Clouds. These slim devices depend on services being deployed in remote Cloud sites since their own capacity is limited. Behind this stands the idea of getting access to everything, from anywhere, at any time.

A set of general Corporate Governance rules has to be specifically refined and targeted for every operational area in an enterprise. The idea of manageability in Cloud Computing is closely related to the operationalization of Corporate Governance in the different phases of the use of a Cloud Computing offering.

A specific manifestation of such operationalization can be the introduction of SLA-based Governance. This would mean that the organization has to incorporate specific governance requirements as part of a service level agreement for a Cloud Computing offering. Suitable examples include the so-called "four-eyes principle" that can be part of the SLA for a SaaS offering, or data availability requirements that can also be part of the SLA for a SaaS offering.

In order to introduce such transparent Cloud Governance mechanisms an organization has to consider all phases of the usage of a Cloud Computing offering. During the first phase of requirements identification and elicitation (often called the Plan phase) these requirements need to be specified and formalized. This allows addressing them already within a first assessment of the Cloud Computing market for the specific offering. Potential Cloud Computing providers can then be specifically evaluated with respect to the requirements, and specific SLAs can be negotiated with them during the second phase. The third phase can focus on the transparent communication of values and benefits of the SLA during the start of production for the specific business unit. The fourth phase would deal with performance monitoring and assessment of SLA fulfillment and associated bonuses or penalties.

These phases and their associated activities can be introduced as specific Cloud Computing extensions to more traditional IT-Governance approaches such as CObIT and ITIL. This introduction is typically non-trivial, as there are significant differences between the abstraction levels and the semantics of Cloud Computing and IT-Governance.

In the specific area of SaaS a more straightforward approach can focus on the introduction of a more specific approach from the area of SOA Governance – the SOA LifeCycle (Stantchev & Malek, 2011). It describes a governance approach for software functionality as provided by web services, which makes its paradigms and concepts more applicable to the aspects of SaaS Governance.

Table 4
Considered ITIL metrics in the area of continual service improvement (excerpt); source can be V2 (ITIL V2), V3 (ITIL V3), or metrics for IT Service Management (MC).

Source Indicator ITIL area


V2 Precision of cost and expenditure planning IT financial management
V2 The IT-Organization was led within the expected revenue and cash flow IT financial management
V2 Financial targets of IT were met IT financial management
V2 Frequency and impact of changes in accounting and budgeting IT financial management
V2 Degree of chargeability of IT costs IT financial management
MC Delay in generation of monthly forecast IT financial management
V2 Total Cost of Ownership (TCO) IT financial management

On the other side, the SOA LifeCycle can be incorporated as part of a general IT-Governance strategy based on CObIT and ITIL.

Fig. 1. An excerpt from the technical assessment.

3.8. The concept of system dynamics

Systems thinking aims to help the understanding and improvement of systems (Forrester, 1995). The methodology begins with an interpretation of the real world into a description. This description is used in the later stages of the approach. In system dynamics, a formal equations model is derived from this description. This is a far more rigorous "reflection" of reality. The model creation and simulation "... contribute rigor and clarity to systems thinking" (Forrester, 1994).

The concept of system dynamics has been applied successfully to the analysis of human aspects before. Examples include the analysis of software project staffing (Abdel-Hamid, 1989; Abdel-Hamid & Madnick, 1989) and the impact of goals in software projects (Abdel-Hamid, Sengupta, & Swett, 1999; Abdel-Hamid, 1988). Furthermore, it was used to create simulation models for specific software lifecycle processes (Madachy, 1996), for various other software development processes (Madachy & Tarbet, 2000), as well as for the assessment of obesity treatment (Abdel-Hamid, 2003) and the public health system in general (Homer & Hirsch, 2006).

Our research on related works did not uncover any noteworthy studies that deal with the applicability of system dynamics for the analysis of IT staff behavior. Nevertheless, some similarities with the objectives of software project and software development process works make the application of system dynamics a promising approach to address our research hypothesis.
ment of systems (Forrester, 1995). The methodology begins with approach to address our research hypothesis.

Fig. 2. An overview of the qualitative model. Legend: costs (Kosten), number of employees (Anzahl der Mitarbeiter), speed of ticket processing (Schnelligkeit der
Ticketbearbeitung), customer satisfaction (Zufriedenheit beim Kunden), quality of ticket processing (Qualität der Ticketbearbeitung), number of open incidents (Anzahl
offener Incidents).

Fig. 3. Augmentation of the qualitative model with the factor stress. The factor Stress is derived from the existing factors speed of ticket processing, quality of ticket
processing, and number of open incidents.

Fig. 4. Augmentation of the qualitative sub-model of the factor stress with the factor 'number of staff away sick' (Krankenstand). The new factor is derived from existing
factors and further data from the HR system.

4. Applying system dynamics to assess IT staff behavior

In this section we describe our approach for applying system dynamics to assess IT staff behavior. We present our results similarly to the presentation of results in other related works in the area of software development (Abdel-Hamid, 1989; Madachy, 1996).

4.1. System dynamics first stage – Interpretation of the real world

The first stage of the traditional system dynamics approach is the interpretation of the real world into a description (Forrester, 1994). It serves as a starting point for the next stages – model creation and simulation. We consider a quantitative view of the real world, based on operative log data.

Fig. 5. The complete qualitative model. It considers the sub-model of the Factor Stress, its influences on the other factors, and vice versa.

The operative log data is generated by software applications that provide service support as a set of standardized IT functions. It describes request processing for three different IT services of an international telecommunications provider. The IT services are an e-mail service, an IP-based video-on-demand service, and a web-hosting service. Data is generated in a comma-separated values (CSV) format and includes an incident number, as well as the following fields:

- Priority.
- Short description.
- Affected service.
- Start of incident.
- End of incident.
- Range of impact.
- Number of process steps needed.
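For illustration, a minimal sketch of how such an export could be read for further processing is given below. The column names, the assumption of a header-less comma-separated export, and the epoch-seconds timestamps are our own assumptions, not the provider's actual format.

```python
import csv
from datetime import datetime

# Hypothetical field names - the real export of the ticketing system may differ.
FIELDS = ["incident_id", "priority", "short_description", "affected_service",
          "start_of_incident", "end_of_incident", "range_of_impact", "process_steps"]

def read_incidents(path):
    """Yield one dict per incident, with parsed timestamps and a duration in hours."""
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle, fieldnames=FIELDS):
            start = datetime.fromtimestamp(int(row["start_of_incident"]))
            end = datetime.fromtimestamp(int(row["end_of_incident"]))
            row["duration_h"] = (end - start).total_seconds() / 3600.0
            yield row
```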
4.2. System dynamics – tool selection

There are various tools for the definition and conduction of System Dynamics models. Our next objective was the selection of a suitable tool for the enterprise application scenario. Our assessment was based on a cost–benefit analysis and the analytical hierarchy process (AHP) and included the following categories of requirements with their weightings:

- Technical (15%).
- Functional (50%).
- Environment (25%).
- Supplier/Support (10%).

Fig. 1 shows an exemplary excerpt from the assessment of the technical requirements of the four alternatives. It visualizes the coverage of the specific technical requirements by each of the four alternatives and demonstrates that three of them are well suited, while one (Consideo Modeller) is very well suited. The final decision was to use the simulation software Consideo Modeller (http://www.consideo.de).
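The final aggregation step behind such an assessment can be illustrated with a simple weighted sum (AHP itself derives the weights from pairwise comparisons; the coverage scores and tool names below are placeholders, not our actual evaluation data):

```python
# Category weights as stated above; coverage scores are illustrative placeholders in [0, 1].
WEIGHTS = {"technical": 0.15, "functional": 0.50, "environment": 0.25, "supplier": 0.10}

def weighted_score(coverage):
    """Aggregate per-category requirement coverage into a single tool score."""
    return sum(WEIGHTS[category] * coverage.get(category, 0.0) for category in WEIGHTS)

candidates = {
    "Tool A": {"technical": 0.80, "functional": 0.70, "environment": 0.75, "supplier": 0.60},
    "Tool B": {"technical": 0.90, "functional": 0.85, "environment": 0.80, "supplier": 0.70},
}
for name, coverage in sorted(candidates.items(), key=lambda item: -weighted_score(item[1])):
    print(f"{name}: {weighted_score(coverage):.2f}")
```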
4.3. System dynamics second stage – Model creation

This is basically the second stage of the system dynamics approach (Forrester, 1994). In the first stage, we interpreted the real world as a description – that is, the log data. The use of the log data as a description for the definition of the system dynamics model required further transformation of this data. Examples of specific transformations that we had to conduct are:

- The original log data includes timestamps (start and end of an incident) as UNIX-type datetime data. It had to be transformed to the supported DD.MM.YYYY HH:MM:SS format.
- We had to include an incident increment that we then facilitated to model the correspondence between a time value and the number of incidents that are processed.
- We used only an excerpt of the available log data that covered several years. Using data blocks per month allowed us to keep the execution time of the simulation short.
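The first and third transformation can be sketched as follows (field names as in the hypothetical reader above; the monthly blocks simply count newly opened incidents per calendar month):

```python
from collections import Counter
from datetime import datetime

def to_tool_format(unix_seconds):
    """Transform a UNIX timestamp into the DD.MM.YYYY HH:MM:SS format expected by the tool."""
    return datetime.fromtimestamp(int(unix_seconds)).strftime("%d.%m.%Y %H:%M:%S")

def monthly_incident_blocks(incidents):
    """Count newly opened incidents per (year, month) block to shorten simulation runs."""
    blocks = Counter()
    for row in incidents:
        opened = datetime.fromtimestamp(int(row["start_of_incident"]))
        blocks[(opened.year, opened.month)] += 1
    return dict(sorted(blocks.items()))
```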
4.3.1. Creation of the qualitative model

Fig. 2 shows an overview of our qualitative modeling approach. It considers costs (Kosten), number of employees (Anzahl der Mitarbeiter), speed of ticket processing (Schnelligkeit der Ticketbearbeitung), customer satisfaction (Zufriedenheit beim Kunden), quality of ticket processing (Qualität der Ticketbearbeitung), and the number of open incidents (Anzahl offener Incidents). The considered factors are derived from operative systems in the area of IT support (speed of ticket processing, quality of ticket processing, number of open incidents) and from standard ERP systems (costs, number of employees). We conducted specific augmentations as described in the previous section.

Fig. 6. First iteration of the quantitative model. On the left – introduction of newly opened daily incidents via a random function f(x) = [0, 1], probability distribution of incidents
f(x) = [0, 100] and subsequent rounding; on the right – daily completed incidents dependent on the daily active working time, the number of employees, and the processing
ratio per hour.

Furthermore, we considered specific time frames and therefore had to synchronize data from these two sources correspondingly.

This first qualitative model gives us specific insights with respect to the dependencies between the singular factors. In order to provide additional insights about other (hidden) factors we can consider further factors that can be aggregated from the existing ones and can influence the qualitative model. Fig. 3 shows the consideration of the emergent factor stress, which can be aggregated from the existing factors speed of ticket processing, quality of ticket processing, and number of open incidents.

We then further augment the factor Stress with the new factor 'number of staff away sick' (Krankenstand), as shown in Fig. 4. This augmentation can provide potential insights into whether increased levels of stress contribute directly to increased numbers of staff away sick.
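As an illustration of how such an aggregated factor could be operationalized outside the modelling tool, the following sketch derives a single stress value from proxies for the three operative factors (backlog, processing speed, and a quality ratio). The min–max ranges and the weighting are assumptions for demonstration purposes, not the calibration of our model.

```python
def scaled(value, low, high):
    """Min-max scale a raw indicator to [0, 1] (clipped); ranges are illustrative."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def stress_index(open_incidents, tickets_per_hour, first_fix_rate):
    """Stress rises with the backlog and falls with processing speed and quality."""
    backlog = scaled(open_incidents, 0, 200)
    speed = scaled(tickets_per_hour, 0, 10)
    quality = first_fix_rate  # assumed to be a ratio already in [0, 1]
    return round((2 * backlog + (1 - speed) + (1 - quality)) / 4, 2)

print(stress_index(open_incidents=120, tickets_per_hour=6, first_fix_rate=0.8))  # 0.45
```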

Fig. 7. Second iteration of the quantitative model. The model is now augmented with the influence of the factor Stress as follows – once, by the increasing number of open
incidents (Stress durch offene Incidents, top), and second, by the introduction of quotas based on the processing ratio per hour (Stress durch Quotenvergabe, right).

Fig. 8. The results matrix for the qualitative model. Legend: 1. Number of employees (Anzahl der Mitarbeiter), 2. Number of open incidents (Anzahl offener Incidents), 3.
Number of staff away sick (Krankenstand), 4. Quality of ticket processing (Qualität der Ticketbearbeitung), 5. Speed of ticket processing (Schnelligkeit der Ticketbearbeitung),
6. Stress.

Fig. 5 shows the complete qualitative model. It also contains the influence of the factor Stress (included as a sub-model) on the factor number of employees.

4.3.2. Creation of the quantitative model

Not every qualitative model can be transformed directly into a quantitative one – not every factor of the qualitative model can be represented quantitatively. We consider one characteristic example where we transform a subset of our qualitative model into a quantitative model. More specifically, we are assessing the influence of stress on the number of open incidents, or, even more precisely, we are assessing the dependency of the number of (currently) open incidents on the stress levels.

The dynamic aspects of the model are simulated by the introduction of daily opened incidents into the system model using a random function f(x) = [0, 1], a probability distribution of incidents f(x) = [0, 100] and subsequent rounding (see Fig. 6). The number of daily completed (closed) incidents is dependent on the following three factors:

- Daily active working time,
- Number of employees, and
- Processing ratio per hour.

We then augment the quantitative model at two places with the influence of the factor stress (see Fig. 7). Stress impacts the completed incidents negatively and is itself influenced as follows: first, by the increasing number of open incidents, and second, by the introduction of quotas based on the processing ratio per hour. Here we assumed values of n open incidents where stress is negatively impacted when n > 100 (Δ = 0.20), and values of n_sim for simultaneously processed incidents where stress is negatively impacted when n_sim > 6 (Δ = 0.20). These values were also confirmed empirically.
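The following stand-alone sketch re-implements this feedback structure in a few lines. It is not the Consideo Modeller model itself: the staffing values are assumptions, and only the first stress channel (open incidents above the threshold) is modelled; varying q_hour already hints at the quota experiment in Section 4.4.2.

```python
import random

EMPLOYEES = 1          # assumed staffing
ACTIVE_HOURS = 8       # assumed daily active working time
STRESS_DELTA = 0.20    # completion penalty once more than 100 incidents are open

def run_quarter(q_hour, days=120, seed=1):
    """Simulate one quarter and return the end-of-day backlog of open incidents."""
    random.seed(seed)
    backlog, trace = 0, []
    for _ in range(days):
        backlog += round(random.random() * 100)          # daily arrivals drawn from [0, 100]
        stress = STRESS_DELTA if backlog > 100 else 0.0  # stress through open incidents
        capacity = EMPLOYEES * ACTIVE_HOURS * q_hour     # daily completion capacity
        backlog -= min(backlog, round(capacity * (1.0 - stress)))
        trace.append(backlog)
    return trace

print(max(run_quarter(q_hour=6)), max(run_quarter(q_hour=7)))  # peak backlog per quota setting
```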
4.4. System dynamics third stage – Simulation

Our objective is to simulate the behavior of specific KPIs and KGIs, such as the incident processing rate and further indicators from Table 3, e.g., also timing and response dependencies from specific factors/parameters. We consider simulations both with the qualitative and the quantitative model.

4.4.1. Simulations with the qualitative model

Fig. 8 shows a results matrix for the qualitative model. Dependency types and their relative strengths, dependent on the number of open incidents, are shown in a Cartesian coordinate system. The analysis of the dependencies offers the following insights:

- An increase in quality of ticket processing (4) correlates with an increase in the number of open incidents.
- An increase in the number of employees (1) correlates with an insignificant decrease in the number of open incidents.
- An increase in the speed of ticket processing (5) correlates strongly with a decrease in the number of open incidents.

Let us assume that we are taking responsibility for an IT department and need to provide several quick wins. If we focus on the number of open incidents we can aim to increase the speed of ticket processing. We can then focus on the influences of the factors on the speed of ticket processing and thus find the proper variables that we need to control.

Our results with the qualitative model seem to confirm our hypothesis – we are able to uncover hidden dependencies that can provide additional knowledge and specific action points for IT executives. This approach can be subsequently extended and further operationalized with our quantitative models that we present next.

4.4.2. Simulation results from the quantitative model

A key benefit of the quantitative model is that it allows an extended simulation. The model is executed step-by-step based on the specified factors, values and transform functions. Changes in every factor can then be plotted (e.g., as histograms). Furthermore, when working with probabilities and random-generated values, we can employ a Monte-Carlo simulation (Binder, 1986).

Fig. 9 shows results of the Monte-Carlo simulation of our quantitative model. The model considers quotas based on the processing ratio per hour. The ratio is set to 6 incidents per hour (down, right). Results in the histogram show that the daily number of new incidents can be processed without delays in more than 90% of the cases during a quarterly period (120 days).

A gradual increase in the number of open incidents beyond the threshold of 100 leads to an increased feedback effect that causes the outliers. A short-term availability of additional employees to handle the number of incidents that exceed the threshold of 100 would have only a minor effect on costs but would practically eliminate delays in the incident processing.
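For readers without access to the modelling tool, a bare-bones Monte-Carlo counterpart of this experiment is sketched below. It reuses the simplifying assumptions of the sketch in Section 4.3.2, so its percentages are only indicative and do not reproduce the figures reported above.

```python
import random

def share_of_delay_free_days(q_hour, runs=1000, days=120,
                             employees=1, active_hours=8, delta=0.20):
    """Fraction of simulated days that end without a carried-over backlog."""
    delay_free = 0
    for _ in range(runs):
        backlog = 0
        for _ in range(days):
            backlog += round(random.random() * 100)
            stress = delta if backlog > 100 else 0.0
            capacity = employees * active_hours * q_hour
            backlog -= min(backlog, round(capacity * (1.0 - stress)))
            delay_free += backlog == 0
    return delay_free / (runs * days)

random.seed(7)
for quota in (6, 7):
    print(f"Q_hour = {quota}: {share_of_delay_free_days(quota):.1%} of days end without backlog")
```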
As a next step, we went further in looking for hidden aspects of IT staff behavior that we can derive from the quantitative model. We simulated different governance settings without changing the number of employees. Our expectation was that we could find settings where we can avoid a self-enforcing increase in the number of open incidents without having to provide additional manpower. Therefore we configured the model with various factor settings and then ran the simulations. Fig. 10 provides an overview of results from one specific configuration. It differs from the model in Fig. 9 in the value of only one factor – the value of the factor quotas based on the processing ratio per hour has been increased: Q_hour → Q_hour', with Q_hour' = 7.

The results of this simulation are a particularly representative case for the uncovering of hidden aspects in IT staff behavior. The increase in hourly quotas Q_hour → Q_hour' is at first sight a typical increase in stress levels. Surprisingly, the simulation shows that as a direct effect of this increase all incidents can be processed on time. Furthermore, there are even negative levels of stress that we can observe (see Fig. 10, left).

Overall, our experimental results confirm our hypothesis. The application of system dynamics allows us to model dependencies both qualitatively and quantitatively. It also allows the usage of standard simulation software, which is another benefit of our approach. Based on our qualitative and quantitative models we can provide directly applicable knowledge in the form of specific managerial (e.g., increase the number of operators when n > 150) or governance (e.g., set Q_hour → Q_hour') recommendations.

5. Discussion and outlook

In this article we stated the hypothesis that there are aspects of IT staff behavior that are hidden from the casual observer (or manager) and that such aspects can be derived from log data.

First, we considered relevant IT objectives from IT Governance frameworks. Then we regarded specific indicators in these frameworks, and subsequently we defined a sample of these indicators that is relevant for IT staff behavior. Finally, we used the concept of system dynamics to interpret, model, simulate, and analyze data.

The presented simulation results uncover different unexpected aspects (e.g., higher quotas for hourly tasks lead to less stress) and thus confirm the hypothesis.

There are several limitations of our approach. First, we considered only IT objectives from ITIL and COBIT, so our indicators and simulation results are necessarily restricted to those. Second, the solicitation of more than 500 indicators that correspond to these objectives was based on industry expert assessments and actual indicator listings used in IT production settings. Third, as there is no existing literature dealing with the relevance of such indicators to human aspects, the assessment of whether an indicator is related to IT staff behavior, or not, was based on assumptions and experience.

Fig. 9. Results of the Monte-Carlo simulation of the quantitative model. The model considers quotas based on the processing ratio per hour. The ratio Q_hour is set to six
incidents per hour (down, right).

Fig. 10. Results of the Monte-Carlo simulation of the quantitative model. The model considers quotas based on the processing ratio per hour. The ratio Q_hour is set to seven
incidents per hour (down, right).

Fourth, our system dynamics models were created and our simulations were conducted with the selected software tool. We have not in any way evaluated the conformity or correctness of the software tool itself.

Nevertheless, our approach clearly enhances the existing body of knowledge in the area of IT Governance by providing a straightforward use of already existing data (logfiles) to generate new managerial insights. In this area, it can be extended to cover additional IT objectives and indicators, beside the ones that we considered in this work. Furthermore, the approach extends research in the area of system dynamics with models that are purely oriented at IT indicators. Future works here can look at other real-world scenarios that are highly indicator-oriented (e.g., natural sciences or social sciences, or even auditing). As the amount of data that systems around us are generating increases at an incredible pace, such indicator- and data-driven approaches are becoming more and more important. In the area of human behavior research, the approach provides a clear example of applying computers to better understand human behavior in complex settings, as well as of generating insights for a more informed and fact-based managerial behavior.

Acknowledgments

The authors thank the anonymous reviewers and the editors for the valuable and concise suggestions that considerably improved this work. Any remaining errors of fact or analysis are the sole responsibility of the authors.

References

Abdel-Hamid, T. K. (1989). The dynamics of software project staffing: A system dynamics based simulation approach. IEEE Transactions on Software Engineering, 15(2), 109–119.
Abdel-Hamid, T. K. (2003). Exercise and diet in obesity treatment: An integrative system dynamics perspective. Medicine & Science in Sports & Exercise, 35(3), 400.
Abdel-Hamid, T. K., & Madnick, S. (1989). Lessons learned from modeling the dynamics of software development. Communications of the ACM, 32(12), 1426–1438.
Abdel-Hamid, T. K., Sengupta, K., & Swett, C. (1999). The impact of goals on software project management: An experimental investigation. MIS Quarterly, 531–555.
Abdel-Hamid, T. K. (1988). Understanding the 90% syndrome in software project management: A simulation-based case study. Journal of Systems and Software, 8, 319–330. <http://dl.acm.org/citation.cfm?id=56771.56777>.
Binder, K. (1986). Monte-Carlo methods. Mathematical Tools for Physicists, 249–280.
Borzo, J. (2005). Business 2010 – Embracing the challenge of change. Tech. Rep.
Chua, S., Chen, D., & Wong, A. (1999). Computer anxiety and its correlates: A meta-analysis. Computers in Human Behavior, 15(5), 609–623.
Colomo-Palacios, R., Casado-Lumbreras, C., Soto-Acosta, P., & Garcia-Crespo, A. (2011). Using the affect grid to measure emotions in software requirements engineering. Journal of Universal Computer Science, 17(9), 1281–1298.
de Medeiros, A., Weijters, A., & van der Aalst, W. (2007). Genetic process mining: An experimental evaluation. Data Mining and Knowledge Discovery, 14(2), 245–304.
Durndell, A., & Haag, Z. (2002). Computer self efficacy, computer anxiety, attitudes towards the internet and reported experience with the internet, by gender, in an East European sample. Computers in Human Behavior, 18(5), 521–535.
Ehrig, M., Koschmider, A., & Oberweis, A. (2007). Measuring similarity between semantic business process models. In Proceedings of the fourth Asia-Pacific conference on conceptual modelling (Vol. 67, pp. 71–80). Australian Computer Society, Inc.

Forrester, J. (1994). System dynamics, systems thinking, and soft OR. System Dynamics Review, 10(2–3), 245–256.
Forrester, J. (1995). The beginning of system dynamics. McKinsey Quarterly, 4–17.
Gama, N., Nunes da Silva, R., & Mira da Silva, M. (2011). Using People-CMM for diminishing resistance to ITIL. International Journal of Human Capital and Information Technology Professionals (IJHCITP), 2(3), 29–43.
Garcia-Crespo, A., Colomo-Palacios, R., Soto-Acosta, P., & Ruano-Mayoral, M. (2010). A qualitative study of hard decision making in managing global software development teams. Information Systems Management, 27(3), 247–252.
Gerke, K., & Tamm, G. (2009). Continuous quality improvement of IT processes based on reference models and process mining. In AMCIS 2009 Proceedings (p. 786).
Grembergen, W. V. (Ed.). (2003). Strategies for information technology governance. Hershey, PA, USA: IGI Publishing.
Heinssen, R. et al. (1987). Assessing computer anxiety: Development and validation of the computer anxiety rating scale. Computers in Human Behavior, 3(1), 49–59.
Homer, J., & Hirsch, G. (2006). System dynamics modeling for public health: Background and opportunities. American Journal of Public Health.
Laguna, K., & Babcock, R. (1997). Computer anxiety in young and older adults: Implications for human–computer interactions in older populations. Computers in Human Behavior, 13(3), 317–326.
Lainhart, J. IV (2000). COBIT: A methodology for managing and controlling information and information technology risks and vulnerabilities. Journal of Information Systems, 14, 21.
Madachy, R., & Tarbet, D. (2000). Case studies in software process modeling with system dynamics. Software Process: Improvement and Practice, 5(2–3), 133–146.
Madachy, R. J. (1996). System dynamics modeling of an inspection-based process. In Proceedings of the 18th international conference on software engineering, ICSE '96 (pp. 376–386). Washington, DC, USA: IEEE Computer Society. <http://dl.acm.org/citation.cfm?id=227726.227800>.
Medsker, G., Williams, L., & Holahan, P. (1994). A review of current practices for evaluating causal models in organizational behavior and human resources management research. Journal of Management, 20(2), 439–464.
Murrell, A., & Sprinkle, J. (1993). The impact of negative attitudes toward computers on employees' satisfaction and commitment within a small company. Computers in Human Behavior, 9(1), 57–63.
Petruch, K., Stantchev, V., & Tamm, G. (2011). A survey on IT-governance aspects of cloud computing. International Journal of Web and Grid Services, 7(3), 268–303.
Porter, M. (1996). What is strategy? Harvard Business Review, 74(4134), 61–78.
Sackmann, S., & Kaehmer, M. (2008). ExPDT: A layer-based approach for automating compliance. Wirtschaftsinformatik, 50(5), 366–374.
Stantchev, V. (2008a). Architectural translucency. Berlin, Germany: GITO Verlag.
Stantchev, V. (2008b). Effects of replication on web service performance in WebSphere. ICSI Tech Report 2008-03. Berkeley, CA, USA: International Computer Science Institute.
Stantchev, V., & Malek, M. (2006). Architectural translucency in service-oriented architectures. IEE Proceedings – Software, 153(1), 31–37.
Stantchev, V., & Malek, M. (2008). Addressing web service performance by replication at the operating system level. In ICIW '08: Proceedings of the 2008 third international conference on internet and web applications and services (pp. 696–701). Los Alamitos, CA, USA: IEEE Computer Society.
Stantchev, V., & Malek, M. (2009). Translucent replication for service level assurance. In High assurance services computing (pp. 1–18). Berlin, New York: Springer.
Stantchev, V., & Malek, M. (2011). Addressing dependability throughout the SOA life cycle. IEEE Transactions on Services Computing, 4, 85–95. <http://dx.doi.org/10.1109/TSC.2010.15>.
Stantchev, V., & Schröpfer, C. (2009). Negotiating and enforcing QoS and SLAs in grid and cloud computing. In GPC '09: Proceedings of the 4th international conference on advances in grid and pervasive computing (pp. 25–35). Berlin, Heidelberg: Springer-Verlag.
Stantchev, V., & Schröpfer, C. (2009). Service level enforcement in web-services based systems. International Journal on Web and Grid Services, 5(2), 130–154.
Stantchev, V., & Tamm, G. (2011). Addressing non-functional properties of services in IT service management. In Non-functional properties in service oriented architecture: Requirements, models and methods (pp. 324–334). Hershey, PA, USA: IGI Global.
Tallon, P. P., Kraemer, K. L., & Gurbaxani, V. (2000). Executives' perceptions of the business value of information technology: A process-oriented approach. Journal of Management Information Systems, 16, 145–173. <http://portal.acm.org/citation.cfm?id=1189427.1189434>.
Tamm, G., & Günther, O. (2005). Webbasierte Dienste: Technologien, Märkte und Geschäftsmodelle. Heidelberg: Physica-Verlag.
Todman, J., & Monaghan, E. (1995). Qualitative differences in computer experience, computer anxiety, and students' use of computers: A path model. Computers in Human Behavior, 10(4), 529–539.
Van Bon, J. (2008). Foundations of IT service management based on ITIL V3. Van Haren.
Van der Aalst, W., de Medeiros, A., & Weijters, A. (2006). Process equivalence: Comparing two process models based on observed behavior. Business Process Management, 129–144.
van der Aalst, W., & Weijters, A. (2004). Process mining: A research agenda. Computers in Industry, 53(3), 231–244.
van der Aalst, W. M. P. (2005). Business alignment: Using process mining as a tool for delta analysis and conformance testing. Requirements Engineering, 10, 198–211. <http://portal.acm.org/citation.cfm?id=1342176.1342177>.
van der Aalst, W. M. P., van Dongen, B. F., Herbst, J., Maruster, L., Schimm, G., & Weijters, A. J. M. M. (2003). Workflow mining: A survey of issues and approaches. Data and Knowledge Engineering, 47, 237–267. <http://portal.acm.org/citation.cfm?id=961808.961812>.
van Dongen, B., Dijkman, R., & Mendling, J. (2008). Measuring similarity between business process models. In Advanced information systems engineering (pp. 450–464). Springer.
Van Glabbeek, R., & Weijland, W. (1996). Branching time and abstraction in bisimulation semantics. Journal of the ACM (JACM), 43(3), 555–600.
Wang, Y., & Emurian, H. (2005). An overview of online trust: Concepts, elements, and implications. Computers in Human Behavior, 21(1), 105–125.
Whitley, B. et al. (1997). Gender differences in computer-related attitudes and behavior: A meta-analysis. Computers in Human Behavior, 13(1), 1–22.
Wu, J., & Wang, Y. (2007). Measuring ERP success: The key-users' viewpoint of the ERP to produce a viable IS in the organization. Computers in Human Behavior, 23(3), 1582–1596.
