
Programme: Factories of the Future PPP
Strategic Objective: ICT-2011.7.3 Virtual Factories and Enterprises
Project Title: Product Remanufacturing Service System
Acronym: PREMANUS
Project #: 285541

D2.1 - INVENTORY ANALYSIS REPORT

Work Package: WP2: Reference Architecture
Lead Partner: 1: SAP
Contributing Partner(s): 2 (POLIMI), 3 (LBORO), 4 (TIE)
Security Classification: PU (Public)
Date: 16th July 2012
Version: 1.08

COPYRIGHT
Copyright 2012 by PREMANUS Consortium

Legal Disclaimer
The information in this document is provided as is, and no guarantee or warranty is given that the information is fit for any particular purpose. The above referenced consortium members shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials subject to any liability which is mandatory due to applicable law. This document may not be copied, reproduced, or modified in whole or in part for any purpose without written permission from all of the Copyright owners. In addition to such written permission to copy, reproduce, or modify this document in whole or part, an acknowledgement of the authors of the document and all applicable portions of the copyright notice must be clearly referenced. All rights reserved. This document may change without notice.

The PREMANUS project (285541) is co-funded by the European Union under the Information and Communication Technologies (ICT) theme of the 7th Framework Programme for R&D (FP7). This document does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of its content.

Project No: 285541    Date: 16-July-12    Classification: PU

D2.1 - Inventory Analysis Report

Document history
Version | Date     | Comments                                                      | Author
0.1     | 14/04/12 | ToC and first draft                                           | Tobias Wieschnowsky (SAP)
0.2     | 29/05/12 | Update SOTA of RSG                                            | Oscar Garcia (TIE)
0.3     | 10/06/12 | Update SOTA of RIS, RSG, BDSS                                 | Tobias Wieschnowsky (SAP), Oscar Garcia (TIE), David Potter (POLIMI)
0.4     | 22/06/12 | Update SOTA of RSG                                            | Oscar Garcia (TIE)
0.5     | 23/06/12 | Update SOTA of RSG                                            | David Potter (POLIMI)
0.6     | 25/06/12 | Template fixing                                               | Oscar Garcia (TIE)
0.7     | 26/06/12 | Update SOTA of RSG                                            | Oscar Garcia (TIE)
0.8     | 26/06/12 | Update BDSS SOTA, and add Holonix i-LiKe description          | Yi Peng (LBORO), Jacopo Cassina (POLIMI)
0.9     | 27/06/12 | Major revision performed                                      | Oscar Garcia (TIE)
1.0     | 28/06/12 | First version for internal review                             | Tobias Wieschnowsky (SAP)
1.01    | 9/07/12  | Major revisions in all parts of the document                  | Tobias Wieschnowsky, Nicolas Liebau, Benedikt Schmidt (SAP), Oscar Garcia (TIE)
1.02    | 9/07/12  | Revisions in several parts of the document                    | Tobias Wieschnowsky (SAP)
1.03    | 10/07/12 | Revisions for several sections                                | David Potter (POLIMI), Tobias Wieschnowsky (SAP)
1.04    | 11/07/12 | Minor revisions                                               | Tobias Wieschnowsky (SAP)
1.05    | 16/07/12 | Language, grammar and readability improvements carried out as part of second internal review | Ian Graham (LBORO)
1.06    | 16/07/12 | Executive Summary and finalizing document                     | Nicolas Liebau (SAP)
1.08    | 16/07/12 | Final fixings                                                 | Oscar Garcia (TIE), Benedikt Schmidt (SAP)

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 285541.


Table of contents
1 EXECUTIVE SUMMARY
2 INTRODUCTION
3 REMANUFACTURING
  3.1 REMANUFACTURING DEFINITION
  3.2 REMANUFACTURING RESEARCH OVERVIEW
  3.3 MARKET MODELS AND SIMULATION TOOLS
  3.4 STRATEGIC DECISION SUPPORT AND LIFE CYCLE TOOLS
  3.5 OPERATIONAL MANAGEMENT
  3.6 END OF LIFE DECISION MAKING
  3.7 DECISION SUPPORT SYSTEM THEORY
    3.7.1 Introduction
    3.7.2 Uncertainty Theory and Risk in Decision Support System Research
    3.7.3 Asset Condition Assessment
4 REMANUFACTURING INFORMATION SERVICES (RIS)
  4.1 ID MANAGEMENT FOR PRODUCT AND COMPONENTS
    4.1.1 Product Identification Scheme
    4.1.2 Information Services and Networks
  4.2 DISTRIBUTED PRODUCT INFORMATION STORE
    4.2.1 Introduction
    4.2.2 Apache Hadoop
    4.2.3 Distributed storages
    4.2.4 NoSQL databases
    4.2.5 Information Indexing
    4.2.6 Conclusions
  4.3 INFORMATION RETRIEVAL MECHANISM
    4.3.1 SOAP Web Service
    4.3.2 RESTful Web Services
    4.3.3 OData
    4.3.4 Conclusions
  4.4 ACCESS CONTROL
    4.4.1 OpenRBAC
    4.4.2 Spring Security
    4.4.3 jGuard
    4.4.4 Conclusion
5 REMANUFACTURING SERVICES GATEWAY (RSG)
  5.1 SEMANTIC SERVICE BUS
    5.1.1 Introduction
    5.1.2 Enterprise Service Bus
    5.1.3 Technologies for the SSB
    5.1.4 Conclusions
  5.2 INFRASTRUCTURAL SERVICES
    5.2.1 Generic Services
    5.2.2 Semantic services
    5.2.3 Gateway services
  5.3 DEVICE AS A SERVICE, MAINTENANCE AS A SERVICE
    5.3.1 Device as a Service
    5.3.2 Maintenance as a Service
6 BUSINESS DECISION SUPPORT SYSTEM (BDSS)
  6.1 END-OF-LIFE PRODUCT RECOVERY PROCESS ECO-EFFICIENCY EVALUATOR
  6.2 KPI OPTIMIZER
  6.3 USER EXPERIENCE
    6.3.1 HTML 5
    6.3.2 SAP Streamworks
    6.3.3 Adobe Flex
    6.3.4 Microsoft Silverlight
    6.3.5 Conclusion on User Experience
  6.4 TASK-CENTRIC INFORMATION SYSTEMS
    6.4.1 ADiWa Workbench
    6.4.2 Conclusions
  6.5 NATURAL LANGUAGE QUERY INTERFACES
    6.5.1 Products
    6.5.2 Projects
    6.5.3 Conclusion
7 END OF LIFE SYSTEMS
  7.1 PRODUCT LIFECYCLE MANAGEMENT
    7.1.1 Quantum Lifecycle Management (QLM)
    7.1.2 Holonix i-LiKe (intelligent Lifecycle Knowledge)
    7.1.3 SAP PLM
    7.1.4 OpenPLM
    7.1.5 Conclusion
  7.2 END-OF-LIFE PRODUCT MANAGEMENT SYSTEMS
    7.2.1 SAP ERP Recycling Administration
    7.2.2 Conclusion End-of-Life Product Management Systems
8 CONCLUSIONS
9 APPENDIX A: REFERENCES


1 Executive Summary
This deliverable summarizes and analyses the current state-of-the-art in technologies relevant to the PREMANUS project. The selection of technologies for consideration is guided by the DoW and supplemented by others identified over the course of the project so far. Useful and reusable technologies that will play a vital part in, or aid, the development of PREMANUS have been identified. The document is structured using the work packages of the project as a basis, with the addition of a later chapter covering supplementary systems.

Chapter 3 reviews the state of the art in research in the context of remanufacturing. First, a definition of remanufacturing is given, followed by an overview of research in that area. As PREMANUS focuses on business decision support for remanufacturing, a summary of decision support system research is given. Because uncertainty plays a crucial role for PREMANUS, decision support under uncertainty receives extra attention. Finally, the state of the art for asset health condition assessment is summarized.

Chapter 4 details technologies for the RIS (Remanufacturing Information Services, WP3), which consist of ID management for products, persistency, information retrieval, and access control. For ID management of products, the Electronic Product Code (EPC) and the related information service (EPCIS) were identified as technologies offering capabilities useful to PREMANUS. For persistency, the Apache Hadoop framework provides an integrated, comprehensive set of functionalities for the distributed execution of applications on large clusters. For information retrieval, RESTful web services in combination with OData have been identified as the best solution. For access control, no recommendation on the technology to use could be given, as the concrete requirements for this functionality are not yet clear at this stage of the project.

Chapter 5 deals with topics surrounding the RSG (Remanufacturing Services Gateway, WP4) and is composed of three sections: semantic service bus, infrastructural services, and devices and maintenance as-a-service. As the basis of a semantic service bus, the TIE Smart Bridge is a product of partner TIE Kinetix; project developers therefore know it and can efficiently integrate it into the PREMANUS system. Similarly, for semantic services the TIE Semantic Integrator (TSI) is provided by TIE Kinetix. For gateway services related to the Internet of Things, results from the EU project PROMISE will be leveraged, especially QLM. Using QLM, data from devices and maintenance reports can be integrated into PREMANUS in a lightweight manner.

Chapter 6 corresponds to the BDSS (Business Decision Support System) from WP5 and includes: a manufacturing BDSS review, an end-of-life product recovery process eco-efficiency evaluator, a KPI optimizer, and a user experience design guide. For the End-of-Life Product Recovery Process Eco-Efficiency Evaluator, the most relevant related work exists in the areas of Life Cycle Cost and Life Cycle Assessment. For the KPI Optimizer, a set of optimization techniques has been reviewed in the context of their application areas within PREMANUS: operational scheduling and production planning, strategic product life cycle decision making, and general KPI optimization. User experience is important for a BDSS, as users must be able to interpret the presented results correctly. For user interfaces, HTML 5 together with SAP Streamwork offers the capability to use PREMANUS on any operating system, including mobile devices, and furthermore to reuse widgets in order to adapt PREMANUS to different scenarios in a lightweight manner.

Chapter 7 identifies end-of-life and similar IT systems which will also be relevant to PREMANUS. Here, only the Holonix i-LiKe (intelligent Lifecycle Knowledge) product, a result of the EU PROMISE project, can be leveraged for PREMANUS.



2 Introduction
This deliverable presents the results of a state-of-the-art analysis for PREMANUS. According to the DoW, relevant technologies are reviewed relating to the Remanufacturing Information Service (RIS), the Remanufacturing Services Gateway (RSG), and the Business Decision Support System (BDSS).

Chapter 4 reviews the state-of-the-art related to the PREMANUS Remanufacturing Information Service (RIS). The RIS provides the mechanisms for distributed storage and retrieval of product information, in addition to product ID resolution and access control on the product data. The four main functionalities of the RIS are:
- ID matching and resolution, providing the ability to reference products by their unique identifiers across stakeholders that use their own local identifiers,
- a Distributed Product Information Store, to allow aggregation and synchronization of product information that is distributed across a remanufacturing ecosystem,
- retrieval of distributed information into a repository that the BDSS uses to execute its algorithms,
- access control, to ensure the confidentiality and integrity of product information.

In Chapter 5 a state-of-the-art review for the Remanufacturing Services Gateway (RSG) is presented. The RSG allows for efficient use of the information managed by the RIS, enabling product-centric collaboration by exposing product-data-oriented services for the end-of-life product recovery process. This requires:
- a Semantic Service Bus: a distributed bus for message exchange with enhanced support for semantics/metadata,
- a stack of infrastructural services: for example, publishing services, annotation services, and authentication and authorization services,
- Devices as a Service (DaaS): Internet-of-Things representatives that expose different devices (and their parts) in the virtual world of PREMANUS,
- Maintenance as a Service (MaaS): services that represent contracted maintenance and enable the effective creation/updating of descriptions of the performed actions (and associated conditions) on particular components,
- adapter services for connecting business systems (such as ERP and MES), diagnostics systems (predictive maintenance and monitoring systems), and integration middleware to PREMANUS.

Chapter 6 reviews the state-of-the-art related to the PREMANUS Business Decision Support System (BDSS). The BDSS has two core components:
- the EoL product recovery process eco-efficiency evaluator, which gives a recommendation on the effects of product recovery based on different environmental factors (for instance, the calculation, using LCA, of alternative scenarios to product disposal),
- the KPI optimizer, which supports different decision points involved in the remanufacturing of a product based on data from diagnostics systems and business systems. If product remanufacturing is not viable, it provides suggestions on alternative actions and their business consequences.

Further, the efficacy of decision support depends on the way information is presented to the user; hence a user experience state-of-the-art review concludes Chapter 6.

Finally, Chapter 7 reviews the state-of-the-art of existing commercial IT systems that are relevant to


end-of-life products:
- Product Lifecycle Management (PLM) systems, used during product design and development, to manage and plan product portfolios and, increasingly, to manage product-related data across the whole life cycle,
- Recycling systems, supporting the recycling processes and the regulations for reporting.


3 Remanufacturing
This section provides a general overview of remanufacturing research. It starts with a definition of remanufacturing and then provides an overview of the different research areas.

3.1 Remanufacturing Definition

Remanufacturing is a distinct process which can be applied to a product that has reached the end of its functional life, and is broadly categorised under the reuse category within the waste hierarchy. Remanufacture can be distinguished from other similar reuse processes, such as refurbishment and repair, by the level of quality to which the product is returned and the type of warranty which is then given. Unlike repair and refurbishment, a remanufactured product is returned to at least the quality of its original manufactured performance specification and carries a warranty to match that of an equivalent newly manufactured product [141]. A comprehensive list of definitions for end of life (EoL) options and other similar processes can be found in Table 1, whilst a taxonomy of EoL process options can be found in Figure 1.
Table 1: Process definitions adapted from [151]
Upgrade: any process that gives a product enhanced functionality.
Reprocessing/Reverse manufacture: the value-adding activity of repair, refurbishment or remanufacture.
Remanufacture: the reprocessing of used products in such a manner that product quality is as good as or better than new in terms of appearance, reliability and performance.
Refurbishment: the reprocessing of used equipment at minimum cost in order to ensure that the product performance is within the bounds of what is considered acceptable for reuse.
Reconditioning: a process within remanufacturing in which used components have their condition restored to as good as new.
Maintenance: the series of actions taken during the use of a product to enable it to function at a predetermined level for its economical lifespan.
Recycling: the process of recovering material after a product has been discarded.
Revalorization: any process that seeks to recover any embedded value in a discarded product or material.
Reuse: continuing to use an item after it has been relinquished by its previous user, rather than destroying, dumping or recycling it.
Reuse as is: the reuse of a product with minimal reprocessing.
Further use: the use of a used product for a different purpose than was originally intended.
Repair: either actions performed to return a product to functioning condition during service, or actions at product end of life to return a component to functioning condition.


Figure 1: End of life categories adapted from [151]

3.2 Remanufacturing Research Overview

The following section is a collection and review of the current state of the art in decision support tools and mathematical models related to the field of remanufacturing. The tools and models are presented in four categories:
- Market models and simulation
- Strategic decision support and life cycle tools
- Operational management
- Reverse logistics and end of life decision making

An overview of the classifications found can be seen in Table 2.

Table 2: Classification of remanufacturing decision support tools and their description

Tool Classification | Description | References
Market Models & Simulation | Models the effects of competition and return policies on market demand and product returns. | [130, 131, 133, 136, 147, 148, 157]
Strategic Decision Support & Life Cycle Tools | Models the effects of strategic decisions, such as product returns incentive policy and level of investment in design for remanufacture, upon financial profits over a period of time. | [161, 162, 166]
Operational Management | Provides tools to optimise inventory management, production planning and production scheduling. | [132, 134, 137, 139, 143, 145, 149, 154, 155, 160, 163, 164, 165, 167, 170]
End of Life Decision Making | Evaluates the level of quality to which a product should be reverse manufactured, on a case-by-case basis. | [140, 144, 146, 153, 158, 159, 168]

3.3 Market Models and Simulation Tools

These tools and models are designed to investigate issues within the market place and how strategic decisions can affect business performance. Ferrer [136], Debo [131] and Atasu [130] all compare different market scenarios in which the remanufacturing company concerned either holds a market monopoly or faces competition in a duopoly. In addition, Ferrer's work also focuses on finding optimum levels at which both the original equipment manufacturer (OEM) and the independent remanufacturer (IR) can coexist. Furthermore, Debo investigates the effects that pricing and product technology have on profit, and Atasu investigates green segments and market growth, including sustainable ventures such as remanufacturing. Finally, Robotis [157] models the profit levels obtainable by remanufacturing businesses that sell repaired or remanufactured products only to secondary markets.

In the work of Dobos [133] and Mitra [148], governmental decisions involving remanufacturing activities are considered. Dobos investigates the effect that remanufacturing has on an economy, whereas Mitra explores the effects that government subsidies can have on manufacturing and remanufacturing businesses.

Matsumoto [147] has created a tool that allows the user to assess a market's response to a particular remanufactured product. The tool is based upon a mathematical model which predicts the type of product bought (new, OEM-remanufactured or independently remanufactured), based upon consumer choice. The main output from this model is not profit, but how well a particular remanufactured product diffuses into a market. This is displayed in graphical form (Figure 2) and allows the user to perform a qualitative analysis.

Figure 2 - Reuse Market Simulator [147]
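The consumer-choice logic underlying such market models can be sketched in a few lines. The following is an invented illustration, not Matsumoto's actual model: each consumer values a remanufactured unit at a discount to a new one and buys whichever option yields the larger non-negative surplus. All prices, valuations and the discount factor are made-up example values.

```python
# Hypothetical consumer-choice sketch: a population of consumers with
# heterogeneous valuations splits between new and remanufactured products.
# Illustrative only; not taken from any of the cited models.
def market_shares(valuations, price_new, price_reman, reman_discount=0.7):
    shares = {"new": 0, "remanufactured": 0, "none": 0}
    for v in valuations:
        surplus_new = v - price_new
        surplus_reman = reman_discount * v - price_reman  # reman valued at a discount
        if max(surplus_new, surplus_reman) < 0:
            shares["none"] += 1                 # neither option is worth buying
        elif surplus_new >= surplus_reman:
            shares["new"] += 1
        else:
            shares["remanufactured"] += 1
    return shares

# Valuations 0..99, new product priced at 40, remanufactured at 25
print(market_shares(range(100), 40.0, 25.0))
```

Sweeping the remanufactured price in such a sketch gives the kind of qualitative diffusion picture the text describes: lower prices move consumers from "none" into the remanufactured segment before they start cannibalising new-product sales.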


3.4 Strategic Decision Support and Life Cycle Tools

Strategic decision support and life cycle tools allow corporate-level decision makers to assess the potential impact that certain policies may have upon their remanufacturing business. These types of decision can range from the incentive system used to initiate product returns, to the level of design for remanufacture that should be incorporated in a certain product.

Spengler [161] has developed a decision tool to help predict the effects of four strategic remanufacturing decisions within a given business. Using a stochastic model to simulate the pseudorandom nature of product returns, this decision tool allows a user to compare the financial outcomes of different strategic scenarios, via the use of life cycle costing to forecast profits or losses over a predefined time period. Work by Subramanian et al. [162] created a holistic decision-making model for a diesel engine remanufacturer that included operational, environmental and strategic elements. This nonlinear mathematical programming model was designed to enable profit maximisation whilst considering environmental goals and constraints. Furthermore, work by Umeda et al. [166] compares the effects that design could have on cost, profit, energy usage, and waste production for a product redesigned for different business strategies (see Figure 4). This was achieved by creating a decision support tool which simulates the flows in the product life cycle and optimises them for each specific strategy. The business models compared are: a traditional use-disposal scenario; a recycling scenario; a reuse scenario; a maintenance scenario; and a Product Service System (PSS) scenario (shown as Post Mass Production Paradigm (PMPP) in Figure 3).

Figure 3 - Architecture of Umeda's Life Cycle Simulation Tool [166]

Figure 4 - Example set of results from life cycle simulation tool [166]
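The core idea of a stochastic, life-cycle-costing comparison of strategic scenarios, as in the tools described above, can be sketched as a small Monte Carlo simulation. This is a generic illustration in the spirit of such tools, not Spengler's or Umeda's actual model; every figure (return volumes, margins, costs) is invented.

```python
import random

# Hypothetical sketch: product returns per period are uncertain, and we
# forecast cumulative profit over a planning horizon for a given strategy.
# All parameter values are invented for illustration.
def simulate_profit(periods=60, mean_returns=100, margin_per_unit=40.0,
                    fixed_cost_per_period=2500.0, seed=42):
    rng = random.Random(seed)  # fixed seed makes the sketch reproducible
    profit = 0.0
    for _ in range(periods):
        returns = max(0.0, rng.gauss(mean_returns, 15))  # uncertain return volume
        profit += returns * margin_per_unit - fixed_cost_per_period
    return profit

# Compare two strategic scenarios: a higher returns incentive raises the
# volume of returned cores but reduces the margin per remanufactured unit.
base = simulate_profit(mean_returns=100, margin_per_unit=40.0)
incentive = simulate_profit(mean_returns=130, margin_per_unit=35.0)
print(incentive > base)
```

Running each scenario many times with different seeds would yield a profit distribution per strategy, which is the kind of output such life-cycle-costing tools present for comparison.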


3.5 Operational Management

Optimising the operational processes associated with reverse manufacturing continues to be a large area of interest in academic research. The key areas are: inventory management, production planning and production scheduling. Most of the models developed have stemmed from a traditional manufacturing (linear process) context, but have been adapted to meet the identified remanufacturing needs.

Inventory management models developed for remanufacturing activities have also evolved from traditional linear manufacturing models, such as the economic order quantity (EOQ) [131, 164] and the Wagner-Whitin procedure [154, 155] for deterministic models. In addition, dynamic programming has been used to solve dynamic-demand deterministic models [165]. Stochastic models have been developed to simulate the uncertain nature of demand and returns, and these types of models have been categorised by Ilgin [142] as either continuous review [167, 170, 163] or periodic review [145, 132, 149].

Production planning systems for remanufacturing have been developed to assist managers in planning how much disassembly should occur, at what frequency, and when. They have also been developed to support the planning of remanufacturing activities, such as whether and when to produce and/or order new materials [142]. Examples of these types of tools include the work of Ferrer [137], the work of Souza [160], and the work undertaken by Jayaraman [143].

Production scheduling is another area in which operational models have been developed in order to maximise efficiency and minimise the cost of remanufacture. These models are particularly useful for remanufacturing due to its greater degree of uncertainty and complexity. An example of this type of model is given by Guide [139]. Xing et al. present in [169] a recent study on remanufacturing systems using soft computing. It focuses on systems for product design, production planning and scheduling (including disassembly sequencing and planning), and inventory management. However, it does not address the estimation of an individual product's condition and remanufacturing costs.

Although there is a large research contribution to the development of operational tools and models for remanufacturing, few commercially available decision tools exist. One such software package is offered by River Cities Software Inc. [156], which appears to be an add-in to their inventory control software that addresses specific remanufacturing requirements (though information on the company's website is limited).
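As a concrete example of the traditional inventory models mentioned above, the classic EOQ formula balances ordering cost against holding cost: Q* = sqrt(2DS/H), with annual demand D, cost per order S and annual holding cost per unit H. The figures below are made-up example values, not data from any of the referenced studies.

```python
import math

# The textbook economic order quantity (EOQ): the batch size that minimises
# the sum of annual ordering and holding costs under deterministic demand.
def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Optimal order quantity Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Example: 10,000 returned cores per year, 50 per order placed, 2 per unit-year held
q_star = eoq(10_000, 50.0, 2.0)
print(round(q_star))  # → 707
```

The remanufacturing variants cited above extend this basic trade-off to two coupled flows, newly procured items and recovered returns, rather than a single replenishment stream.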

3.6 End of Life Decision Making

End-of-Life (EoL) decision methodologies allow the assessment of the potential EoL options available to a remanufacturer, such as remanufacture, recondition, repair, recycle or dispose. They operate on a case-by-case basis, proposing a method and calculation for how the assessment should be conducted. The information required in this type of decision making generally includes product- and process-specific data regarding, for example: the structure of a particular product, such as a bill of materials (BOM) or connections diagram; the quality of the recovered product (e.g. amount of wear); and the costs of the reverse manufacturing processes such as cleaning, disassembly and rework. Examples of optimisation algorithms for profit include the work of Krikke [146], Rudd [159] and Jun [144]. A key part of this process is obtaining the data needed to support the relevant decisions, such as costs. An example of particular relevance to the PREMANUS project is that of Jun [144], where the level of quality to which a product should be restored was based upon the potential profit, which is inherently a function of the quality of the returned product. This was modelled in the decision tool using a non-linear variable rework cost function, and the paper presented a case study of a turbocharger remanufacturer to demonstrate its function. Considering additional decision parameters alongside cost is common. The environmental impact associated with products returned due to legislative constraints or changes (e.g. the Waste Electrical and Electronic Equipment (WEEE) and End of Life Vehicle (ELV) directives) has also been considered. Iakovou [140] used a ranking system to allow the joint consideration of market value, environmental burden, weight, ease of disassembly and quantity. Furthermore, the effect of operational constraints has been investigated by Xanthopoulos [168] in conjunction with a traditional multi-criteria analysis, applying a two-phase model in order to fully understand the complex dynamics.
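To illustrate the kind of calculation such methodologies perform, the following sketch selects the most profitable EoL option for a single returned core; the figures and the quadratic cost shape are hypothetical and do not reproduce Jun's actual model, only the idea of a rework cost that grows non-linearly as returned quality drops:

```python
def rework_cost(quality):
    """Hypothetical non-linear rework cost; quality in [0, 1], 1.0 = like-new."""
    return 250.0 * (1.0 - quality) ** 2

def best_eol_option(quality):
    """Return the (option, profit) pair with the highest profit."""
    options = {
        "remanufacture": 200.0 - (20.0 + rework_cost(quality)),  # resale - disassembly - rework
        "recycle": 30.0 - 10.0,                                  # material value - processing
        "dispose": -5.0,                                         # disposal fee only
    }
    return max(options.items(), key=lambda kv: kv[1])

print(best_eol_option(0.9))  # good core: remanufacture is most profitable
print(best_eol_option(0.1))  # heavily worn core: recycling wins
```

The decision flips between options purely because the rework cost term dominates for low-quality returns, which is the effect the non-linear cost function is meant to capture.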

Figure 5 - Screen shot of the End of Life Design Advisor (ELDA) Tool developed by Rose [158]

Olugu and Wong [150] use a fuzzy logic approach to assess the performance of the collection of End of Life Vehicles (ELVs), their recycling and the subsequent integration of recyclates into the main manufacturing stream. The data involved in the assessment of the reverse logistics process are vague and imprecise, so a fuzzy logic system which uses linguistic variables has been adopted. Based on this study, managers can assess their reverse logistics processes with ease and identify areas which are deficient, thus improving overall reverse logistics performance. This in turn supports environmental management through waste reduction. Pochampally and Gupta [152] instead propose a three-phase fuzzy logic approach to design a reverse supply chain network. Uncertainties in supply mean that traditional supply chain approaches to identifying potential manufacturing facilities cannot be employed to identify potential recovery facilities; fuzzy logic takes these uncertainties into account. The three phases of the approach are: (1) select the most economical product to reprocess from a set of different used products using a fuzzy benefit function; (2) employ the AHP and fuzzy set theory to identify potential facilities in a set of candidate recovery facilities operating in the region where the reverse supply chain network is to be designed; and (3) solve a single-period, single-product discrete location model to minimize overall cost across the reverse supply chain network.


Finally, a selection of decision support tools has been created to assess EoL options at the design stage of a product's life cycle. These tools utilise simple qualitative data to assess decisions, due to the lack of concrete information available at that stage. Examples include the work of Rose [158] (see Figure 5), that of Remery [153], the work of the Centre for Remanufacturing and Reuse (CRR) in the UK (http://www.remanufacturing.org.uk/) [172] and the work of the Center for Remanufacturing (R3C) at the Rochester Institute of Technology, NY, USA (http://www.reman.rit.edu/) [173].

3.7 Decision Support System Theory

3.7.1 Introduction
According to Arnott and Pervan [129], decision support systems (DSS) is the area of the information systems (IS) discipline that is focused on supporting and improving managerial decision-making. In terms of contemporary professional practice, DSS includes personal decision support systems, group support systems, executive information systems, online analytical processing systems, data warehousing, and business intelligence. Over the three decades of its history, DSS has moved from a radical movement that changed the way information systems were perceived in business to a mainstream commercial IT movement in which all organizations engage. DSS has continued to be a significant sub-field of IS scholarship.

Figure 6: Evolution of DSS Theory [129]

According to Arnott and Pervan [129], IS research faces a significant downturn in IT activity in commerce and government, which has led to a serious decline in student numbers in IS degree programs. At the same time there is a groundswell of concern about the nature and direction of IS research. These concerns include the object of IS research [174], the relevance and rigor of research [175, 176, 177], and the general place of IS in academe [178]. An important vehicle in understanding the current state of IS scholarship is the critical analysis of published research [179]. Combined with a reasoned reflection on the discipline, the analysis of quality publications helps to understand how IS research can be improved. Arnott and Pervan's paper provides such an analysis for DSS. It is structured as follows: first, a brief history of the DSS field is presented, tracing its evolution from radical beginnings to a complex disciplinary structure of partially connected sub-fields. The history provides the context for a critical analysis of published DSS research. The method and design of the literature analysis is described in detail, followed by the presentation and discussion of the findings. Finally, a number of strategies for improving DSS research are suggested.

3.7.2 Uncertainty Theory and Risk in Decision Support System Research


Goh et al. present in [180] a literature overview of modelling uncertainty in through-life costing of assets. In [181] Zimmermann lists the main causes of uncertainty as lack or abundance of information, conflicting evidence, measurement uncertainty, ambiguity, and belief (or subjectiveness); uncertainty is affected by the quality and quantity of information. Various classifications of uncertainty have been proposed [182], but very little consensus has been achieved, and where consensus is achieved it tends to be specific to certain domains or communities. Isukapalli [183] and Du and Chen [184] distinguish between parameter and model uncertainty, which is relevant in modelling activities. Model uncertainty is generally implied to be epistemic, mainly due to lack of knowledge, complexity, and imprecision [199, 185]. Nilsen and Aven [186] further distinguish between model uncertainties resulting from lack of knowledge and deliberate simplifications made for economy and convenience. The selected model is generally a trade-off between accuracy and detail, so that a model is developed only to the level needed to perform its required function. Parameter uncertainty may be introduced in the description of the parameters, such as the physical or property parameters in an engineering analysis [187]. The sources of parameter uncertainty are typically limited datasets and empirical, subjective, and qualitative information. Another useful classification of uncertainty, widely accepted in engineering verification and validation (V&V), is that into aleatory and epistemic uncertainty. Aleatory uncertainty is inherent variability that cannot be reduced by further measurement, although better sampling can improve knowledge about the variability. Epistemic uncertainty is caused by lack of knowledge about the true value of a parameter or the behavior of a system and can be reduced by more accurate measurements or expert judgment.
This distinction is useful in terms of selecting suitable modelling methods, although some researchers argue that the separation may not be possible in reality [188]. Earl et al. [189] made a similar distinction, but referred to aleatory uncertainty as the known uncertainty (based on variability in past cases characterised as probability distributions) and epistemic uncertainty as the unknown uncertainty. Unknown uncertainties are those where the specific event or type of event could not have been foreseen. Others [190, 191] further distinguish between internal and external uncertainties, stating that external uncertainties, such as those driven by market and political variables, are more difficult for a company to predict.


Figure 7: Asset Life Cycle Cost Breakdown according to Goh et al. [180]

3.7.2.1 Uncertainty typology

Erkoyuncu, Roy et al. summarize in [213] the research results on defining a uniform typology for uncertainty: a common theme in uncertainty research has been to develop uncertainty typologies to create decision support tools [216, 217], though no single approach has been commonly accepted. A widely adopted approach was proposed by Walker et al. [218, 217]. Their paper, within the context of policy analysis, offers a systematic treatment of uncertainty and classifies the literature along three dimensions: location (i.e. application in models), level (i.e. driven by the knowledge continuum) and nature of uncertainty (i.e. aleatory and epistemic).
3.7.2.2 Uncertainty in Life Cycle Assessment (LCA)

Goh et al. summarize the state of research regarding uncertainty in life cycle assessment in [180]: uncertainties have been extensively considered within the context of Life Cycle Assessment (LCA), where uncertainty sources, types, and modelling approaches have been studied in great detail by many authors [192, 193, 194]. Various classification schemes to describe uncertainty within LCA have also been proposed, depending on the viewpoints of the researchers [192]. For instance, Huijbregts et al. [195] defined uncertainty in input data as parameter uncertainty, in normative choices as scenario uncertainty, and in mathematical relationships as model uncertainty. Heijungs and Huijbregts [192, 196] suggested that there are three broad types of uncertainty associated with each of the categories, i.e., no value is available, an inappropriate value is available, and more than one value is available. Lloyd and Ries [197] adopted the same categorization and found from a survey that parameter uncertainty was the type of uncertainty most frequently addressed in LCA; however, they cautioned that it was impossible to establish whether it was generally considered the most important. The Society of Environmental Toxicology and Chemistry (SETAC) has published a full report on data availability and quality issues in LCA [198].
3.7.2.3 Uncertainty in Life Cycle Costs (LCC) Engineering

LCC is the total cost over a product's life cycle [201, 202]; it includes design cost, manufacturing cost, operating cost and disposal cost. Other terms for LCC are whole life cost (WLC) and through-life cost. Companies are increasingly concerned with preparing LCC estimates of a product from its conception until the end of its life. This is emphasised by the shift in industrial business processes from delivering spares and parts to providing total care packages through the whole lifetime of a product [204]. In a recent literature review on cost engineering for manufacturing [200], Xu et al. summarize the state of the art in modelling uncertainty in that field: there is a significant amount of literature concerning the definition and modelling of uncertainty in a wide range of fields. However, definitions have mainly been driven by purposes and scientific disciplines [205], therefore numerous and varied typologies can be found [206, 207]. Table 3 summarises the types of uncertainty typically found in cost data and models.
Table 3: Classification of uncertainties in cost data and models [200]

Classification | Source | Type | Example
Data uncertainty: Variability | Inherent randomness | Aleatory | Repair time, mean time between failure
Data uncertainty: Statistical error | Lack of data | Epistemic | Reliability data
Data uncertainty: Vagueness | Linguistic uncertainty | Epistemic | "The component needs to be replaced about every 2-3 months"
Data uncertainty: Ambiguity | Multiple sources of data | Epistemic | Expert 1 and expert 2 provide different values for end-of-life costs
Data uncertainty: Subjective judgement | Optimism bias | Epistemic | Over-confidence in schedule allocation
Data uncertainty: Imprecision | Future decision or choice | Epistemic | Supplier A or B
Data uncertainty: Intuitive/expert opinion | Judgement | Epistemic | Similar manufacturing process will be used but geometrical changes are made
Data uncertainty: Analogical | Selection of benchmark model (qualitative characteristics) | Epistemic | The system will have 20% higher capacity than the existing system and consumes 10% less fuel
Model uncertainty: Parametric | Cost drivers/parameters; CER choice; regression fit; data uncertainty; extrapolation | Epistemic | Missing key cost drivers; unsuitable CER function form
Model uncertainty: Analytical/engineering | Scope; level of detail; available data | Epistemic | Simplification in WBS due to lack of time
Model uncertainty: Extrapolation from actual costs | Change in condition; limited data | Epistemic | Maintenance procedures are revised
Despite the significant presence of uncertainties in life cycle costs, traditionally LCC was considered in a deterministic fashion. Recent emphasis in governmental agencies and the public and defence sectors on understanding the risks associated with LCC estimation has resulted in widespread use of probabilistic methods [208, 209]. In probabilistic methods, uncertainty in the cost data is represented by probability density functions (triangular and normal being the most popular) and then propagated through cost models to assess the uncertainty in LCC. Analytical and computational methods such as Monte Carlo simulation are used for uncertainty propagation according to probability theory. However, probabilistic methods, although suitable for characterising aleatory uncertainty, may be less useful when statistical data is seriously lacking or when the uncertainty is caused by lack of knowledge (epistemic uncertainty). This drawback has led to the investigation of possibilistic and fuzzy set approaches [210, 211, 209]. Possibility theory and fuzzy set theory are forms of artificial intelligence which can be considered extensions of probability theory [211]. These approaches are capable of representing uncertainty with much weaker statements of knowledge and more diverse types of uncertainty [210]. There have also been studies that used deterministic approaches to assess uncertainty [212]; typical techniques include sensitivity analysis, net present value and breakeven analysis. To date, the characterisation of epistemic uncertainties is found to be lacking, perhaps due to the difficulty and resources required. Because both types of uncertainty are expected in LCC estimation, a modelling approach that takes into account both epistemic and aleatory uncertainty in LCC estimating may be useful. This is particularly driven by the notion that combining aleatory and epistemic uncertainty underestimates the total uncertainty [210]. Modelling uncertainties tend to be epistemic and can be reduced if further resources are expended to collect evidence, add details to the models, quantify boundary conditions, etc. However, to date limited efforts have been observed in industry. Overall, much research has emphasised techniques for modelling uncertainty; however, there has been little work on integrating the whole process of uncertainty identification, quantification, response and management strategies [213]. This implies that uncertainty assessment must guide investment in a holistic manner along the life cycle. The importance of the in-service phase has grown for manufacturers as customers in many industries such as aerospace, automotive, and construction have adopted an approach that transfers responsibilities to manufacturers (i.e. through equipment availability agreements). Two major aspects have caused challenges for manufacturers in managing uncertainty within this new context: (1) uncertainties move away from the sale of the equipment towards its utilisation in a bundled and concurrent manner, and (2) service contracts require a left-shift, to the bidding stage, of the point in time at which uncertainties are addressed [214]. An important challenge in facilitating this transition towards service orientation is the ability of the customer to transfer data to manufacturers and/or the ability of manufacturers to make use of historical data.
A summary of the typical issues that arise from using such data is represented in Figure 8 [215].
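The probabilistic approach described above can be sketched as follows: each LCC element is modelled as a triangular density and propagated through the (here trivially additive) cost model by Monte Carlo sampling. The cost figures are invented for illustration:

```python
import random

random.seed(42)

def sample_lcc():
    """One Monte Carlo draw of total LCC (illustrative figures, in k-euro).
    Note: random.triangular takes (low, high, mode)."""
    design = random.triangular(8, 15, 10)
    manufacturing = random.triangular(40, 70, 55)
    operating = random.triangular(100, 180, 130)
    disposal = random.triangular(5, 12, 7)
    return design + manufacturing + operating + disposal

samples = sorted(sample_lcc() for _ in range(10000))
mean = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]
print(f"mean LCC = {mean:.1f}, 90th percentile = {p90:.1f}")
```

Instead of a single deterministic number, the decision maker gets a distribution, so statements like "with 90% confidence the LCC stays below the 90th-percentile value" become possible.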

Figure 8: Data Uncertainty [215]


3.7.3 Asset Health Condition Assessment


Asset condition assessment has been researched for more than a decade. Different terms exist for similar types of research: asset condition assessment, product health assessment, health management, residual lifetime assessment, etc.

Figure 9: Taxonomy of Integrated Systems Health Management (ISHM) algorithms [220]

An overview of the mathematical methodologies that can be applied to the problem is given in [220] (see Figure 9). A classification of the applicability of these mechanisms can be found in [221] (see Figure 10).

Figure 10: An overview of prognosis technical approaches. (a) Hierarchy of prognostic approaches. (b) Information necessary to implement the approaches. [221]

An overview of the application of such mathematical models to e-maintenance is given in [222]. As PREMANUS aims to be a general tool for many products and industries, we focus here on the data-driven algorithms. We will discuss:

- Artificial neural networks
- Anomaly detection algorithms
- Fuzzy logic
- Other neural network approaches
- Bayesian belief nets and case-based reasoning
- Other numerical techniques
- Hidden Markov models

and conclude with an overview of post-prognostic decision support research.
3.7.3.1 Artificial Neural Networks

In [220] Schwabacher and Goebel summarize the state of the art for using artificial neural networks for prognostics of asset condition. One of the most popular machine-learning approaches to prognostics is to use artificial neural networks to model the system [223, 231, 232, 229, 233, 234, 237, 241, 247, 248, 249, 250, 251, 252, 259, 263, 264, 266, 267, 273, 275, 276]. Artificial neural networks are a type of (typically non-linear) model that establishes a set of interconnected functional relationships between input stimuli and desired output where the parameters of the functional relationship need to be adjusted for optimal performance. This adjustment is typically accomplished by exposing the network to a set of examples, observing the response of the network, and readjusting the parameters to minimize the error. Several techniques can be employed to adjust (or train) these parameters, including a range of gradient descent techniques and optimization techniques [225].
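The expose-observe-readjust training loop described above can be made concrete with a toy network: one input, three tanh hidden units, a linear output, trained by stochastic gradient descent on a synthetic degradation curve (none of this reproduces the cited prognostic models; the curve and sizes are illustrative):

```python
import math
import random

random.seed(0)

# Synthetic training set: normalised operating time x -> degradation y = x**2.
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]

w1 = [random.uniform(-1, 1) for _ in range(3)]  # input-to-hidden weights
b1 = [0.0, 0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(3)]  # hidden-to-output weights
b2 = 0.0
lr = 0.05

for _ in range(5000):
    for x, y in data:
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(3)]
        out = sum(w2[i] * h[i] for i in range(3)) + b2
        err = out - y                       # observe the response ...
        for i in range(3):                  # ... and readjust the parameters
            grad_h = err * w2[i] * (1 - h[i] ** 2)
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h
        b2 -= lr * err

pred = sum(w2[i] * math.tanh(w1[i] * 0.5 + b1[i]) for i in range(3)) + b2
print(round(pred, 2))  # should approach the true value 0.25
```

This is plain online gradient descent on the squared error; the techniques cited above replace this toy loop with more robust optimisers and much richer sensor inputs.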
3.7.3.2 Anomaly Detection Algorithms

Detecting anomalies in machine behaviour can also be achieved using machine-learning approaches; such algorithms are also known as outlier detection or novelty detection algorithms. Schwabacher and Goebel summarize in [220]: These algorithms learn a model of the nominal behavior of the system, and then notice when new sensor data fail to match the model, indicating an anomaly that could be a failure precursor [227, 235, 270]. Other machine-learning techniques used for prognostics include reinforcement learning [226, 244], classification [273], clustering [229], and Bayesian methods [224, 238]. Data mining algorithms seek to discover hidden patterns in large data sets [242]. Some authors have addressed the use of data mining algorithms to assemble and process the data needed to train data-driven prognostic algorithms [257, 261].
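A minimal instance of the "learn nominal behaviour, flag deviations" idea, assuming a single vibration feature and a simple Gaussian model of nominal behaviour (the readings are invented for illustration):

```python
import statistics

# Model of nominal behaviour: mean and spread of a vibration signal
# recorded while the machine is known to be healthy.
nominal = [1.02, 0.98, 1.05, 0.97, 1.01, 1.03, 0.99, 1.00, 1.04, 0.96]
mu = statistics.mean(nominal)
sigma = statistics.stdev(nominal)

def is_anomaly(reading, k=3.0):
    """Flag readings more than k standard deviations from nominal."""
    return abs(reading - mu) > k * sigma

print(is_anomaly(1.02))  # nominal reading -> False
print(is_anomaly(1.60))  # possible failure precursor -> True
```

The published algorithms replace this single-feature threshold with multivariate models, but the structure is the same: a nominal model plus a mismatch test.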
3.7.3.3 Fuzzy Logic

According to Schwabacher and Goebel [220], fuzzy logic is a popular AI technique that is used for prognostics under uncertain input parameters [224, 223, 231, 232, 229, 233, 236, 249, 267, 270]. Fuzzy logic provides a language (with syntax and local semantics) into which one can translate qualitative knowledge about the problem to be solved. In particular, fuzzy logic allows the use of linguistic variables to model dynamic systems. These variables take fuzzy values that are characterized by a sentence and a membership function. The meaning of a linguistic variable may be interpreted as an elastic constraint on its value. These constraints are propagated by fuzzy inference operations. The resulting reasoning mechanism has powerful interpolation properties that in turn give fuzzy logic a remarkable robustness with respect to variations in the system's parameters, disturbances, etc. When applied to prognostics, fuzzy logic is typically used in conjunction with a machine learning method, to deal with some of the uncertainty that all prognostic estimates face. Indeed, uncertainty representation and management is at the core of performing successful prognostics. Long-term prediction of the time to failure entails large-grain uncertainty that must be represented effectively and managed efficiently. For example, as more information about past damage propagation and about future use becomes available, means must be devised to narrow the uncertainty bounds. [220]
A recent introduction to fuzzy logic and its application to asset condition assessment is given by Krontiris in [219], which investigates the applicability of fuzzy inference systems to the problem. A fuzzy inference system (FIS) is a computing framework based on the concepts of fuzzy set theory and fuzzy reasoning. In general, it provides a non-linear mapping from some input variables to some output variables. The main advantage of incorporating fuzzy reasoning is that inference rules can be defined in a precise and consistent way, while uncertainty originating from the input information is considered by defining one or more fuzzy sets for each input. The basic structure of a FIS consists of three conceptual components: a set of inference rules called the rule base, a dictionary which defines the fuzzy sets used to model propositions in the antecedents and conclusions of the rules, and a reasoning mechanism which performs the inference procedure upon the rules and given input information to derive a reasonable output. [219] Krontiris concludes that fuzzy inference systems are capable of treating missing or obsolete data in a consistent way: their adaptive structure enables the substitution of missing or obsolete information from diagnostic tests by indicators of the equipment's loading profile and the general service experience. Fuzzy inference systems are especially useful when the uncertainty arises from the interpretation of diagnostic tests. Further, Krontiris concludes that strategic condition assessment of equipment involves a great deal of vague technical-operational knowledge and experience which can be effectively modelled by fuzzy inference systems.
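The rule base / dictionary / reasoning mechanism structure can be sketched as a minimal zero-order (Sugeno-style) FIS. The linguistic terms, rules and numbers below are illustrative and are not Krontiris's model:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b (the 'dictionary' entries)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def condition_index(oil_quality, vibration):
    """Health index in [0, 1] from two normalised inputs in [0, 1]."""
    # Dictionary: linguistic terms 'poor/good' and 'low/high' as fuzzy sets.
    oil_poor, oil_good = tri(oil_quality, -0.5, 0.0, 0.6), tri(oil_quality, 0.4, 1.0, 1.5)
    vib_low, vib_high = tri(vibration, -0.5, 0.0, 0.6), tri(vibration, 0.4, 1.0, 1.5)
    # Rule base: (firing strength, crisp conclusion), e.g.
    # IF oil is good AND vibration is low THEN health = 1.0.
    rules = [
        (min(oil_good, vib_low), 1.0),
        (min(oil_good, vib_high), 0.5),
        (min(oil_poor, vib_low), 0.4),
        (min(oil_poor, vib_high), 0.0),
    ]
    # Reasoning mechanism: weighted average of the rule conclusions.
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5

print(round(condition_index(0.9, 0.1), 2))  # good oil, low vibration -> 1.0
```

A missing input could be handled, as Krontiris suggests, by substituting a fuzzy set derived from the equipment's loading profile rather than from the absent diagnostic test.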
3.7.3.4 Other Neural Network Approaches

Schwabacher and Goebel summarize in [220]: Prognostic performance metrics should take the width of the uncertainty bounds into account. In [248] Khawaja et al. introduced a confidence prediction neural network that employs confidence distribution nodes based on Parzen estimates to represent uncertainty. The learning algorithm is implemented as a lazy or Q-learning routine that improves the uncertainty of online prognostic estimates over time. Alternative techniques for dealing with uncertainty include Dempster-Shafer theory [240, 247, 271] and a Bayesian framework with relevance vector machines combined with particle filters [260]. In another effort to reduce uncertainty, the concept of prognostic fusion has been introduced [239, 277]: similar to multiple classifier fusion, the output from several different prognostic algorithms is fused such that the resulting output is more accurate and has tighter uncertainty bounds than, on average, the output of any individual algorithm alone.
3.7.3.5 Bayesian Belief Nets and Case-based Reasoning

Researchers have also tried to extend tools commonly found in diagnostics to prognostics; examples are Bayesian belief nets and case-based reasoning. Schwabacher and Goebel summarize the state of the art in [220]: Przytula and Choi [256] suggest the use of a Bayesian belief net (BBN) for prognostics, where past and future usage are discretized and inference on remaining life can be accomplished within the framework of BBNs. In a similar vein, case-based reasoning (and its variants such as instance-based reasoning), an important tool in the domain of diagnostics, has been proposed for use in a prognostic setting. Saxena et al. [262] propose the use of time history traces as cases that can be used to perform prognosis. Xue et al. [277] propose an instance-based model that they test on aircraft engine data. In contrast to Saxena, the local models proposed here are neither based on individual models that consider the track history of a specific engine nor on a global model that would consider the collective track history of all the engines. Instead, the authors use local fuzzy models that are based on clusters of peers, where a peer is described by similar instances with comparable operational characteristics and performance. A collection of competing instances is generated and evaluated with respect to their performance in light of the currently available data. The models are refined using evolutionary search, and the best one is selected after a finite number of iterations. The best model at the end of the evolutionary process is used at run time to estimate remaining useful life.
3.7.3.6 Other Numerical Techniques

Regarding other numerical techniques, Schwabacher and Goebel summarize in [220] the state of the art: some of the conventional numerical techniques used for data-driven prognostics include wavelets [272, 234, 259, 265], Kalman filters [231, 232], particle filters [255, 260], regression [228, 240, 269], demodulation [258, 265], and statistical methods [230, 247, 274]. Hernandez and Gebraeel [243] combined a life usage model with a data-driven technique by using sensor data to automatically update the life usage model.
3.7.3.7 Hidden Markov Models

In [278, 279] Hidden Markov Models are used to predict the remaining lifetime of an asset. Within the Markov chain, an aging factor is introduced that discounts the probability of staying in the current state while increasing the probabilities of transitions to less healthy states.
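A sketch of such an aging-discounted Markov degradation chain follows; the states, probabilities and the aging factor are illustrative and not taken from the cited models:

```python
# Degradation states; 'failed' is absorbing.
STATES = ["good", "worn", "critical", "failed"]

def transition_row(state, base_stay, aging, age):
    """Transition probabilities out of `state` after `age` elapsed periods.
    The stay probability is discounted by aging**age; the freed probability
    mass moves to the next-worse state."""
    if state == len(STATES) - 1:
        return [0.0, 0.0, 0.0, 1.0]
    stay = base_stay * aging ** age
    row = [0.0] * len(STATES)
    row[state] = stay
    row[state + 1] = 1.0 - stay
    return row

def propagate(dist, periods, base_stay=0.9, aging=0.97):
    """Evolve a state distribution over a number of periods."""
    for age in range(periods):
        new = [0.0] * len(STATES)
        for s, p in enumerate(dist):
            row = transition_row(s, base_stay, aging, age)
            for t in range(len(STATES)):
                new[t] += p * row[t]
        dist = new
    return dist

dist = propagate([1.0, 0.0, 0.0, 0.0], periods=12)
print([round(p, 3) for p in dist])  # probability mass drifts towards 'failed'
```

In a full HMM the states would additionally be hidden behind noisy sensor observations; here the chain alone shows how the aging factor accelerates the drift towards unhealthy states.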
3.7.3.8 Post-Prognostic Decision Support

Post-prognostic decision support is another area where prognostics intersects with artificial intelligence techniques. Schwabacher and Goebel summarize in [220] the state of the art: Challenges arise from the large number of different pieces of information upon which a decision maker has to act. Conflicting information from on-board and off-board ISHM modules, and seemingly contradictory and changing requirements from operations as well as maintenance for a multitude of different systems within strict time constraints, make operational decision-making a difficult undertaking. Post-prognostic decision support will enable the user to make optimal decisions based on the expression of rigorous trade-offs between different prognostic and external information sources. This can be accomplished through guided evaluation of different optimal decision alternatives under operational boundary conditions using user-specific and interactive collaboration. Iyer et al. [244] present some preliminary results of the use of such a decision support tool. Tang et al. [268] describe a control reconfiguration that is based on prognostic information: short-term and long-term objectives are dealt with in separate reasoners which are optimized to simultaneously accomplish several different goals. Some authors have collected laboratory data to be used for data-driven prognostics, but have not yet applied any algorithms to the data [246, 253]. Some data repositories are being made publicly available which can be used to baseline different data-driven algorithms [254]. KPIs for asset health management in relation to standardization are discussed in [280]. Lau and Dwight present in [281] a fuzzy-based decision support model for engineering asset condition monitoring, applied to water pipelines.

3.8 Conclusion

The state of the art in remanufacturing research shows a clear gap: business decision support systems for deciding the remanufacturing strategy for a single instance of a product at the end of its life cycle. Remanufacturing research per se focuses mostly on designing a product for remanufacturability.


Business decision support theory has so far been applied to remanufacturing only to a very small extent; PREMANUS will be able to provide a valuable contribution to the state of the art here. Uncertainty theory and risk in decision support system research have interesting aspects that PREMANUS can exploit for the design of the BDSS algorithms; especially interesting are lifecycle cost models under uncertainty. From project month 12 onwards, when the detailed BDSS mechanisms will be designed, the available algorithms will be carefully reviewed in the context of the available data and the optimization objectives given by the use case partners. Research in assessing asset health focuses on algorithms to forecast the next failure of an asset. This can be valuable input to the PREMANUS BDSS, but it is highly product-dependent; PREMANUS may therefore rely on the use case partners' own expertise to feed such analyses into the PREMANUS BDSS.


4 Remanufacturing Information Services (RIS)


This chapter focuses on state-of-the-art technologies that could be used within the RIS, or that are relevant technologies with which the RIS might need to be compatible. The chapter is composed of four sections dealing with specific parts of the RIS architecture: ID Management, Persistency, Information Retrieval, and Role-based Access Control. Each section looks at current technologies in the area and evaluates them based on how useful they are to the PREMANUS project.

4.1 ID Management for Product and Components

ID management for products and components is a crucial element of the interoperability and integration of data coming from different enterprises within a remanufacturing ecosystem. A robust ID management schema lets the different applications localize and identify artifacts in the product lifecycle and still relate them to each other. Providing an ID scheme for loosely coupled processes as they occur in remanufacturing, where not all the parties are well known, is challenging. The system must not only provide means to recognize products (based on, e.g., an attached tag, meta-data, or physical characteristics) but must also be widely accepted in industry.

4.1.1 Product Identification Scheme


The goal of PREMANUS is the support of remanufacturing decisions. Such decisions are made based on diverse product information. Product information comprises a variety of different data classes, including assembly data but also maintenance, service, and replacement part information. Such product lifecycle specific data is distributed among different producers and service providers that will use the PREMANUS system. Each organization may have different identifiers for their products and services. The mapping of the different identifiers to one unique identifier is an important challenge for PREMANUS when coordinating the data access. A short introduction to identification schemes and their important characteristics in the context of PREMANUS follows. Existing identification schemes are then presented and discussed with respect to the characteristics and requirements of PREMANUS.
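One simple way to realize the mapping of organisation-specific identifiers onto one unique identifier is a resolver keyed by (organisation, local ID) pairs. The class name, URN scheme and identifiers below are purely illustrative, not a PREMANUS design decision:

```python
class IdResolver:
    """Maps each organisation's local product identifier onto a global ID."""

    def __init__(self):
        self._to_global = {}  # (organisation, local_id) -> global_id

    def register(self, org, local_id, global_id):
        self._to_global[(org, local_id)] = global_id

    def resolve(self, org, local_id):
        """Return the global ID, or None if the pair is unknown."""
        return self._to_global.get((org, local_id))

resolver = IdResolver()
# The OEM and a service provider know the same gearbox under different IDs:
resolver.register("oem", "GBX-4711", "urn:premanus:product:0001")
resolver.register("service-co", "SRV/99-133", "urn:premanus:product:0001")
print(resolver.resolve("oem", "GBX-4711") ==
      resolver.resolve("service-co", "SRV/99-133"))  # True
```

In practice such a resolver would sit behind a service interface and be populated from the partners' master data, but the lookup structure stays the same: local identifiers never need to change, only the mapping is maintained centrally.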
4.1.1.1 Product Identification Scheme characteristics for PREMANUS

Identification schemes generally consist of a series of signs (numbers or characters) of fixed or variable length. Based on the position of a sign, different characteristics of a product are encoded. Important aspects for PREMANUS are:
- Industry products: the identification scheme deals with physical, industrial products.
- Support for multiple schemes: multiple parties are involved in the product life, including different producers as well as different service providers. Consequently, a heterogeneous set of product coding schemes may be used.
- Standardization: the use of a standard to coordinate the identification of products is useful, as ID management is a challenge for PREMANUS, but not part of the conducted research.


4.1.1.2 Coding schemes

Many different coding schemes for products exist. For PREMANUS, schemes used for industrial products are most relevant; schemes like the National Drug Code (NDC) or the International Standard Book Number (ISBN) are therefore not considered.

Electronic Product Code (EPC)
The EPC is a universal identifier intended to give a unique identification to every physical object in the world, valid for all time. A URI is the canonical representation of an EPC. EPC coding accommodates existing coding schemes to simplify the translation between EPC and other coding schemes. Based on mappings, other formats like GRAI or GTIN can be transferred to the EPC format.

Global Trade Item Number (GTIN)
The Global Trade Item Number is a harmonized collection of different existing coding schemes (see Table 4). An important element of GTIN is the UPC, which is extensively used in the grocery industry.

GTIN Coding Scheme    Original Term
EAN/UCC-14            SSC-14 (Shipping Container Code)
EAN/UCC-13            EAN Code
UCC-12                UPC (Universal Product Code)
EAN/UCC-8             EAN-8

Table 4: Coding Schemes included in GTIN and their original term (before they were included in GTIN)

Global Returnable Asset Identifier (GRAI)
GRAI is used to identify returnable assets and supports exchange processes between trading partners. A GRAI consists of a company prefix, an asset type, a check digit, and an optional serial number.

Vehicle Identification Number (VIN)
The VIN is a standard in the automotive industry (ISO 3779) that gives a unique identifier to a car based on a 17-character code of numbers and letters (excluding the letters I, O, and Q). The VIN includes a vehicle description section (vehicle type and manufacturer) and a vehicle identifier section (vehicle specific).

For further information on product codes, see the white paper The Electronic Product Code3. The list underlines that companies and whole industries use very different, more or less standardized product codes. PREMANUS faces the challenge of providing a single entry point to the diverse data sets at the different partners, encoded with potentially different identifiers. EPC tackles this problem: on the one hand, EPC provides the ability to describe all objects in the world with URIs that never become outdated; on the other hand, mappings exist that convert existing standards to EPC. These mappings address the PREMANUS requirement to create one identifier for a specific product that maps to the existing identifiers of different companies.
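To make the mapping idea concrete, the following is a minimal sketch of converting a GTIN-14 plus a serial number into an EPC SGTIN pure-identity URI, following the GS1 Tag Data Standard pattern. The company-prefix length is supplied by the caller because it cannot be derived from the GTIN itself, and the function name is our own:

```python
# Illustrative GTIN-14 -> SGTIN URI mapping (GS1 Tag Data Standard
# pattern): digit 1 is the packaging indicator, digits 2-13 hold the
# company prefix plus item reference, digit 14 is the check digit.
# The indicator digit is prepended to the item reference.

def gtin14_to_sgtin_uri(gtin14, company_prefix_len, serial):
    assert len(gtin14) == 14 and gtin14.isdigit()
    indicator = gtin14[0]            # packaging level indicator
    body = gtin14[1:13]              # company prefix + item reference
    company_prefix = body[:company_prefix_len]
    item_ref = indicator + body[company_prefix_len:]
    return f"urn:epc:id:sgtin:{company_prefix}.{item_ref}.{serial}"
```

For example, the GTIN-14 "10614141123452" with a 7-digit company prefix and serial "400" maps to `urn:epc:id:sgtin:0614141.112345.400`.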

3 Brock, David L. (2003), The Electronic Product Code (EPC) as a Meta Code, Auto-ID Center, Massachusetts Institute of Technology, White Paper.


4.1.2 Information Services and Networks


One of the key aspects of PREMANUS is the exchange of data between different partners within the project. As detailed in the previous section, this requires a common ID scheme in order to manage information from partners that each use their own identifiers. Additionally, there needs to be a service that allows the discovery and retrieval of information in the PREMANUS system as a whole, which should leverage the common ID scheme. This section introduces such systems and determines whether they or their approach could be useful in PREMANUS.
4.1.2.1 EPCIS

Electronic Product Code Information Services (EPCIS) is an EPCglobal standard for sharing EPC-related information between trading partners. EPCIS provides important capabilities to improve efficiency and visibility in the global supply chain, and complements lower-level tag, reader, and middleware standards. The EPCIS standard provides interface specifications built on top of very widely used business and Internet standards. EPCIS facilitates internal data capture as well as secure external sharing of information about the movement and status of goods in the physical world4. EPCIS defines standard interfaces to identify, store, collect, and query data on any artifact shared between companies. EPCIS uses the Electronic Product Code (EPC) as its identification schema, but it does not apply any restrictions and can work with any ID schemata, e.g. EAN-13 or 2D codes. From an architecture point of view, an EPCIS system is a set of EPC repositories that collect events about the product. The repositories provide an interface for querying, so it is possible to integrate a repository with a business application. The repositories are loosely coupled; the owners of repositories exchange EPCIS events. In this way the What, Where, When, and Why of events occurring in a supply chain are exchanged safely and securely. This is important business information, such as the time, location, disposition, and business step of each event that occurs during the life of an item in the supply chain. Extensions of the event format are possible, e.g. new data fields in the event message or new event types, enabling the adaptation of EPCIS to PREMANUS. To find out where to get more information about a product, EPCIS provides a central instance, the Object Naming Service (ONS). ONS is a network service that is used to look up pointers to EPCIS repositories, starting from an EPC Manager Number or a full Electronic Product Code.
Specifically, ONS provides a means to look up a pointer to the EPCIS service provided by the organization that commissioned the EPC of the object in question. The most common example is where ONS is used to discover an EPCIS service that contains product data from a manufacturer for a given EPC. ONS may also be used to discover an EPCIS service that has master data pertaining to a particular EPCIS location identifier (this use case is not yet fully addressed in the ONS specification).
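The event-repository idea can be sketched as follows. The field names echo the EPCIS ObjectEvent vocabulary (epcList, bizStep, bizLocation, disposition), but the class and its methods are a hypothetical illustration, not the standardized capture and query interfaces:

```python
# Hypothetical sketch of the EPCIS idea: a repository collects
# events recording the What/When/Where/Why of a supply chain step
# and can be queried by EPC. Not the real EPCIS API.

from datetime import datetime, timezone

class EpcisRepository:
    def __init__(self):
        self.events = []

    def capture(self, epc, biz_step, biz_location, disposition):
        self.events.append({
            "epcList": [epc],                         # What
            "eventTime": datetime.now(timezone.utc),  # When
            "bizLocation": biz_location,              # Where
            "bizStep": biz_step,                      # Why (process step)
            "disposition": disposition,               # state after the event
        })

    def query(self, epc):
        return [e for e in self.events if epc in e["epcList"]]
```

Capturing a shipping and a receiving event for the same EPC and then querying it returns the item's trace through the chain in event order.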
4.1.2.2 RFID-based Automotive Network (RAN)

A related research project in the field of product identity management is the RFID-based Automotive Network (RAN), funded by the German Federal Ministry for Education and Research. It addresses the fact that companies are organized into production and logistics networks in which managing internal and inter-plant processes becomes more and more complicated due to the increasing variety of products.
4 http://www.gs1.org/gsmp/kc/epcglobal/epcis/epcis_1_0-faq-20070427.pdf


Current developments in RFID technology and the possibility of exchanging order-specific data between all participants of a process chain (OEMs, suppliers, and logistics companies) open up new potential for controlling complex processes with the help of an information broker concept. Cross-company intelligent material flow control should enable efficient production as well as economical, inventory-optimized logistics. This will create industry-wide standards which include all the companies involved in value creation. Integration is then concluded with the RAN certification.5 For coordination in an automotive network, RAN implements the information broker based on the EPCIS standard.
4.1.2.3 Conclusion

In PREMANUS, it is possible to leverage EPCIS to collect data about an artifact (here: a product or component) which is about to be remanufactured, across the different stages of its product life cycle (PLC). EPCIS provides a means to extend the vocabulary on which it operates, making it extensible for both different domains and different data. This would let PREMANUS adapt the data format used by EPCIS to capture the data that is required in the remanufacturing process. The central point of entry allows a remanufacturer to find a manufacturer and, with the help of PREMANUS, to get and process the information that is necessary to optimize the remanufacturing process. RAN addresses a problem space that is also inherent in PREMANUS: remanufacturing requires the coordinated cooperation of an ecosystem regarding product information, and an information broker is one component for solving this challenge. Therefore, PREMANUS can profit from the lessons learned from RAN, which builds extensively on EPCIS. However, due to the fact that the RAN project is still in progress, PREMANUS will most likely not be able to reuse any actual components from RAN. In conclusion, EPCIS provides a set of functionalities required for PREMANUS, and PREMANUS will consider it for implementing the required information services in the context of product ID management.

4.2 Distributed Product Information Store

4.2.1 Introduction
As per the DOW, the BDSS needs access to relevant product data so that it can evaluate EoL product recovery KPIs and apply optimization techniques to the result. The focus of this task is to facilitate the development of a Distributed Product Information Store (DPIS), which forms the persistency component in the PREMANUS middleware. The storage will be distributed in nature, residing locally within the stakeholders' business systems stacks. The system is envisioned based on the design and principles of HDFS or Voldemort. The main challenges include adapting the HDFS or Voldemort design to cope with the different product IDs and providing support for the requirements collected. It is commonly understood that the quantity of information in today's world is much larger than it used to be. As an illustration of how much more, consider Dave Turek's article The case against digital sprawl6: those responsible for supercomputer development at
5 http://www.autoran.de
6 http://www.businessweek.com/articles/2012-05-02/the-case-against-digital-sprawl


IBM state that from the year 2003 and working backwards to the beginning of human history, people generated five exabytes (that is, five billion gigabytes) of information. Furthermore, by last year, human beings were cranking out that much data every two days; by next year, Turek predicts, we'll be doing it every 10 minutes. All of this data has to be stored somewhere, so it can be directly inferred that traditional approaches for storing data are no longer valid and new solutions are needed. In the context of the PREMANUS use cases, the situation presented above is reflected in the amount of data coming from the windmill remote monitoring sensors and the automotive engines. Another consideration is that both the wind farms and the car plants, as with many other industries, are distributed throughout many countries. This distributed business model clearly requires distributed computational tasks and access to distributed information. Another point to be considered is the kind of data to be stored, since the PREMANUS middleware has to access different types of content. Roughly speaking, there are three different formats of data:
- Structured data: this kind of data is organized in a structure so that it is identifiable. The most universal form of structured data is a database such as SQL or Access. An example could be a library where book information is stored in a database.
- Semi-structured data: this kind of data is partially organized following a given structure. An example could be the storages used in a digital library: half of the data is stored like structured data and the other half, e.g. annexes, might be classified in folders. To retrieve data from these storages it is necessary to first execute queries searching for the desired information and then look up the rest.
- Un-structured data: this kind of data does not follow any particular structure (it is structure agnostic). It is not stored in any particular database, but rather in stores such as document repositories. Using the example of the digital library: books are stored in folders and there exists an application to directly find e-books by typing some content of the book besides the title or author. To retrieve data from these particular stores it is necessary to execute queries, often semantic queries. Such queries require that the stored content be semantically annotated with metadata, allowing straightforward searches that find content using more natural search queries.

One last aspect to be taken into account is how to access the information. This section deals not only with the different types of storages according to the aspects already mentioned, but also presents an analysis of how to access the information and of the different tools that help index it. This analysis is needed because the documentation stored by the user partners should be accessed in a fast and reliable way and, to reach this objective, both information and documentation should be indexed properly. Bearing these assumptions in mind, it is necessary to analyse different storage tools considering the four aspects mentioned: different formats, large amounts of data, distributed data, and indexing facilities. The following sections analyse these tools and frameworks:
- Apache Hadoop, an integrated framework providing a paradigm for splitting and merging queries and tasks, a NoSQL database, distributed storage, and a data warehouse for analysing the data retrieved
- Distributed storages
- NoSQL databases


- Information indexing.

4.2.2 Apache Hadoop


Apache Hadoop7 is a framework for carrying out distributed execution of applications on large clusters. Hadoop implements Map/Reduce, which permits an application to be divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster, with the results then combined. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and HDFS are designed so that node failures are automatically handled by the framework. Hadoop also includes HBase, a NoSQL database for storing structured data following the same distributed computing approach. Finally, Hadoop comes with Hive, a data warehouse facilitating data summarization, ad-hoc queries, and the analysis of large datasets. These Hadoop components are analysed in the following subsections.
4.2.2.1 Map/Reduce

Map/Reduce8 is a programming paradigm that expresses a large distributed computation as a sequence of distributed operations on data sets of key/value pairs. The Hadoop Map/Reduce framework harnesses a cluster of machines and executes user-defined jobs across the nodes in the cluster. A typical Map/Reduce computation has two phases, a map phase and a reduce phase. The input to the computation is a data set of key/value pairs. In the map phase, the framework splits the input data set into a large number of fragments and assigns each fragment to a map task. In the reduce phase, each reduce task consumes the intermediate key/value pairs produced by the map phase and aggregates them into a result. Map/Reduce is not used as a standalone application; in practice it is combined with stores such as HBase (see next section) or MongoDB (see section 4.2.4.1). Map/Reduce or equivalent techniques could be applied when, e.g., one of the users of PREMANUS launches a query which needs to access information from two or more locations. In PREMANUS it would be necessary to split a query, execute it in the different stores, retrieve the data and, finally, compose the results back together.
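The two phases can be illustrated with a toy, single-process word-count job (the classic Map/Reduce example). This simulates the paradigm in plain Python and is not Hadoop code:

```python
# Toy Map/Reduce: map emits key/value pairs per input fragment, the
# framework groups the pairs by key (shuffle), and reduce folds each
# group into a result. Single-process simulation of the paradigm.

from collections import defaultdict

def map_phase(fragment):
    # emit (word, 1) for every word in the fragment
    return [(word, 1) for word in fragment.split()]

def shuffle(pairs):
    # group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # fold the group for one key into a single result
    return key, sum(values)

def run_job(fragments):
    pairs = [p for f in fragments for p in map_phase(f)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
```

In a real cluster the map tasks run on different nodes holding different fragments; here the same three-step flow runs sequentially.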
4.2.2.2 HBase

HBase9 is the Hadoop database and couples tightly with the functionality and needs of Hadoop. HBase is most useful wherever a need exists for random, real-time read/write access to big data. The goal of HBase is the hosting of very large tables on top of clusters. HBase is an open-source, distributed, versioned, column-oriented store modelled after Google's BigTable10. In addition, HBase has many other features supporting both linear and modular scaling. The way to expand HBase clusters is by adding RegionServers hosted on commodity-class servers. HBase features of note are:
7 http://hadoop.apache.org
8 http://hadoop.apache.org/mapreduce
9 http://hbase.apache.org
10 http://research.google.com/archive/bigtable.html


- Strongly consistent reads/writes: HBase is not an "eventually consistent" data store. This makes it very suitable for tasks such as high-speed counter aggregation
- Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are automatically split and re-distributed as the data grows
- Automatic RegionServer failover
- Hadoop/HDFS integration: HBase supports HDFS out of the box (it being a distributed file system)
- Map/Reduce: HBase supports massively parallelized processing via Map/Reduce, with HBase as both source and sink
- Java client API: HBase supports an easy-to-use Java API for programmatic access
- Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends
- Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high-volume query optimization
- Operational management: HBase provides built-in web pages for operational insight as well as JMX metrics.

The usage of HBase within PREMANUS would be similar to the example used for defining the semi-structured data (see above), in combination with HDFS: in HBase the data relates to the properties of the book, or within PREMANUS of the product or component; in HDFS resides the book itself or its annexes, or within PREMANUS the product datasheet or CAD files.
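The underlying BigTable-style data model (row key, column family and qualifier, versioned cells) can be sketched in a few lines. This is a hypothetical illustration of the model, not the HBase client API:

```python
# Sketch of the HBase/BigTable data model: a table maps a row key to
# (column family, qualifier) cells, and each cell keeps timestamped
# versions, with reads returning the newest version by default.

class Table:
    def __init__(self):
        # row key -> {(family, qualifier): [(ts, value), ...]}
        self.rows = {}

    def put(self, row, family, qualifier, value, ts):
        cell = self.rows.setdefault(row, {}).setdefault((family, qualifier), [])
        cell.append((ts, value))
        cell.sort(reverse=True)          # newest version first

    def get(self, row, family, qualifier):
        versions = self.rows.get(row, {}).get((family, qualifier), [])
        return versions[0][1] if versions else None
```

In the analogy above, a PREMANUS component's properties would live in such cells, while bulky artifacts (datasheets, CAD files) would live in HDFS.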
4.2.2.3 HDFS

As part of the Hadoop framework, HDFS11 is a distributed file system well suited to the storage of large files. However, it is not a general-purpose file system and does not provide fast individual record lookups in files. HBase, on the other hand, is built on top of HDFS and provides fast record lookups (and updates) for large tables; internally HBase puts the data in indexed "StoreFiles" that exist on HDFS for high-speed lookups. Additionally, HDFS is designed to stream large files at high bandwidth to user applications. Like HBase, which is based upon Google's BigTable, HDFS also has a Google inspiration, the Google File System12. The relevance of HDFS to PREMANUS has already been mentioned in the previous section. However, HDFS could have more usages besides storing annexes (as in the example used). Within PREMANUS, HDFS could be used for storing plain un-structured data such as disassembly instructions.
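The storage idea can be illustrated with a toy sketch: a large file is cut into fixed-size blocks and each block is replicated onto several data nodes. The block size and round-robin placement below are simplified assumptions, not the real HDFS logic:

```python
# Toy illustration of HDFS-style storage: fixed-size blocks, each
# replicated onto several data nodes. Real HDFS uses much larger
# blocks and a rack-aware placement policy.

def split_into_blocks(data, block_size):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks, nodes, replication=3):
    # round-robin placement of each block on `replication` nodes
    plan = {}
    for block in range(num_blocks):
        plan[block] = [nodes[(block + r) % len(nodes)]
                       for r in range(replication)]
    return plan
```

Losing one node leaves every block with surviving replicas on other nodes, which is the property the framework exploits to handle node failures automatically.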
4.2.2.4 Hive

Apache Hive13 is a data warehouse that facilitates querying and managing large datasets residing in distributed storage. It is built on top of the Hadoop framework and therefore comes with tools that enable easy data extraction, transformation, and loading. Hive also provides a mechanism to impose a given structure on a variety of data formats. As part of the Hadoop framework
11 http://hadoop.apache.org/hdfs
12 http://research.google.com/archive/gfs.html
13 http://hive.apache.org


it can access the files stored in HDFS or the data stored in HBase in an easy way. Finally, it provides query execution via the Map/Reduce paradigm. The querying language used by Hive is a simple SQL-like query language called QL. This language also allows programmers to plug in their own mappers and reducers to carry out more sophisticated analysis that may not be supported by the built-in capabilities of the language. QL can also be extended with custom scalar functions, aggregations, and table functions. Hive is best used for batch jobs over large sets of append-only data (like web logs). What Hive values most are scalability, extensibility, fault-tolerance, and loose coupling with its input formats. Within PREMANUS, Hive could be used for easily analysing the data extracted from either HBase or HDFS and feeding the results back to the main PREMANUS frontend.
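As an illustration of the QL style, a batch query of the kind PREMANUS might issue can be assembled as a plain string. The table and column names (inspection_events, recovery_score) are hypothetical, not taken from the document:

```python
# Assembles a hypothetical SQL-like QL statement of the shape Hive
# accepts (SELECT ... FROM ... WHERE ... GROUP BY ...). The table
# and columns are invented for illustration.

def remanufacture_rate_query(year):
    return (
        "SELECT product_id, AVG(recovery_score) AS avg_score "
        "FROM inspection_events "
        f"WHERE year = {int(year)} "
        "GROUP BY product_id"
    )
```

Hive would compile such a statement into one or more Map/Reduce jobs over the files backing the table, which is why it suits batch analysis rather than interactive lookups.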

4.2.3 Distributed storages


A distributed storage can be defined as a computer repository spread across a network. The primary objective of this kind of storage is to pool and balance the storage capacity of every device connected to it. Typically such storages also provide some kind of resilience, e.g. against corrupted data or access problems. A distributed storage is a core element of a distributed content repository: besides providing support for storing, these kinds of systems deal with accessing and managing content. Distributed storages can be confused with cloud storages, which by definition are also distributed. The main difference is that while the former can be managed internally (like VMFS or HDFS), the latter are managed by third parties (like Amazon S3) and accessed by users via web services. Three well-known distributed storages are analysed in the following sections:
- VMware VMFS
- Amazon S3
- HDFS, which has already been analysed (see section 4.2.2.3).
4.2.3.1 VMware VMFS

VMware VMFS14 (Virtual Machine File System) is the file system developed by VMware. Used by VMware ESX Server and vSphere, the server virtualization suite, it stores virtual machine disk images. Multiple servers can read/write the same file system simultaneously, while individual virtual machine files are locked. VMFS provides the following features:
- Allows access by multiple ESX servers at the same time by implementing per-file locking
- Manage ESX servers from a VMware VMFS volume
- Optimize virtual machine I/O with adjustable volume, disk, file and block sizes
- Recover virtual machines faster and more reliably in the event of server failure with Distributed Journaling.

14 http://www.vmware.com/products/vmfs/overview.html


4.2.3.2 Amazon S3 (Amazon Simple Storage Service)

Amazon S315 is a key-value-based cloud file storage service that provides a simple web services interface. This interface can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It provides access to the same highly scalable, reliable, secure, fast, and inexpensive infrastructure that Amazon uses to run its own global network of web sites. Among the features provided by Amazon S3 are the following:
- Write, read, and delete objects containing from 1 byte to 5 terabytes of data each
- Each object is stored in a bucket and retrieved via a unique, developer-assigned key
- Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access
- Options for secure data upload/download and encryption of data at rest are provided for additional data protection
- Uses standards-based REST and SOAP interfaces designed to work with any Internet-development toolkit
- Built to be flexible so that protocol or functional layers can easily be added
- Includes options for performing recurring and high-volume deletions
- Reliability backed with the Amazon S3 Service Level Agreement.
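The object model behind that feature list (named buckets holding objects under developer-assigned keys) can be imitated in a few lines. This in-memory sketch mimics the semantics only; the real service is reached through its REST/SOAP web services:

```python
# In-memory imitation of the S3 object model: objects live in named
# buckets and are addressed by a developer-assigned key. Semantics
# only; not the Amazon S3 API.

class ObjectStore:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name):
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, body):
        self.buckets[bucket][key] = bytes(body)

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

    def delete_object(self, bucket, key):
        del self.buckets[bucket][key]
```

A PREMANUS partner could, for instance, keep a product datasheet under a key like "engines/42/datasheet.pdf" in a dedicated bucket (the names here are invented).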

4.2.4 NoSQL databases


NoSQL databases are badged as the next generation of databases: non-relational, distributed, and horizontally scalable. The term "NoSQL" is understood by the community as "not only SQL". Some of the features of these databases are: a schema-free model, easy replication support, a simple API, eventual consistency/BASE (not ACID16), and the ability to deal with huge amounts of data. The key benefits of NoSQL databases are improved data comprehension, flexible scaling solutions, and productivity. NoSQL is a large and expanding field, and the common features of NoSQL data stores are the following:
- Easy to use in conventional load-balanced clusters
- Persistent data (not just caches)
- Scale to available memory
- Have no fixed schemas and allow schema migration without downtime
- Have individual query systems rather than using a standard query language
- Are ACID within a node of the cluster and eventually consistent across the cluster.

The structured data (not documents) PREMANUS has to deal with has to be stored in a database. However, the amount of data and properties that will be relevant for remanufacturing in the future motivates PREMANUS to look for new solutions. Bearing this in mind, the tools to be analysed in this section are:
- MongoDB
- Cassandra
15 http://aws.amazon.com/s3
16 Atomicity, Consistency, Isolation, Durability



- Voldemort
- HBase, which has already been analysed (see section 4.2.2.2).

4.2.4.1 MongoDB

MongoDB17 is an open-source, document-oriented NoSQL database system. Instead of storing data in tables as is done in a classical relational database, MongoDB stores structured data as JSON-like documents with dynamic schemas (in a binary format called BSON), making the integration of data in certain types of applications easier and faster. The following is a brief summary of some of the main features:
- Ad hoc queries: MongoDB supports search by field, range queries, and regular expression searches. Queries can return specific fields of documents and can also include user-defined JavaScript functions
- Indexing: any field in a MongoDB document can be indexed
- Replication: MongoDB supports master-slave replication
- Load balancing: MongoDB scales horizontally using sharding. MongoDB can run over multiple servers, balancing the load and/or duplicating data to keep the system up and running in case of hardware failure
- File storage: MongoDB can be used as a file system (GridFS), taking advantage of the load balancing and data replication features over multiple machines for storing files
- Aggregation: Map/Reduce can be used for batch processing of data and aggregation operations
- Server-side JavaScript execution: JavaScript can be used in queries, and aggregation functions can be sent directly to the database to be executed
- Capped collections: MongoDB supports fixed-size collections called capped collections.
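The query-by-example flavour of the document model can be sketched as follows. This stand-in imitates a fraction of the behaviour in plain Python and is not the MongoDB API; the engine documents are invented:

```python
# Stand-in for a document store: JSON-like documents with dynamic
# schemas, queried by matching fields against an example document.
# Not the MongoDB API.

class Collection:
    def __init__(self):
        self.docs = []

    def insert(self, doc):
        self.docs.append(dict(doc))

    def find(self, query):
        # a document matches if every field in the query matches
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in query.items())]
```

Note that the two documents below carry different fields, which is the "dynamic schema" property the text describes.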
4.2.4.2 Apache Cassandra

Cassandra18 is a highly scalable, eventually consistent, distributed, structured key-value store. Cassandra brings together the distributed systems technologies of Dynamo19 and the data model of Google's BigTable20. Like Dynamo, Cassandra is eventually consistent. Like BigTable, Cassandra provides a ColumnFamily-based data model richer than typical key/value systems, as some of the complexity can be pushed into Cassandra, leading to simpler and more efficient applications. Some of the features of Apache Cassandra are the following:
- Proven: Cassandra is in use at Netflix, Twitter, Cisco...
- Fault tolerant: data is automatically replicated to multiple nodes for fault-tolerance
- Decentralized: every node in the cluster is identical
- Developer control: choose between synchronous or asynchronous replication for each update
17 http://www.mongodb.org
18 http://cassandra.apache.org
19 http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf
20 http://research.google.com/archive/bigtable.html


- Rich data model: allows efficient use for many applications beyond simple key/value
- Elastic: read and write throughput both increase linearly as new machines are added, with no downtime or interruption to applications.

The Cassandra data model is designed for distributed data on a very large scale. It trades ACID-compliant data practices for important advantages in performance, availability, and operational manageability.
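One way a decentralized store of this kind spreads and replicates keys without a master is consistent hashing: nodes own positions on a hash ring, and each key is written to the next N nodes clockwise from its hash. The sketch below is a simplified assumption, not Cassandra's actual partitioner:

```python
# Consistent-hashing sketch: nodes and keys are hashed onto the same
# ring; a key's replicas are the first N distinct nodes found
# clockwise from the key's position. Simplified illustration only.

import hashlib

def ring_position(token):
    return int(hashlib.md5(token.encode()).hexdigest(), 16)

def replicas_for(key, nodes, n=3):
    ring = sorted(nodes, key=ring_position)
    start = ring_position(key)
    # first node at or after the key's position, then its successors
    idx = next((i for i, node in enumerate(ring)
                if ring_position(node) >= start), 0)
    return [ring[(idx + r) % len(ring)] for r in range(n)]
```

Because placement depends only on the hashes, any identical node can answer "who holds this key" locally, which is what makes the decentralized, master-free design possible.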
4.2.4.3 Voldemort

Voldemort21 is a distributed key-value database, not a relational database. In other words, it is a big, distributed, persistent, fault-tolerant hash table providing horizontal scalability and much higher availability. Voldemort offers a number of advantages:
- Voldemort combines in-memory caching with the storage system
- Reads and writes scale horizontally
- Data partitioning is transparent, allowing the expansion of clusters without rebalancing all data
- Data replication and placement is decided by a simple API
- The storage layer is completely mockable.

4.2.5 Information Indexing


When there is the need to store information that will be useful later, it is necessary to look at the so-called problem of classification and search. If a system is to store thousands of well-organized files, it is necessary to set up a classification of folders and subfolders to ensure they can be accessed efficiently. With such a classification, files can easily be found by filtering or through the use of a manually created index. This need is emphasized when considering distributed repositories: while users are familiar with the repositories they created themselves, they may not know the classification and indexing made by other users or by other applications. In this section, tools for facilitating the indexing of non-structured information (HTML, PDF, Word...) are described:
- Apache Solr
- Microsoft SharePoint.
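The classification-and-search problem described above is what an inverted index, the structure underneath Lucene and the tools below, addresses: it maps each term to the set of documents containing it, so keyword queries become set operations rather than folder browsing. A minimal sketch:

```python
# Minimal inverted index: term -> set of document IDs. An AND query
# over several terms is the intersection of their posting sets.

from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search_and(index, *terms):
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()
```

Real engines add tokenization, stemming, ranking, and distributed index replication on top of this core structure.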
4.2.5.1 Apache Solr

Apache Solr22 is an engine that could be used by the PREMANUS searching service, as it is an open-source search platform resulting from the Apache Lucene23 project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich document (e.g. Word, PDF) handling, among others. Solr is highly scalable, providing distributed search and index replication, and supports keyword querying with logical operators.
21 http://project-voldemort.com
22 http://lucene.apache.org/solr
23 http://lucene.apache.org


Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Tomcat. It uses the Lucene search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use. Summarising, the features of Solr are the following:
- Advanced full-text search capabilities
- Optimized for high-volume web traffic
- Standards-based open interfaces: XML, JSON and HTTP
- Comprehensive HTML administration interfaces
- Server statistics exposed over JMX for monitoring
- Scalability: efficient replication to other Solr search servers
- Flexible and adaptable with XML configuration
- Extensible plugin architecture.
4.2.5.2 Microsoft SharePoint

Microsoft SharePoint24 is a web application platform especially prepared for dealing with web requirements. SharePoint is designed so that non-experts can carry out their tasks (in this case developing web tools) without having to understand the technical issues. SharePoint's tools can facilitate enterprise search, document and file management, and business intelligence, amongst others. It also has capabilities around system integration, process integration, and workflow automation. SharePoint provides a development stack based on web technologies and standards-based APIs, built around the following points of the SharePoint Wheel25:
- Sites: a site is a contextual work environment
- Communities: a community is a place where communication and understanding happens
- Content: provides management of documents and work items that need to be stored, found, collaborated on, updated, managed, documented, archived, traced or restored
- Search: based on keywords, refinement, and content analysis
- Insights: information can be inserted inside useful contexts, providing information that can improve effectiveness
- Composites: enables no-code integration of data, documents and processes to provide composite applications ("mash-ups" based on internal data).

For indexing and search purposes, Microsoft SharePoint comes with Enterprise Search. SharePoint Server Search is a service application which is independent of other services. Generically speaking, the SharePoint search architecture is made up of the Crawler, Indexing Engine, Query Engine, and the User Interface and Query Object Model, as shown in Figure 11.

24 http://sharepoint.microsoft.com
25 http://download.microsoft.com/download/0/B/0/0B06C453-8F7D-4D8E-A5E5D50DC6F8D8F4/SharePoint_2010_Evaluation_Guide.pdf


Figure 11 - Logical overview of the SharePoint Search components 26

4.2.6 Conclusions
PREMANUS operates in an ecosystem of remanufacturing partners, each of them storing information relevant to taking a remanufacturing decision. The analysed technologies were:
- Distributed storage
- NoSQL databases
- Information indexing
- Apache Hadoop, as an integrated framework combining query splitting/merging, a NoSQL database, a distributed storage and a data warehouse for analysing the retrieved data.
All these technologies are targeted at computer clusters, e.g. to offer cloud-based services. Within PREMANUS, however, "distributed" refers to product information distributed across an ecosystem; it will not reside within a computer cluster. Thus, these technologies can only solve a limited set of challenges for PREMANUS. Nevertheless, for storing information that is collected from the ecosystem, and enabling the searching of this information, the presented technologies are well suited. Overall, among the presented technologies, the Hadoop framework is the most relevant one, as it already integrates many of the required technologies seamlessly, so that additional integration effort is not required.
4.2.6.1 Conclusions on Distributed Storages

Based on the analysis made in sections 4.2.2.3 and 4.2.3, the best candidate for PREMANUS is HDFS. The reasons for this selection are the following:
- Easy to plug into the architecture: HDFS, as part of the Hadoop framework, can exploit Hadoop's benefits with no major effort (Map/Reduce paradigm, easy connection with HBase and easy analysis of data with Hive). Amazon S3 features could only be accessed via its WS API, and access to VMFS would have to be developed
- Open source: HDFS is an open-source tool with a large development community supporting it, whereas both VMFS and Amazon S3 are proprietary tools
- Cost: HDFS is free of charge, while the other two are not.
26 http://sharepointgeorge.com/2010/configuring-enterprise-search-sharepoint-2010


4.2.6.2 Conclusions on NoSQL databases

Sections 4.2.2.2 and 4.2.4 have analysed the different storage and database options that could be used for the PREMANUS solution. The following table compares the four databases analysed.

|                        | HBase                            | MongoDB                              | Cassandra                     | Voldemort                |
| Light description      | A distributed DB based on Hadoop | Document-oriented DB                 | 2nd-generation distributed DB | Distributed key-value DB |
| License                | Apache                           | AGPL (drivers from Apache)           | Apache                        | Apache                   |
| Data model             | Column-oriented                  | Document-oriented NoSQL, schema-less | Column-oriented NoSQL         | Key-value                |
| Query language         | API calls, REST, XML, Thrift     | API calls, JavaScript                | API calls, Thrift             | API calls                |
| Map/Reduce             | Yes                              | Yes                                  | Yes                           | No                       |
| TTL for entries        | Yes                              | Conditional                          | Yes                           | Yes                      |
| Full-text search       | No                               | No                                   | No                            | No                       |
| Horizontal scalability | Yes                              | Yes                                  | Yes                           | Yes                      |
| Replication            | Yes                              | Yes                                  | Yes                           | Yes                      |
| Sharding               | Yes                              | Yes                                  | Yes                           | Yes                      |
| Programming language   | Java                             | C++                                  | Java                          | Java                     |

Table 5 - Technical comparison between the four databases

PREMANUS needs to access distributed information. The document addressed unstructured data in sections 4.2.2 and 4.2.3, while structured data, i.e. the data likely to be stored in databases, was addressed in section 4.2.4. As concluded in section 4.2.4, if the priority of PREMANUS is high-speed reading, then the database selected should be MongoDB or Cassandra. However, a composite solution based on the Apache Hadoop framework together with either Cassandra or MongoDB would be powerful enough to provide the basic storage infrastructure for the PREMANUS project. While Hadoop comes with HDFS as a distributed file system and Map/Reduce for splitting tasks and combining results, both Cassandra and MongoDB provide fast access to whole (or partial) information as well as load balancing and fault tolerance to preserve the data as safely as possible. A final point to be considered is that both MongoDB and Cassandra make use of a Map/Reduce paradigm for batch processing and aggregation operations, which effectively removes Voldemort from consideration. Among the three remaining databases, it is difficult to opt for one unique technology, as the choice will largely depend on subtle differences in business requirements and on the other tools selected, which could ease or complicate their integration.
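The Map/Reduce paradigm mentioned above, which Hadoop, Cassandra and MongoDB all expose, can be illustrated with a toy single-process word-count sketch (plain Python, not a distributed job; the two-phase structure is the point, not the scale):

```python
from collections import defaultdict

def map_phase(docs):
    # map: emit a (key, 1) pair for every word in every document
    for doc in docs:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # shuffle + reduce: group pairs by key and sum their values
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["used gearbox", "new gearbox"]
counts = reduce_phase(map_phase(docs))  # {'used': 1, 'gearbox': 2, 'new': 1}
```

In a real deployment the map calls run in parallel on the nodes holding the data, and the framework performs the shuffle between the two phases.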
4.2.6.3 Conclusions on Information Indexing

The last aspect analysed for the DPIS is the indexing approach. PREMANUS needs an indexing tool so that the selected toolsets and the services developed can access the stored information.


However, although every storage solution analysed comes with a data indexer, using Solr would increase the indexing capabilities, greatly benefiting the PREMANUS middleware. A major reason for selecting Solr over SharePoint is that it can easily be combined with other Java systems to enable semantic searches (see section 5.2.2). In any case, it will be necessary to develop some test applications to check the synergies of the complete solution.

4.3 Information Retrieval Mechanism

Information retrieval is concerned with fetching information from PREMANUS stakeholders once it has been discovered that they hold relevant information which a remanufacturer requires for the processing of a product. Part of this process is discovering what information a service offers and how the service is structured, so that the client can formulate the query to retrieve the information relevant to it. This section looks at several possible technologies for interfacing with existing systems at the stakeholders.

4.3.1 SOAP Web Service


SOAP web services are based on XML messages and use HTTP or SMTP for message transport. One key advantage of SOAP is the clear definition of a contract between client and server through WSDL files, which is useful for automatic code generation and other automation tasks surrounding the web services. SOAP can form the foundation layer of a Web Services protocol stack, providing a basic messaging framework upon which Web Services can be built. This XML-based protocol consists of three parts: an envelope, which defines what is in the message and how to process it; a set of encoding rules for expressing instances of application-defined datatypes; and a convention for representing procedure calls and responses. SOAP has three major characteristics:
- Extensibility (security and WS-routing are among the extensions under development)
- Neutrality (SOAP can be used over any transport protocol, such as HTTP, SMTP, TCP, or JMS)
- Independence (SOAP allows for any programming model).
As an example of how SOAP procedures can be used, a SOAP message could be sent to a web site that has Web Services enabled, such as a real-estate price database, with the parameters needed for a search. The site would then return an XML-formatted document with the resulting data, e.g. prices, location, features. With the data being returned in a standardized, machine-parsable format, it can be integrated directly into a third-party web site or application. The SOAP architecture consists of several layers of specifications: for message format, Message Exchange Patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility.
SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and the envelope/header/body structure from elsewhere (probably from WDDX). [117] The take-away points of SOAP are:
- A well-established standard for web services; WSDL provides a clear contract
- Very extensible, with many useful extensions already widely used
- Can be complicated to work with.
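The envelope/header/body structure described above can be assembled in a few lines. This sketch builds a minimal SOAP 1.1 message with Python's standard XML library; the GetPrice/Location payload is an invented example echoing the real-estate scenario, not part of any actual service contract:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace, as defined by the specification
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

def build_envelope(body_payload):
    """Assemble a minimal SOAP 1.1 message: Envelope -> Header + Body."""
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    ET.SubElement(envelope, "{%s}Header" % SOAP_NS)  # empty header is allowed
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    body.append(body_payload)
    return envelope

# hypothetical application payload: a price query for a real-estate search
payload = ET.Element("GetPrice")
ET.SubElement(payload, "Location").text = "Milan"
message = ET.tostring(build_envelope(payload), encoding="unicode")
```

A real client would additionally derive the payload element names and types from the service's WSDL contract rather than hard-coding them.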


4.3.2 RESTful Web Services


REpresentational State Transfer (REST) is a simpler form of web services, in which messages and requests are oriented around the HTTP protocol's request methods (such as POST, GET, and DELETE). Due to its compliance with HTTP, REST web service requests can be sent directly from the browser. This simplicity makes it appealing for small and simple web services. The REST architectural style was developed in parallel with HTTP/1.1, based on the existing design of HTTP/1.0. The largest implementation of a system conforming to the REST architectural style is the World Wide Web. REST exemplifies how the Web's architecture emerged by characterizing and constraining the macro-interactions of the four components of the Web, namely origin servers, gateways, proxies and clients, without imposing limitations on the individual participants. As such, REST essentially governs the proper behaviour of participants. REST-style architectures consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource. The client begins sending requests when it is ready to make the transition to a new state. While one or more requests are outstanding, the client is considered to be in transition. The representation of each application state contains links that may be used the next time the client chooses to initiate a new state transition. REST facilitates transactions between web servers by allowing loose coupling between different services. REST is less strongly typed than its counterpart, SOAP. The REST vocabulary is based on the use of nouns and verbs, and has an emphasis on readability.
Unlike SOAP, REST does not require XML parsing and does not require a message header to and from a service provider, which ultimately uses less bandwidth. REST also differs from SOAP in its error handling: SOAP can have user-defined error messages, while REST requires the use of HTTP error handling. Furthermore, REST only supports synchronous (as opposed to asynchronous) messaging, because of its reliance on the HTTP and HTTPS infrastructure; this increases the number of threads in use by a REST web service. [118] The main aspects of RESTful web services are:
- Based on the well-established HTTP protocol
- Very simple format
- No written contract between client and server.
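REST's uniform interface, where the HTTP verb rather than the URL selects the operation on a resource, can be sketched with a toy in-memory resource store (the class, paths and payload are invented for illustration):

```python
class ResourceStore:
    """Toy illustration of REST's uniform interface: the verb, not the
    URL, selects the operation applied to the addressed resource."""
    def __init__(self):
        self._resources = {}

    def handle(self, method, path, body=None):
        if method == "GET":
            return 200, self._resources.get(path)
        if method == "PUT":
            self._resources[path] = body      # create or replace the representation
            return 201, body
        if method == "DELETE":
            self._resources.pop(path, None)
            return 204, None
        return 405, None                      # method not allowed

store = ResourceStore()
store.handle("PUT", "/products/42", {"condition": "used"})
status, product = store.handle("GET", "/products/42")
```

Note what is absent: there is no machine-readable contract describing which paths exist or what the representations contain, which is exactly the limitation raised in the conclusions of this section.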

4.3.3 OData
OData (Open Data Protocol) is an advanced web protocol based on a variety of commonly used web standards, such as REST web services, Atom feeds and JSON. When contacted, an OData service provides a service description, which enables a variety of automation around the service. OData is designed to expose data from backend systems to web-based services, devices and UIs. OData is consistent with the way the Web works: it makes a deep commitment to URIs for resource identification and commits to an HTTP-based, uniform interface for interacting with those resources


(just like the Web). This commitment to core Web principles allows OData to enable a new level of data integration and interoperability across a broad range of clients, servers, services, and tools. OData is released under the Open Specification Promise to allow anyone to freely interoperate with OData implementations. [118] One example where OData is widely used is SAP Gateway for NetWeaver systems: Gateway uses OData to expose SAP systems through the web. A related product, the Sybase Unwired Platform, is also a good example of automatic code generation based on OData services; it enables the quick and easy creation of mobile applications that access OData-based Gateway services. In short, OData is:
- Based on well-accepted and widely used standards
- Very simple to use in languages with OData libraries (harder in languages without, such as C)
- Good for automation, thanks to its service description.
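OData's commitment to URIs is visible in its URL conventions: an entity set is addressed by path, and queries are expressed through standard system query options such as $filter, $select and $top. The sketch below composes such a URI; the service root and the Products entity set are invented for illustration:

```python
from urllib.parse import quote

def odata_query(service_root, entity_set, filter_expr=None, select=None, top=None):
    """Compose an OData resource URI using the standard system query
    options ($filter, $select, $top)."""
    options = []
    if filter_expr:
        # percent-encode the filter expression, keeping OData's quote characters
        options.append("$filter=" + quote(filter_expr, safe="',()"))
    if select:
        options.append("$select=" + ",".join(select))
    if top is not None:
        options.append("$top=%d" % top)
    url = service_root.rstrip("/") + "/" + entity_set
    if options:
        url += "?" + "&".join(options)
    return url

url = odata_query("http://example.com/odata.svc", "Products",
                  filter_expr="Condition eq 'used'",
                  select=["Name", "Price"], top=10)
```

Because these conventions are uniform across services, client libraries and code generators (as in SAP Gateway tooling) can build such URIs automatically from the service description.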

4.3.4 Conclusions
One of the keys to information retrieval in PREMANUS will be discovering the structure of the offered services and which information they hold. This means a clear service description or contract is required, which rules out the use of RESTful web services, since such a description is not provided and hence the calling client has no way of knowing how the service is structured; this would force the definition of a standard service interface for information-providing services in PREMANUS. Instead, SOAP or OData should be used. In general, OData is preferred for PREMANUS, but it would also make sense to support both SOAP and OData for information-providing services, since SOAP is more established than OData. This conclusion holds true for information retrieval within the PREMANUS system, but not for other parts of PREMANUS, which might support any or all of the three technologies (for example in the semantic service bus).

4.4 Access Control

Access is the ability to do something with a computer resource (e.g., use, change, or view it). Access control is the means by which this ability is explicitly enabled or restricted in some way (usually through physical and system-based controls). Computer-based access controls can prescribe not only who or what process may have access to a specific system resource, but also the type of access that is permitted. These controls may be implemented in the computer system or in external devices. [120] The goal of PREMANUS is to share as much information as possible between the different stakeholders to enable them to make better decisions within their remanufacturing processes. However, every stakeholder has sensitive data which they do not wish to share, or wish to share only with selected partners. To prevent unauthorized access to such data, PREMANUS requires some form of access control that regulates who is allowed access to which data and services. There are three major types of access control paradigm: role-based access control, mandatory access control and discretionary access control. Within the context of PREMANUS, role-based access control is the most relevant paradigm. With role-based access control, access decisions are based on the roles that individual users have as part of an organization. Users take on assigned roles (such as doctor, nurse, teller, or manager). The process of defining roles should be based on a thorough analysis of how an organization operates and should include input from a wide spectrum of users in the organization.


Access rights are grouped by role name, and the use of resources is restricted to individuals authorized to assume the associated role. For example, within a hospital system the role of doctor can include operations to perform diagnosis, prescribe medication, and order laboratory tests, while the role of researcher can be limited to gathering anonymous clinical information for studies. The different access control technologies will not yet be evaluated, since it is still unclear what level of security and which security features will fit best into the rest of the PREMANUS architecture. However, some possible security systems that might be selected for PREMANUS are discussed below:
- OpenRBAC
- Spring Security
- jGuard.
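The hospital example above maps directly onto the core RBAC data model: roles grant operations, users are assigned roles, and an access decision checks whether any of the user's roles grants the requested operation. A minimal sketch (the user names and operation strings are invented for illustration):

```python
# role -> permitted operations, as in the hospital example above
ROLE_PERMISSIONS = {
    "doctor": {"perform_diagnosis", "prescribe_medication", "order_lab_tests"},
    "researcher": {"gather_anonymous_data"},
}

# user -> assigned roles; a user may hold several roles at once
USER_ROLES = {
    "alice": {"doctor"},
    "bob": {"researcher"},
}

def is_allowed(user, operation):
    """Access decision: allowed iff at least one of the user's roles
    grants the requested operation."""
    return any(operation in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Standards such as ANSI INCITS 359-2004 (implemented by OpenRBAC, below) add sessions, role hierarchies and constraints on top of this basic user/role/permission triangle.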

4.4.1 OpenRBAC
OpenRBAC is an open-source implementation of ANSI INCITS 359-2004 for role-based access control to computer resources. It is published as open source under the LGPLv3 licence. The aim of the OpenRBAC project is to implement the standard strictly while being as flexible as possible, in order to fulfil needs that are not covered by the standard. [121] OpenRBAC enables the definition of access policies based on the following elements:
- Users
- Roles
- Sessions
- Resources.
The available open-source implementation integrates with an LDAP directory for user ID management and authentication.

4.4.2 Spring Security


Spring Security is a powerful and highly customizable authentication and access-control framework. It is the de-facto standard for securing Spring-based applications. Spring Security is one of the most mature and widely used Spring projects. Founded in 2003 and actively maintained by SpringSource since, today it is used to secure numerous demanding environments including government agencies, military applications and central banks. It is released under an Apache 2.0 license so you can confidently use it in your projects. Spring Security is also easy to learn, deploy and manage. The dedicated security namespace provides directives for most common operations, allowing complete application security in just a few lines of XML. [Spring Security] also offers complete tooling integration in SpringSource Tool Suite, plus Spring Roo rapid application development framework. The Spring Community Forum and SpringSource offer a variety of free and paid support services. Spring Security is also integrated with many other Spring technologies, including Spring Web Flow, Spring Web Services, SpringSource Enterprise, SpringSource Application Management Suite and SpringSource tc Server. [122]
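The "few lines of XML" claim above refers to Spring Security's dedicated configuration namespace. The fragment below is an illustrative sketch only: the element names come from the Spring Security namespace, while the URL patterns and role names are invented for this example.

```xml
<!-- illustrative sketch: secure all URLs, restrict /admin/** to an admin role,
     and enable form-based login and logout -->
<http>
    <intercept-url pattern="/admin/**" access="ROLE_ADMIN"/>
    <intercept-url pattern="/**" access="ROLE_USER"/>
    <form-login/>
    <logout/>
</http>
```

Everything else (filter chain wiring, session handling, redirects) is derived from this declarative configuration by the framework.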


4.4.3 jGuard
jGuard is a Java security framework based on JAAS. The framework aims to solve access control problems in web and standalone applications. Features:
- Only requires Java 1.4 and J2EE 1.3 or higher
- Can be adapted to any webapp, on any application server
- Permits a user to have more than one role simultaneously
- Does not depend on a web framework or an AOP framework
- Built on top of the standard, very secure, and flexible JAAS
- Authentication and authorization are handled by pluggable mechanisms
- Authentication data can be stored in a database, an XML file, a JNDI data source, an LDAP directory, Kerberos, etc.
- Changes take effect 'on the fly' (dynamic configuration)
- Permissions, roles, and their associations can be created, updated, and deleted on the fly through a webapp (an API is provided too)
- Each webapp has its own authentication and authorization configuration
- A taglib is provided to protect JSP fragments
- Supports the security manager.
jGuard has three main libraries:
- core contains the main jGuard features
- ext handles specific authentication and authorization managers, such as XML- or JDBC-based managers. It also embeds login modules like JDBC or jCaptcha. A Java 1.5 version includes jGuard security features for JMX
- jee is meant for web applications: one version for standard webapps and one for Ajax (DWR framework) applications. [123]

4.4.4 Conclusion
Security is an important requirement for PREMANUS. The special challenge here is that information about products and components is to be shared within an ecosystem. Each stakeholder will want to manage access rights themselves rather than trust a third party to do so, as company secrets are at stake. Stakeholders may also employ different security mechanisms and policies within their companies. The course of the PREMANUS project, and more detailed discussions on this topic, will provide detailed requirements for security. At this stage of the project, a deeper evaluation of the suitability of the different access control mechanisms would be premature.


5 Remanufacturing Services Gateway (RSG)


This chapter details the current state of the art as it relates to the individual components of PREMANUS in the Remanufacturing Services Gateway. The chapter is composed of the following sections, each dealing with a specific part of the RSG architecture:
- Semantic Service Bus
- Infrastructural services, such as semantic services and connectivity services
- Other services, such as Device as a Service and Maintenance as a Service.
Each section looks at current technologies in the area and evaluates them based on their relevance and usefulness to the PREMANUS project.

5.1 Semantic Service Bus

5.1.1 Introduction
The Semantic Service Bus (SSB) is the spine of the PREMANUS architecture and provides the functionality for communication among the different components and systems connected within the PREMANUS middleware. Technologically speaking, the SSB can be seen as an Enterprise Service Bus (ESB) enhanced with semantically oriented services that allow content and messages to reach their destination. The SSB will be built using a Service-Oriented Architecture (SOA), letting the development team focus solely on the different services the PREMANUS application should provide, independently of the way the different components are implemented. As a software architecture model for distributed computing, it is a special variant of the more general client-server model and promotes a strictly asynchronous, message-oriented design for communication and interaction between applications. Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes. As an enhanced ESB, the SSB provides the basic functionality of any ESB on the current technology market. An ESB allows different applications to communicate with each other by acting as a transit system that carries data between applications within an enterprise or across the Internet. The main features are:
- Service creation and hosting: exposing and hosting reusable services
- Service mediation: shielding services from message formats and protocols, separating business logic from messaging, and enabling location-independent service calls
- Message routing: routing, filtering, aggregating, and re-sequencing messages based on content and rules
- Data transformation: exchanging data across varying formats and transport protocols.
Before presenting the different technologies capable of contributing to the SSB, it is necessary to cover the main features of ESBs.


5.1.2 Enterprise Service Bus


One generic definition of an ESB is: a platform for the exchange of messages and objects, moving within transport layers. A more elaborate description is the following: an Enterprise Service Bus is a software architecture for integrating enterprise applications at the service level. It is implemented as middleware that provides the means for standardized communication among applications and supports service-, message-, and event-based interactions among applications. [126] ESBs provide functionality bundled into different functional areas [127], as follows (Figure 12):
- Architecture: support for fault tolerance, scalability and throughput, the ability to federate with other ESBs, the supported topologies, and features supporting extensibility
- Connection: includes support for a wide range of messaging standards, communication protocols, and connectivity alternatives
- Mediation: deals with key requirements related to dynamic provisioning of resources, transformation and mapping support, transaction management, policy meta-model features, registry support, and Service Level Agreement (SLA) coordination
- Orchestration: provides lightweight orchestration of services as well as more robust Business Process Execution Language (BPEL) and/or Business Process Modelling Notation (BPMN) support
- Change and control: the main components are design tooling, life-cycle management, technical monitoring, and security
- Commodity services: such as event handling and event choreography, data transformation and mapping, message and event queuing and sequencing, security and exception handling, protocol conversion, and enforcing proper quality of communication service.

Figure 12 - ESB Reference model


The following can be considered general ESB core functionalities, based on [128]:

| ESB core functionality | Description |
| Location transparency | The ESB helps to decouple the service consumer from the service provider's location. It provides a central platform through which any application can be reached, without coupling the message sender to the message receiver. |
| Transport protocol conversion | An ESB should be able to seamlessly integrate applications with different transport protocols, such as HTTP(S) to JMS, FTP to a file batch, and SMTP to TCP. |
| Message transformation | The ESB provides functionality to transform messages from one format to another. |
| Message routing | Determining the ultimate destination of an incoming message is an important functionality of an ESB, categorized as message routing. |
| Message enhancement | An ESB should provide functionality to add missing information, based on the data in the incoming message, by means of message enhancement. |
| Security | Authentication, authorization and encryption functionality should be provided by an ESB, both to secure incoming messages against malicious use of the ESB and to secure outgoing messages so as to satisfy the security requirements of the service provider. |
| Monitoring and management | A monitoring and management environment is necessary to configure the ESB to be high-performing and reliable, and to monitor the runtime execution of the message flows in the ESB. |

Table 6 - ESB core functionalities
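The message-routing functionality listed in Table 6, determining a message's destination from its content and a set of rules, can be sketched as a simple content-based router (the message fields and endpoint names are invented for illustration):

```python
def route(message, routes, default=None):
    """Content-based router: the first predicate that matches the
    message decides the destination endpoint."""
    for predicate, endpoint in routes:
        if predicate(message):
            return endpoint
    return default

# routing rules: (predicate over message content, destination endpoint)
routes = [
    (lambda m: m.get("type") == "order", "jms:queue:orders"),
    (lambda m: m.get("type") == "invoice", "jms:queue:invoices"),
]

destination = route({"type": "order", "id": 7}, routes,
                    default="jms:queue:dead-letter")
```

Real ESBs layer transformation, enhancement and monitoring around this same dispatch step, and express the rules declaratively rather than in code.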

5.1.3 Technologies for the SSB


There are several ESBs on the market covering many aspects. Given that (i) the selected ones are more complete than those disregarded, and (ii) this section should not be extended unduly, the PREMANUS project has analysed only some of them:
- TIE Kinetix SmartBridge
- Mule ESB
- Apache ServiceMix.
However, there are other ESBs on the market:
- Spring Integration provides an extension of the Spring programming model to support Enterprise Integration Patterns (EIP). It enables lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters
- Apache Camel is an open-source integration framework based on the known EIPs, with powerful bean integration
- Apache Synapse is a lightweight, high-performance ESB powered by an asynchronous mediation engine which provides support for XML, Web Services and REST
- Java Open ESB is still under development by the Java community.


5.1.3.1 TIE Kinetix SmartBridge (TSB)

TIE Kinetix SmartBridge27, from PREMANUS partner TIE, is a business integration platform and a complete, efficient integration solution born from the B2B world. It provides tools for seamless integration of back-office solutions by implementing different interoperability strategies. Built around a core ESB, it comprises a hub for transferring single documents/messages among the applications connected to it. TSB is especially designed for optimizing and automating the different processes present in Supply Chain Management.

Figure 13 - TSB Message Bus

27 http://businessintegration.tiekinetix.com/node/802

One of the main features of TSB is its B2B message-exchange broker, able to transport B2B messages (XML, EDI, flat file) from a source point to their destination, performing the appropriate message transformations and routing. TSB also provides a mechanism for disaster recovery. The bus implemented in TSB (Figure 13) connects the different services through messages. New services can be added using the plug-in paradigm, which increases the flexibility and scalability of TSB. TSB is surrounded by wrappers such as communication modules, a translator, and a splitter and grouper for data and messages. Other functional aspects are the following:
- Graphic design of document process workflows
- Integrated dashboard views in the administration UI
- Support for application integration of TIE services / products / modules.
The bus connects producers, subscribers and polling consumers, which respectively push and pull data (Figure 14). The most complex item is the subscriber, which is composed of several local queues (one per subscriber) to serve the different demands, a mediator service to assign the tasks coming from the queues, and the workers, which are the last element. When a new demand is inserted in a queue, the queue raises a message event; the mediator takes the message, puts it in the executing bag, and tries to find an available worker to execute the message. If this search produces no result, it spawns a new worker. There are multiple workers per subscriber, and when a worker finishes its current task, it requests a new task from the mediator. In summary, TSB is a core ESB, constrained and tailored to favour throughput and fault tolerance. It is composed of the following .NET elements:
- MS SQL Server: stores the intermediate data as well as the log of the transactions performed. However, it can also act as a permanent storage mechanism for specific component queues
- MS Windows Workflow Foundation: TSB provides a graphical environment to configure the workflows which coordinate the internal components and data flows
- MS Windows Communication Foundation: enables Web Services to access the message bus implemented in TSB.

Figure 14 - Message bus in detail


5.1.3.2 Mule ESB

Mule ESB28 is a lightweight Java-based ESB and integration platform. It can connect applications together quickly and easily, enabling them to exchange data. Mule ESB enables easy integration of existing systems regardless of the technologies the applications use, including JMS, Web Services (easing interconnection with other platforms), JDBC and HTTP. It also supports jBPM and BPEL. Figure 15 depicts its architecture.

Figure 15 - Mule ESB Architecture

By using Mule it is possible to orchestrate different applications by integrating two or more applications and/or services together. The benefits of application orchestration are as follows:
- An approach to integration that decouples applications from each other
- Capabilities for message routing, security, transformation and reliability
- A way to manage and monitor integrations centrally.
The rationale for this is to automate processes or to synchronize data with real-time responses. To enable this integration, Mule makes use of an API layer with which it is possible to develop interfaces to other applications, services and mobile devices through the creation of REST APIs or SOAP Web Services. Since Mule integrates tightly with Spring, developers can also leverage the capabilities of Spring to implement business logic. Mule uses tools common to Java developers, such as Maven, Eclipse, JUnit and Spring. On the one hand, Mule uses an XML configuration model to define logic; on the other hand, custom code can be written in a variety of languages, including Java, Groovy, JavaScript, Ruby and Python. In addition, a graphical development environment called Mule Studio exists for easier configuration of Mule and development on top of it.
28 http://www.mulesoft.org


Another feature of Mule is that it is designed for horizontal scaling on common hardware, and its runtime can easily be embedded into an existing application or into any of the common application servers, such as Tomcat or JBoss. As mentioned in the previous paragraph, Mule provides JUnit support, so that it can be embedded in a JUnit test case. This allows the developer to create repeatable unit tests for integrations, which can be incorporated into a continuous build. With regard to message transfer, Mule is message-agnostic, meaning that it does not oblige developers to use XML messages. While XML is common, there are many scenarios where the developer might want to use JSON or flat files, for example. Mule's API is based on a REST or Web Services (WSDL) API layer. It thus offers a decoupled interface to the data and/or functionality of other applications through a common, language-agnostic interaction method.
5.1.3.3 Apache ServiceMix

Apache ServiceMix29 is a flexible, open-source integration container unifying the features and functionality of Apache ActiveMQ30 (for messaging), Camel31 (for routing), CXF, ODE, and Karaf32. The result is a powerful runtime platform used to build new integration solutions. It provides a complete, enterprise-ready ESB exclusively powered by OSGi33. See Figure 16 for a complete picture of the ServiceMix architecture. Apache ServiceMix is designed according to the Java Business Integration (JBI) specification. As an ESB it allows disparate applications, platforms and business processes to exchange data in a protocol-neutral way. The JBI specification (JSR 208) defines the manner in which this communication takes place. The JBI specification requires two types of components, defined by JSR 208 as:
- Service Engines (SEs): provide business logic and transformation services to other components, such as transforming XML data to an HTML format
- Binding Components (BCs): provide connectivity to services external to the JBI installation in the protocol used by the external application. BCs convert protocol- and transport-specific messages, such as HTTP, SOAP, and JMS messages, to a normalized format that is used within the JBI infrastructure.
Besides the usage of JBI and its components (both SEs and BCs can be service providers, service consumers or both), ServiceMix makes use of JMS for binding components and Camel and XSLT as service engines. The communication between the different ServiceMix components is performed via XML messages, using a Normalized Message Router to allow both SEs and BCs to communicate.
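The role of the Normalized Message Router can be illustrated with a small conceptual sketch in plain Java. This is not the ServiceMix API: the class, the Map-based "normalized message" and the "uppercase" service are all invented to show the idea that components register under a service name and all traffic passes through one normalized form, so providers and consumers never see each other's wire format.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch (not the ServiceMix API) of a normalized message router
// in the spirit of JBI: every message is converted to a normalized form
// (here simply a Map) before being routed to a registered component.
public class NormalizedMessageRouter {
    private final Map<String, Function<Map<String, String>, Map<String, String>>> endpoints = new HashMap<>();

    // A "binding component" would call this after converting an external
    // protocol message (HTTP, JMS, ...) into the normalized Map form.
    public void register(String service, Function<Map<String, String>, Map<String, String>> engine) {
        endpoints.put(service, engine);
    }

    public Map<String, String> route(String service, Map<String, String> normalized) {
        Function<Map<String, String>, Map<String, String>> engine = endpoints.get(service);
        if (engine == null) throw new IllegalArgumentException("No endpoint for " + service);
        return engine.apply(normalized);
    }

    public static void main(String[] args) {
        NormalizedMessageRouter nmr = new NormalizedMessageRouter();
        // A "service engine" providing a trivial transformation
        // (an XML-to-HTML transformation would sit here in real JBI).
        nmr.register("uppercase", msg -> Map.of("payload", msg.get("payload").toUpperCase()));
        System.out.println(nmr.route("uppercase", Map.of("payload", "hello")).get("payload")); // HELLO
    }
}
```

The decoupling shown here is exactly what makes SEs and BCs interchangeable: either side can be replaced as long as it speaks the normalized form.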

29 http://servicemix.apache.org
30 http://activemq.apache.org
31 http://camel.apache.org
32 http://karaf.apache.org
33 http://www.osgi.org


Figure 16 - ServiceMix Architecture

Apache Karaf is the ServiceMix runtime and is based on OSGi. It provides a lightweight container where hot deployment of components and applications, and dynamic configuration of services, can take place. It also comes with logging and provisioning facilities as well as security services and direct native OS integration. Apache ActiveMQ is the ServiceMix messaging server. It provides support for Enterprise Integration Patterns (EIP) and has clients in several development languages. Regarding the messages themselves, ActiveMQ supports JMS and has REST APIs for communicating with the different components plugged into ServiceMix. Finally, Apache Camel is the integration framework used by ServiceMix; it supports EIPs and performs basic activities such as routing and mediation. It also has powerful Bean Integration.

5.1.4 Conclusions
This section has analysed several Enterprise Service Buses (ESBs), including both open source and proprietary solutions. All of them are built upon standards and allow the integration of 3rd party systems through the use of Web Services. The comparison of the selected ESBs highlights the differences between ServiceMix and Mule ESB. ServiceMix is standards based and allows for hot deployment of new components and functionalities. Other minor features to be mentioned are that ServiceMix is loosely coupled with the outside world through WSDL and can be easily integrated with other technologies such as BPEL. Mule ESB is far easier to understand and is more lightweight and configurable (through a single file). Additionally, Mule ESB provides easier support for EIP and is not bound to XML communication.


Partner TIE's SmartBridge (TSB) also provides an API to allow communication with 3rd party systems based on communication standards. However, TSB has a direct advantage over the other ESBs: it has been developed by TIE, and as such it is much more efficient for TIE developers, who are responsible for developing the Semantic Service Bus and all the services attached to it, to continue its development instead of adopting a less familiar paradigm. As such, TSB should be selected for use in PREMANUS and will give the project greater exploitability potential.

5.2 Infrastructural Services

Infrastructural Services are a series of cross-cutting services coordinated by the Semantic Service Bus during the task of obtaining necessary information and executing the different processes demanded by the user. These services can be separated under the following umbrellas: Generic services, Semantic services and Gateway services.

5.2.1 Generic Services


Generic services covered here are as follows:
- Transformation services, which provide functionality for allowing data transformations between the messages exchanged in the PREMANUS middleware
- Mediation services, which provide functionality for selecting the task to be executed (usually part of the ESBs)
- Specialized remanufacture-oriented services
- Service composition, by using e.g. the tool JOpera
- Services that cannot be classified under the Semantic or the Gateway sections.
These services will be developed ad-hoc once the final tool selection has been carried out; an analysis of these kinds of services will be performed at that time.
5.2.1.1 JOpera for Eclipse

Service composition tools are considered to be a part of a development environment. JOpera34 offers a visual interface, fully integrated in Eclipse, for composing and executing Web Services. Heterogeneous types of services and Web Services (SOAP, RESTful, Java, JavaScript, etc.) exist which usually have to be combined to provide new complex services. JOpera provides tools for:
- True visual process definition
- Agile service composition
- Efficient process execution
- Visual monitoring
- Recursive service composition.
34 http://www.jopera.org


In summary, JOpera is specifically aimed at allowing the user to compose services. It can help in the development phase of PREMANUS to compose larger semantic services.

5.2.2 Semantic services


Semantic services enable natural searches, and are where the intelligence of the system lies. This can be achieved by making use of query interfaces with queries that are either predefined in natural language or automatically constructed from the annotations. To make use of the semantic capabilities in the searches, the information to be retrieved should be annotated through the attachment of metadata. The support for semantic searches is typically built on top of RDF technology. These services cover publishing, annotation and access to semantic storage. The tools to be analysed under this section are:
- WSMO
- TIE Semantic Integrator
- Sesame
- Jena.
5.2.2.1 WSMO

WSMO35, which stands for Web Service Modeling Ontology, aligns the research and development efforts in the areas of Semantic Web Services. The main features of WSMO comprise:
- Simplicity
- Completeness
- Executability.
Within PREMANUS, the WSMO framework could be used in the annotation phase, as WSMO is well on the way to being standardised in the near future.
5.2.2.2 TIE Semantic Integrator

TIE Semantic Integrator (TSI) allows easy access to analyse, view, compare and distil semantics in an efficient environment, to more effectively relate the business concepts of one organisation with those of another. This is performed by creating mappings based upon semantic rules. TSI concentrates on identifying and mapping semantic assets. For example, two concepts <Street> and <Country> may be grouped into one logical semantic entity called "Address" which is mapped to a well-defined concept of an address. The mapping process is based on ontologies used to define and link these semantic assets. The link to the original syntax is still made, but this is completely transparent to the user. TSI allows users to easily identify semantic assets, and then semi-automatically map them to those of business partners. This approach allows people to create mappings in a more natural way by considering the meaning of concepts, rather than their syntax. The principle is that semantic assets and mappings can be shared and reused. In fact, the software is intelligent enough to make mapping suggestions by analysing and reusing existing mappings driven by the context. Within PREMANUS, TSI might help the TSB when the transformation engine used by the TSB needs some semantic support for, e.g., disambiguation of concepts.
35 http://www.wsmo.org


5.2.2.3 Sesame

Sesame36 is a de-facto standard framework for processing RDF data. This includes parsing, storing, inferencing and querying of such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions. Sesame has been designed with flexibility in mind. It can be deployed on top of a variety of storage systems (relational databases, in-memory, file systems, keyword indexers, etc.), and offers a large set of tools to developers to leverage the power of RDF and related standards. Sesame fully supports the SPARQL query language for expressive querying and offers transparent access to remote RDF repositories using the exact same API as for local access. Finally, Sesame supports all mainstream RDF file formats, including RDF/XML, Turtle, N-Triples, TriG and TriX. Sesame offers a JDBC-like user API, streamlined system APIs and a RESTful HTTP interface supporting the SPARQL Protocol for RDF, as can be seen in Figure 17. Additionally, Sesame provides a layer for storage and inference, called SAIL, and libraries for accessing RDF files, called RIO (RDF I/O).

Figure 17 - Sesame Architecture

PREMANUS shall make use of a semantic repository to store the annotated content of the information utilised by PREMANUS components. The RDF files containing these annotations are easily accessed with Sesame.
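What a repository like Sesame holds, and what a basic-graph-pattern query over it does, can be illustrated with a toy in-memory triple store in plain Java. This is deliberately not the Sesame API, and the part identifiers and predicates are invented PREMANUS-style examples.

```java
import java.util.ArrayList;
import java.util.List;

// Toy triple store: RDF data is a set of subject-predicate-object statements,
// and a SPARQL basic graph pattern is a statement template with variables.
public class TinyTripleStore {
    public static class Triple {
        public final String s, p, o;
        public Triple(String s, String p, String o) { this.s = s; this.p = p; this.o = o; }
    }

    private final List<Triple> triples = new ArrayList<>();

    public void add(String s, String p, String o) { triples.add(new Triple(s, p, o)); }

    // Match a pattern in which null plays the role of a SPARQL variable.
    public List<Triple> match(String s, String p, String o) {
        List<Triple> result = new ArrayList<>();
        for (Triple t : triples) {
            if ((s == null || t.s.equals(s))
                    && (p == null || t.p.equals(p))
                    && (o == null || t.o.equals(o))) {
                result.add(t);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        TinyTripleStore store = new TinyTripleStore();
        // Hypothetical PREMANUS-style annotations about returned parts.
        store.add("part:123", "rdf:type", "prem:GearBox");
        store.add("part:123", "prem:operatingHours", "12400");
        store.add("part:456", "rdf:type", "prem:GearBox");
        // In miniature: SELECT ?s WHERE { ?s rdf:type prem:GearBox }
        System.out.println(store.match(null, "rdf:type", "prem:GearBox").size()); // prints 2
    }
}
```

A real deployment would delegate exactly this pattern matching to Sesame's SPARQL engine over one of its storage back-ends rather than an in-memory list.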
5.2.2.4 Jena

Jena37 is a Java framework especially designed for building Semantic Web applications. It provides a collection of tools and libraries for developing not only those kinds of applications but also linked-data applications, tools and servers. These tools and applications usually deal with RDF, RDFS, RDFa, OWL and SPARQL techniques. Additionally, Jena comes with a rule-based inference engine to allow reasoning on OWL and RDFS ontologies. Furthermore, it includes a wide range of storage strategies for storing RDF triples. The Jena Framework includes (see Figure 18):
- An API for reading, processing and writing RDF data in XML, N-Triples and Turtle formats
- An ontology API for handling OWL and RDFS ontologies
- Stores for persisting large numbers of RDF triples
36 http://www.openrdf.org
37 http://jena.apache.org


- A SPARQL-compliant query engine.

Figure 18 - Jena Architecture

RDF triples and graphs, and their various components, are accessed through Jena's RDF API. The graph interface is also a convenient extension point for connecting other stores to Jena, such as LDAP. Jena's inference API provides the means to make entailed triples appear in the store just as if they had been added explicitly. The inference API provides a number of rule engines to perform this job, either using the built-in rule sets for OWL and RDFS, or using application custom rules. Ontologies are one of the keys to many semantic web applications. There are two ontology languages for RDF: RDFS and OWL, the latter being more expressive. Both languages are supported in Jena through the Ontology API. Within PREMANUS, the usage of Jena would offer the same benefits as those offered by Sesame.
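The idea of entailed triples appearing "as if they had been added explicitly" can be sketched with one of the standard RDFS rules: if X has type C and C is a subclass of D, then X also has type D. The sketch below is plain Java, not the Jena inference API, and the ontology fragment is invented for illustration.

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Conceptual sketch of rule-based RDFS inference: repeatedly apply the
// rdfs:subClassOf rule, walking up the class hierarchy, until no new
// types appear. A set guards against cycles in the hierarchy.
public class SubclassReasoner {
    public static Set<String> inferTypes(Map<String, String> subClassOf, String directType) {
        Set<String> types = new LinkedHashSet<>();
        String c = directType;
        while (c != null && types.add(c)) {  // add() is false on revisit, ending the loop
            c = subClassOf.get(c);           // follow C rdfs:subClassOf D
        }
        return types;
    }

    public static void main(String[] args) {
        // Hypothetical ontology fragment for a remanufactured part.
        Map<String, String> sub = new HashMap<>();
        sub.put("prem:GearBox", "prem:MechanicalPart");
        sub.put("prem:MechanicalPart", "prem:Part");
        // A part asserted only as prem:GearBox is entailed to be all three types.
        System.out.println(inferTypes(sub, "prem:GearBox"));
    }
}
```

Jena's built-in RDFS and OWL rule sets generalise this to many such rules applied forward until a fixed point is reached.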
5.2.2.5 Conclusions

The usage of the TIE Semantic Integrator (TSI) will allow the SSB to deal with more intelligent transformations, meaning that these are driven by the mapping performed by the user, who really knows the meaning of the concepts and their interpretation. It is also a valid tool for annotating content to resolve ambiguities in advance. These annotations are saved in the form of RDF files. How then can PREMANUS access the RDF files? By using one of the two solutions analysed: either Sesame or Jena. Using these open source tools may be considered a risk, but the wide developer community ensures constant updates, professional feedback from development communities and a supply of hints and tips to by-pass arising problems. Additionally, Open Source toolsets are much easier and more straightforward


to use when communicating with 3rd party systems than proprietary toolsets are.

5.2.3 Gateway services


Gateway services will act as bridges between the PREMANUS middleware and the external world (3rd party systems). In particular, these services have to be able to interact with, at least, the following systems: databases, ERP and PLM systems, distributed storage and other generic B2B connectivity systems. The analysis of most of these services has already been performed in previous sections, such as Section 4.2.2 (Apache Hadoop) for distributed storage and Section 4.2.4 for databases.

5.3 Device as a Service, Maintenance as a Service

This section examines how on-board or embedded devices and maintenance systems are used to collect Product Lifecycle data, and how these can potentially be used as information sources to aid decision making in remanufacturing. These entities are often closely linked, as on-board devices are frequently used to collect information used in diagnostic and maintenance activities.

5.3.1 Device as a Service


On-board or embedded devices have been used for a considerable time in some industries such as the automotive industry. Such devices have tended to evolve specifically for the platform for which they are intended and are generally proprietary and closed in nature. Where interface and information format standards do exist, for example OBD-II in the automotive industry or NMEA 0183 and NMEA 2000 in the leisure marine industry, proprietary extensions have been permitted to the required standard base.

The European PROMISE Project [88] developed the concept of a standardised embedded information device, the so-called Product Embedded Information Device or PEID. However, most of the products used in the industrial demonstrator use-cases of the PROMISE Project revealed already existing, proprietary on-board systems. This was true for a widely differing range of products including cars, trucks, heavy earth-moving machinery, telecoms equipment, refrigerators, and railway locomotives. Consequently the PEID was mainly used as an adapter between the existing devices and the product lifecycle information repositories developed during that project.

The PROMISE Project also examined the integration of different types of sensor device and identification technologies. These included barcode readers and different kinds of RFID technologies appropriate for the wide variety of product types studied during the project. Another key conclusion was that, since such technologies tend to evolve and grow at their own pace and new kinds of sensor and connectivity solutions are continually being produced, the application of adapter technologies is more practical than expecting the global adoption of a single interface. Furthermore, in the case of current remanufacturing scenarios and those in the immediate future, much of the technology is of a legacy nature, therefore adapter technologies will continue to be appropriate for some time to come.
The explosion in the number of connected things encouraged by the Internet of Things also needs to be taken into consideration. This includes not only all kinds of stand-alone sensors which can potentially be linked together, but also devices such as smart phones and tablets, which are themselves becoming more and more capable of sensing, data gathering and reporting. Cosm (formerly Pachube) [89] encourages the connection of data-gathering devices to an open infrastructure where information may be stored and shared, and applications can be developed. This is a very interesting approach for applications for the enthusiast or hobbyist, or where there is no issue with the information being in the public domain. But it raises many privacy and data integrity issues when


applied to commercial situations. In general, companies that have information stored on devices in products, or captured from those devices in controlled situations such as maintenance interventions, require that information is kept secure, quite often to preserve competitive advantage. Information is rarely shared outside of a single organisation; therefore it is unlikely that these kinds of information will be widely published. Even in the case of information that may be gathered by a tracking and tracing infrastructure (e.g. EPCIS), the user of the information needs to be sure that the information is from a properly authenticated source and has not been compromised or counterfeited. For the purposes of PREMANUS, access to lifecycle information collected on product-embedded devices needs to be managed in a way which is acceptable to the owning party. Therefore controlled adaptation of the information is the optimum solution.

5.3.2 Maintenance as a Service


In order to make high-quality assessments of products to be remanufactured, PREMANUS also needs access to maintenance information collected in maintenance management systems throughout each product's life. Just like the embedded devices, such systems tend to be closed, proprietary, quite specific to a given industrial sector and often even specific to a product line. A further complication is that much of the information recorded about maintenance events is not necessarily structured in a manner that lends itself easily to automated import into a lifecycle information repository: it might even be hand-written and scanned into the system. However, there are some very good examples of where extensive progress has been made to create an open and structured information standard, for instance the MIMOSA [91] standard for Operations and Maintenance in manufacturing, fleet, and facility environments, and the OASIS Open Building Information Exchange (oBIX) [92]. Nevertheless, standards such as these have been conceived with the focus of improving the management and maintenance of their respective environments during the usage phase, and the delivery of summary, structured maintenance information to be used at the end-of-life phase has not been considered sufficiently. Therefore the extraction of appropriate information to be used in decision making for remanufacturing still mainly demands an adapter approach. In the future, it would be desirable to have maintenance management systems forward more generically structured maintenance event information to a whole-of-life product lifecycle information repository.


6 Business Decision Support System (BDSS)


Within the remanufacturing domain, decision tools and mathematical models have been developed to support issues including:
- EoL option decision making
- Production planning and operational scheduling
- Optimisation and strategic product life cycle decision making.

6.1 End-of-Life Product Recovery Process Eco-Efficiency Evaluator

A variety of EoL and Eco focused algorithms and methods have been developed and are well represented in the literature. Such algorithms and methods address a broad range of topics and cases, but it is important to provide a brief overview in order to define the boundaries of the PREMANUS Eco-Efficiency evaluator. This section provides first an overview of generic optimization methods and then a brief overview of Life Cycle Costing (LCC) and Life Cycle Assessment (LCA). LCC and LCA methods and calculations provide the basis for the development of the Eco-Efficiency Evaluator module of the PREMANUS BDSS developed in WP5.
Figure 19 - Taxonomy of global optimization algorithms [80]

Figure 19 sketches a rough taxonomy of global optimization methods. Generally, optimization algorithms can be divided into two basic classes:


- Deterministic algorithms. These are most often used if a clear relationship exists between the characteristics of the possible solutions and their utility for a given problem. The search space can then be explored efficiently using, for example, a divide-and-conquer scheme. If the relation between a solution candidate and its fitness is not so obvious or too complicated, or the dimensionality of the search space is very high, it becomes harder to solve a problem deterministically; trying to do so would possibly result in exhaustive enumeration of the search space, which is not feasible even for relatively small problems.
- Probabilistic algorithms. In such cases, probabilistic algorithms come into play. An especially relevant family of probabilistic algorithms are the Monte Carlo-based approaches. They trade guaranteed correctness of the solution for a shorter runtime. This does not mean that the results obtained using them are incorrect, but they may not always be the global optima. Often in the real world, a solution a little inferior to the best possible one is preferable to one which takes a very long time to find.
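The probabilistic trade-off can be made concrete with a minimal simulated annealing sketch, one of the physically-inspired meta-heuristics in Figure 19. The objective function, cooling schedule and all parameters below are invented for illustration; the point is that worse candidates are sometimes accepted, with a probability that falls as the "temperature" cools, so the search can escape local optima without any guarantee of global optimality.

```java
import java.util.Random;

// Illustrative simulated annealing sketch minimising f(x) = (x - 3)^2.
public class AnnealingSketch {
    static double f(double x) { return (x - 3) * (x - 3); }

    public static double minimise(long seed) {
        Random rnd = new Random(seed);
        double x = rnd.nextDouble() * 20 - 10;       // random start in [-10, 10]
        double best = x;
        for (double temp = 10.0; temp > 1e-4; temp *= 0.99) {
            double candidate = x + rnd.nextGaussian(); // local random move
            double delta = f(candidate) - f(x);
            // Always accept improvements; accept worsenings with prob e^(-delta/T).
            if (delta < 0 || rnd.nextDouble() < Math.exp(-delta / temp)) {
                x = candidate;
            }
            if (f(x) < f(best)) best = x;            // remember the best ever seen
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(minimise(42)); // close to the true optimum at 3.0
    }
}
```

Shrinking the cooling factor (0.99 here) shortens the run but lowers the chance of landing near the optimum: exactly the accuracy-for-time trade described above.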

Heuristics used in global optimization are functions that help decide which one of a set of possible solutions is to be examined next. Deterministic algorithms usually employ heuristics in order to define the processing order of the solution candidates. A heuristic is a part of an optimization algorithm that uses the information currently gathered by the algorithm to help decide which solution candidate should be tested next or how the next individual can be produced. Heuristics are usually problem-class-dependent. A meta-heuristic, by contrast, is a method for solving very general classes of problems. It combines objective functions or heuristics in an abstract and hopefully efficient way, usually without utilizing deeper insight into their structure. This combination is often performed stochastically, by utilizing statistics obtained from samples of the search space or based on a model of some natural phenomenon or physical process.

An important class of probabilistic Monte Carlo meta-heuristics is Evolutionary Computation. It encompasses all algorithms that are based on a set of multiple solution candidates (called a population) which is iteratively refined. This field of optimization is also a class of Soft Computing as well as a part of the artificial intelligence area. Some of its most important members are evolutionary algorithms and Swarm Intelligence. Besides these nature-inspired and evolutionary approaches, there also exist methods that copy physical processes, such as Simulated Annealing, Parallel Tempering, and the Raindrop Method, as well as techniques without direct real-world role models, such as Tabu Search and Random Optimization. Speed and precision are conflicting objectives, at least in terms of probabilistic algorithms: a general rule of thumb is that improvements in the accuracy of optimization can be achieved only by investing more time [3].

Life Cycle Cost and Life Cycle Assessment are well-known methodologies in the relevant literature.
Both have been developed since the 1960s: LCC represents cradle-to-grave costs, summarized as an economic model for evaluating alternatives for equipment and projects [1]; LCA is a technique to assess the environmental impacts associated with all the stages of a product's life from cradle to grave [2]. For this section, it is interesting to analyse the state of the art of optimization applied to LCC and LCA. 39 papers for LCC and 40 papers for LCA, from the last 15 years, have been analysed, grouping them into three clusters: (i) simple application of the methodology, (ii) use of software, (iii) optimization. The first cluster considers papers that merely apply the methodology (LCC or LCA). The second cluster includes contributions that use software to calculate costs and/or environmental impacts. The third cluster takes into account papers that optimize product life-cycle costs and/or environmental impacts. As a result, only a few papers consider optimization issues: in percentage terms, only 20.51% of the LCC literature deals with optimization, a figure that falls to 10% for LCA. Another interesting point is the massive use of software in LCA, compared to LCC where it is very rare. The use of software in LCA is justified by the increased complexity of the methodology compared


to LCC. The most popular LCA software products are: SimaPro, GaBi Software and LCAiT. Focusing on papers dealing with optimization, common methods are: Linear Programming, Genetic Algorithms and Particle Swarm Optimization.
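The cradle-to-grave comparison at the heart of LCC can be sketched as a net present value calculation over each alternative's cost stream. All figures and the discount rate below are invented for illustration; a real Eco-Efficiency Evaluator would draw them from the KPI framework of D1.4.

```java
// Minimal sketch of the LCC idea: life cycle cost as the net present value
// of cradle-to-grave cost streams (acquisition, operation, end-of-life).
public class LifeCycleCost {
    // Discount a yearly cost stream at the given rate; year 0 is not discounted.
    public static double npv(double rate, double[] yearlyCosts) {
        double total = 0.0;
        for (int year = 0; year < yearlyCosts.length; year++) {
            total += yearlyCosts[year] / Math.pow(1.0 + rate, year);
        }
        return total;
    }

    public static void main(String[] args) {
        // Option A: cheap to acquire, costly to run and to dispose of.
        double a = npv(0.05, new double[] {1000, 300, 300, 300, 200});
        // Option B: dearer to acquire, cheaper to run, resale value at end of life.
        double b = npv(0.05, new double[] {1500, 100, 100, 100, -250});
        System.out.printf("A = %.0f, B = %.0f%n", a, b); // alternatives compared on NPV
    }
}
```

The optimization papers cited above essentially search over many such alternatives (or over continuous design parameters) to minimise this quantity, alone or jointly with an LCA impact score.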

6.2 KPI Optimizer

EoL option decision-making tools have utilised a number of optimisation algorithms in order to select the most appropriate EoL activity based upon economic and environmental criteria. A number of different techniques have been employed by the BDSSs found within the literature: linear programming [94-99]; multi-criteria decision methods including the weighted sum method (WSM) [100], the analytic hierarchy process (AHP) [101], ELECTRE III [102], Grey Relational Analysis [103] and TOPSIS [104,105]; and genetic and evolutionary algorithms [106,107] designed to find Pareto-optimal results.

Operational scheduling and production planning decision tools are aimed at optimising practices associated with remanufacturing such as scheduling decisions, lot sizing and inventory management. The key differentiation between these tools and standard forward manufacturing is the added complexity of uncertainty in the timing and condition of product returns. The review paper by Ilgin and Gupta [108] provides a comprehensive review of the work carried out within this area over the last 10 years. Inventory management tools were the most popular amongst the operational decision tools, accounting for 102 of the 164 papers associated with remanufacturing operational decisions [108]. Although the primary decision factors are economically driven, environmental factors have begun to be considered within these models.

Strategic product life cycle decision-making tools tend to use stochastic simulation in order to gauge the effect of strategic decision making upon a remanufacturing business. The key aspects which have been modelled within these decision tools include the market demand and the rate of product returns. Variable decision parameters can be adjusted to create what-if scenarios, enabling businesses to predict the effects of their decisions upon the system.
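The simplest of the multi-criteria techniques cited above, the weighted sum method (WSM), can be sketched in a few lines. The weights, criteria and option scores below are invented for illustration (both criteria are normalised to [0,1], higher meaning better); in practice they would come from the PREMANUS KPI framework.

```java
// Sketch of the weighted sum method (WSM) for EoL option selection:
// each option is scored on normalised criteria and the weighted sums compared.
public class WeightedSum {
    public static double score(double[] weights, double[] normalisedScores) {
        double s = 0.0;
        for (int i = 0; i < weights.length; i++) {
            s += weights[i] * normalisedScores[i];
        }
        return s;
    }

    public static void main(String[] args) {
        double[] w = {0.6, 0.4};  // illustrative weights: economic, environmental
        double reuse   = score(w, new double[] {0.9, 0.8});
        double recycle = score(w, new double[] {0.5, 0.9});
        double dispose = score(w, new double[] {0.7, 0.1});
        System.out.printf("reuse=%.2f recycle=%.2f dispose=%.2f%n", reuse, recycle, dispose);
        // The highest weighted sum indicates the preferred EoL option.
    }
}
```

Methods such as AHP and TOPSIS refine this basic scheme, deriving the weights from pairwise comparisons or ranking options by distance to an ideal solution, but the underlying aggregation of normalised criteria is the same idea.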
Life cycle simulation enables economic and environmental analysis of internal business decisions such as comparing different returns policies or product design strategies; examples include [109-111]. Other studies have focused upon modelling and simulation of the effects of parameters external to a remanufacturing business, such as competition from independent remanufacturers [112,113].

KPIs provide industry with a type of performance measurement for strategic or operational goals. KPIs enable decision-makers to understand the current status of factors, processes or events which are relevant to the specific business. KPIs relevant to remanufacturing businesses can be structured, according to different literature sources, into three main groups:
- Business Focused KPIs [114]
- Physical Attributed KPIs [115]
- Financially Driven KPIs [116]

Given the importance of KPIs across the PREMANUS project, Deliverable D1.4 will provide the overall framework for the evaluation of a broader set of parameters, such as sustainability aspects, the use of energy, cost, quality and time. Potential End-of-Life (EoL) scenarios, recycling applications, remanufacturing processes and disposal conditions will be determined, taking into account the standardization of parts and the obsolescence of products and components. Such KPIs will then be used in WP5 by the Business Decision Support System to guide users in real-time decisions on re-using, remanufacturing or disposing of a product or its components.


6.3 User Experience

As part of the PREMANUS BDSS, the user needs to be provided with some kind of user interface which presents the KPIs and other data retrieved from the PREMANUS system. This section introduces and discusses some UI technologies that could be suitable for PREMANUS needs. It focuses on web based UI technologies, since the end user's computer will not necessarily be running PREMANUS. Instead, PREMANUS will be deployed on one or more servers for each company and the end user will access the system over the network. The following technologies will be discussed:
- HTML5 (which is powered by JavaScript)
- SAP StreamWork (based on HTML5 and OpenSocial)
- Adobe Flex (now Apache Flex)
- Microsoft Silverlight

6.3.1 HTML 5
HTML or HyperText Markup Language is the most widely used technology for building webpages today. HTML5 is the newest standard and introduces a variety of new features, including many video, audio and animation features. HTML5 is also intended to be used to create mobile and Windows 8 applications and is supported by many different platforms and browsers.

HTML5 is a markup language for structuring and presenting content for the World Wide Web, and is a core technology of the Internet originally proposed by Opera Software38. It is the fifth revision of the HTML standard (created in 1990 and standardized as HTML4 as of 1997)39 and, as of June 2012, is still under development. Its core aims have been to improve the language with support for the latest multimedia while keeping it easily readable by humans and consistently understood by computers and devices (web browsers, parsers, etc.). HTML5 is intended to subsume not only HTML 4, but XHTML 1 and DOM Level 2 HTML as well.39

Following its immediate predecessors HTML 4.01 and XHTML 1.1, HTML5 is a response to the observation that the HTML and XHTML in common use on the World Wide Web are a mixture of features introduced by various specifications, along with those introduced by software products such as web browsers, those established by common practice, and the many syntax errors in existing web documents.40 It is also an attempt to define a single markup language that can be written in either HTML or XHTML syntax. It includes detailed processing models to encourage more interoperable implementations; it extends, improves and rationalises the markup available for documents, and introduces markup and application programming interfaces (APIs) for complex web applications.41

For the same reasons, HTML5 is also a potential candidate for cross-platform mobile applications. Many features of HTML5 have been built with the consideration of being able to run on low-powered devices such as smartphones and tablets.42

38 http://dev.w3.org/html5/spec/introduction.html#history-1
39 http://www.w3.org/TR/2011/WD-html5-diff-20110405/
40 http://validator.w3.org/
41 http://www.w3.org/TR/html5-diff/
42 http://en.wikipedia.org/wiki/HTML5


Project No Date Classification

285541 16-July-12 PU

D2.1 - Inventory Analysis Report

6.3.1.1 JavaScript

JavaScript is a scripting language that is commonly used in conjunction with HTML5 to create more elaborate webpages or user interfaces.

JavaScript (sometimes abbreviated JS) is a prototype-based scripting language that is dynamic, weakly typed and has first-class functions. It is a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles. JavaScript was formalized in the ECMAScript language standard and is primarily used in the form of client-side JavaScript, implemented as part of a web browser in order to provide enhanced user interfaces and dynamic websites. This enables programmatic access to computational objects within a host environment.

JavaScript's use in applications outside web pages, for example in PDF documents, site-specific browsers, and desktop widgets, is also significant. Newer and faster JavaScript VMs and frameworks built upon them (notably Node.js) have also increased the popularity of JavaScript for server-side web applications.

JavaScript uses syntax influenced by that of C. JavaScript copies many names and naming conventions from Java, but the two languages are otherwise unrelated and have very different semantics. The key design principles within JavaScript are taken from the Self and Scheme programming languages.43
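The first-class functions and functional style mentioned above can be illustrated in a few lines. The sketch below uses TypeScript (JavaScript with type annotations); all names are illustrative:

```typescript
// First-class functions: a function can be passed to, and returned
// from, another function.
const twice = (f: (x: number) => number) => (x: number) => f(f(x));
const inc = (n: number) => n + 1;
const addTwo = twice(inc); // addTwo(3) applies inc twice -> 5

// Functional style on data: keep the even numbers, then sum them.
const evenSum = [1, 2, 3, 4]
  .filter(n => n % 2 === 0)
  .reduce((a, b) => a + b, 0); // 2 + 4 = 6
```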

6.3.1.2 Evaluation of HTML5

HTML5 and JavaScript in combination could be one possibility for the UI of the BDSS. HTML5 has the advantage that it runs on any system with a web browser and can be the basis of cross-platform mobile apps. Combined with JavaScript, HTML5 can be used to create very powerful applications.

6.3.2 SAP StreamWork


SAP StreamWork is a web-based collaboration tool which allows users to manage tasks and projects that require extensive communication and joint work with other involved users. Users can share tasks, documents and other types of information. With SAP StreamWork, users can provide trusted information, coordinate people, and monitor discussions to make better decisions based on facts. In SAP StreamWork, users employ notes, documents, email, and a set of professional decision-making tools to help them and other participants make business decisions. SAP StreamWork not only brings together the right information and proven business approaches to streamline discussion, strategizing, and decision-making, but also guides people to take action:
- Move from collaboration to action
- Give structure to discussions
43 http://en.wikipedia.org/wiki/JavaScript


- Create a collective view with embedded business tools
- Make the best decisions by ensuring your group works from the same information
- Collectively analyse with intuitive data exploration and visualization
- Upload and share documents to ensure the entire team is making the best decisions based on the current facts
- Develop hypotheses and analyse with the team using intuitive data exploration and visualization technology.44

Within StreamWork, widgets can be utilized to include different types of data visualizations and tools for users to take advantage of. Widgets in StreamWork can be developed by anyone and are based on OpenSocial, HTML5 and JavaScript.
6.3.2.1 OpenSocial

OpenSocial is the standard on which StreamWork widgets are built. It is based around HTML5 and JavaScript. Building OpenSocial widgets for StreamWork has the added advantage that they are able to run in a number of other applications, for instance iGoogle and the many social networks that adopt the OpenSocial standard.

OpenSocial is a set of APIs for building social applications that run on the web. OpenSocial's goal is to make more apps available to more users, by providing a common API that can be used in many different contexts. Developers can create applications, using standard JavaScript and HTML, which run on social websites that have implemented the OpenSocial APIs. These websites, known as OpenSocial containers, allow developers to access their social information; in return they receive a large suite of applications for their users.

The OpenSocial APIs expose methods for accessing information about people, their friends, and their data, within the context of a container. This means that when running an application on Orkut, users will be interacting with their Orkut friends, while running the same application on MySpace lets them interact with their MySpace friends. For more information on the types of information exposed by the OpenSocial API, see the OpenSocial specification.45
6.3.2.2 Evaluation of SAP StreamWork

SAP StreamWork as a UI technology for the BDSS has the advantage that it is designed as a collaborative approach to decision making. This fits well with the use case of PREMANUS, where the decision about remanufacturing is based not only on the judgement of one manager, but also on the information provided by the engineers disassembling and evaluating the product. The collaborative nature of StreamWork therefore supports these use cases very well. Additionally, StreamWork has the advantage of being freely available to anyone, and its OpenSocial widgets can be used independently of StreamWork, opening the possibility of basing any PREMANUS UI on those widgets without StreamWork itself.

44 https://streamwork.com/help/12Sprints.html
45 Open Social Specification: http://opensocial-resources.googlecode.com/svn/spec/2.0.1/OpenSocialSpecification.xml as of 30.05.2012


6.3.3 Adobe Flex


The Flex SDK enables the creation of Flash-based desktop, mobile and web-based user interfaces. Adobe Flash and Flex are widely used throughout the internet, including popular sites like YouTube. Flex UIs are based on the combination of MXML to describe the UI and ActionScript to implement the UI functionality.

The Flex SDK provides a highly productive, open source framework for building and maintaining expressive web applications that deploy consistently on all major browsers, desktops and operating systems. It provides a modern, standards-based language and programming model that supports common design patterns suitable for developers from many backgrounds. Flex applications run in the ubiquitous Adobe Flash Player and Adobe AIR.

It is possible to use the Flex SDK to create a wide range of highly interactive, expressive applications. For example, a data visualization application built in Flex can pull data from multiple back-end sources and display it visually. Business users can drill down into the data for deeper insight and even change the data and have it automatically updated on the back end. A product configuration application can help customers navigate the process of selecting or customizing products online. And a self-service application can guide customers through an address change or help employees complete an otherwise complicated multi-step benefits enrolment.

In addition to the Open Source Flex SDK, Adobe produces the free Adobe Flex SDK, which contains everything in the Open Source Flex SDK plus useful tools for enhancing the application development experience, such as the debugger versions of the Adobe Flash Player and Adobe AIR runtimes. Adobe also provides a professional IDE, Adobe Flex Builder, for building Flex applications using either the Adobe Flex SDK or the Open Source Flex SDK.46
6.3.3.1 Evaluation of Adobe Flex

While Flex and Flash are currently used in many popular websites, their future is somewhat uncertain, as major companies like Apple and Microsoft appear to prefer HTML5 as the future of the web and exclude Flash from their browsers and devices (Apple from the iPad, and Microsoft from the Windows 8 Metro browser). For this reason, HTML5 is preferred over Flex and Flash for PREMANUS.

6.3.4 Microsoft Silverlight


Microsoft Silverlight is an application framework for writing and running rich Internet applications, with features and purposes similar to those of Adobe Flash. The run-time environment for Silverlight is available as a plug-in for web browsers running under Microsoft Windows and Mac OS X. While early versions of Silverlight focused on streaming media, current versions support multimedia, graphics and animation, and give developers support for CLI languages and development tools. Silverlight is also one of the two application development platforms for Windows Phone. Silverlight provides a retained mode graphics system similar to Windows Presentation Foundation (WPF), and integrates multimedia, graphics, animations and interactivity into a single run-time environment. In Silverlight applications, user interfaces are declared in Extensible Application Markup Language (XAML) and programmed using a subset of the .NET Framework. XAML can be used for marking up the vector graphics and animations. Silverlight can also be used to create

46 http://sourceforge.net/adobe/flexsdk/wiki/About/


Windows Sidebar gadgets for Windows Vista.47


6.3.4.1 Evaluation of Silverlight

While MS Silverlight might be an option for the PREMANUS UI, its lack of support for Android or iOS severely limits its appeal as a platform. Additionally, Microsoft has shifted its strategy regarding Silverlight, refocusing it as a development platform for Windows Phone.48

6.3.5 Conclusion on User Experience


The review shows that only HTML5-based user interfaces can be used across different operating systems, including mobile devices. As the PREMANUS system might also be operated directly on the shop floor, e.g. to enter data, support for mobile devices is important. SAP StreamWork offers the capability to re-use widgets and thus to adapt the PREMANUS UI to the individual needs of a specific scenario in a lightweight manner.

6.4 Task-centric information systems

The requirements analysis has shown that the demand for high flexibility, and the differences between the CRF and SKF use cases, make task-centered approaches appropriate for realizing coordination and control. In the following, task management systems are presented and their integration with execution environments is discussed.

Recent task management systems enable groups to identify and coordinate tasks for all group participants. Web-based tools like Remember The Milk49, Astrid50, CTM [85] and hitask51 are popular examples of this type of tool. However, the integration of managed to-do lists with actual work execution environments is weak. An interesting alternative is the Activities plug-in for Lotus Notes: 'Activities' combines group task management with the integrative approach of Lotus Notes.

6.4.1 ADiWa Workbench


The ADiWa workbench integrates task-based work organization with access to dedicated execution environments. A task list that displays system-generated (e.g. from a business process engine), personal and delegated tasks allows the classification of tasks and access to task-related knowledge that is encapsulated in so-called task patterns. Each task has a dedicated workspace. A workspace is an environment to access small applications, also called widgets. Widgets perform a set of focused functionalities. The idea is that a set of widgets is combined by the user to execute a complex task. Thereby, widget-to-widget communication helps to integrate the functionalities of different widgets to offer a user experience comparable to a closed application.

47 http://en.wikipedia.org/wiki/Microsoft_Silverlight
48 http://www.zdnet.com/blog/microsoft/microsoft-our-strategy-with-silverlight-has-shifted/7834
49 http://www.rememberthemilk.com/
50 http://astrid.com/
51 http://hitask.com/


As an example: A user has the task to reschedule a truck due to a closed airport. The user opens a workspace, selects a map widget, a truck fleet widget and a process widget. He uses the process widget to identify the truck used to execute the respective process. He automatically gets additional information about the truck in the truck fleet widget and sees the position of the truck in the map widget. When he uses the map widget to reconfigure the route, the respective information is automatically reflected in the process and the truck fleet widget.
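The widget-to-widget communication described above can be sketched as a simple publish/subscribe bus shared by the widgets of one workspace. The TypeScript snippet below is a minimal, hypothetical sketch; the class and topic names are invented for illustration and are not the ADiWa API:

```typescript
type Handler = (payload: unknown) => void;

// A minimal event bus that a workspace could hand to each widget.
class WorkspaceBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, h: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(h);
    this.handlers.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    for (const h of this.handlers.get(topic) ?? []) h(payload);
  }
}

// The map widget publishes a rerouting event; the process widget and
// the truck fleet widget react without being coupled to the map widget.
const bus = new WorkspaceBus();
const log: string[] = [];
bus.subscribe("route.changed", p => log.push(`process updated: ${p}`));
bus.subscribe("route.changed", p => log.push(`fleet updated: ${p}`));
bus.publish("route.changed", "truck-42 via Lyon");
```

The design point is loose coupling: each widget only knows the bus and the topics, not the other widgets, which is what allows users to combine widgets freely per task.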

6.4.2 Conclusions
For PREMANUS, a task-centered approach that follows the structure of the ADiWa workbench seems to be useful. The decomposition of work steps into widget functionalities helps to decompose the core work steps and the optional steps into software functionality. The flexibility requirement is realized by the dynamic access to widgets and the possible delegation of tasks based on user decisions. This approach could be realized in a Streamwork environment.

6.5 Natural language query interfaces

PREMANUS offers a query interface to the user. To simplify knowledge specification and querying, natural language interfaces hide the complexity of formal languages from the users. Natural Language Interfaces (NLIs) enable users to interact with a system based on a request in natural language. The language used can be full natural language, a subset, or a controlled vocabulary. Overall, NLIs can be placed on a continuum of formality together with formal languages, as shown in Figure 20.

In the sections below, different NLIs are presented. Overall, NLIs are still a subject of research, and only recently have some products using NLIs been released. The existing projects focus on SPARQL and SQL queries. Studies that compared NLIs at different positions on the continuum have tested queries in full or slightly controlled English, controlled language, and formal query modelling with a graphical user interface [86]. NLIs achieved moderate recall values due to restricted language use, but very high precision values. When the approaches were compared with regard to user satisfaction, a controlled-language approach produced the best results.

In the following, products and projects that deliver NLIs are listed. All listed projects are accessible on the web and can be used according to their licenses.

Figure 20 - Formality Continuum for NLIs
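To make the "controlled language" end of the continuum concrete, the sketch below maps one fixed sentence pattern to SQL. The grammar and function name are invented for illustration; real controlled-language NLIs support far richer grammars, but the principle (high precision, limited coverage) is the same:

```typescript
// Translate "show <field> of products where <field> is <value>" to SQL.
// Anything outside this controlled pattern yields no parse (null).
function toSql(query: string): string | null {
  const m = query.match(/^show (\w+) of products where (\w+) is (\w+)$/);
  if (m === null) return null; // outside the controlled language
  const [, select, field, value] = m;
  return `SELECT ${select} FROM products WHERE ${field} = '${value}'`;
}
```

Note how the restriction to a fixed pattern is exactly what gives controlled-language NLIs their high precision at the cost of recall.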


6.5.1 Products
6.5.1.1 Siri

Siri is a personal assistant application for the operating system iOS. The application uses natural language processing to answer questions and make recommendations.
6.5.1.2 Wolfram Alpha

Wolfram Alpha is a search engine that offers answers to factual queries by computing the answers from structured data.
6.5.1.3 Ubiquity

Ubiquity, an add-on for Mozilla Firefox, is a collection of quick and easy natural-language-derived commands that act as mashups of Web Services, thus allowing users to get information and relate it to current and other web pages. It also allows Web users to create new commands without requiring much technical background.52

6.5.2 Projects
6.5.2.1 C-Phrase

C-Phrase is a web-based natural language front end to relational databases. C-Phrase runs under Linux, connects with PostgreSQL databases via ODBC and supports both select queries and updates. Currently there is only support for English.53 C-Phrase code can be downloaded from the project website. C-Phrase is licensed under the BSD license.
6.5.2.2 Aqualog and Power Aqua

AquaLog is a portable question-answering system which takes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs) which instantiate the input ontology with domain-specific information. [...] AquaLog is portable because its architecture is completely independent from specific ontologies and knowledge representation systems. Given a knowledge representation system, AquaLog can be configured for a particular ontology in a matter of minutes. AquaLog is also portable with respect to knowledge representation, because it uses a modular architecture based on a plug-in mechanism to access information about an ontology, using an OKBC-like protocol.

AquaLog presents an elegant solution in which different strategies are combined together. It makes use of the GATE NLP platform as part of the linguistic process, string metrics algorithms, and a learning mechanism to manage lexical resources, including domain-dependent lexica and generic resources such as WordNet. AquaLog also makes use of a novel ontology-based relation similarity service to make sense of user queries with respect to the target knowledge base.54

52 http://en.wikipedia.org/wiki/Ubiquity_(Firefox)
53 http://code.google.com/p/c-phrase/
54 http://technologies.kmi.open.ac.uk/aqualog/


An extension of AquaLog is PowerAqua55, which answers queries by combining multiple ontologies, e.g. linked data. AquaLog is accessible on the project website and uses the Apache license.
6.5.2.3 ACE View

ACE View is an ontology and rule editor that uses Attempto Controlled English (ACE) in order to create, view, edit and query OWL ontologies and SWRL rulesets.56 ACE View is accessible on the project website. ACE View's license is not specified, but it depends on several LGPL projects.

6.5.3 Conclusion
As it is still unclear what complexity the natural language query interface in PREMANUS will require, it is also too early to decide which of the presented technologies would be most suitable for such an interface. A decision will be made as soon as enough details for the query interface have emerged and will be documented in the Living Architecture Document (D2.4).

55 http://technologies.kmi.open.ac.uk/poweraqua/
56 http://attempto.ifi.uzh.ch/aceview/


7 End of Life Systems

7.1 Product Lifecycle Management

Product Lifecycle Management (PLM) addresses the management of product data. Its history goes back to systems like CAD/CAM, with computer-aided design and computer-aided document management. Modern PLM systems (see also EDM and OFN systems) take an extended view that covers the complete product lifecycle, including elements like portfolio management, marketing, the development of new products, and the disposal of products as well as production facilities. Important vendors of PLM systems are Siemens PLM Software, Dassault Systèmes, Autodesk, PTC, Oracle and SAP.

Forrester analyzed the PLM market in 2008 with the following results: "In the established discrete-manufacturing market, we found that Dassault Systèmes, Siemens PLM, and PTC demonstrate frontrunner leadership due to their strong combination of current offerings and strategy. A pack of ERP players are close at their heels as Strong Performers: Oracle's gained advanced discrete-based and process-based functionality through last year's acquisition of Agile PLM; SAP is pursuing its vision to support end-to-end PLM processes across both industry segments; and IFS is differentiating through specialized processes for engineer-to-order (ETO) manufacturing environments. Infor retains a Strong Performer position in the nascent process industries market, but it has a way to go to compete with the Leaders in the prevailing discrete market." [93]

In this section the following three lifecycle management systems are introduced:
- QLM
- SAP PLM
- OpenPLM

7.1.1 Quantum Lifecycle Management (QLM)


Probably the most significant obstacle to effective, whole-of-life lifecycle management is that valuable information is all too often locked into vertical applications, sometimes called silos, often even within the same organization. This information is not enabled for sharing with other interested parties across the Beginning-of-Life (BOL), Middle-of-Life (MOL) and End-of-Life (EOL) lifecycle phases. PREMANUS has a fundamental need to access, consolidate and exploit information from multiple lifecycle phases and different system types, and QLM has been identified as a key factor in achieving the project's goals.

The Open Group is a global consortium that enables the achievement of business objectives through open IT standards, within which the Quantum Lifecycle Management (QLM) Work Group aims to deliver standards to support an open, secure and trustworthy infrastructure for the exchange and processing of lifecycle management information throughout all lifecycle phases. The formation of The Open Group QLM Work Group is a direct result of the European project PROMISE (FP6-IST project No. IST-2004-507100), which ended in July 2008, and results from that project have been adopted as the basis for the QLM standards.


The QLM Work Group is developing standards and practices for information exchange to bring together not only information from existing systems that relate to lifecycle management, but also to harness the explosion of information from the trillions of objects that characterize the Internet of Things, which can add value to the overall effectiveness of lifecycle management. The Open Group comprises several working groups and forums. To further enhance the management of product lifecycle information, the QLM Work Group is currently cooperating with:
- The Semantic Interoperability (UDEF) Work Group
- The Open Group Trusted Technology Forum (OTTF)
- The Security Forum.

There are three main components of QLM that are of interest to the PREMANUS project:
- The QLM Messaging Interface
- The QLM System Object Model
- The Universal Data Element Framework (UDEF).
Each of these will now be examined in some more detail.
7.1.1.1 The QLM Messaging Interface

One of the goals of the EU PROMISE project was to enable a true Internet of Things to become a reality by, among other things, defining a distributed messaging architecture with standardised communication interfaces for the purpose of product tracking and product data gathering. This was done through the creation of a new messaging interface, called the PROMISE Messaging Interface (PMI) (PROMISE, 2008). The development of PMI has continued within the QLM Work Group of The Open Group, and it has now become the QLM Messaging Interface.

In the QLM world (Figure 21), the communication between the participants, e.g. products and backend systems, is done by passing messages between nodes using the QLM Messaging Interface. The QLM cloud in Figure 21 is intentionally drawn in the same way as is usual for the internet cloud, i.e. the QLM Messaging Interface is intended to play the same role in the Internet of Things as HTTP does for the internet.

A defining characteristic of the QLM Messaging Interface is that nodes do not have predefined roles, as it follows the peer-to-peer approach to communications. That means that products can communicate directly with each other or with back-end servers, but the QLM Messaging Interface can also be used for server-to-server information exchange of sensor data, events and other information. A full QLM node capable of sending as well as receiving requests has to include both client and server functionality, but a more limited node can have just the client functionality, if it is assumed that it will only send messages to other nodes. Examples of such limited nodes are those associated with RFID tag readers or, generally, nodes that are unreachable from the outside because of a firewall, which periodically send product data to a product monitoring system according to a subscription that is specified when the product is installed.
The QLM Messaging Interface defines different operations, such as reading or writing the value of a particular info item. Info items represent actual values, e.g. sensor readings of a device such as a car. A QLM Messaging Interface node is a communications end-point in a QLM network and manages communications for one or several devices. The parameters for the method calls are XML strings whose structure is defined by an XML schema. The XML string conveys additional request


information, such as the involved device, information item, sub-type of request, etc.

Figure 21 - QLM conceptual connectivity

In addition to reads and writes, the QLM Messaging Interface also provides callback methods for asynchronous communications. An example of asynchronous communication is a subscription read: a call to the read method with parameters that specify that the target node should not respond directly with a value, but rather send multiple responses at a specified interval. The callback method interface also provides a mechanism for nodes to send events to each other with or without a prior subscription, subject to the particular node implementation.

QLM Messaging Interface messages are self-contained, so they can be exchanged using HTTP, SOAP, SMTP, FTP or similar protocols. The most appropriate protocol depends on the application. Different protocols also provide their own security mechanisms, which might be important when choosing which one to use.

Despite its background in Product Lifecycle Management (PLM), the QLM Messaging Interface can be applied to virtually any kind of information, i.e. not only physical products but also documents, document repositories etc. Querying for available design documents, subscribing to the addition/deletion/modification of documents, or subscribing to particular change events in design documents is conceptually similar to the corresponding queries and subscriptions for physical products.
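As a rough illustration of the request style described above, the TypeScript sketch below builds a QLM-style read request as a self-contained XML string, with an optional interval that turns it into a subscription read. The element and attribute names here are invented for illustration and are not the normative QLM XML schema:

```typescript
interface ReadRequest {
  device: string;        // target device, e.g. a car
  infoItem: string;      // info item to read, e.g. a sensor value
  intervalSec?: number;  // if set, a subscription read at this interval
}

// Build a hypothetical self-contained XML request string.
function buildReadRequest(r: ReadRequest): string {
  const interval =
    r.intervalSec !== undefined ? ` interval="${r.intervalSec}"` : "";
  return (
    `<qlmEnvelope><read${interval}>` +
    `<device id="${r.device}"><infoItem name="${r.infoItem}"/></device>` +
    `</read></qlmEnvelope>`
  );
}

// A one-shot read and a subscription read of the same info item.
const oneShot = buildReadRequest({ device: "car-17", infoItem: "OilLevel" });
const subscription = buildReadRequest({
  device: "car-17",
  infoItem: "OilLevel",
  intervalSec: 60, // respond every 60 seconds instead of once
});
```

Because the message is a plain self-contained string, it could be carried over HTTP, SOAP or SMTP alike, which mirrors the protocol-independence point made above.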
7.1.1.2 The QLM System Object Model

The QLM SOM (Quantum Lifecycle Management System Object Model) [90] is a data model designed to enable reliable and easily implementable integration of PLM (Product Lifecycle Management) data and knowledge. Therefore it has to be considered as a basic structure to format and represent data to facilitate their exchange among applications and lifecycle phases. The QLM SOM aims to systematically integrate and manage data from all product lifecycle phases, in particular, from design, development, production, through use and maintenance, to recycling, and finally, to the end of life, in order to support comprehensive data analysis in business intelligence applications. The ultimate goal is to integrate product data throughout the entire lifecycle from


different sources, and to support comprehensive analysis of such data, enabling the enhancement of operational businesses through more detailed product insight.
7.1.1.3 The Universal Data Element Framework (UDEF)

The Universal Data Element Framework (UDEF57) is a framework that aims to integrate data across domains while enabling interoperability. It is an enterprise-centric framework for describing data in a manner that improves understandability for enterprise decision makers and simplifies data integration for systems analysts.

Once enterprise data has been indexed with UDEF, the data is enabled for simpler interoperability with any other data that has been indexed with UDEF. The time and effort to integrate data between any two applications or any two data standards indexed with UDEF is substantially reduced, thereby substantially reducing enterprise integration costs. The effort to tag data with UDEF is a one-time effort. Once application system data has been tagged with a UDEF Name and UDEF Identifier, it becomes understandable, discoverable and more interoperable. If the data is both understandable and discoverable, decision makers are able to make more timely and informed decisions.

One can argue that UDEF categories are intuitive. Categories such as Product, Process, Person, Asset, Liability, Enterprise and Document require little additional explanation. Most enterprises throughout the world manage data within these categories.

No single data standard is sufficient to address all data integration requirements for all enterprises. Data standards are typically based on data models that are constrained to specific domains such as product definition, manufacturing, logistics, human resources, finance, health care and procurement. Since there are numerous points of intersection between domains, the need exists to reduce this vocabulary integration effort with a framework that transcends domains. UDEF satisfies this requirement since it is not a data standard but rather a framework capable of integrating data standards.
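The one-time tagging idea can be sketched as follows: each application field is annotated once with a shared UDEF name and identifier, and integration between two systems then reduces to matching on the UDEF identifier instead of on local field names. The identifiers below are made up for illustration (real UDEF IDs are assigned from the published UDEF trees), and the field names are merely plausible examples:

```typescript
interface UdefTag {
  udefName: string; // human-readable UDEF name
  udefId: string;   // illustrative identifier, not a real UDEF ID
}

// Two applications with different local field names, tagged once.
const crmFields: Record<string, UdefTag> = {
  cust_no:   { udefName: "Customer.Identifier", udefId: "a.2_1" },
  prod_code: { udefName: "Product.Identifier",  udefId: "b.4_1" },
};
const erpFields: Record<string, UdefTag> = {
  KUNNR: { udefName: "Customer.Identifier", udefId: "a.2_1" },
  MATNR: { udefName: "Product.Identifier",  udefId: "b.4_1" },
};

// Integration reduces to joining on the shared UDEF identifier.
function matches(
  a: Record<string, UdefTag>,
  b: Record<string, UdefTag>
): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (const [ka, ta] of Object.entries(a))
    for (const [kb, tb] of Object.entries(b))
      if (ta.udefId === tb.udefId) pairs.push([ka, kb]);
  return pairs;
}

const fieldPairs = matches(crmFields, erpFields);
```

This is why the tagging effort pays off pairwise: once N systems are indexed, any two of them can be matched without a bespoke mapping.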

7.1.2 Holonix i-LiKe (intelligent Lifecycle Knowledge)


Based on the results of several research projects, mainly the EU project PROMISE, Holonix has developed a platform to manage products and services based on QLM. It is called i-LiKe: intelligent Lifecycle data and Knowledge. i-LiKe provides services in the context of managing a product along its complete lifecycle. Accordingly, i-LiKe is relevant to the vision of PREMANUS, and its services can be re-used and built upon to contribute to the goals of PREMANUS.

The platform is product-centric, meaning that everything is seen from the perspective of the product, which is followed throughout its whole life. Its aim is to provide a platform to develop and integrate lifecycle management solutions, and thus to be able to follow the product through several phases, integrating data coming from them and thereby enabling improved product management, analysis and use.

It is developed following a modular approach; some modules follow the product during the whole lifecycle, others are phase- or process-specific. The modules currently targeted are shown in Figure 22. Considering the complexity of the topic, the platform is under continuous development; not all the modules are currently fully implemented, and it is planned that functionality will continuously increase over time, also through partnerships with other solution providers, research centres etc. (e.g. the design module is based on already existing PLM platforms; the warehousing module can exploit functionalities of voice-picking systems, etc.).
57 UDEF - http://www3.opengroup.org/subjectareas/si/udef


Figure 22 - i-LiKe platform service

7.1.2.1 Technical background/APIs for usage and extensions

The i-LiKe platform is developed in Java; it has Java-based web interfaces (both for management and operations) and interfaces in C# (for operations with industrial PDAs). The database is MySQL. Interaction through Web Services is currently under development. The platform interaction module has been designed to interact easily through QLM and UDEF. An item's traceability can be achieved using RFIDs or other identification technologies (e.g. IMEI for phones, MAC addresses for computers).
7.1.2.2 Summary

The Holonix i-LiKe platform, with its item-centric approach and currently available modules, can manage data and information coming from operations both during the Beginning of Life (BoL: production, logistics) and the Middle of Life (MoL: maintenance, usage data, etc.), thus providing already established tools (interfaces, procedures etc.) to manage these processes. This means that it can offer PREMANUS a solid link to these operations data, enabling their usage and analysis during the End of Life phase as well as fostering integration with remanufacturing operations. PREMANUS partner POLIMI owns the licenses to use Holonix's i-LiKe platform. It has been clarified that PREMANUS can use i-LiKe on the basis of this license.

7.1.3 SAP PLM


SAP PLM comprises a large variety of elements to provide 360-degree support for all product-related processes, from idea management through manufacturing to services. Of specific importance is its integration into the existing IT landscape, especially the ERP connection; as a result, complex product models including variants can be developed and maintained within the system. The SAP Product Lifecycle Management (SAP PLM) application provides users with 360-degree support for all product-related processes - from the first product idea, through manufacturing, to product service. SAP PLM is part of the SAP Business Suite, which gives organizations the unique ability to perform their essential business processes with modular software that is designed to work with other SAP and non-SAP software. Organizations and departments in all sectors can deploy SAP Business Suite software to address specific business challenges on their own timelines and without costly upgrades.58 The SAP PLM system consists of many different components, for example:

- cFolders and cProjects
- SAP Document Management
- SAP Product Structure
- SAP Project System
- SAP Specification and Recipe Management
- SAP PPM (Portfolio and Project Management)

SAP PLM is a solution of considerable depth, addressing all required functionality. Still, it focuses on single stakeholders: a company takes care of its own product lifecycle management without integrating additional stakeholders in the respective processes.

7.1.4 OpenPLM
The system allows modelling product lifecycles based on the objects part, document and group. Each object can have a lifecycle status, and objects can have hierarchical relations to other objects, which creates bills of materials. The user can perform the activities search, navigate, create and study. Variant management and integration with existing bills of materials is a complex task in OpenPLM, given only these four activities. OpenPLM is a product-oriented open source PLM: a product-oriented Product Lifecycle Management unifies all activities of the company in an ECM which structures data around the product. OpenPLM features a full web-based, user-friendly interface. It is written in Python using the Django framework, and uses proven open source software such as Apache and PostgreSQL. It is free software, mostly developed by LinObject, and can be used, modified and distributed under the terms of the GNU General Public License v3 (GPLv3).59 OpenPLM structures basic PLM data for single stakeholders and, as open source, enables extensive customization and extension, given the necessary development effort.
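The part/document data model described above, in which hierarchical relations between objects form a bill of materials, can be sketched as follows. This is an illustrative toy model under our own naming, not OpenPLM's actual classes or API:

```python
# Illustrative sketch (not OpenPLM's real data model): parts with a
# lifecycle status and hierarchical parent-child links, from which a
# bill of materials can be derived.

class Part:
    def __init__(self, name, state="draft"):
        self.name = name
        self.state = state      # lifecycle status of the object
        self.children = []      # hierarchical relations: (child, quantity)

    def add_child(self, part, quantity=1):
        self.children.append((part, quantity))

    def bom(self, level=0):
        """Flatten the part hierarchy into an indented bill of materials."""
        lines = [(level, self.name)]
        for child, qty in self.children:
            lines.extend(child.bom(level + 1))
        return lines

# A hypothetical engine assembly: engine -> cylinder block -> pistons.
engine = Part("engine")
block = Part("cylinder_block")
block.add_child(Part("piston"), 4)
engine.add_child(block)
```

Calling `engine.bom()` walks the relations top-down and yields the multi-level structure, which is exactly the "hierarchical relations create bills of materials" behaviour the text describes.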

7.1.5 Conclusion
SAP PLM and OpenPLM present themselves as monolithic PLM solutions, sometimes considered an extension of an ERP system. They are thus solutions for a single stakeholder, treating the lifecycle as a single company's resource. QLM goes beyond this perspective, as it considers PLM a process that involves multiple stakeholders.

58 http://wiki.sdn.sap.com/wiki/display/PLM/SAP+Product+Lifecycle+Management+(PLM)
59 http://www.openplm.org/


QLM integrates sensor data and real-time 'lifecycle event data' into PLM, and allows this information to be made available to the different players in the total lifecycle of an individual product (closing the information loop). This has resulted in the extension of PLM into closed-loop lifecycle management (CL2M). One aspect of this is the UDEF integration. UDEF is a framework to integrate different data standards for improved and simpler interoperability among data sets from different sources with different data models. An important challenge in the PREMANUS project is integrated access to different data sources, which is addressed by the UDEF integration in QLM. In contrast, SAP PLM and OpenPLM do not provide comparable integration features; the Holonix i-LiKe platform, however, does. In summary, we conclude that the Holonix i-LiKe platform would be a valuable contribution to PREMANUS as the basis for the PREMANUS data model and product information store.
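The interoperability idea behind a framework like UDEF can be illustrated with a short sketch: two sources use different field names for the same concept, and a shared concept identifier lets their records be compared without bilateral mappings. The concept IDs, field names and records below are invented for illustration and are not real UDEF identifiers:

```python
# Hedged sketch of concept-ID-based integration in the style of UDEF.
# Each source maps its local field names to shared concept IDs
# (all identifiers here are made up).

SOURCE_A_MAP = {"serialNo": "concept.12.4", "mfgDate": "concept.7.2"}
SOURCE_B_MAP = {"serial_number": "concept.12.4", "production_date": "concept.7.2"}

def to_concepts(record, mapping):
    """Re-key a record from source-local field names to shared concept IDs."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a = to_concepts({"serialNo": "X-100", "mfgDate": "2011-03-01"}, SOURCE_A_MAP)
b = to_concepts({"serial_number": "X-100", "production_date": "2011-03-01"}, SOURCE_B_MAP)

# After re-keying, both sources describe the product in identical terms.
merged_ok = (a == b)
```

With N sources, each source maintains one mapping to the shared concept vocabulary instead of N-1 pairwise mappings, which is the simplification this kind of framework offers.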

7.2 End-of-Life Product Management Systems

Business software focusing on the end-of-life of products does not exist in many variations today. Such software mainly focuses on the administration and reporting of the product disposal process; the operations of such a process are typically supported by manufacturing execution systems. As an example, we discuss here SAP ERP Recycling Administration (SAP ERP REA).

7.2.1 SAP ERP Recycling Administration


7.2.1.1 Introduction

In addition to the standard manufacture and sale of products, companies are now often responsible for the correct disposal and recycling of their waste products. Most companies absolve themselves of the duty to perform these tasks themselves by collaborating with recycling partners such as Duales System Deutschland (DSD), who provide a general collection system in return for a license fee. The recycling partner organizes the collection, sorting, processing, and recycling of the waste packaging. The focus of the Recycling Administration (REA) component is the item-based or weight-based fee calculation for specific materials, as well as end-to-end transparency and implementation of the legal reporting requirements of environmental authorities. REA supports users when entering, managing, and billing the necessary recycling data.60
7.2.1.2 Features

Determination of the most price-effective recycling partner: REA provides functions to analyze the data material (for example, condition and price analyses), which you can use to compare the performance of different recycling partners.

Automatic generation of forms for the selected recycling partner according to requirements regarding the data to be declared: REA can issue the data to be declared either electronically in the form of data medium exchange (DME) or on paper as a form. For the declaration, REA first generates a document

60 SAP Documentation for ERP Recycling Administration


that is posted in the REA declaration system. Users can then generate the required documents from this document in accordance with the requirements of the recycling partners.

Certified verification: REA fulfils country-specific reporting requirements with its recycling partners. In periodical declarations, users state the information and totals for all declared articles as a quantity flow in the period under consideration. REA enables certified verification at the level of individual documents and items.

Use of the material information available in the standard SAP system: REA accesses existing master data in the standard SAP system and enhances it. This allows for optimal integration with the standard SAP functions.

Automatic determination of quantity flow from Sales and Distribution (SD) and Materials Management (MM): REA automatically obtains important information for generating the declaration from Sales and Distribution (SD) and Materials Management (MM).

Cost control and controlling with Sales and Distribution (SD), Materials Management (MM), Financial Accounting (FI), and Controlling (CO): In Financial Accounting (FI), you enter a recycling partner as a vendor. In Controlling (CO), a budget is generated for settlement with recycling partners on the basis of the data processed in REA. In Sales and Distribution (SD), users can evaluate the corresponding billing documents.61
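REA's core function, as stated in the introduction above, is the item-based or weight-based fee calculation for specific materials. A toy Python sketch of that calculation logic (the materials and fee rates below are invented examples, and this is in no way REA's actual interface):

```python
# Illustrative sketch of item-based vs. weight-based recycling fee
# calculation. Materials and rates are made-up examples.

ITEM_FEES = {"bottle_PET": 0.02}       # EUR per declared item
WEIGHT_FEES = {"cardboard": 0.15}      # EUR per kg of declared material

def recycling_fee(material, items=0, weight_kg=0.0):
    """Fee for one declared article: item-based or weight-based,
    depending on how the material's rate is defined."""
    if material in ITEM_FEES:
        return round(ITEM_FEES[material] * items, 2)
    if material in WEIGHT_FEES:
        return round(WEIGHT_FEES[material] * weight_kg, 2)
    raise KeyError(f"no fee rate defined for {material}")

# A hypothetical declaration: 1000 PET bottles plus 12.5 kg of cardboard.
fee = recycling_fee("bottle_PET", items=1000) + recycling_fee("cardboard", weight_kg=12.5)
```

The declaration forms REA generates then aggregate such per-material fees over the reporting period as a quantity flow.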
7.2.1.3 Evaluation

SAP Recycling Administration is aimed at companies that want to turn their recycling over to third parties, rather than at administering the actual recycling process or remanufacturing. As such, it has only marginal relevance to the PREMANUS project. The ability to compare service providers and their suitability for a company's recycling needs would, however, be worth investigating further. Especially in the CRF use case, such a service could be appropriate for those engines whose remanufacturing is not economically viable.

7.2.2 Conclusion End-of-Life Product Management Systems


As an example of a typical End-of-Life Product Management System, SAP ERP Recycling Administration (REA) has been reviewed. It provides administration and reporting capabilities that help companies comply with recycling regulations. However, SAP ERP REA does not support remanufacturing decisions but rather reports on the results of such decisions. Recycling administration systems of this kind therefore do not provide the functionality required to build PREMANUS; they report on the results of the overall remanufacturing process that PREMANUS influences.
61 SAP Documentation for ERP Recycling Administration


8 Conclusions
The current state of the art has been reviewed and discussed for all technologies relevant to PREMANUS. In some cases, clear recommendations for technologies to be used within PREMANUS have been made. In other cases the choice remains open for the time being, dependent on the gathering of further detailed requirements which will emerge in the process of refining the architecture and implementation. Some of the notable technologies which will be used in PREMANUS are:

- An ONS-based ID standard (e.g. EPC) for ID management and the ID information service
- RESTful web services and OData as communication technologies between the elements of the distributed PREMANUS architecture
- TIE SmartBridge as the enterprise service bus to be used for building the semantic service bus
- TIE Semantic Integrator as a tool on TIE SmartBridge to provide semantic services
- HTML5-based user interfaces in order to provide device independence
- Holonix i-LiKe as an implementation of the QLM standard and the basis for the BDSS data storage and data model

In other cases, no decision has yet been finalised, although clear favourites have usually emerged. For example, StreamWork is very likely to be the UI technology choice for PREMANUS; however, since the extent and details of the UI are not yet clear, this choice is subject to change. For security and access control, the detailed requirements are not clear at this stage of the project, so no recommendation can be concluded here. These final choices regarding specific technologies and their role within the project, some of which are out of the scope of this document, will be detailed in the living architecture document (D2.4).
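To make the choice of RESTful web services and OData as inter-component communication technologies concrete, the following hedged Python sketch shows what a product-information query between PREMANUS components might look like. The service URL, entity set, field names and the canned response are all hypothetical, and no live call is made:

```python
# Hypothetical sketch of an OData-style product-information query.
# Endpoint, entity set and fields are invented for illustration.
import json
from urllib.parse import urlencode

BASE = "https://premanus.example.org/odata/Products"

def build_query(product_id, fields):
    """Build an OData URL selecting specific fields of one product entity."""
    params = urlencode({"$select": ",".join(fields), "$format": "json"})
    return f"{BASE}('{product_id}')?{params}"

url = build_query("ENG-4711", ["SerialNumber", "OperatingHours"])

# Stand-in for the HTTP response body a real service would return.
canned_response = json.dumps({"d": {"SerialNumber": "ENG-4711",
                                    "OperatingHours": 8423}})
payload = json.loads(canned_response)["d"]
```

Because OData rides on plain HTTP and JSON, any component in the distributed architecture can consume such a service without platform-specific client libraries, which is the main attraction noted in the conclusions above.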


9 Appendix A: References
1. Aktacir, M. A., Buyucalaka, O., Yilmaz, T.: Life-cycle cost analysis for constant-air-volume and variable-air-volume air-conditioning systems. In: Applied Energy 83 (2006) 606-627 2. Ally, J., Pryor, T.: Life-cycle assessment of diesel, natural gas and hydrogen fuel cell bus transportation systems. In: Journal of Power Sources 170 (2007) 401-411 3. Ardente, F., Beccali, M., Cellura, M., Lo Brano, V.: Energy performances and life cycle assessment of an Italian wind farm. In: Renewable and Sustainable Energy Reviews 12 (2008) 200-217 4. Arpke, A., Strong, K.: A comparison of life cycle cost analyses for a typical college dormitory using subsidized versus full-cost pricing of water. In: Ecological Economics 58 (2006) 66-78 5. Azapagic, A., Clift, R.: Life Cycle Assessment and Linear Programming -Environmental Optimisation of Product System. In: Computers chemical Enginering 19 (1995) 229-234 6. Azapagic, A., Clift, R.: Life cycle assessment and multiobjective optimisation. In: Journal of Cleaner Production 7 (1998) 135-143 7. Baquero, G., Esteban, B., Riba, J., Rius, A., Puig, R.: An evaluation of the life cycle cost of rapeseed oil as a straight vegetable oil fuel to replace petroleum diesel in agriculture. In: Biomass and Bioenergy 35 (2011) 3687-3697 8. Barringer, H. P.: A Life Cycle Cost Summary. In: ICOMS 2003 (2003) 9. Beccali, M., Cellura, M., Iudicello, M., Mistretta, M.: Life cycle assessment of Italian citrus-based products. Sensitivity analysis and improvement scenarios. In: Journal of Environmental Management 91 (2010) 1415-1428 10. Bovea, M. D., Cabello, R., Querol, D.: Comparative Life Cycle Assessment of Commonly Used Refrigerants in Commercial Refrigeration Systems. In: The International Journal of Life Cycle Assessment Volume 12 (2007) 299-307 11. 
Brentrup, F., Kusters, J., Kuhlmann, H., Lammel, J.: Application of the Life Cycle Assessment methodology to agricultural production: an example of sugar beet production with different forms of nitrogen fertilisers. In: European Journal of Agronomy 14 (2001) 221-233 12. Butry, D. T., Chapman, R. E., Huang, A. L., Thomas, D. S.: A Life-Cycle Cost Comparison of Exit Stairs and Occupant Evacuation Elevators in Tall Buildings. In: Fire Technology (2010) 1-18 13. Canova, A., Profumo, F.: LCC Design Criteria in Electrical Plants Oriented to the Energy Saving. In: IEEE Transactions on Industry Applications 39 (2002) 53-58 14. Cattaneo, E.: LOttimizzazione della Progettazione tramite il Life-Cycle Cost. Thesis (2009) 129 15. Chen, X., Shao, J., Tain, Z.: Family Cars' Life Cycle Cost (LCC) Estimation Model based on the Neural Network Ensemble. In: International Federation for Information Processing (IFIP) Volume 207 (2006) 610-618 16. Cieslak, M.: Life cycle costs of pumping stations. In: WORLD PUMPS 2008 n 505 (2008) 30-33 17. Dai, D., Leng, R., Zhang, C., Wang, C.: Using hybrid modelling for life cycle assessment of motor bike and electric bike. In: Journal of Central South University of Technology Volume 12 (2005) 77-80 18. Das, S.: Life cycle assessment of carbon fiber-reinforced polymer composites. In: The International Journal of Life Cycle Assessment Volume 16 (2011) 268-282 19. Davis, J., Sonesson, U.: Life cycle assessment of integrated food chains - a Swedish case study of two chicken meals. In: The International Journal of Life Cycle Assessment Volume 13 (2008) 574-584 20. Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. Evolutionary Computation. In: IEEE Transactions 6 (2002) 182-197. 21. Dobon, A., Cordero, P., Kreft, F., Ostergaard, S. R., Robertsson, M., Smolander, M., Hortal, M.: The sustainability of communicative packaging concepts in the food supply chain. A case study: part 1. 
Life cycle assessment. In: The International Journal of Life Cycle Assessment Volume 16 (2011) 168-177 22. Dufo-Lopez, R., Bernal-Agustin, J. L., Yusta-Loyo, J. M., Dominguez-Navarro, J. A., Ramirez-Rosado, I. J., Lujano, J., Aso, I.: Multi-objective optimization minimizing cost and life cycle emissions of stand-alone PVwinddiesel systems with batteries storage. In Applied Energy 88 (2011) 4033-4041 23. Eide, M. H., Life Cycle Assessment (LCA) of Industrial Milk Production. In: The International Journal of Life Cycle Assessment Volume 7 (2002) 115-126 24. Ekman, A., Borjesson, P.: Life cycle assessment of mineral oil-based and vegetable oil-based hydraulic fluids including comparison of biocatalytic and conventional production methods. In: International Journal of Life Cycle Assessment Volume 16 (2011) 297-305 25. Fleischer, J., Wawerla, M., Niggeschmidt, S.: Machine Life Cycle Cost Estimation via Monte-Carlo Simulation. In: 4th CIRP Conference on Life Cycle Engineering (2007) 449-453 26. Folgado, R., Peas, P., Henriques, E.: Life cycle cost for technology selection: A Case study in the manufacturing of injection moulds. In: Int. J. Production Economics 128 (2010) 368-378 27. Frangopol, D. M., Liu, M.: Multiobjective Optimization for Risk-based Maintenance and Life-Cycle Cost of civil infrastructure systems. In: IFIP International Federation for Information Processing Volume 199 (2006) 123-137


28. Gitzel, R., Herbort, M.: Optimizing life cycle cost using genetic algorithms. In: Journal of cost management Vol. 22 (2008) 34-47 29. Goedecke, M., Therdthianwong, S., Gheewala, S. H.: Life cycle cost analysis of alternative vehicles and fuels in Thailand. In: Energy Policy 35 (2007) 3236-3246 30. Gorre, M., Guine, J. B., Huppes, G., van Oers, L.: Environmental Life Cycle Assessment of Linoleum. In: The International Journal of Life Cycle Assessment Volume 7 (2002) 158-166 31. Gustavsson, J.: Software Programme that calculates the Life Cycle Cost of Air Filters. In: Filtration & Separation Volume 39 (2002) 22-26 32. Halleux, H., Lassaux, S., Renzoni, R., Germain, A.: Comparative Life Cycle Assessment of Two Biofuels Ethanol from Sugar Beet and Rapeseed Methyl Ester. In: The International Journal of Life Cycle Assessment Volume 13 (2008) 184190, 2008 33. Hellgren, J.: Life cycle cost analysis of a car, a city bus and an intercity bus powertrain for year 2005 and 2020. In: Energy Policy 35 (2007) 39-49 34. Hennecke, F. W.: Life cycle costs of pumps in chemical industry. In: Chemical Engineering and Processing 38 (1999) 511-516 35. Hinow, M., Mevissen, M.: Substation Maintenance Strategy Adaptation for Life-Cycle Cost Reduction Using Genetic Algorithm. In: IEEE Transactions on Power Delivery Volume 26 (2011) 197-204 36. Hong, T., Han, S., Lee, S.: Simulation-based determination of optimal life-cycle cost for FRP bridge deck panels. In: Automation in Construction 16 (2007) 140-152 37. Hussain, M. M., Dincer, I., Li, X.: A preliminary life cycle assessment of PEM fuel cell powered automobiles. In: Applied Thermal Engineering 27 (2007) 2294-2299 38. Jeong, K. S., Oh, B. S.: Fuel economy and life-cycle cost analysis of a fuel cell hybrid vehicle. In: Journal of Power Sources 105 (2002) 58-65 39. Jungbluth, N., Bauer, C., Dones, R., Frischknecht, R.: Life Cycle Assessment for Emerging Technologies: Case Studies for Photovoltaic and Wind Power. 
In: The International Journal of Life Cycle Assessment Volume 10 (2005) 24-34 40. Kaveh, A., Laknejadi, K., Alinejad, B.: Performance-based multi-objective optimization of large steel structures. In: Acta Mechanica Volume 223 (2011) 355-369 41. Kim, S., Hwang, T., Overcash, M.: Life Cycle Assessment Study of Color Computer Monitor. In: The International Journal of Life Cycle Assessment Volume 6 (2001) 35-43 42. Kim, G., Kim, K., Lee, D., Han, C., Kim, H., Jun, J.: Development of a life cycle cost estimate system for structures of light rail transit infrastructure. In: Automation in Construction 19 (2009) 308-325 43. Kim, S., Jimenez-Gonzalez, C., Dale, B. E.: Enzymes for pharmaceutical applications - a cradle-to-gate life cycle assessment. In: The International Journal of Life Cycle Assessment Volume 14 (2009) 392-400 44. Koornneef, J., van Keulen, T., Faaij, A., Turkenburg, W.: Life cycle assessment of a pulverized coal power plant with post-combustion capture, transport and storage CO2. In :International journal of greenhouse gas control 2 (2008) 448467 45. Koroneos, C., Dompros, A., Roumbas, G., Moussiopoulos, N.: Life Cycle Assessment of Kerosene Used in Aviation. In: The International Journal of Life Cycle Assessment Volume 10 (2005) 417-424 46. Kornelakis, A.: Multiobjective Particle Swarm Optimization for the optimal design of photovoltaic grid-connected systems. In: Solar Energy 84 (2010) 2022-2033 47. Kumakura, Y., Sasajima, H.: A consideration of Life Cycle Cost of a Ship. In: Proceedings of the Eighth International Symposium on Practical Design of Ships and Other Floating Structures (2001) 29-35 48. Kumar, S., Tiwari, G. N.: Life cycle cost analysis of single slope hybrid (PV/T) active solar still. In: Applied Energy 86 (2009) 1995-2004 49. Lee, J., Yoo, M., Cha, K., Lim, T. W., Hur, T.: Life cycle cost analysis to examine the economical feasibility of hydrogen as an alternative fuel. In: International journal of hydrogen energy 34 (2009) 4243-4255 50. 
Liu, H., Gopalkrishnan, V., Quynh, K., Ng, W.: Regression models for estimating product life cycle cost. In: Journal of Intelligent Manufacturing Volume 20 (2009) 401-408 51. Lo, S., Ma, H., Lo, S.: Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method. In: Science of the Total Environment 340 (2005) 23-33 52. Mangena, S. J., Brent, A. C.: Application of a Life Cycle Impact Assessment framework to evaluate and compare environmental performances with economic values of supplied coal products. In: Journal of Cleaner Production 14 (2006) 1071-1084 53. Marszal, A. J., Heiselberg, P.: Life cycle cost analysis of a multi-storey residential Net Zero Energy Building in Denmark. In: Energy 36 (2011) 5600-5609 54. McCleese, D. L., LaPuma, P. T.: Using Monte Carlo Simulation in Life Cycle Assessment for Electric and Internal Combustion Vehicles. In: The International Journal of Life Cycle Assessment Volume 7 (2002) 230-236 55. Meyer, D. E., Curran, M. A., Gonzalez, M. A.: An examination of silver nanoparticles in socks using screening-level life cycle assessment. In: Journal of Nanoparticle Research Volume 13 (2011) 147-156


56. Mil i Canals, L., Burnip, G. M., Cowell, S. J.: Evaluation of the environmental impacts of apple production using Life Cycle Assessment (LCA): Case study in New Zealand. In: Agriculture, Ecosystems and Environment 114 (2006) 226238 57. Moberg, A., Johansson, M., Finnveden, G., Jonsson, A.: Printed and tablet e-paper newspaper from an environmental perspective - A screening life cycle assessment. In: Environmental Impact Assessment Review 30 (2010) 177-191 58. Morrissey, J., Horne, R. E.: Life cycle cost implications of energy efciency measures in new residential buildings. In: Energy and Buildings 43 (2011) 915-924 59. Munoz, I., Peral, J., Ayllon, J. A., Malato, S., Passarinho, P., Domenech, X.: Life cycle assessment of a coupled solar photocatalyticbiological process for wastewater treatment. In: Water Research 40 (2006) 3533-3540 60. Ntiamoah, A., Afrane, G.: Environmental impacts of cocoa production and processing in Ghana: life cycle assessment approach. In: Journal of Cleaner Production 16 (2008) 1735-1740 61. Okasha, N. M., Frangopol, D. M.: Lifetime-oriented multi-objective optimization of structural maintenance considering system reliability, redundancy and life-cycle cost using GA. In: Structural Safety 31 (2009) 460-474 62. Perzon, M., Johansson, K., Froling, M.: Life Cycle Assessment of District Heat Distribution in Suburban Areas Using PEX Pipes Insulated with Expanded Polystyrene. In: The International Journal of Life Cycle Assessment Volume 12 (2007) 317-327 63. Phumpradab, K., Gheewala, S. H., Sagisaka, M.: Life cycle assessment of natural gas power plants in Thailand. In: The International Journal of Life Cycle Assessment Volume 14 (2009) 354-363 64. Puri, P., Compston, P., Pantano, V.: Life cycle assessment of Australian automotive door skins. In: The International Journal of Life Cycle Assessment Volume 14 (2009) 420-428 65. 
Rafaschieri, A., Rapaccini, M., Manfrida, G.: Life Cycle Assessment of electricity production from popular energy crops compared with conventional fossil fuels. In: Energy Conversion & Management 40 (1999) 1477-1493 66. Ribeiro, C., Ferreira, J. V., Partidario, P.: Life Cycle Assessment of a Multi-Material Car Component. In: The International Journal of Life Cycle Assessment Volume 12 (2007) 336-345 67. Scientific Applications International Corporation: Life Cycle Assessment: Principles and Practice, Technical Report (2006) 80 68. Sarma, K. C., Adeli, H.: Life-cycle cost optimization of steel structures. In: International Journal for Numerical Methods in Engineering Volume 55 (2002) 1451-1462 69. Savi, D. A., Bicik, J., & Morley, M. S.: A DSS Generator for Multiobjective Optimisation of Spreadsheet-Based Models. In: Environmental Modelling and Software 26 (2011) 551-561 70. Schmidt, J. H.: Comparative life cycle assessment of rapeseed oil and palm oil. In: The International Journal of Life Cycle Assessment Volume 15 (2010) 183-197 71. Seo, K. K., Park, J. H., Jang, D. S., Wallace, D.: Approximate Estimation of the Product Life Cycle Cost Using Articial Neural Networks in Conceptual Design. In: International Journal of Advanced Manufacturing Technology 19 (2002) 461-471 72. Seo, K. K.: A Methodology for Estimating the Product Life Cycle Cost Using a Hybrid GA and ANN Model. In: Lecture Notes in Computer Science Volume 4131 (2006) 386-395 73. Seo, K. K., Kim, W. K.: Approximate Life Cycle Assessment of Product Concepts Using a Hybrid Genetic Algorithm and Neural Network Approach. In: Lecture Notes in Computer Science Volume 4413 (2007) 258-268 74. Silalertruksa, T., Bonnet, S., Gheewala, S. H.: Life cycle costing and externalities of palm oil biodiesel in Thailand. In: Journal of Cleaner Production Volume 28 (2011) 225-232 75. Sorapipatana, C., Yoosin S.: Life cycle cost of ethanol production from cassava in Thailand. 
In: Renewable and Sustainable Energy Reviews 15 (2011) 1343-1349 76. Suwanit, W., Gheewala, S. H.: Life cycle assessment of mini-hydropower plants in Thailand. In: The International Journal of Life Cycle Assessment Volume 16 (2011) 849-858 77. Valan Arasu, A., Sornakumar, T.: Life cycle cost analysis of new FRP based solar parabolic trough collector hot water generation system. In: Journal of Zhejiang University - Science A Volume 9 (2008) 416-422 78. Vendrusculo, E. A., de Castilho Queiroz, G., De Martino Jannuzzi, G., da Silva Jnior, H. X., Pomilio, J. A.: Life cycle cost analysis of energy efficiency design options for refrigerators in Brazil. In: Energy Efficiency 2 (2009) 271-286 79. Wang, K., Dai, L., Myklebust, O.: Applying Particle Swarm Optimization (PSO) in Product Life Cycle Cost Optimization. In: IPROMS (2009) 6 80. Weise, T.: Global Optimization Algorithms Theory and Application. In: Self Published (2009) 820 81. Wong, N. H., Tay, S. F., Wong, R., Ong, C. L., Sia, A.: Life cycle cost analysis of rooftop gardens in Singapore. In: Building and Environment 38 (2003) 499-509, 2003 82. Wong, J. S., Scanlan, J. P., Eres, M. H.: Modelling the Life Cycle Cost of Aero-engine Maintenance. In: Collaborative Product and Service Life Cycle Management for a Sustainable World (2008) 233-240 83. Xu, Y., Wang, J., Tan, X., Curran, R., Raghunathan, S., Doherty, J., Gore, D.: A Generic Life Cycle Cost Modeling Approach for Aircraft System. In: Collaborative Product and Service Life Cycle Management for a Sustainable World (2008) 251-258, 2008


84. Zufia, J., Arana, L.: Life cycle assessment to eco-design food products: industrial cooked dish case study. In: Journal of Cleaner Production 16 (2008) 1915-1921 85. Stoitsev, T., Scheidl, S., Flentge, F., Muhlhauser, M.: Enabling end-user driven business process composition through programming by example in a Collaborative Task management system. 2008 IEEE Symposium on Visual Languages and Human-Centric Computing. 157-165 (2008). 86. Kaufmann, E.: Talking to the Semantic Web: Natural language query interfaces for casual end-users, http://www.ifi.uzh.ch/pax/uploads/pdf/publication/1384/Kaufmann_2007.pdf, (2008). 87. Vollmer, K.: The Forrester WaveTM: Enterprise Service Bus, Q2 2011, (2012). 88. European PROMISE Project (FP6-IST project No. IST-2004-507100). http://www.promise.no/ 89. Cosm (formely Pachube): https://cosm.com 90. Cassina J., Tomasella M., Taisch M., Matta A.; A new closed-loop PLM Standard for mass products. IJPD International Journal of Product Development 2009 - Vol. 8, No.2 141 161/ 91. MIMOSA: http://www.mimosa.org/ 92. oBIX: http://www.obix.org/ 93. Wildemand, Roc C., The Forrester Wave: Product Life-Cycle Management Applications, Q2 2008, Forrester Research, Inc. 2008 94. Krikke H,R, van Harten A, Schuur P,C. - On a medium term product recovery and disposal strategy for durable assembly products. - International Journal of Production Research 1998(- 1):- 111. 95. Teunter RH. Determining optimal disassembly and recovery strategies. Omega 2006 12;34(6):533-537. 96. Willems B, Dewulf W, Duflou JR. Can large-scale disassembly be profitable? A linear programming approach to quantifying the turning point to make disassembly economically viable. Int J Prod Res 2006 03/15; 2011/11;44(6):11251146. 97. Jorjani S, Leu J, Scott C. Model for the allocation of electronics components to reuse options. Int J Prod Res 2004 03/15;42(6):1131-1145. 98. S. Das and D. Yedlarajiah. An integer programming model for prescribing material recovery strategies. 
Electronics and the Environment, 2002 IEEE International Symposium on; 2002. 99. Lee SG, Lye SW, Khoo MK. A Multi-Objective Methodology for Evaluating Product End-of-Life Options and Disassembly. The International Journal of Advanced Manufacturing Technology 2001;18(2):148-156. 100. Iakovou E, Moussiopoulos N, Xanthopoulos A, Achillas C, Michailidis N, Chatzipanagioti M, et al. A methodological framework for end-of-life management of electronic products. Resour Conserv Recycling 2009 4;53(6):329-339. 101. Du Y, Cao H, Liu F, Li C, Chen X. An integrated method for evaluating the remanufacturability of used machine tool. J Clean Prod 2012 1;20(1):82-91. 102. Bufardi A, Gheorghe R, Kiritsis D, Xirouchakis P. Multicriteria decision-aid approach for product end-of-life alternative selection. Int J Prod Res 2004 08/15; 2012/06;42(16):3139-3157. 103. Chan JWK. Product end-of-life options selection: grey relational analysis approach. Int J Prod Res 2008 06/01; 2011/11;46(11):2889-2912. 104. Remery M, Mascle C, Agard B. - A new method for evaluating the best product end-of-life strategy during the early design phase. - Journal of Engineering Design 2011:- 1. 105. Wadhwa S, Madaan J, Chan FTS. Flexible decision modeling of reverse logistics system: A value adding MCDM approach for alternative selection. Robot Comput Integrated Manuf 2009 4;25(2):460-469. 106. Jun H-, Cusin M, Kiritsis D, Xirouchakis P. A multi-objective evolutionary algorithm for EOL product recovery optimization: turbocharger case study. Int J Prod Res 2007 09/15; 2012/01;45(18-19):4573-4594. 107. Hula A, Jalali K, Hamza K, Skerlos SJ, Saitou K. Multi-Criteria Decision-Making for Optimization of Product Disassembly under Multiple Situations. Environ Sci Technol 2003 12/01; 2012/06;37(23):5303-5313. 108. Ilgin MA, Gupta SM. Environmentally conscious manufacturing and product recovery (ECMPRO): A review of the state-of-the-art. J Environ Manage 2010 2;91(3):563-591. 109. Umeda Y, Nonomura A, Tomiyama T. 
Study on life-cycle design for the post mass production paradigm. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 2000;14:149-161. 110. Spengler T, Stolting W. Life cycle costing for strategic evaluation of remanufacturing systems. Progress in Industrial Ecology, an International Journal 2008;5(1-2):65-81. 111. H. Komoto, T. Tomiyama, M. Nagel, S. Silvester and H. Brezet. Life Cycle Simulation for Analyzing Product Service Systems. Environmentally Conscious Design and Inverse Manufacturing, 2005. Eco Design 2005. Fourth International Symposium on; 2005. 112. Atasu A, Sarvary M, Van Wassenhove LN. Remanufacturing as a Marketing Strategy. MANAGEMENT SCIENCE 2008 October 1;54(10):1731-1746. 113. Debo LG, Toktay LB, Van Wassenhove LN. Market Segmentation and Product Technology Selection for Remanufacturable Products. MANAGEMENT SCIENCE 2005 August 1;51(8):1193-1205. 114. Guide VDR. Production planning and control for remanufacturing: industry practice and research needs. J Oper Manage 2000 6;18(4):467-483.


115. Ijomah, W.L., 2010. The application of remanufacturing in sustainable manufacture. Proceedings of the ICE - Waste and Resource Management, 163 (4), 157-163.
116. Jayaraman, V., Guide Jr., V.D.R. and Srivastava, R., 1999. A Closed-Loop Logistics Model for Remanufacturing. Journal of the Operational Research Society, 50 (5), 497-508.
117. Wikipedia. SOAP. http://en.wikipedia.org/wiki/SOAP
118. Wikipedia. Representational State Transfer. http://en.wikipedia.org/wiki/Representational_state_transfer
119. Open Data Protocol. www.odata.org
120. Information Technology Laboratory of the National Institute of Standards and Technology. An Introduction to Role-Based Access Control. http://csrc.nist.gov/groups/SNS/rbac/documents/design_implementation/Intro_role_based_access.htm
121. openRBAC Project. http://www.openrbac.de/en_startup.xml
122. SpringSource. Spring Security. http://static.springsource.org/spring-security/site/
123. jGuard Project. http://jguard.xwiki.com/
124. Wikipedia. Diameter protocol. http://en.wikipedia.org/wiki/Diameter_(protocol)
125. Wikipedia. RADIUS protocol. http://en.wikipedia.org/wiki/RADIUS
126. Chappell, D., 2004. Enterprise Service Bus. O'Reilly.
127. Forrester, 2011. The Forrester Wave: Enterprise Service Bus, Q2 2011. Retrieved from http://www.oracle.com/us/corporate/analystreports/infrastructure/forresterwave-esb-q2-2011-395900.pdf
128. Rademakers, T. and Dirksen, J., 2008. Open Source ESBs in Action. Manning.
129. Arnott, D. and Pervan, G., 2005. A critical analysis of decision support systems research. Journal of Information Technology, 20 (2), 67-87.
130. Atasu, A., Sarvary, M. and van Wassenhove, L.N., 2008. Remanufacturing as a Marketing Strategy. Management Science, 54 (10), 1731-1746.
131. Debo, L.G., Toktay, L.B. and van Wassenhove, L.N., 2005. Market Segmentation and Product Technology Selection for Remanufacturable Products. Management Science, 51 (8), 1193-1205.
132. DeCroix, G.A., 2006. Optimal Policy for a Multiechelon Inventory System with Remanufacturing. Operations Research, 54 (3), 532-543.
133. Dobos, I. and Floriska, A., 2008. The efficiency of remanufacturing in a dynamic input-output model. Central European Journal of Operations Research, 16 (3), 317-328.
134. Dobos, I. and Richter, K., 2004. An extended production/recycling model with stationary demand and return rates. International Journal of Production Economics, 90 (3), 311-323.
135. Erkoyuncu, J., Roy, R., Shehab, E. and Cheruvu, K., 2011. Understanding service uncertainties in industrial product service system cost estimation. Springer London.
136. Ferrer, G. and Swaminathan, J.M., 2006. Managing New and Remanufactured Products. Management Science, 52 (1), 15-26.
137. Ferrer, G. and Whybark, D.C., 2001. Material Planning for a Remanufacturing Facility. Production and Operations Management, 10 (2), 112-124.
138. Goh, Y.M., Newnes, L., McMahon, C., Mileham, A. and Paredis, C.J.J., 2009. A Framework for Considering Uncertainty in Quantitative Life Cycle Cost Estimation. Proceedings of the ASME 2009 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2009).
139. Guide, V.D.R., 1996. Scheduling using drum-buffer-rope in a remanufacturing environment. International Journal of Production Research, 34 (4), 1081-1091.
140. Iakovou, E., Moussiopoulos, N., Xanthopoulos, A., Achillas, C., Michailidis, N., Chatzipanagioti, M., Koroneos, C., Bouzakis, K.-D. and Kikis, V., 2009. A methodological framework for end-of-life management of electronic products. Resources, Conservation and Recycling, 53 (6), 329-339.
141. Ijomah, W.L., 2009. Addressing decision making for remanufacturing operations and design-for-remanufacture. Taylor & Francis.
142. Ilgin, M.A. and Gupta, S.M., 2010. Environmentally conscious manufacturing and product recovery (ECMPRO): A review of the state of the art. Journal of Environmental Management, 91 (3), 563-591.
143. Jayaraman, V., 2006. Production planning for closed-loop supply chains with product recovery and reuse: an analytical approach. International Journal of Production Research, 44 (5), 981-998.
144. Jun, H.-B., Cusin, M., Kiritsis, D. and Xirouchakis, P., 2007. A multi-objective evolutionary algorithm for EOL product recovery optimization: turbocharger case study. International Journal of Production Research, 45 (18-19), 4573-4594.
145. Kiesmüller, G.P. and Scherer, C.W., 2003. Computational issues in a stochastic finite horizon one product recovery inventory model. European Journal of Operational Research, 146 (3), 553-579.
146. Krikke, H.R., Van Harten, A. and Schuur, P.C., 1998. On a medium term product recovery and disposal strategy for durable assembly products. Taylor & Francis.


147. Matsumoto, M., 2010. Development of a simulation model for reuse businesses and case studies in Japan. Journal of Cleaner Production, 18 (13), 1284-1299.
148. Mitra, S. and Webster, S., 2008. Competition in remanufacturing and the effects of government subsidies. International Journal of Production Economics, 111 (2), 287-298.
149. Mostard, J. and Teunter, R., 2006. The newsboy problem with resalable returns: A single period model and case study. European Journal of Operational Research, 169 (1), 81-96.
150. Olugu, E.U. and Wong, K.Y., 2008. Fuzzy logic evaluation of reverse logistics performance in the automotive industry. Scientific Research and Essays, 6 (7), 1639-1649.
151. Parkinson, H.J. and Thompson, G., 2003. Analysis and taxonomy of remanufacturing industry practice. Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering, 217 (3), 243-256.
152. Pochampally, K.K. and Gupta, S.M., 2008. A Multi-Phase Fuzzy Logic Approach to Strategic Planning of a Reverse Supply Chain Network. IEEE Transactions on Electronics Packaging Manufacturing, 31 (1), 72-82.
153. Remery, M., Mascle, C. and Agard, B., 2011. A new method for evaluating the best product end-of-life strategy during the early design phase. Taylor & Francis.
154. Richter, K. and Sombrutzki, M., 2000. Remanufacturing planning for the reverse Wagner/Whitin models. European Journal of Operational Research, 121 (2), 304-315.
155. Richter, K. and Weber, J., 2001. The reverse Wagner/Whitin model with variable manufacturing and remanufacturing cost. International Journal of Production Economics, 71 (1-3), 447-456.
156. River Cities Software Inc. Remanufacturing Package. Available: http://www.rivercities.com/packages/package_reman.htm [29th February 2012].
157. Robotis, A., Bhattacharya, S. and van Wassenhove, L.N., 2005. The effect of remanufacturing on procurement decisions for resellers in secondary markets. European Journal of Operational Research, 163 (3), 688-705.
158. Rose, C.M., 2000. Design For Environment: A Method For Formulating Product End-of-Life Strategies. Stanford University.
159. Teunter, R.H., 2006. Determining optimal disassembly and recovery strategies. Omega, 34 (6), 533-537.
160. Souza, G.C., Ketzenberg, M.E. and Guide, V.D.R., 2002. Capacitated Remanufacturing With Service Level Constraints. Production and Operations Management, 11 (2), 231-248.
161. Spengler, T. and Stölting, W., 2008. Life cycle costing for strategic evaluation of remanufacturing systems. Progress in Industrial Ecology, An International Journal, 5 (1-2), 65-81.
162. Subramanian, R., Talbot, B. and Gupta, S., 2010. An Approach to Integrating Environmental Considerations within Managerial Decision-Making. Journal of Industrial Ecology, 14 (3), 378-398.
163. Takahashi, K., Morikawa, K., Myreshka, Takeda, D. and Mizuno, A., 2007. Inventory control for a Markovian remanufacturing system with stochastic decomposition process. International Journal of Production Economics, 108 (1-2), 416-425.
164. Teunter, R.H., 2001. Economic ordering quantities for recoverable item inventory systems. Naval Research Logistics (NRL), 48 (6), 484-495.
165. Teunter, R.H., Bayindir, Z.P. and Van den Heuvel, W., 2006. Dynamic lot sizing with product returns and remanufacturing. International Journal of Production Research, 44 (20), 4377-4400.
166. Umeda, Y., Nonomura, A. and Tomiyama, T., 2000. Study on life-cycle design for the post mass production paradigm. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 14, 149-161.
167. van der Laan, E., Salomon, M., Dekker, R. and van Wassenhove, L., 1999. Inventory Control in Hybrid Systems with Remanufacturing. Management Science, 45 (5), 733-747.
168. Xanthopoulos, A. and Iakovou, E., 2009. On the optimal design of the disassembly and recovery processes. Waste Management, 29 (5), 1702-1711.
169. Xing, B., Gao, W.-J., Nelwamondo, F.V., Battle, K. and Marwala, T., 2012. Soft Computing in Product Recovery: A Survey Focusing on Remanufacturing System. CoRR abs/1206.0908.
170. Zanoni, S., Ferretti, I. and Tang, O., 2006. Cost performance and bullwhip effect in a hybrid manufacturing and remanufacturing system with different control policies. International Journal of Production Research, 44 (18-19), 3847-3862.
171. Zanoni, S., Segerstedt, A., Tang, O. and Mazzoldi, L., 2012. Multi-product economic lot scheduling problem with manufacturing and remanufacturing using a basic period policy. Computers & Industrial Engineering, 62 (4), 1025-1033.
172. Center for Remanufacturing and Reuse. http://www.remanufacturing.org.uk/
173. Rochester Institute of Technology. Center for Remanufacturing. http://www.reman.rit.edu/
174. Weber, R., 1987. Towards a theory of artifacts: A paradigmatic base for information systems research. Journal of Information Systems, 1, 3-20.
175. Galliers, R.D., 1994. Relevance and rigour in information systems research: Some personal reflections on issues facing the information systems research community. In B.C. Glasson, I.T. Hawryszkiewycz, B.A. Underwood and R. Weber


(Eds.), Business Process Re-engineering: Information Systems Opportunities and Challenges. Elsevier North-Holland, Amsterdam, 93-101.
176. Saunders, C., 1998. The role of business in IS research. Information Resource Management Journal, Winter, 11 (1), 4-6.
177. Benbasat, I. and Zmud, R.W., 1999. Empirical research in information systems: The question of relevance. MIS Quarterly, 23 (1), 3-16.
178. King, J.L. and Lyytinen, K., 2004. Reach and grasp. MIS Quarterly, 28 (4), 539-551.
179. Chen, W.S. and Hirschheim, R., 2004. A paradigmatic and methodological examination of information systems research from 1991 to 2001. Information Systems Journal, 14, 197-235.
180. Goh, Y.M., Newnes, L.B., Mileham, A.R., McMahon, C.A. and Saravi, M.E., 2010. Uncertainty in Through-Life Costing: Review and Perspectives. IEEE Transactions on Engineering Management, 57 (4), 689-701.
181. Zimmermann, H.-J., 2000. An application-oriented view of modeling uncertainty. European Journal of Operational Research, 122, 190-198.
182. Ayyub, B.M., 2001. Elicitation of Expert Opinions for Uncertainty and Risks: Theory, Applications and Guidance. West Palm Beach, FL: CRC Press.
183. Isukapalli, S.S., 1999. Uncertainty analysis of transport-transformation models. Ph.D. dissertation, State Univ. New Jersey, New Brunswick.
184. Du, X. and Chen, W., 2000. Methodology for managing the effect of uncertainty in simulation-based design. AIAA Journal, 38 (8), 1471-1478.
185. Laskey, K.B., 1996. Model uncertainty: Theory and practical implications. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 26 (3), 340-348.
186. Nilsen, T. and Aven, T., 2003. Models and model uncertainty in the context of risk analysis. Reliability Engineering & System Safety, 79, 309-317.
187. Hills, R.G. and Trucano, T.G., 1999. Statistical validation of engineering and scientific models: Background. Sandia Nat. Lab., Albuquerque, NM, Tech. Rep. SAND99-1256.
188. Zio, E., 1996. Two methods for the structured assessment of model uncertainty by experts in performance assessments of radioactive waste repositories. Reliability Engineering & System Safety, 54 (2), 225-241.
189. Earl, C., Johnson, J. and Eckert, C., 2005. Complexity. In: Design Process Improvement: A Review of Current Practice, J. Clarkson and C. Eckert, Eds. London, U.K.: Springer-Verlag.
190. de Weck, O., Eckert, C. and Clarkson, J., 2007. A classification of uncertainty for early product and system design. Presented at ICED 2007, Paris, France.
191. Greenberg, M., Mayer, H. and Lewis, D., 2004. Life-cycle cost in a highly uncertain economic environment: The case of managing the U.S. Department of Energy's nuclear waste legacy. Federal Facilities Environmental Journal, 15, 67-82.
192. Heijungs, R. and Huijbregts, M.A.J., 2004. A review of approaches to treat uncertainty in LCA. Presented at IEMSS, Osnabrück, Germany.
193. Björklund, A.E., 2002. Survey of approaches to improve reliability in LCA. International Journal of LCA, 7 (2), 64-72.
194. Geisler, G., Hellweg, S. and Hungerbühler, K., 2005. Uncertainty analysis in life cycle assessment (LCA): Case study on plant protection products and implications for decision making. International Journal of LCA, 10 (3), 192.1-192.3.
195. Huijbregts, M.A.J., Gilijamse, W., Ragas, A.M.J. and Reijnders, L., 2003. Evaluating uncertainty in environmental life-cycle assessment. A case study comparing two insulation options for a Dutch one-family dwelling. Environmental Science & Technology, 37 (11), 2600-2608.
196. Bretz, R., 1998. SETAC LCA Workgroup: Data availability and data quality. International Journal of LCA, 3 (3), 121-123.
197. Lloyd, S.M. and Ries, R., 2007. Characterizing, propagating, and analyzing uncertainty in life-cycle assessment: A survey of quantitative approaches. Journal of Industrial Ecology, 11 (1), 161-179.
198. de Beaufort-Langeveld, A., Bretz, R., Hischier, R., Huijbregts, M., Jean, P., Tanner, T. and van Hoof, G., 2003. Code of Life Cycle Inventory Practice. Brussels, Belgium: Soc. Environ. Toxicol. Chem.
199. Oberkampf, W.L., DeLand, S., Rutherford, B., Diegert, K. and Alvin, K., 1999. A new methodology for the estimation of total uncertainty in computational simulation. Presented at the AIAA/ASME/ASCE/AHS/ASC SSDM Conf. Exhib., St. Louis, MO.
200. Xu, Y., et al., 2012. Cost Engineering for manufacturing: Current and future research. International Journal of Computer Integrated Manufacturing, 25 (4-5), 300-314.
201. Xu, Y., et al., 2008a. Object-oriented systems engineering approach for modeling life cycle cost of aircraft wing. The 46th AIAA Aerospace Sciences Meeting and Exhibit, January 2008, Reno, NV.
202. Dhillon, B.S., 1981. Life cycle cost: a survey. Microelectronic Reliability, 21 (4), 495-511.
203. Roy, R., 2003. Cost engineering: why, what and how? Decision Engineering Report Series, Cranfield University. ISBN 1-861940-96-3.
204. Roy, R., et al., 2009. Cost of industrial product-service systems (IPS2). Keynote paper, 16th CIRP International Conference on Life Cycle Engineering.
205. Refsgaard, J.C., et al., 2007. Uncertainty in the environmental modelling process - A framework and guidance. Environmental Modelling & Software, 22 (11), 1543-1556.


206. Heijungs, R. and Huijbregts, M.A., 2004. A review of approaches to treat uncertainty in LCA. Proceedings of IEMSS, Osnabrück, Germany, 1-8.
207. Lloyd, S.M. and Ries, R., 2007. Characterizing, propagating, and analyzing uncertainty in life-cycle assessment: a survey of quantitative approaches. Journal of Industrial Ecology, 11 (1), 161-179.
208. HM Treasury, 2003. The Green Book: Appraisal and Evaluation in Central Government. London.
209. Kishk, M., 2004. Combining various facets of uncertainty in whole-life cost modeling. Construction Management and Economics, 22 (4), 429-435.
210. Oberkampf, W. and Helton, J., 2001. Mathematical representation of uncertainty. Non-Deterministic Approaches Forum, AIAA, Seattle, WA, 16-19 April, AIAA-2001-1645.
211. Dubois, D. and Prade, H., 2003. Fuzzy set and possibility theory based methods in artificial intelligence. Artificial Intelligence, 148, 1-9.
212. Boussabaine, A. and Kirkham, R., 2004. Whole Life-Cycle Costing: Risk and Risk Responses. 1st ed. Blackwell Publishing, Oxford, 56-81.
213. Erkoyuncu, J., et al., 2011. Understanding service uncertainties in industrial product-service system cost estimation. International Journal of Advanced Manufacturing Technology, 52 (9-12), 1223-1238.
214. Erkoyuncu, J.A., et al., 2009. Uncertainty challenges in service cost estimation for product-service systems in the aerospace and defence industries. Proceedings of the 1st CIRP IPSS Conference, Cranfield University, Cranfield, 200-206.
215. Durugbo, C., et al., 2010. Data uncertainty assessment and information flow analysis for product-service systems in a library case study. International Journal of Services Operations and Informatics (IJSOI), 5 (4), 320-330.
216. DeLaurentis, D. and Mavris, D., 2000. Uncertainty modeling and management in multidisciplinary analysis and synthesis. AIAA Paper 2000-0422.
217. Saccani, N., Johansson, P. and Perona, M., 2007. Configuring the after-sales service supply chain: a multiple case study. International Journal of Production Economics, 110 (1-2), 52-69.
218. Walker, W.E., Harremoes, P., Rotmans, J., van der Sluijs, J.P., van Asselt, M.B.A., Janssen, P. and Krauss, K.V., 2003. Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4 (1), 5-17.
219. Krontiris, A., 2012. Fuzzy systems for condition assessment of equipment in electric power systems. Dissertation, TU Darmstadt, Germany. Available via: http://tuprints.ulb.tu-darmstadt.de/2930/1/Diss.pdf
220. Schwabacher, M. and Goebel, K., 2007. A Survey of Artificial Intelligence for Prognostics. AAAI Fall Symposium: Artificial Intelligence for Prognostics.
221. Roemer, M., Dzakowic, J., Orsagh, R., Byington, C. and Vachtsevanos, G., 2005. An overview of selected prognostic technologies with reference to an integrated PHM architecture. Proceedings of the IEEE Aerospace Conference 2005, Big Sky, United States.
222. Muller, A., Marquez, A.C. and Iung, B., 2008. On the concept of e-maintenance: Review and current research. Reliability Engineering & System Safety, 93 (8), 1165-1187.
223. Bonissone, P. and Goebel, K., 2002. When will it break? A Hybrid Soft Computing Model to Predict Time-to-Break Margins in Paper Machines. Proceedings of SPIE 47th Annual Meeting, International Symposium on Optical Science and Technology, #4787, 53-64.
224. Amin, S., Byington, C. and Watson, M., 2005. Fuzzy Inference and Fusion for Health State Diagnosis of Hydraulic Pumps and Motors. Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society.
225. Bishop, C.M., 1995. Neural Networks for Pattern Recognition. Oxford University Press.
226. Bock, J.R., Brotherton, T.W. and Gass, D., 2005. Ontogenetic Reasoning System for Autonomic Logistics. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
227. Bock, J.R., Brotherton, T., Grabill, P., Gass, D. and Keller, J.A., 2006. On False Alarm Mitigation. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
228. Brown, D., Kalgren, P., Roemer, M. and Dabney, T., 2006. Electronic Prognostics - A Case Study Using Switched-Mode Power Supplies (SMPS). Proceedings of the IEEE Systems Readiness Technology Conference. New York: IEEE.
229. Byington, C.S., Watson, M., Edwards, D. and Dunkin, B., 2003. In-Line Health Monitoring System for Hydraulic Pumps and Motors. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
230. Byington, C.S., Roemer, M.J., Watson, M.J., Galie, T.R., McGroarty, J.J. and Savage, C., 2004a. Prognostic Enhancements To Diagnostic Systems (PEDS) Applied To Shipboard Power Generation Systems. Proceedings of ASME Turbo Expo. New York: ASME.
231. Byington, C.S., Watson, M.J. and Edwards, D., 2004b. Data-Driven Neural Network Methodology to Remaining Life Predictions for Aircraft Actuator Components. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
232. Byington, C.S., Watson, M. and Edwards, D., 2004c. Dynamic Signal Analysis and Neural Network Modeling for Life Prediction of Flight Control Actuators. Proceedings of the American Helicopter Society 60th Annual Forum. Alexandria, VA: AHS.


233. Chinnam, R.B. and Baruah, P., 2003. A Neuro-Fuzzy Approach For Estimating Mean Residual Life In Condition-Based Maintenance Systems. International Journal of Materials and Product Technology, 20.
234. Chinnam, R.B. and Mohan, P., 2002. Online Reliability Estimation Of Physical Systems Using Neural Networks And Wavelets. International Journal of Smart Engineering System Design, 4 (4).
235. Clifton, D., 2006. Condition Monitoring of Gas-Turbine Engines. Transfer Report, Department of Engineering Science, University of Oxford.
236. Frelicot, C., 1996. A Fuzzy-Based Prognostic Adaptive System. RAIRO-APII-JESA, Journal Europeen des Systemes Automatises, 30 (2-3), 281-299.
237. Gebraeel, N., Lawley, M., Liu, R. and Parmeshwaran, V., 2004. Life Distributions From Component Degradation Signals: A Neural Net Approach. IEEE Transactions on Industrial Electronics, 51 (3).
238. Gebraeel, N., 2006. Sensory-Updated Residual Life Distributions for Components with Exponential Degradation Patterns. IEEE Transactions on Automation Science and Engineering.
239. Goebel, K. and Eklund, N., 2007. Prognostic Fusion for Uncertainty Reduction. Proceedings of the AIAA Infotech@Aerospace Conference. Reston, VA: American Institute for Aeronautics and Astronautics, Inc.
240. Goebel, K., Eklund, N. and Bonanni, P., 2006. Fusing Competing Prediction Algorithms for Prognostics. Proceedings of the 2006 IEEE Aerospace Conference. New York: IEEE.
241. Goebel, K., Qiu, H., Eklund, N. and Yan, W., 2007. Modeling Propagation of Gas Path Damage. Proceedings of the 2007 IEEE Aerospace Conference. New York: IEEE.
242. Hand, D.J., Mannila, H. and Smyth, P., 2000. Principles of Data Mining. Cambridge, MA: MIT Press.
243. Hernandez, L. and Gebraeel, N., 2006. Electronics Prognostics: Driving Just-In-Time Maintenance. Proceedings of the IEEE Systems Readiness Technology Conference. New York: IEEE.
244. Iyer, N., Goebel, K. and Bonissone, P., 2006. Framework for Post-Prognostic Decision Support. Proceedings of the 2006 IEEE Aerospace Conference, 11.0903.
245. Kalgren, P.W. and Byington, C.S., 2005. Self-Evolving, Advanced Test Stand Reasoning For Closed Loop Diagnostics. Proceedings of IEEE Autotestcon. New York: IEEE.
246. Kalgren, P.W., Baybutt, M., Ginart, A., Minnella, C., Roemer, M.J. and Dabney, T., 2007. Application of Prognostic Health Management in Digital Electronic Systems. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
247. Kallappa, P. and Hailu, H., 2005. Automated Contingency And Life Management For Integrated Power And Propulsion Systems. Proceedings of ASME Turbo Expo. New York: ASME.
248. Khawaja, T., Vachtsevanos, G. and Wu, B., 2005. Reasoning about Uncertainty in Prognosis: A Confidence Prediction Neural Network Approach. Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society.
249. Kozlowski, J.D., Watson, M.J., Byington, C.S., Garga, A.K. and Hay, T.A., 2001. Electrochemical Cell Diagnostics Using Online Impedance Measurement, State Estimation And Data Fusion Techniques. Proceedings of IECEC: Energy Technologies Beyond Traditional Boundaries.
250. Lavretsky, E. and Chidambaram, B., 2002. Health Monitoring of an Electro-Hydraulic System Using Ordered Neural Networks. Proceedings of the 2002 International Joint Conference on Neural Networks.
251. Lee, J., 1996. Measurement Of Machine Performance Degradation Using A Neural Network Model. Computers in Industry.
252. Naipei, Haas and Morales, 2003. Neural Network Estimation of Low Airspeed for the V-22 Aircraft in Steady Flight. Proceedings of the American Helicopter Society 59th Annual Forum. Alexandria, VA: AHS.
253. Nanduri, S., Almeida, P., Kalgren, P.W. and Roemer, M.J., 2007. Circuit as a Sensor: A Practical Concept for Electronic Prognostics. Proceedings of the 61st Meeting Of The Society For Machinery Failure Prevention Technology.
254. NASA Ames Research Center, 2007. Prognostics Center of Excellence Data Repository. http://ic.arc.nasa.gov/tech/groups/index.php?gid=53&ta=4
255. Orchard, M., Wu, B. and Vachtsevanos, G., 2005. A Particle Filtering Framework For Failure Prognosis. Proceedings of the World Tribology Congress.
256. Przytula, K.W. and Choi, A., 2007. Reasoning Framework for Diagnosis and Prognosis. Proceedings of the 2007 IEEE Aerospace Conference. New York: IEEE.
257. Reichard, K., Crow, E. and Weiss, L., 2005b. Applications of Data Mining in Automated ISHM and Control for Complex Engineering Systems. Proceedings of the First International Forum on Integrated System Health Engineering and Management in Aerospace.
258. Roemer, M.J. and Byington, C.S., 2007. Prognostics And Health Management Software For Gas Turbine Engine Bearings. Proceedings of the ASME Turbo Expo. New York: ASME.
259. Roemer, M.J., Ge, J., Liberson, A., Tandon, G.P. and Kim, R.Y., 2005a. Autonomous Impact Damage Detection and Isolation Prediction for Aerospace Structures. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
260. Saha, B., Goebel, K., Poll, S. and Christopherson, J., 2007. An Integrated Approach to Battery Health Monitoring using Bayesian Regression, Classification and State Estimation. Proceedings of IEEE Autotestcon. New York: IEEE.


261. Sandborn, P., Mauro, F. and Knox, R., 2005. A Data Mining Based Approach to Electronic Part Obsolescence Forecasting. Proceedings of the DMSMS Conference.
262. Saxena, A., Wu, B. and Vachtsevanos, G., 2005. Integrated diagnosis and prognosis architecture for fleet vehicles using dynamic case-based reasoning. Proceedings of IEEE Autotestcon.
263. Shao, Y. and Nezu, K., 2000. Prognosis Of Remaining Bearing Life Using Neural Networks. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 214 (3).
264. Sharda, R., 1994. Neural networks for the MS/OR analyst: An application bibliography. Interfaces, 24 (2), 116-130.
265. Sheldon, J., Lee, H., Watson, M., Byington, C. and Carney, E., 2007. Detection of Incipient Bearing Faults in a Gas Turbine Engine Using Integrated Signal Processing Techniques. Proceedings of the American Helicopter Society Annual Forum. Alexandria, VA: AHS.
266. Stone, V.M. and Jamshidi, M., 2005. Neural Net Based Prognostics for an Industrial Semiconductor Fabrication System. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. New York: IEEE.
267. Studer, L. and Masulli, F., 1996. On The Structure Of A Neuro-Fuzzy System To Forecast Chaotic Time Series. Proceedings of the International Symposium on Neuro-Fuzzy Systems, 103-110.
268. Tang, L., Kacprzynski, G., Goebel, K., Reiman, J., Orchard, M., Saxena, A. and Saha, B., 2007. Prognostics in the Control Loop. Working Notes of the 2007 AAAI Fall Symposium: AI for Prognostics.
269. Veaux, D.S.J., Schweinsberg, J. and Ungar, J., 1998. Prediction Intervals For Neural Networks Via Nonlinear Regression. Technometrics, 40 (4), 273-282.
270. Volponi, A., 2005. Data Fusion for Enhanced Aircraft Engine Prognostics and Health Management. NASA Contractor Report CR-2005-214055.
271. Wang, H.-F., 2011. Decision of Prognostics and Health Management under Uncertainty. International Journal of Computer Applications, 13 (4), 1-5.
272. Wang, P. and Vachtsevanos, G.J., 2001. Fault prognostics using dynamic wavelet neural networks. AI EDAM, 15 (4), 349-365.
273. Watson, M. and Byington, C.S., 2005. Improving the Maintenance Process and Enabling Prognostics for Control Actuators using CAHM Software. Proceedings of the IEEE Aerospace Conference. New York: IEEE.
274. Watson, M., Byington, C., Edwards, D. and Amin, S., 2004. Dynamic Modeling and Wear-Based Remaining Useful Life Prediction of High Power Clutch Systems. Proceedings of the ASME/STLE International Joint Tribology Conference. New York: ASME.
275. Weigend, A.S. and Gershenfeld, N.A., eds., 1993. Time Series Prediction: Forecasting the Future and Understanding the Past. Reading, MA: Addison-Wesley.
276. Werbos, P.J., 1988. Generalization Of Back Propagation With Application To Recurrent Gas Market Model. Neural Networks, 1, 339-356.
277. Xue, F., Goebel, K., Bonissone, P. and Yan, W., 2007. An Instance-Based Method for Remaining Useful Life Estimation for Aircraft Engines. Proceedings of MFPT.
278. Peng, Y. and Dong, M., 2011. A hybrid approach of HMM and grey model for age-dependent health prediction of engineering assets. Expert Systems with Applications, 38 (10), 12946-12953.
279. Tai, A.H., Ching, W.-K. and Chan, L.Y., 2009. Detection of machine failure: Hidden Markov Model approach. Computers & Industrial Engineering, 57 (2), 608-619.
280. Roe, S. and Mba, D., 2009. The environment, international standards, asset health management and condition monitoring: An integrated strategy. Reliability Engineering & System Safety, 94 (2), 474-478.
281. Lau, H.C.W. and Dwight, R.A., 2011. A fuzzy-based decision support model for engineering asset condition monitoring: A case study of examination of water pipelines. Expert Systems with Applications, 38 (10), 13342-13350.
