
International Journal of Computational Intelligence and Information Security, November 2012, Vol. 3, No. 9, ISSN: 1837-7823

Analysis of Integrating External Trustworthiness Software & Study on Information Retrieval Systems
DANDABOINA SREELATHA 1, DHYARAM LAKSHMI PADMAJA 2

1 Asst. Professor, Department of Information Technology, BSIT, Hyderabad, A.P., India
2 Associate Professor, Department of Information Technology, CVSR, Hyderabad, A.P., India

Abstract

Integrating software components into a quality product is a challenging and risky task; the SaaS model of service delivery aims to reduce developer effort and cost. A trusted product increases the reputation of a system, yet existing systems rely on ratings provided by consumers. This paper presents a quality-of-trust-aware software service selection scheme devised for service recommendation, providing consumers with the best-quality offerings. An SLA is used as the service model to identify the different levels of software selection that implement business needs efficiently; the analysis also compares the quality of information and of the software service delivered to the end user. The framework can effectively capture service behavior and recommend the best possible choices.

Keywords - Software as a Service (SaaS), Information Retrieval System, SLA model, quality of service/product.

I. INTRODUCTION

Modern products depend on large-scale software systems of astonishing complexity; because the consequences of their possible failure are so high, it is vital that software systems exhibit trustworthy behavior. Trustworthiness is a major issue when people and organizations are faced with the selection and adoption of new software. Although some ad-hoc methods exist [1], there is not yet general agreement about the software characteristics that contribute to trustworthiness. This work therefore focuses on defining an adequate notion of trustworthiness of Open Source products and artifacts and on identifying a number of factors that influence it, so as to provide both developers and users with an instrument that guides them when deciding whether a given program (or library, or other piece of software) is good enough and can be trusted for use in an industrial or professional context. Although software products are measurable, determining the trustworthiness of software is difficult, and there may be different quantifiable representations of trustworthiness. This analysis proposes a framework for assessing the trustworthiness of software. Such a quantification framework identifies characteristics of software systems that relate to or support trustworthiness, and seeks to identify and improve metrics and measurement methods (i.e., the metrology) that enable developers to analyze, evaluate, and assure trustworthiness in software systems and applications. The approach currently taken involves the development of a framework composed of models, with the ultimate goal of being able to calculate a trustworthiness factor for software.


Figure 1 shows the process used to identify the trustworthiness of a software product. The first step focuses on defining an adequate notion of trustworthiness of software products and on identifying a number of factors that influence it. To this end, several people with various professional roles were involved, so as to derive the factors from real user needs rather than abstractly. To test the feasibility of deriving a correct, complete, and reliable trustworthiness model on the basis of these factors, a set of well-known OSS projects was chosen, and the possibility of assessing the proposed factors on each project was verified. Next, trustworthiness models were developed using a number of factors as independent variables and an assessment of trustworthiness by OSS practitioners and users as the dependent variable; it was therefore necessary to collect data from practitioners and users about the trustworthiness of existing OSS products. Finally, the information collected was analysed to find out whether the factors influence the trustworthiness of the OSS products and artifacts.

II. RELATED WORK

Trustworthiness is a complex phenomenon that has been the object of interest in various disciplines; depending on the approach, trust has been defined in many ways. As an example, trust can be defined as "have confidence or faith in" [89], or as "something (as property) held by one party (the trustee) for the benefit of another (the beneficiary)" [90]. Trust is a relationship between people; it involves the suspension of disbelief that one person will have towards another person or idea. Trust is a relationship of reliance: "a trusted party is presumed to seek to fulfill policies, ethical codes, law and their previous promises" [91]. Also, in security engineering, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. All these quotes underline the confusion that exists in defining what "trust" means when applied to FLOSS software and products.

Since it is fairly difficult to define trust without a context, defining trust in a particular domain like FLOSS is a real issue. However, some assumptions can be made about trust. Cooperative relationships, for example, need to be built on a foundation of trust. Antikainen reports a distinction between affective and cognitive trust. Affective trust derives from an emotional attachment between a trustor and a


trustee, while cognitive trust relies on the rational assessment of the target by the trustor [3]. Antikainen [3] argues for a correlation between a community's sentiments and trust. She starts by assuming that in community discussions trust is a key factor, because someone may behave opportunistically and manipulate public opinion about an OSS product, positively or negatively, in order to damage or promote it. Antikainen also stresses how important trust is when organizations and companies are deciding whether or not to choose a software product. She defines trust as "the extent to which a person is confident in, and willing to act on the basis of, the words, actions, and decisions of another." Trust requires a relationship between a trustor and a trust target. She analyses one of the most active communities in the FLOSS world, the Linux kernel community, and found eight factors which seem to affect trust in the community, ordered by importance: skills (the most important), practices, reputation, common goals, information sharing, culture and values, possibility to influence, and familiarity.

Close to Antikainen's work, Hertzum aims to explain the trust value of relationships between colleagues [2]. Hertzum noticed how important and cheap it is for employees to ask colleagues for information rather than external sources; this, however, raises the problem of the trustworthiness of the received information. The quality and credibility of an object, a person, or a piece of information are not properties inherent in the object, person, or information; rather, quality and credibility are perceived properties. Engineers look for information with two characteristics: it is accessible in a way that enables the engineer to form a perception of its quality, and it is perceived to be of high quality.

In relation to human interaction, trust is defined as an emotive issue where the trusted party has a moral responsibility toward the trusting party. To the trusting party, trust involves an assessment of whether the other person possesses the required knowledge and skills and is likely to give a truthful and unbiased account of what he knows, to varying degrees, depending on numerous situational factors. It is possible to differentiate types of trust by the evidence on which the trust is founded and by the amount of evidence involved: first-hand experience; reputation (what third parties have reported); simple inspection of surface attributes; and general assumptions and stereotypes. Thus, knowing an information source first-hand, or knowing someone who knows it first-hand, provides people with a more solid basis for assessing the trustworthiness of the source. Assuming that trust may govern cooperative relationships, it follows that such a trusted relation also needs to exist between different applications. German explains [4] that almost every OSS application depends upon some other external application to be executed. Thus, if there is a need to evaluate how trustworthy a product is, the assessment should be extended to all of its external dependencies: a single product may be evaluated as trustworthy, yet depend upon an external library which is not trusted.

III. PROBLEM DEFINITION

The integration of software products is a challenging and risky process, as the quality of trust in a product may be unknown at integration time. Even when quality-assurance testing is done for each product according to user needs, the question remains how the user


will identify the trustworthiness of the software product. To get the best out of a software product, our analysis provides a framework for consumers.

3.1. SLA Monitoring:

SLAs have been used in IT organizations for many years, but defining SLAs in a SOA context is becoming extremely important as service-oriented systems rely on third-party services, in order to identify the support requirements for internal and external customers of IT services. It is common for IT service providers to deliver services at different levels of quality based on the price paid for a service. An SLA is valuable for helping all parties understand the trade-offs inherent between cost, schedule, and quality, because their relationship is stated explicitly. An SLA cannot guarantee that you will get the service it describes, any more than a warranty can guarantee that your car will never break down; in particular, an SLA cannot make a good service out of a bad one. At the same time, an SLA can mitigate the risk of choosing a bad service [Allen 2006]. A good service is one that meets the needs of the service customer in terms of both quality and suitability. An SLA describes each service as follows:

- how delivery of the service at the specified level of quality will be realized;
- which metrics will be collected;
- who will collect the metrics, and how;
- actions to be taken when the service is not delivered at the specified level of quality, and who is responsible for taking them;
- penalties for failure to deliver the service at the specified level of quality;
- how and whether the SLA will evolve as technology changes (e.g., multi-core processors improve the provider's ability to reduce end-to-end latency).

IT SLAs are enforced by service management processes and procedures.

3.1.1. Service level management (SLM) is the main practice area for managing and maintaining quality of service. This process area focuses on improving the quality of service by continuously reviewing the quality of the services provided by an IT organization. SLM provides input to all service management processes and receives input from service support processes.

3.1.2. The change management process area focuses on managing change in an efficient and consistent manner. It provides input to SLM on how infrastructure changes can affect quality of service.

3.1.3. The incident management process area's main purpose is to minimize the disruption that incidents cause to a business. It provides input to SLM on violations of agreements, who should be notified, historical data and trends, and any other actions that may be needed.

3.1.4. The configuration management process area's main purpose is to identify, verify, control, and maintain configuration items. This area is also responsible for managing the relationships between configuration items. It identifies the services affected by problems with configuration items and provides this information to SLM.

3.1.5. The capacity management process area's activities include monitoring, analysing, tuning, and reporting on service performance to produce SLA recommendations.
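The SLA elements described in this section (a monitored metric, a quality target, a penalty on violation, and who collects the metric) can be sketched as a small data structure with a violation check. The field names and sample values below are illustrative assumptions, not part of any SLA standard.

```python
from dataclasses import dataclass


@dataclass
class SLA:
    """A minimal SLA: the metric monitored, its target, and the penalty on violation."""
    metric: str     # e.g. "end_to_end_latency_ms"
    target: float   # maximum acceptable value for the metric
    penalty: float  # credit owed to the customer per violation
    collector: str  # who collects the metric, and how


def check_violation(sla: SLA, observed: float) -> float:
    """Return the penalty owed if the observed value breaches the target, else 0."""
    return sla.penalty if observed > sla.target else 0.0


latency_sla = SLA("end_to_end_latency_ms", target=200.0, penalty=50.0,
                  collector="provider-side monitoring agent")
print(check_violation(latency_sla, 250.0))  # breach of the target: 50.0
print(check_violation(latency_sla, 150.0))  # within target: 0.0
```

In practice the SLM process area would run such checks continuously and feed the violation history back into the agreement review cycle.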



3.2. Business Techniques for Software as a Service:

3.2.1. Service behaviour in terms of quality attributes: define a language that enables the description of a service from a resilience perspective. Mechanisms based on versioning, allied with timeframes and time windows, can be explored to capture real-time behavior. Since the KPIs need to be made available and accessible to external parties which may be located in any part of the world, recent developments in the Semantic Web can be explored to enable access to KPI data for reasoning and knowledge inference, in order to better understand why certain runtime behaviors occur.

3.2.2. Assessment of software components and services: assess the quality of third-party components individually and as a whole, in order to verify which combinations work better in practice. Furthermore, in many systems the tools and benchmarks being used need to adapt to changing conditions, since the experiments done during development must be consistent with data collected at runtime. The goal is to use such data in a feedback loop to transparently update the tools and workloads used for assessment and benchmarking during the development phase.

3.2.3. Verification & validation: support traceability requirements for software releases by providing tools that allow versioning requirements and tracking their evolution. A potential approach is to use multidimensional traceability matrices that describe the different versions of each requirement, allow mapping each version of each requirement to the system architecture and software code, and allow identifying interdependencies among existing requirements. The matrix and the existing metadata describing the pool of development-time V&V checks, together with the results from previous system versions, can be used to identify the checks that need to be repeated or updated and those that are still valid.

3.2.4. Anomaly detection and failure prediction techniques: these should be based on supplementary operational data about the infrastructure, produced through adaptive monitoring (as used for runtime V&V) and runtime stimulation. The idea is to rely on the identification of variables and patterns of the infrastructure's operational profile by employing data mining and machine learning techniques, including variable (feature) selection methods and regression analysis. For failure prediction, instead of following conventional techniques that merely observe the operational state of the infrastructure, the proposed approach relies heavily on the additional data produced by controlled runtime stimulation.
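As a minimal illustration of the anomaly-detection idea in 3.2.4, the sketch below flags operational-profile values that deviate from a learned baseline. The z-score test and the 3-sigma threshold are simplifying assumptions standing in for the heavier data-mining and machine-learning techniques the section mentions; the sample data is invented.

```python
import statistics


def detect_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate from the baseline operational profile
    by more than `threshold` sample standard deviations (a plain z-score test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > threshold]


# Baseline response times (ms) from normal operation, then a monitored window.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
observed = [101, 99, 250, 100]
print(detect_anomalies(baseline, observed))  # the 250 ms spike is flagged: [250]
```

A production system would replace the static baseline with the adaptively monitored operational profile described above.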

IV. EVALUATION

4.1. Evaluation Process:

Trustworthy software can be trusted to work dependably in some critical function, where failure to do so may have catastrophic results such as serious injury, loss of life or property, or loss of security. Trustworthiness is the degree of confidence that the software meets a set of requirements. Reliability is the probability of failure-free operation of software for a specified period of time in a specified environment; a failure can be defined as a state or condition of not meeting a desirable objective.
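The reliability definition above can be made concrete with the standard exponential reliability model, R(t) = e^(-lambda*t). The constant failure rate is an assumption of this textbook model, not something the paper prescribes.

```python
import math


def reliability(failure_rate: float, t: float) -> float:
    """Probability of failure-free operation over time t, assuming a constant
    failure rate lambda (exponential model: R(t) = exp(-lambda * t))."""
    return math.exp(-failure_rate * t)


# A component failing on average once every 1000 hours, run for 100 hours:
print(round(reliability(1 / 1000, 100), 3))  # ≈ 0.905
```

The same formula makes the trade-off explicit: halving the mission time or halving the failure rate raises R(t) by the same factor.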



Analysis and testing tools are required to facilitate the evaluation process. These tools should cover the chain from architecture modelling through analysis and testing, support interoperability, and be publicly and easily available.
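One of the automated calculations this evaluation process relies on, deriving the system-level figure from per-component values, can be sketched under the simplifying assumption of independent components composed in series (every component is needed for the system to work). The component values below are invented for illustration.

```python
def system_reliability(component_reliabilities):
    """System reliability for independent components in series:
    the product of the individual component reliabilities."""
    r = 1.0
    for c in component_reliabilities:
        r *= c
    return r


# Three integrated components assessed individually:
components = [0.99, 0.95, 0.98]
print(round(system_reliability(components), 4))  # 0.9217
print(round(1 - system_reliability(components), 4))  # system probability of failure
```

Note how the weakest component dominates: the system is always less reliable than its least reliable part, which is exactly why component selection matters at integration time.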

Figure 2: Process to identify trustworthiness of software selection.

The process is targeted at software development using components and assists software integrators in evaluating the trustworthiness of components and the reliability of the software system built with the selected components. Technical evaluation consists of quantitative reliability analysis and of testing that verifies that the component works as the model to be implemented expects; non-technical evaluation consists of qualitative analysis and is the sum of several aspects, such as a component's maturity and reputation. There are three levels of trustworthiness: the component level focuses on the individual component, the architecture level focuses on interacting components at the architectural level, and the system level focuses on interacting components in the integration environment. In model-based evaluation, the evaluation is done before implementation, based on architectural models; in implementation-based evaluation, the evaluation is performed on an already implemented artifact (i.e., a component or system). This enables integrators to perform quick, easy, and repeatable component/system reliability analysis by:

- automating system simulation at the architectural level;
- automating the calculation of the probability of failure for components and for the system;
- updating the UML models based on the analysis;
- detecting the influence of the selected components on system reliability;
- interoperating with different UML modeling tools with minor modifications.

The trustworthiness appraisal method collects practical data during the software development process, transforms the project data into related metric data, and appraises it against the evidence of the software process.

4.2. Comparative Study:

An information retrieval system consists of software programs, supported by hardware or other special components, that identify truthful information and provide the user with relevant information. An information retrieval system addresses only the quality of user needs over short- and long-term requirements, not a software product. Like an


information retrieval system, in our proposed approach software trustworthiness is a service selection criterion in a multiple-criteria decision-making problem whose resolution commonly involves a trade-off between quality and cost. There is no guarantee of service quality at selection time; however, reputation can help in predicting the likelihood of a quality offer being met. Selection thus translates into a three-criteria decision-making problem involving reputation, quality, and cost. This problem can be reduced to a single-criterion decision-making problem provided that quality, reputation, and cost are aggregated into a single selection metric. Several difficulties arise. First, feedback can be subjective, since it is based on consumers' personal expectations and opinions. Second, consumers may have an obstructed view of a service and its performance, especially when the latter is part of a composite service. Third, reputation systems are prone to attacks by malicious consumers who may give false ratings and subvert service reputation. Generally, it is harder to maintain a per-consumer reputation system than a per-service reputation system, mainly because services are less versatile, more traceable, and fewer in number. Moreover, it is harder to manage user identities, especially for malicious users who are likely to change them, as in Sybil attacks. For all of these reasons, the goal of the analyzed automated reputation-aware selection framework is to unambiguously define feedback as a computable, non-arbitrary metric and to devise an objective rating system.
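The reduction of reputation, quality, and cost to a single selection metric can be sketched as a weighted sum over normalized criteria. The weights, candidate names, and scores below are illustrative assumptions, not values prescribed by the framework.

```python
def selection_score(quality, reputation, cost, weights=(0.4, 0.4, 0.2)):
    """Aggregate quality, reputation, and (inverted) normalized cost into a
    single selection metric via a weighted sum; all inputs in [0, 1],
    higher score is better. The weights are illustrative, not prescribed."""
    wq, wr, wc = weights
    return wq * quality + wr * reputation + wc * (1.0 - cost)


# Candidate services: (quality, reputation, normalized cost), all invented.
candidates = {
    "service_a": (0.9, 0.8, 0.7),  # high quality but expensive
    "service_b": (0.7, 0.9, 0.3),  # cheaper, better reputation
}
best = max(candidates, key=lambda name: selection_score(*candidates[name]))
print(best)  # → service_b
```

Changing the weights shifts the trade-off: raising the quality weight toward 1.0 would make service_a win, which is exactly the single-criterion reduction the text describes.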

V. CONCLUSION

Assuring software quality is a risky process in the context of project development using external software service components. Our paper presented an automated quality- and reputation-based framework for service rating and selection. While a few existing works have considered quality and reputation for service selection, none have considered the automation of the service rating process. This service rating allows feedback to be assigned to a delivered service that objectively reflects satisfaction or dissatisfaction with the rendered performance and quality. A reputation-derivation model has also been proposed to aggregate feedback into a reputation value that better reflects the behavior of the service at selection time and supports the integration of software services in application development and provisioning projects.

References

[1] D. Taibi, L. Lavazza, S. Morasca, "OpenBQR: a framework for the assessment of Open Source Software", Open Source Software 2007, Limerick, June 2007.
[2] M. Hertzum, "The importance of trust in software engineers' assessment and choice of information sources", Information and Organization, vol. 12, no. 1, pp. 1-18, 2002.
[3] M. Antikainen, "Is trust based on cognitive factors in OSS communities?", Trust in Open-Source Software (TOSS) 2007, Limerick, June 2007.
[4] D. M. German, J. M. Gonzales-Barahona, G. Robles, "In what do you trust when you trust? The importance of dependencies in trust analysis", Trust in Open-Source Software (TOSS) 2007, Limerick, June 2007.
[5] M. Hansen, K. Köhntopp, A. Pfitzmann, "The Open Source Approach: Opportunities and Limitations with Respect to Security and Privacy", Computers & Security, vol. 21, no. 5, pp. 461-471, 2002.



[6] W. Hasselbring, R. Reussner, "Toward Trustworthy Software Systems", US Army Research Laboratories Information Assurance Center, IEEE, 2006.

Dandaboina Sreelatha received her M.Tech in Software Engineering from CVSR College of Engineering and her B.Tech in Information Technology. Her areas of interest include Software Engineering, Project Management, and Software Testing.

Dhyaram Lakshmi Padmaja, Ph.D. in Computer Science Engineering from JNTU, is currently an Associate Professor at CVSR College of Engineering. Her research areas include Software Engineering and Data Mining.

