
4G NETWORKS: MOBILITY ISSUES

Saroj Bala
Assistant Professor, MCA Department E-mail: saroj_kkr@rediffmail.com

ABSTRACT Numerous network technologies, each with its own pros and cons, exist globally. 4G or Fourth Generation networks are designed to facilitate improved wireless capabilities, network speeds and visual technologies. The growing interest in 4G networks is driven by the set of new services that will be made available for the first time, such as accessing the Internet anytime from anywhere, global roaming, and wider support for multimedia applications. This article discusses some of the mobility issues in 4G networks along with a brief coverage of the evolution of the different generations.

1. INTRODUCTION With the huge worldwide increase in the number of mobile users each day, and with emerging demands like totally user-centric services, high-speed streaming Internet multimedia services, seamless global roaming with ubiquitous coverage and unrestricted QoS support, 3G systems have started showing their limitations in bandwidth availability, spectrum allocation, air interface standards and the lack of seamless transport mechanisms between different networks [2]. 4G systems are envisioned as a smooth merger of all the existing heterogeneous technologies, with a natural progression to support seamless cost-effective high data rates, global roaming, efficient personalized services, a user-centric integrated service model, high QoS and overall stable system performance [4]. The article is structured as follows: section 2 introduces the evolution and section 3 discusses the mobility issues in 4G networks. The conclusion is presented in section 4.

2. THE EVOLUTION 0G networks represent the earliest, pre-cellular era of mobile telephony, where satellite phones were developed and deployed mainly for boats. 1G networks provided the facilities of making voice calls and sending text messages. NMT, AMPS and TACS are considered to be the first analog cellular systems, which started in the early 1980s. The greatest disadvantage of 1G was that it only allowed contact within the borders of a particular nation. The 2G network (GSM) represents the 2nd Generation of mobile telecommunications and is still the most widespread technology in the world, but with a slow rate of 9.6 kbps. The 2.5G network, a mid generation, offered a higher data rate than 2G and enabled the delivery of basic data services like text messaging, but its data rate of up to 144 kbps was not enough to download an image or browse a website comfortably. GPRS, EDGE and CDMA 2000 were 2.5G technologies. The 2.75G network enabled watching streaming video and downloading mp3 files faster, at up to 180 kbps. The 3G network represents the 3rd Generation, designed to overcome the limitations of the above technologies. GSM 3G networks are termed UMTS in the US and wideband CDMA (WCDMA) worldwide; UMTS supports global roaming capabilities and its speed is 3 times that of GSM. 3.5G or 3G+ networks offer 7.2 and 14.4 Mbps on cell phones. 4G networks are the future. Some basic 4G research is being done, but no frequencies have been allocated; the Fourth Generation could be ready for implementation around 2012. 4G should support at least 100 Mbps peak rates in full-mobility wide area coverage and 1 Gbps in low-mobility local area coverage. Some of the limitations of 3G [3] which motivated 4G can be listed as: 1) its problems are only partly solved, and it does not have sufficient capabilities; 2) difficulty in increasing bandwidth; 3) limitation of spectrum and its allocation; 4) difficulty in roaming across distinct service environments. 4G mobility management includes additional mobility-related features, absent in previous generation networks, such as moving networks, seamless roaming and vertical handover.

3. MOBILITY MANAGEMENT ISSUES According to the mobility scenarios for the future referred to in ongoing research, the following mobility management issues can be highlighted: 3.1 Connectivity Triggering. Different kinds of events can trigger mobility management actions, and these may result in conflicts. A general framework is required to resolve conflicting triggers generated simultaneously by different components, on the basis of predefined policies and rules, as the sketch below illustrates.
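As a minimal illustration, the following Python sketch shows how such a framework might pick one of several simultaneously raised triggers using a predefined priority policy. The trigger names and priorities are invented for the example and are not taken from any 4G specification.

```python
# Hypothetical sketch of policy-based resolution of simultaneous
# mobility triggers (section 3.1); names and priorities are illustrative.

# Predefined policy: lower number = higher priority.
TRIGGER_PRIORITY = {
    "link_loss": 0,        # physical connectivity lost: act immediately
    "qos_degradation": 1,  # throughput/delay below agreed level
    "cost_reduction": 2,   # a cheaper network became available
    "user_preference": 3,  # the user asked for a specific operator
}

def resolve_triggers(triggers):
    """Pick the single trigger to act on from those raised simultaneously."""
    valid = [t for t in triggers if t in TRIGGER_PRIORITY]
    if not valid:
        return None
    return min(valid, key=lambda t: TRIGGER_PRIORITY[t])

# Example: a cheaper WLAN appears at the same moment QoS degrades on 3G;
# the policy chooses to act on the QoS trigger first.
print(resolve_triggers(["cost_reduction", "qos_degradation"]))  # qos_degradation
```

In a real system the priority table would itself be part of the user and operator policy stored in the network, rather than a hard-coded dictionary.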

Handover. In the emerging 4G networks, which are both multi-domain and multi-technology, handover requests could be based on a number of different needs or policies, such as cost reduction criteria, network resource optimization, etc. Various handover solutions have been devised to provide seamless transfer of services across heterogeneous boundaries. One is IP-based: many researchers agree that Mobile IP will be the key to providing efficient interworking between different technologies. Others are IDMP-based or agent-based. 3.2 Location Management Location management involves two operations: location registration and call delivery. In location registration the mobile terminal periodically updates the network about its new location (access point); this allows the network to keep track of the mobile terminal. In the second operation the network is queried for the user location profile and the current position of the mobile host is retrieved. 3.3 Routing Group Formation Moving networks are a prominent component of future networking scenarios. A typical example is moving users with several terminals forming temporary moving clusters and network hierarchies while traveling on a train. A common characteristic of this kind of scenario is that mobile entities that are close by move together, forming a cluster that can be joined into a unified network. The formation of this unified network will be highly dynamic, and some kind of hierarchy will be needed in order to integrate the clusters into encapsulating moving networks. 3.4 Seamless Mobility Seamless mobility must be a set of solutions that provide easy, uninterrupted access to information, entertainment, communication, monitoring and control when, where and how we want, regardless of the device, service, network or location. Instead of experiencing a disconnect as movement occurs between different devices, environments and networks, seamless mobility will deliver experiences that span the home, vehicle, office and beyond. 3.5 Mobility Context Management It is assumed that future terminals, applications and networks will be able to provide a versatile set of information about themselves, their surroundings and the situation in which they are used. The mobility management component needs access to the Context Information Base (CIB) within the network, which is responsible for maintaining user policy and context information and is updated by triggers from mobility events.

3.6 Paging Current paging solutions are dependent on the link layer technology and network structure. A 4G network requires the ability to page across heterogeneous network technologies. 3.7 Network Composition Composition, as a new architectural element, can enable new types of dynamic networks where new business models and roles evolve: anyone can become a network/service operator. In this view, everything is a network, and a terminal is a network itself. Composition of networks will be possible independently of the technologies of the composing networks. 3.8 Migration Backward compatibility and migration are among the basic requirements in the evolution and deployment of heterogeneous networks. Although migration from current technologies and backward compatibility are different issues, similar approaches exist that address both. Backward compatibility enables smooth migration, so the design should aim to interoperate with existing technologies using their original interfaces. 4. CONCLUSION 4G wireless networks not only enable more efficient, scalable and reliable wireless services but also provide a wider variety of services. This article discussed the evolution of network generations from 0G to 4G. It mainly discussed the significant mobility issues within 4G heterogeneous networks, which are hot issues in today's research. Future research will overcome these challenges and integrate newly developed services into 4G networks, making them available to everyone, anytime and everywhere. REFERENCES
[1] Sadia Hussain, Zara Hamid and Naveed S. Khattak, Mobility Management Challenges and Issues in 4G Heterogeneous Networks, InterSense '06: Proceedings of the First International Conference on Integrated Internet Ad hoc and Sensor Networks, May 30-31, 2006, Nice, France.
[2] Hassan Gobjuka, 4G Wireless Networks: Opportunities and Challenges.
[3] U. Varshney, R. Jain, Issues in emerging 4G wireless networks, http://computer.org
[4] Sayan Kumar Ray, IETE Technical Review, Vol 23, No 4, July-August 2006, pp 253-265.

INKLESS PRINTING TECHNOLOGY: ZINK


Suchitra Singh
Assistant Professor, MCA Department E-mail: suchitra_singh@yhaoo.com

ABSTRACT
It's a digital world... and it's about to get even better. Digital content has exploded to permeate, and change, every part of our lives. ZINK is a revolutionary digital approach to full-color printing that is mobile, embeddable in any device, simple to use, easy to maintain and produces dramatically less waste. In the ZINK world, when you want to hold a hard copy of your digital content in your hands, all it takes is the touch of a button. Your laptop will be able to print. Your televisions and receivers will be able to print. And your stand-alone printer won't be standing alone - it'll be coming with you.

1. INTRODUCTION ZINK stands for Zero Ink - an amazing new way to print in full color without the need for ink cartridges or ribbons. ZINK Technology encompasses both the ZINK Paper and the intelligence embedded in every ZINK-enabled device. It is based on advances in chemistry, engineering, physics, image science, and manufacturing. 2. HOW ZINK WORKS At the heart of ZINK Technology is the ZINK Paper, an advanced composite material with cyan, yellow, and magenta dye crystals embedded inside, and a protective polymer overcoat layer outside. The crystals are colorless before printing, so ZINK Paper looks like regular white photo paper. Heat from a ZINK-enabled device activates the crystals, forming all the colors of the rainbow. The printing process is now radically simple. Just add ZINK Paper. 2.1 The Magic Paper The ZINK Paper is an advanced composite material with cyan, yellow, and magenta dye crystals embedded inside and a protective polymer overcoat layer outside. Before printing, the crystals are colorless, so the paper looks like ordinary white photo paper.

Through an advanced manufacturing process, the ZINK color-forming layers are coated as a colorless thin multi-layer "stack" onto a base layer. The total thickness of all the layers combined is about the thickness of a single human hair. The cyan, magenta, and yellow crystal layers are colorless at room temperature and are activated by heat during the printing process to create color. 2.2 ZINK Amorphochromic Dye Crystals The proprietary dye crystals that give ZINK its color, named Amorphochromic crystals, represent an entirely new class of molecules. The properties of each dye crystal are finely tuned to achieve the color palette and image stability required for beautiful, full-color digital prints. Each of the crystals is activated independently using heat pulses of precisely determined duration and temperature to achieve any color in the rainbow.

2.3 The Physics: Time + Temp = every color imaginable


Previously, direct thermal printing in a single printing pass has been possible only in low-quality, black-and-white applications. Now, with ZINK Technology, it is possible to do full-color single-pass direct thermal printing. This is made possible not only by the invention of special dye crystals, but also by another of the fundamental mechanisms of ZINK: the physics of controlling time and temperature. Each color-forming layer within the ZINK Paper structure is addressed individually to create the colors required for every image.

The various colors in a print are created by controlling the temperature and duration of the heat pulses delivered from the print head in the device to the ZINK Paper. This pulse pattern determines which crystals in which layers are melted, and thereby which colors are formed, as the sketch below illustrates.
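As a rough illustration of this time/temperature addressing idea, the Python sketch below maps a single heat pulse to the layers it would activate. The actual activation thresholds of the ZINK layers are proprietary, so the numbers here are invented purely for demonstration.

```python
# Illustrative sketch of the time/temperature addressing idea; the real
# activation thresholds of ZINK's dye layers are proprietary, so the
# numbers below are invented for demonstration only.

LAYERS = [
    # (name, min_temperature_C, min_pulse_ms) - hypothetical values
    ("yellow",  200, 0.5),   # topmost layer: high temperature, short pulse
    ("magenta", 150, 2.0),   # middle layer: medium temperature, longer pulse
    ("cyan",    100, 10.0),  # deepest layer: low temperature, longest pulse
]

def activated_layers(temperature_c, pulse_ms):
    """Return which color-forming layers a single heat pulse would activate."""
    return [name for name, t_min, ms_min in LAYERS
            if temperature_c >= t_min and pulse_ms >= ms_min]

print(activated_layers(210, 0.7))   # ['yellow']
print(activated_layers(160, 12.0))  # ['magenta', 'cyan'] -> blue-ish pixel
```

The real print head plays a far richer repertoire of pulses per pixel, but the principle is the same: a hot, short pulse reaches only the top layer, while a cooler, longer pulse soaks down to the deeper layers.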


As the ZINK Paper passes beneath the print head, hundreds of millions of heat pulses are delivered by a linear array of heating elements, in a line-by-line fashion, to produce the desired colors at each printed pixel.

2.4 Key Features of ZINK Paper
* Zero ink required. No need for ink cartridges or ribbons. All you need for full-color photos is right in the ZINK Paper. Just add ZINK Paper.
* Capable of reproducing millions of vivid colors at very high resolution.
* Earth friendly. Less waste - no cartridges, no extra packaging to throw away.
* Protected by a polymer overcoat, providing water resistance and image durability.
* Affordable for everyday use.
* Not sensitive to light.
* Long lasting and designed to resist fading from exposure to light, heat and humidity.

3. CONCLUSION In the ZINK world, the touch of a print button brings an entirely new experience that results in a whole new sense of freedom and spontaneity, making printing more readily available and simpler, and unlocking the full value of digital content like never before - whether at home, on the go, at work, or at play. For comparison, Xerox is working on an inkless printer which will use special reusable paper coated with a few micrometers of UV-light-sensitive chemicals; that printer will use a UV light bar able to write and erase the paper.

KNOWLEDGE REPRESENTATION AND REASONING METHODS


Snehlata Kaul
Assistant Professor, MCA Department E-mail: sneha8kaul@yahoo.com

ABSTRACT
Knowledge representation and reasoning is a central problem in Artificial Intelligence (AI) today. Its importance stems from the fact that the current design paradigm for "intelligent" systems stresses the need for expert knowledge in the system along with associated knowledge-handling facilities. The present paper gives a brief introduction to knowledge representation and reasoning and describes different methods of each.

1. INTRODUCTION Knowledge representation is an area of artificial intelligence whose fundamental goal is to represent knowledge in a manner that facilitates inference (i.e. drawing conclusions) from knowledge. It analyzes how to formally think - how to use a symbol system to represent a domain of discourse (that which can be talked about), along with functions that allow inference (formalized reasoning) about its objects. Generally speaking, some kind of logic is used both to supply formal semantics of how reasoning functions apply to symbols in the domain of discourse, and to supply operators such as quantifiers, modal operators, etc. that, along with an interpretation theory, give meaning to the sentences in the logic [1]. When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic in order to derive inferences from them) we have to make choices across a number of design spaces. The single most important decision to be made is the expressivity of the KR: the more expressive, the easier and more compact it is to "say something", but the harder it becomes to automatically derive inferences from the language. Reasoning is the cognitive process of looking for reasons, beliefs, conclusions, actions or feelings. When we require any knowledge system to do something it has not been explicitly told how to do, it must reason: the system must figure out what it needs to know from what it already knows [3]. One approach to the study of reasoning is to identify the various forms of reasoning that may be used to support or justify conclusions. The main division made in philosophy is between deductive reasoning and inductive reasoning. Formal logic has been described as "the science of deduction". The study of inductive reasoning is generally carried out within the field known as informal logic or critical thinking.

2. DIFFERENT METHODS OF KNOWLEDGE REPRESENTATION

Mind Map An exceedingly effective way of organizing information in a hierarchical, brain-friendly format is to represent it visually, in the form of a map. Visualizing information helps structure it in the most comprehensive and clear manner. A powerful yet simple technique for representing knowledge is the mind map, a graphical tool that mirrors the way the brain thinks. Mind maps can be used in a wide range of human activities. The method helps enhance human learning, brainstorming and project management, highlighting the key ideas, questions and current objectives. A mind map usually starts with one main topic or idea and expands in a radiant fashion, with other relevant ideas and notes added to branches growing from the main topic [1] (see the sketch after this section). Concept Map In contrast to a mind map, a concept map may contain perspectives and ideas of a whole team. It is usually based on a number of principal, most inclusive concepts, placed at the top of the map. Other, less general concepts are arranged hierarchically below, linked by lines or arrows showing the relationships between the concepts. Concept mapping can be used as a learning or evaluation tool to enhance and assess the knowledge level of a group of individuals. It is also an indispensable technique for group brainstorming and activity planning [1]. Process Map Representing information in the form of a process map is extremely effective in managing multiple work processes. Properly organized information represents all the activities associated with a process and provides a view of the complete business system. A process map includes such information as process complexity, the number of people involved, and time and cost issues. This can be the basis for process reengineering on a comprehensible, customer-oriented basis.
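Since a mind map is essentially a tree rooted at one central topic, it maps naturally onto a simple data structure. The minimal Python sketch below (topic names invented for illustration) shows one such representation.

```python
# A minimal sketch of a mind map as a tree: one central topic with
# branches growing radially from it. The topic names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    branches: list = field(default_factory=list)

    def add(self, label):
        """Grow a new branch from this node and return it."""
        child = Node(label)
        self.branches.append(child)
        return child

def show(node, depth=0):
    """Print the map as an indented outline, one branch per line."""
    print("  " * depth + node.label)
    for child in node.branches:
        show(child, depth + 1)

root = Node("Knowledge Representation")
visual = root.add("Visual methods")
visual.add("Mind map")
visual.add("Concept map")
visual.add("Process map")
show(root)
```

A concept map would need a slightly richer structure (a labeled graph rather than a tree), since its cross-links carry relationship names.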

3. REASONING METHODS AND ARGUMENTATION Deductive reasoning Reasoning in an argument is valid if the argument's conclusion must be true when the premises (the reasons given to support that conclusion) are true. One classic example of deductive reasoning is found in syllogisms like the following [3]: Premise 1: All humans are mortal. Premise 2: Socrates is a human. Conclusion: Socrates is mortal. The reasoning in this argument is valid, because there is no way in which the premises, 1 and 2, could be true and the conclusion be false. Validity is a property of the reasoning in the argument, not a property of the premises or of the argument as a whole. In fact, the truth or falsity of the premises and the conclusion is irrelevant to the validity of the reasoning. The following argument, with a false premise and a false conclusion, is also valid (it has the form of reasoning known as modus ponens). Premise 1: If green is a color, then grass poisons cows. Premise 2: Green is a color. Conclusion: Grass poisons cows. Again, if the premises in this argument were true, the reasoning is such that the conclusion would also have to be true. In a deductive argument with valid reasoning the conclusion contains no more information than is contained in the premises; therefore, deductive reasoning does not increase one's knowledge base and is said to be non-implicative. Within the field of formal logic, a variety of forms of deductive reasoning have been developed. These involve abstract reasoning using symbols, logical operators and a set of rules that specify what processes may be followed to arrive at a conclusion. These forms include Aristotelian logic, also known as syllogistic logic, propositional logic, predicate logic, and modal logic. Inductive reasoning Induction is a form of inference producing propositions about unobserved objects or types, either specifically or generally, based on previous observation. It is used to ascribe properties or relations to objects or types based on previous observations or experiences, or to formulate general statements or laws based on limited observations of recurring phenomenal patterns. Inductive reasoning contrasts strongly with deductive reasoning in that, even in the best, or strongest, cases of inductive reasoning, the truth of the premises does not guarantee the truth of the conclusion; instead, the conclusion of an inductive argument follows with some degree of probability. Relatedly, the conclusion of an inductive argument contains more information than is already contained in the premises. Thus, this method of reasoning is implicative. A classic example of inductive reasoning comes from the empiricist David Hume: Premise: The sun has risen in the east every morning up until now. Conclusion: The sun will also rise in the east tomorrow.
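Deductive rules such as modus ponens are mechanical enough to execute directly. The toy Python sketch below applies the rule to the grass-poisons-cows example above; the representation (tuples for conditionals, plain strings for atomic propositions) is invented purely for illustration.

```python
# A toy sketch of modus ponens as a mechanical rule: from "if P then Q"
# and "P", derive "Q". Truth of the premises is irrelevant to validity,
# exactly as the text notes.

def modus_ponens(premises):
    """Apply modus ponens once: return every Q such that
    ('if', P, Q) and P are both among the premises."""
    facts = {p for p in premises if isinstance(p, str)}
    conditionals = [p for p in premises if isinstance(p, tuple) and p[0] == "if"]
    return [q for (_, p, q) in conditionals if p in facts]

# The article's (deliberately false) example is still *valid* reasoning:
premises = [("if", "green is a color", "grass poisons cows"),
            "green is a color"]
print(modus_ponens(premises))  # ['grass poisons cows']
```

Inductive and abductive inference, by contrast, cannot be captured by such a purely syntactic rule, which is exactly the implicative/non-implicative distinction drawn in the text.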

Abductive reasoning Abductive reasoning, or argument to the best explanation, is a form of inductive reasoning, since the conclusion in an abductive argument does not follow with certainty from its premises and concerns something unobserved. What distinguishes abduction from the other forms of reasoning is the attempt to favor one conclusion above others, by attempting to falsify alternative explanations or by demonstrating the likelihood of the favored conclusion, given a set of more or less disputable assumptions. For example, when a patient displays certain symptoms, there might be various possible causes, but one of these is preferred above others as being more probable. Analogical reasoning Analogical reasoning is reasoning from the particular to the particular. An example follows: Premise 1: Socrates is human and Socrates died. Premise 2: Plato is human. Conclusion: Plato will die. Analogical reasoning can be viewed as a form of inductive reasoning, since the truth of the premises does not guarantee the truth of the conclusion. However, the traditional view is that inductive reasoning is reasoning from the particular to the general, and thus analogical reasoning is distinct from inductive reasoning. An example of inductive reasoning from the particular to the general follows: Premise 1: Socrates is human and Socrates died. Premise 2: Plato is human and Plato died. Premise 3: Aristotle is human and Aristotle died. Conclusion: All humans die. It has been argued that deductive, inductive, and abductive reasoning are all based on a foundation of analogical reasoning. Fallacious reasoning Flawed reasoning in arguments is known as fallacious reasoning. Reasoning within arguments can be bad because it commits either a formal fallacy or an informal fallacy. Formal fallacies Formal fallacies occur when there is a problem with the form, or structure, of the argument. The word "formal" refers to this link to the form of the argument. An argument that contains a formal fallacy will always be invalid. Consider, for example, the following argument: 1. If a drink is made with boiling water, it will be hot. 2. This drink was not made with boiling water. 3. This drink is not hot. The reasoning in this argument is bad, because the antecedent (first part) of the conditional (the "if..., then..." statement) can be false while the consequent (second half) is still true. Informal fallacies An informal fallacy is an error in reasoning that occurs due to a problem with the content, rather than the mere structure, of the argument. Reasoning that commits an informal fallacy often occurs in an argument that is also invalid, that is, one containing a formal fallacy. One example of such reasoning is a red herring argument. An argument can be valid, that is, contain no formal fallacies, and yet still contain an informal fallacy; the clearest examples occur when an argument contains circular reasoning, also known as begging the question. 4. CONCLUSION This paper described different methods of knowledge representation and reasoning. These methods can be used when representing knowledge in different areas, taking into account the different reasoning techniques. REFERENCES
[1] Chein, M., Mugnier, M.-L., Graph-based Knowledge Representation: Computational Foundations of Conceptual Graphs, Springer, 2009, ISBN 978-1-84800-285-2.
[2] Hermann Helbig, Knowledge Representation and the Semantics of Natural Language, Springer, Berlin, Heidelberg, New York, 2006.
[3] Principles of Knowledge Representation and Reasoning: Proceedings of the Twelfth International Conference, KR 2010, Toronto, Ontario, Canada, May 9-13, 2010, AAAI Press, 2010.

LEARNING TO RANK FOR INFORMATION RETRIEVAL


Pooja Arora
Assistant Professor, MCA Department E-mail: Puja.arora06@gmail.com

ABSTRACT
One central problem of Information Retrieval is to determine which documents are relevant to an information need and which are not. This problem is handled by a ranking function. In this article, various approaches for learning a ranking function are discussed, and a comparative analysis covering the pros and cons of these approaches is performed.

1. INTRODUCTION Information is being created and becoming available in ever-growing quantities as the possibilities for accessing it proliferate. There is currently a great deal of excitement and confusion about the promise of an Electronic Information Superhighway that would enable anybody to access these diverse and large information sources. Many information providers are developing on-line services to provide users with an interface to this emerging rich universe of knowledge stored in the form of multimedia documents, business and financial data, games and entertainment, shopping and consumer information. However, it is not possible to make information available to users almost instantly without better methods to filter, retrieve and manage this potentially unlimited influx of information. Users face an information overload problem and they require tools to explore this vast universe of information in a structured way. So, storage and retrieval of information in a convenient manner is of utmost importance. 2. WHAT IS IR? Information retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). Information retrieval is different from data retrieval. Data retrieval mainly consists of determining which documents of a collection contain the keywords in the user query, which is not enough to satisfy the user's information need. In fact, the user of an IR system is concerned more with retrieving information about a subject than with retrieving data which satisfies a given query. For an information retrieval system, the retrieved objects might be inaccurate and small errors are likely to go unnoticed, but for a data retrieval system a single erroneous object among a thousand retrieved objects means total failure. 3. LEARNING TO RANK One central problem of information retrieval (IR) is to determine which documents are relevant to an information need and which are not. This problem is practically handled by a ranking function which defines an ordering among documents according to their degree of relevance to the user query. The process of generating an effective ranking function for IR is referred to in the field as learning to rank for IR [2]. The task of "learning to rank" has emerged as an active and growing area of research both in information retrieval and machine learning. The goal is to design and apply methods to automatically learn a function from training data, such that the function can sort objects (e.g., documents) according to their degrees of relevance, preference, or importance as defined in a specific application.

[Figure 1: General framework of learning-based methods for the IR ranking problem.] Figure 1 shows a general framework that most learning-based methods follow to deal with the IR ranking problem. The learning process, formalized as follows, consists of two steps: training and test. Given a query collection Q = {q_1, ..., q_N} and a document collection D = {d_1, ..., d_M}, the training corpus is created as a set of query-document pairs, each (q_i, d_j) ∈ Q × D, upon which a relevance judgment indicating the relationship between q_i and d_j is assigned by a labeler. The relevance judgment given by a labeler can be: 1) a class label, e.g. relevant or non-relevant; 2) a rating, e.g. definitely relevant, possibly relevant, or non-relevant; 3) an order, e.g. k, meaning that d_j is ranked in the k-th position of the ordering of all documents when q_i is considered; 4) a score, e.g. sim(q_i, d_j), specifying the degree of relevance between q_i and d_j. For each instance (q_i, d_j), a feature extractor produces a vector of features that describe the match between q_i and d_j. The inputs to the learning algorithm comprise the training instances, their feature vectors and the corresponding relevance judgments. The output is a ranking function f, where f(q_i, d_j) is supposed to give the true relevance judgment for q_i and d_j. During the training process, the learning algorithm attempts to learn a ranking function such that a performance measure (e.g. MAP, error rate, NDCG, etc.) with respect to the output relevance judgments is optimized. In the test phase, the learned ranking function is applied to determine the relevance between each document d_i in D and a new query q. Clearly, factors such as the form of the training instances, the form of the desired output, and the performance measure will lead to different designs of learning-to-rank algorithms for IR.
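To make the framework concrete, here is a minimal Python sketch: hand-made feature vectors for the (query, document) pairs of one query, graded relevance judgments, and a linear ranking function f(q, d) = w · x trained with a simple pairwise perceptron update. The features, judgments and update rule are invented for illustration and are not taken from the cited papers.

```python
# Minimal sketch of the learning-to-rank pipeline: training instances are
# (feature vector, relevance judgment) pairs for one query; the learned
# ranking function is a linear score f(x) = w . x. All data is invented.

instances = [
    ([0.9, 0.7], 2),  # definitely relevant
    ([0.6, 0.4], 1),  # possibly relevant
    ([0.1, 0.3], 0),  # non-relevant
]

w = [0.0, 0.0]  # weight vector of the ranking function

def f(x):
    """Score a (query, document) feature vector with the current weights."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Pairwise perceptron: whenever a more-relevant document does not outscore
# a less-relevant one, nudge the weights toward the feature difference.
for _ in range(20):
    for xa, ya in instances:
        for xb, yb in instances:
            if ya > yb and f(xa) <= f(xb):  # misordered pair
                w = [wi + (a - b) for wi, a, b in zip(w, xa, xb)]

# Test phase: rank the documents by the learned scores.
print(sorted(instances, key=lambda inst: f(inst[0]), reverse=True))
```

The same skeleton covers the three families discussed next: a pointwise method would regress each judgment directly, while a listwise method would define its loss over the whole sorted list rather than over pairs.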

4. RANKING TECHNIQUES In learning to rank for information retrieval, a training set of queries and their associated documents (with relevance judgments) is provided. The ranking model is trained with the data in a supervised fashion, by minimizing certain loss functions. For ranking, the model is applied to new queries and sorts their associated documents. Three major approaches have been proposed, i.e., the pointwise, pairwise and listwise approaches to learning to rank. Each approach has its pros and cons [3]. The different approaches are:

4.1 Pointwise Approach The pointwise approach solves the problem of ranking by transforming it to regression, classification or ordinal regression. E.g. Pranking, Ranking with Large Margin Principles, etc.

4.2 Pairwise Approach The pairwise approach transforms ranking to classification on document pairs. It takes pairs of documents and their relative preferences as training instances and attempts to learn to classify each document pair as correctly ranked or incorrectly ranked. E.g. Ranking SVM, RankBoost, RankNet and many more.

4.3 Listwise Approach The listwise approach tackles the ranking problem directly, by adopting listwise loss functions or optimizing IR evaluation measures. It treats the list of documents associated with the same query as one learning instance, so that rank (position) and query-level information can be exploited. It takes the document collection with respect to a query as input space, {x_1(q), ..., x_{M(q)}(q)} ⊆ (R^T)^{M(q)}, and produces a permutation of these documents as output space. E.g. ListNet, RankGP.

The pointwise approach is the simplest technique, with O(n) complexity, and can reuse existing theories and algorithms for regression and classification. But the problem is that it is not obvious how to directly compare two documents for the same query. The pairwise technique solves this difficulty: an instance is one document pair, meaning that for a given query two returned documents are taken into consideration, so that the two documents can be compared and it is easy to decide their relative positions. This method also has some problems. It ignores the fact that ranking is a prediction task on a list of objects, and it formalizes learning to rank as classification: in learning it collects document pairs from the ranking lists, assigns each pair a label representing the relative relevance of the two documents, trains a classification model with the labeled data, and uses that model in ranking. The objective of learning is thus formalized as minimizing errors in the classification of document pairs, rather than minimizing errors in the ranking of documents, and its complexity is O(n²). Finally, the listwise approach takes ranked lists of objects as instances and trains a ranking function through the minimization of a listwise loss function defined on the predicted list and the ground-truth list. Figure 2 summarizes the comparison.

Figure 2: Comparative analysis of the three approaches.

Criterion | Pointwise Approach | Pairwise Approach | Listwise Approach
No. of instances | Equal to the number of training elements | Equal to half of the training elements (document pairs) | The whole list is one training instance
Implementation complexity | O(n) | O(n²) | More complex and difficult to implement
Training time | More | Less | Depends on the surrogate loss used
Characteristic | More suitable to ordinal regression | More suitable to learning to rank | More suitable to learning to rank
Technique | Transforms ranking to regression, classification or ordinal regression | Transforms ranking to pairwise classification | Straightforwardly represents the learning-to-rank problem

5. CONCLUSION In this article, the task of learning to rank for information retrieval was described. An effective ranking function can be generated by learning a function from training data using various approaches. The comparative analysis of these approaches shows that the listwise approach is

better than the other two, although it is a little more complex to implement. REFERENCES
[1] Christopher D. Manning, Prabhakar Raghavan, Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008.
[2] Jen-Yuan Yeh, Jung-Yi Lin, Hao-Ren Ke, Wei-Pang Yang, Learning to Rank for Information Retrieval Using Genetic Programming, in Proceedings of the SIGIR 2007 Workshop on Learning to Rank for Information Retrieval, Amsterdam, The Netherlands, July 2007.
[3] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, H. Li, Learning to Rank: from Pairwise Approach to Listwise Approach, in Proceedings of the 24th Annual International Conference on Machine Learning, pp. 129-136, 2007.


AODV WITH SUFFICIENT BANDWIDTH AWARE ROUTING PROTOCOL


Sanjeev K Prasad
Assistant Professor, MCA Department E-mail: sanjeeevkps@rediffmail.com

ABSTRACT
Congestion is a major problem in mobile ad hoc networks (MANETs): it causes long delays and significant loss of data packets, and increases routing overhead and battery power consumption. The shortest-path route is seldom the optimal route, especially when it traverses a congested area of the mobile network. This paper proposes a novel AODV (Ad-hoc On-demand Distance Vector) with Sufficient Bandwidth Aware (AODV+SBA) routing protocol which significantly improves the performance of on-demand routing protocols by discovering better routes to avoid congestion and by reducing excessive routing overhead.

1. INTRODUCTION A mobile ad hoc network (MANET) is a mobile wireless network that is formed spontaneously. It is a collection of autonomous mobile computing nodes that communicate with each other over packet radio without using any existing network infrastructure, and is thus self-creating, self-organizing, and battery-powered. Unlike traditional wireless networks, communication in such a decentralized network is typically multi-hop, with the nodes using each other as relay routers without any fixed infrastructure. However, multi-hop routing, random movement of mobile nodes and other features unique to MANETs lead to enormous control overhead for route discovery and maintenance. In some scenarios, the routing maintenance overhead may consume so much resource that it seriously compromises long-term efficiency. Furthermore, compared with traditional networks, MANETs suffer from resource constraints in energy, computational capacity and bandwidth. All of these make routing in a MANET a very challenging problem. To address the routing challenge in MANETs [2], many approaches have been proposed in the literature; based on the routing mechanisms for traditional networks, the proactive approaches attempt to maintain routing information for each node in the network at all times. In this paper, we propose a new improved version of Ad hoc On-demand Distance Vector (AODV) [1] that uses a lightweight mechanism, based on information acquired from the MAC layer, to determine network congestion and improve algorithm performance. This algorithm, which we call AODV+SBA, uses the concept of congestion avoidance, prohibiting new routes from bringing additional traffic into a congested area.

We consider the IEEE 802.11 DCF mode [3], since it is the most widely used wireless LAN standard. By using wireless-medium information from the MAC layer, AODV+SBA prevents the discovery of routes over which it is undesirable to carry additional data and routing traffic, because the wireless medium over those hops is already very busy. The simulation results show a significant improvement in network performance and stability, as well as a noticeable increase in data delivery ratio and decrease in data packet delay and routing overhead, especially in stressful network situations. 2. CHANNEL FREE TIME Congested-area defense is a lightweight method to improve network performance and stability. Congestion in a MANET causes long delays, a high packet loss rate, and high routing overhead incurred from frequent rerouting. Node movement and media sharing produce dynamic bandwidth in areas with a high density of nodes: all nodes in such an area share the channel, which increases the chance of congestion and instability, while the high priority of routing packets makes congestion worse. Therefore, the shortest route is seldom the best route when packets traverse a congested area. The residual bandwidth is used to protect the congested area: if the area has sufficient bandwidth to accept new data traffic without affecting current communication, the new traffic can traverse it; otherwise, the new traffic is prohibited, so no additional traffic enters the congested area to worsen the situation. So far, however, calculating the residual bandwidth under IEEE 802.11 remains a challenging problem, since the bandwidth is subject to interference and shared among neighbors, and each node has no knowledge of the traffic status of its neighbors. The available network capacity can instead be estimated by calculating the Channel Free Time (CFT). The status flags in IEEE 802.11 [4] can determine the free and busy times. Fig. 1 illustrates the protocol timing of the IEEE 802.11 RTS/CTS exchange. The sender is busy from sending RTS until ACK is received; likewise, the receiver is busy from receiving RTS until ACK is successfully sent. Other nodes receive the RTS and/or CTS, which specify the Virtual Carrier Sense or Network Allocation Vector (NAV) announcing the channel busy time. The DIFS, SIFS and backoff scheme represent overhead which must be accounted for in each data transmission. This overhead


makes it impossible in a distributed MAC competition scheme to fully use the available bandwidth for data transmission. The available period is the idle time, or CFT. Under IEEE 802.11 with RTS/CTS, the channel is detected as free when the following three requirements are met:
* the NAV value is less than the current time;
* the receiving state is idle;
* the sending state is idle.
Conversely, the MAC declares the channel busy when one of the following occurs:
* the NAV is set to a new value;
* the receive state changes from idle to any other state;
* the send state changes from idle to any other state.
A sketch of a CFT estimate built on these conditions follows below.
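The following Python sketch shows one way a node might turn the free/busy conditions above into a CFT estimate over a measurement window. The sampling interface is hypothetical (a real implementation would read the NAV and transceiver states from the 802.11 MAC driver), and this is an illustration rather than the paper's actual algorithm.

```python
# Sketch of Channel Free Time estimation from the three idle conditions
# above, sampled over a measurement window. The sampling interface is
# hypothetical; real code would query NAV and radio state from the MAC.

def channel_is_free(nav_expiry, now, receiving, sending):
    """Free only when the NAV has expired and both radio states are idle."""
    return nav_expiry < now and not receiving and not sending

def estimate_cft(samples, window):
    """Estimate seconds of free channel time in `window` seconds.
    `samples` is a list of (nav_expiry, time, receiving, sending) tuples
    taken at regular intervals across the window."""
    free = sum(1 for s in samples if channel_is_free(*s))
    return (free / len(samples)) * window

# Example: 8 equally spaced samples over a 1-second window; the radio was
# receiving on two of them, so the estimate is 0.75 s of free time.
samples = [(0.0, 0.1 * i, i % 4 == 0, False) for i in range(1, 9)]
print(estimate_cft(samples, 1.0))
```

A route-discovery extension such as AODV+SBA could then refuse to forward route requests whenever the estimated CFT falls below the bandwidth the new flow would require.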

3. CONCLUSIONS This paper has presented an AODV with Sufficient Bandwidth Aware (AODV+SBA) routing protocol. Several contributions have been made. First, it significantly increases network performance and stability when the network is heavily loaded. Second, it establishes better routes by avoiding congested areas. Third, it reduces routing overhead as well as battery power consumption, enhancing network lifetime. Finally, it preserves compatibility with the well-known AODV routing protocol without modification of the original routing packets and internal processes. REFERENCES
[1] Perkins, C. E., Belding-Royer, E. M., and Das, S. 2003. Ad hoc on-demand distance vector (AODV) routing. RFC 3561.
[2] Johnson, D., Hu, Y. and Maltz, D. 2007. The dynamic source routing protocol for mobile ad hoc networks for IPv4. RFC 4728.
[3] Broch, J., Maltz, D., Johnson, D., Hu, Y. and Jetcheva, J. 1998. A performance comparison of multi-hop wireless ad hoc network routing protocols. Proc. ACM/IEEE MOBICOM, 85-97.
[4] Lee, S. J. and Gerla, M. 2001. Dynamic load-aware routing in ad hoc networks. Proc. IEEE ICC, 3206-3210.



THE GOOGLE CHROME OPERATING SYSTEM


Ankita Sen
MCA Department, Vth Sem. E-mail: 15ankitasen@gmail.com

ABSTRACT


Google's blog announces a natural extension of the Chrome project: an operating system for netbooks. Google Chrome OS is an open-source, lightweight operating system that will initially be targeted at netbooks but should soon be able to power full-size PCs. Later this year its source code will be released, and netbooks running Google Chrome OS will be available to consumers in the second half of 2010. The software architecture is simple: Google Chrome running within a new windowing system on top of a Linux kernel.

3. DETAILED FEATURES 3.1 Speed The first thing Google has done is "take a pick axe to the stuff that happens when you boot up" and cut out all the stuff deemed unnecessary [3] (such as checking whether or not you have a floppy drive), resulting in a boot time of around 10 seconds and letting you get connected quicker than ever. 3.2 Security Security is something very easy to claim and often over-exaggerated, but Google actually has a pretty good reason for the claim: rather than storing things locally, everything (files, applications, etc.) will be saved in the cloud, which means Google can run daily security checks to ensure nothing is out of sync, and can fix threats and problems as they happen. 3.3 Simplicity Chrome OS is essentially a glorified version of the Google Chrome browser, so you can be fairly certain that if you're capable of using the browser then mastering the OS will be a doddle, especially as there will be fewer features to play around with and fewer things to go wrong. 3.4 User interface Design goals for Google Chrome OS's user interface include using minimal screen space by combining applications and standard Web pages into a single tab strip, rather than separating the two. Designers are considering a reduced window management scheme that would operate only in full-screen mode. Secondary tasks would be handled with "panels": floating windows that dock to the bottom of the screen for tasks like chat and music players. Split screens are also under consideration for viewing two pieces of content side-by-side. "Google Chrome OS will follow the Chrome browser's practice of leveraging HTML5's offline modes, background processing, and notification" [4]. Designers propose using search and pinned tabs as a way to quickly locate and access applications. 3.5 Remote application access Chrome OS will access remote applications through a technology unofficially called "Chromoting" [4], which would resemble Microsoft's Remote Desktop Connection.

1. INTRODUCTION After two decades of Windows' dominance, a new operating system could enter the fray later this year. Google announced the Chrome OS, not to be confused with the Chrome browser, about a year ago. Compared to Windows, it's expected to be leaner, with less code. It promises to speed up start-up times when you turn on your PC as well as when connecting to the Web. The new operating system fits Google's Internet-centric vision of computing [1]. Google believes that software delivered over the Web will play an increasingly central role, replacing software programs that run on the desktop. In that world, applications run directly inside an Internet browser rather than atop an operating system, the standard software that controls most of the operations of a PC. That vision challenges not only Microsoft's lucrative Windows business but also its applications business, which is built largely on selling software that runs on PCs. Chrome OS will have a minimalist user interface, leaving most space on the screen to applications. 2. SALIENT FEATURES
* The OS will be open source and free.
* A secure OS without virus and malware issues.
* The OS will focus on speed and simplicity: it will boot quickly and have a minimal interface, ideal for people who spend the majority of their time on the web.
* Chrome OS will run on both ARM and x86-based chips and is designed for netbooks.
* The architecture is Chrome running within a new windowing system on top of a Linux kernel.

3.6 Integrated media player


Google will integrate a media player into both Chrome OS and the Chrome browser, enabling users to play back MP3s, view JPEGs, and handle other multimedia files while offline [5]. 3.7 Printing Google plans to create a service called Google Cloud Print, which will help any application on any device print to any printer. This method of printing does not require any drivers and is therefore suitable for printing from Google Chrome OS [6]. 4. CLOUD-COMPUTING-BASED APPROACH Google is taking a cloud-based approach in its operating system strategy. With traditional operating systems, the OS works with your computer's processor, memory and hard drive (or other storage medium) to execute commands. Google is looking to move those tasks from your computer to a network of Web servers: instead of saving a document to the hard drive, you save it to your Google account, and the actual information lives on a remote server. Instead of storing lots of programs on the computer, you access Web applications. This also means the computer doesn't need to be powerful or have a lot of storage capacity; it just needs to be able to access the Internet.

5. CONCLUSION Now, finally, even the tech purists who earlier complained and brought up mundane issues about hardware drivers, memory and processor management, and other red herrings can see the light at the end of the tunnel. Forget the netbooks, which Google is targeting initially: we'll see PCs of all types being sold by the major manufacturers as soon as Google gets this out of beta next year. Microsoft faces a very serious competitive threat to the core of its revenues. So can Google create a stable, reliable operating system that could revolutionize the consumer computer market? REFERENCES
[1] Womack, Brian (2009-07-08). "Google to Challenge Microsoft With Operating System". Bloomberg.com. http://www.bloomberg.com/apps/news?pid=20601087&sid=aTd2k.YdQZ.Y
[2] Sengupta, Caesar (2009-11-19). "Releasing the Chromium OS open source project". Official Google Blog. Google, Inc. http://googleblog.blogspot.com/2009/11/releasing-chromium-os-opensource.html
[3] Helft, Miguel (November 19, 2009). "Google Offers Peek at Operating System, a Potential Challenge to Windows". New York Times. http://www.nytimes.com/2009/11/20/technology/companies/20chrome.html
[4] "The Chromium Projects: User Experience". Google. http://www.chromium.org/chromium-os/user-experience
[5] Metz, Cade (June 9, 2010). "Google morphs Chrome OS into netbook thin client". The Register. http://www.theregister.co.uk/2010/06/09/google_to_include_remote_access_in_chrome_os/
[6] Jazayeri, Mike (April 15, 2010). "A New Approach to Printing". The Chromium Blog. Google Inc.


AN INTELLECTUAL PROPERTY RIGHT OF SOFTWARE CODES


B. K. Sharma
Assistant Professor, MCA Department E-mail: bksharma888@yahoo.com

ABSTRACT
Today, securing the Intellectual Property Rights (IPR) of software codes is a very challenging task for software developers and companies. A variety of prevention techniques, both hardware-based and software-based, have been developed for protecting intellectual property rights. But, unfortunately, no single technique is currently strong enough to protect software codes on its own. However, through a combination of techniques, software developers can better protect their software codes.

1. INTRODUCTION Software piracy is one of the main direct threats to the software industry and brings serious damage to the interests of software developers; it directly affects the revenue of software vendors. As a prevention technique, one of the most promising attempts to protect intellectual property rights is software watermarking. Software watermarking is a new research area that aims at providing copyright protection for commercial software. It is a relatively new software protection technique that appeared in the last decade, whose basic principle is to embed secret information as evidence to identify the owner and track pirated software. The technique is also used for protection and enforcement of intellectual property rights in other media such as text, digital images, digital audio and digital video. Software can be protected by two main approaches, hardware-based and software-based [2]. In the hardware-based technique, the developers or providers use an additional hardware component, such as a specific CD or smart card, to execute the software; it is impossible to execute the software without the presence of the trusted hardware component. In the software-based technique, developers have used registration codes, license files, shelling of the codes and many other methods which protect the software merely via the software itself. The most common implementation technique is to put the registration on the client: it requires a legal token, such as a license key, a license file or an activation code, to give the user permission to use the software. Software codes are copied by most people for the following reasons:
* Software is intangible or non-exclusive.
* Everyone does it.
* It is very easy to copy software codes.
* It does not harm anyone.
* The low quality of software.
* Software is expensive.
* The risk is minimal.

2. TECHNIQUES OF COPYRIGHT PROTECTION Various techniques for copyright protection of software codes have been defined. These techniques are categorized in two ways: static watermarks and dynamic watermarks. 2.1 Static watermarking In static watermarking the watermark is stored in the source code, either in the data section or in the code section. One kind of static watermarking is a naming convention, e.g. variable names always start with VAR or a numeral is appended at the end of the variable name [1] (see the sketch below). The Infosys company uses such conventions [5] when declaring variables: the first letter of each variable encodes its data type, e.g. integer variables start with the letter i, float variables with the letter f, double variables with the letter d, etc. [3]. Another static watermarking technique recursively applies mathematical operations on a variable, which has no effect on the execution of the program. The extraction of such watermarks does not require running the software.


2.2 Dynamic watermarking In dynamic watermarking the watermarks are generated during program execution and stored in the program's execution state. Dynamic watermarking hides the watermark in data structures built specially for embedding purposes during execution of the program. The Semblance Based Disseminated Software Watermarking (SDSW) algorithm [6] describes a watermarking technique for virtual environments like the JVM and targets Java-based applications. SDSW is designed to encode secret information which is added to the program after compilation; this encoding is achieved by adding dummy instructions which are hard to identify and replace. SDSW is divided into four steps, briefly defined as follows:
(a) Watermark Encoding: watermark encoding for the JVM works by manipulating the classes of the program. Every class is represented by a smaller representation, calculated by hashing or by manipulating the local variables or instructions of methods.
(b) Dictionary Mapping: dictionary mapping maps the set of possible dummy instructions or variables which are inserted in the program to encode the watermark.
(c) Instruction Embedding: instruction embedding covers the various techniques for watermarking, such as watermarking the whole class, every method, or selected methods.
(d) Watermark Recognizer: the watermark recognizer extracts the watermark by tracing the dummy instructions. The dummy instructions are scrutinized by scanning the program, and each time an instruction corresponds to one in the dictionary its corresponding binary is recorded. The recorded binaries are used to reveal the key that unmasks the legitimate buyer.

3. THREAT ANALYSIS A watermark is considered robust if it withstands various attacks and distortion attempts. Attacks against watermarks fall into three main categories. Subtractive attacks: subtractive attacks try to remove the watermark from the software codes, which might damage some functionality or parts of the program; if the software codes still retain enough original content, the attack is considered successful. Distortion attacks: distortion attacks do not remove the watermark but might be able to damage or distort it in a way that prevents the owner from proving ownership of the software codes. Additive attacks: additive attacks insert the attacker's own watermarks into the software codes; the new watermark can either replace the original watermark or be inserted in addition to it, making it difficult to prove which watermark was inserted first.

4. CONCLUSIONS Intellectual Property Rights for software codes are used to improve the functioning of the information society, especially for the IT industry. Software source codes are among the most important assets of the information society, and thus it is important to capture, store and apply them without any piracy. Through the approaches described here we can assert the IPR of our software codes.

REFERENCES
[1] Zeeshan Pervez, Noor-ul-Qayyum, Yasir Mahmood, Hafiz Farooq Ahmad, Semblance Based Disseminated Software Watermarking Algorithm, IEEE 23rd International Symposium on CIS, 27-29 Oct. 2008, pp. 1-4.
[2] Xuesong Zhang, Fengling He, Wanli Zuo, Hash Function Based Software Watermarking, IEEE International Conference on ASEA 2008, 13-15 Dec. 2008, pp. 95-98.
[3] Siva Subramanyam Y, Deepak Ranjan Shenoy, Computer Hardware and System Software Concepts, Vol. 1, Version 1.0, March 2007.
[4] Mikko T. Siponen, Tero Vartiainen, Unauthorized Copying of Software - An Empirical Study of Reasons For and Against, SIGCAS Computers and Society, Volume 37, No. 1, June 2007, pp. 30-43.
[5] Petar Djekic, Claudia Loebbecke, Preventing Application Software Piracy: An Empirical Investigation of Technical Copy Protections, The Journal of Strategic Information Systems, Volume 16, Issue 2, June 2007, pp. 173-186.


CoLINUX
Ms. Arpna Saxena
Assistant Professor, MCA Department E-mail: arpna_sre@rediffmail.com

ABSTRACT
Cooperation is probably the last thing you think of when considering GNU/Linux and Microsoft Windows, but that's exactly what you get with the Cooperative Linux (CoLinux) kernel. CoLinux is a port of the Linux operating system that executes as a single process in the Microsoft operating system. CoLinux utilizes the rather underused concept of a Cooperative Virtual Machine (CVM), in contrast to traditional VMs, which are unprivileged and under the complete control of the host machine [1].

1. INTRODUCTION CoLinux is a port of the Linux kernel that allows it to run as an unprivileged lightweight virtual machine in kernel mode, on top of another OS kernel. It allows Linux to run under any operating system that supports loading drivers, such as Windows or Linux, after minimal porting efforts. Put another way, it is software which allows Microsoft Windows and the Linux kernel to run simultaneously, in parallel, on the same machine. The operating systems cooperate with each other by giving each other the central processing unit (CPU), as shown in Figure 1.

[Figure 1: Microsoft Windows XP controls the hardware, while Linux runs against a virtual abstraction of it.] Figure 1. Microsoft Windows and Linux cooperate with each other in CoLinux.

But before proceeding further, the question arises: what is meant by virtualization and virtual machines? 2. VIRTUALIZATION Virtualization is an over-used term. In the context of this article, I'm referring to the platform variation. Virtualizing a platform (or hardware) [1] means that the hardware is abstracted from a physical platform into a collection of logical platforms onto which operating systems can be run. In the simplest sense, this means that you can run multiple operating systems (of the same or different types) on the same hardware platform. The element of the system that provides the virtualization is commonly known as a virtual machine monitor, or hypervisor. Each operating system uses its own virtual machine that cooperates with the hypervisor to arbitrate access to the physical hardware (see Figure 2).

[Figure 2: Guest operating systems A and B each run in their own virtual machine on top of the hypervisor (virtual machine monitor), which sits on the hardware.] Figure 2. The hypervisor arbitrates access to the physical hardware.

3. COOPERATIVE LINUX 3.1 History Dan Aloni originally started the development of Cooperative Linux based on similar work with User-mode Linux. He announced the development on 25 Jan 2004, and in July 2004 he presented a paper at the Linux Symposium. The source was released under the GNU General Public License, and other developers have since contributed various patches and additions to the software. 3.2 CoLinux CoLinux is a port of the standard Linux kernel; in other words, CoLinux is the Linux kernel modified to run cooperatively with another operating system. The host operating system (Windows or Linux) maintains control of the physical resources, while the guest operating system (CoLinux) is provided with a virtual abstraction of the hardware. The host operating system must provide the means to execute a driver in the privileged ring (ring 0) and export the means to allocate memory (see Figure 3). The root file system for CoLinux is a regular file within the host operating system: to Windows it's just a regular file, but to CoLinux it's an ext3 file system that can be read and written.

17

file, but to CoLinux it's an ext3 file system that can be read and written.

ii. It also has some dependencies on external software for normal operation (windows and networking support). iii. Does not yet support 64-bit Windows or Linux (nor utilize more than 4GB memory) but a port is under active development by the community. 4. CONCLUSION While there are many virtualization schemes out there, CoLinux is novel in its approach and the capabilities that it provides. CoLinux by itself provides a virtualized Linux on top of Windows. With the addition of some other open source tools, you can support a full-fledged Linux development system complete with networking and a graphical user interface. CoLinux isn't perfect, but it's a great way to use Linux on a standard Windows desktop computer. . REFERNECES
[1] http://www.colinux.org/?section=status [2] http://colinux.wikia.com/wiki/XcoLinux [3] Dan Aloni paper presented July 2004 at Linux Symposium,Ottawa,Canada [4] Jeff Dike. User Mode Linux http://user-mode-linux.sf.net. [5] Donald E. Knuth. The Art of Computer Programming, volume 1. Addison-Wesley, Reading, Massachusetts,1997. Describes coroutines in their pure sense. [6] Richard Potter. Scrapbook for User ModeLinux. http://sbuml.sourceforge.net/.

Linux apps coLinux-daemon coLinux Single host process

Windows XP Hardware Figure 3 CoLinux executes as a process of the Host Operating System 3.3 Uses of CoLinux 1) Relatively effortless migration path from Windows In the process of switching to another OS, there is the choice between installing another computer, dual booting, or using virtualization software 2) Adding Windows machines to Linux clusters The Cooperative Linux patch is minimal and can be easily combined with others such as the MOSIX or OpenMOSIX patches that add clustering capabilities to the kernel. This work in progress allows to add Windows machines to super-computer clusters, where one illustration could tell about a secretary workstation computer that runs Cooperative Linux as a screen saverwhen the secretary goes home at the end of the day and leaves the computer unattended, the office's cluster gets more CPU cycles for free. 3) Running an otherwise-dual-booted Linux System from the other OS The Windows port of Cooperative Linux allows it to mount real disk partitions as block devices. 4) Using Linux as a Windows firewall on the same machine As a likely competitor to other out-of-the-box Windows firewalls, iptables along with a stripped-down Cooperative Linux system can potentially serve as a network firewall. 5) Problems with Colinux i. The primary disadvantage of CoLinux is that it has the ability to crash the entire machine (all cooperating operating systems) because the guest operating system runs in a privileged mode in the host kernel.


SEE THROUGH WALLS


Lalit Arora
Assistant Professor, MCA Department E-mail: l_k_a2000@yahoo.com

ABSTRACT
It is not possible to see anything behind objects that are not transparent. But Nokia has developed a new technology to see through walls and other solid objects. Here I give a brief overview of this new invention.

1. INTRODUCTION Researchers from the University of South Australia in collaboration with Nokia started working on one of their latest inventions that would make it possible for users of cell phones to see through walls.

2. TECHNOLOGY USED
The latest invention from Nokia and researchers at the University of South Australia makes use of augmented reality (AR) to overlay graphics on top of real video. Augmented reality is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by virtual, computer-generated imagery. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. In the case of augmented reality, the augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and digitally usable. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real-world view. The term augmented reality is believed to have been coined in 1990 by Thomas Caudell, an employee of Boeing at the time [1].

The team has categorized the AR system into three types: 1) X-ray Vision, 2) Melt Vision and 3) Distort Vision. According to Dr Christian Sandor, Director of the Magic Vision Lab at UniSA, users prefer Melt Vision over X-ray Vision due to its more appealing look, in which structures appear to melt away. Distort Vision, in turn, changes the mobile video picture so that objects that cannot be seen are "bent" into the image, allowing the person to see them.

2.1 X-RAY VISION
X-ray Vision provides a view through objects, as if they were transparent. Figures 1 and 2 give an example of X-ray Vision.

Figure 1. Taking the view of the wall of a building.

Figure 2. X-ray Vision (behind the wall).

2.2 MELT VISION
In this view, objects are melted away to show what is behind them. Figures 3 and 4 show an example of this vision.

Figure 3. Taking the view of the building.

Figure 4. Melt Vision (showing behind the building).

2.3 DISTORT VISION
Distort Vision enables users to explore points of interest that are outside their field of view. It alters the image so that objects out of the line of sight can be bent into vision.

3. FUTURE SCOPE
It would be interesting to note that the researchers have also been working on an invention that would make it possible for users to see and sense virtual objects. The new technology is called Visuo-Haptic Augmented Reality, and it allows an individual to manipulate a 3D object by making use of a head-mounted screen and touch-based gadgets.

4. CONCLUSION
This technology takes computer graphics to new heights. Now we can see what our neighbors are doing and watch whatever we need to see. This technology could also be very useful for the police and security services.

REFERENCES
[1] Brian X. Chen, "If You're Not Seeing Data, You're Not Seeing," Wired Magazine, 2009.
[2] http://www.magicvisionlab.com/projects/mars
[3] http://thenextweb.com/location/2010/03/10/nokia-building-xray-phoneapp/
[4] http://techie-buzz.com/mobile-news/nokiaa-new-app-to-have-an-x-rayscanner.html
[5] http://www.advanced-intelligence.com/goggles.html
[6] http://english.pravda.ru/science/19/94/377/11797_phenomenon.html
[7] www.theinternetpatrol.com/x-ray-vision-becomes-a-reality-camera-lenslets-you-see-through-clothes/


MULTIPLE SEQUENCE ALIGNMENT PROBLEM SOLUTION USING GENETIC ALGORITHM


Ruchi Gupta
Assistant Professor, MCA Department E-mail: 80_ruchi@gmail.com

ABSTRACT
Multiple Sequence Alignment (MSA) is an important problem in molecular biology. In general, MSA belongs to a class of hard optimization problems called combinatorial problems. One of the methods developed recently to solve this type of problem is the Genetic Algorithm. In this study we show how Genetic Algorithms can be used to solve the Multiple Sequence Alignment problem. Our results suggest that optimal or near-optimal solutions can be obtained with GAs faster than with dynamic programming methods.

1. INTRODUCTION
Multiple sequence alignment is an optimization problem that appears in many and diverse scientific fields. During the last decade, there has been increasing interest in the biosciences in methods that can efficiently solve this problem for sequences such as biological molecules, DNA and proteins. To date, most of these methods follow either the dynamic programming approach or a tree-based approach. However, multiple sequence alignment is a combinatorial problem with exponential time complexity; therefore, there is no good analytical method that can solve it efficiently. Genetic algorithms are a fairly new, non-analytical optimization technique that can give solutions to hard optimization problems that traditional techniques fail to solve. The technique is based on simulated evolution, where processes such as crossover, mutation and survival of the fittest help to evolve good solutions to a given problem. Multiple sequence alignment allows comparison of the structural relationships between sequences by simultaneously aligning multiple sequences and constructing connections between the elements in different sequences. The input set of query sequences is assumed to have an evolutionary relationship. The main problem in MSA is its exponential complexity in the size of the input data set. In this study, we show how genetic algorithms can be used to solve the problem of multiple sequence alignment.

2. MULTIPLE SEQUENCE ALIGNMENT
Multiple sequence alignment (MSA) refers to the problem of optimally aligning three or more sequences of symbols, with or without inserting gaps between the symbols. The objective is to maximize the number of matching symbols between the sequences while using as few gap insertions as possible, if gaps are permitted. This problem appears in several fields, such as molecular biology, geology, and computer science. In biology it is especially important for constructing evolutionary trees based on DNA sequences and for analyzing protein structures to help design new proteins. Multiple sequence alignment belongs to a class of optimization problems with exponential time complexity, called combinatorial problems. It exhibits O(L^N) time complexity, where L is the mean length of the sequences to be aligned and N is the number of sequences [1]. In biology, the sequences can have lengths on the order of hundreds (proteins), thousands (RNA), or millions of units (DNA). This results in unacceptably long times, even for aligning only a few sequences.

To compare different alignments, a fitness function is defined based on the number of matching symbols and the number and size of gaps. In biology, this fitness function is referred to as a cost function and is given biological meaning by using different weights for different types of matching symbols and assigning gap costs when gaps are used.

3. GENETIC ALGORITHMS
Genetic algorithms are population-based algorithms built on the concepts of biological evolution and biological genetics [6]. When applied to specific optimization problems, genetic algorithms offer the advantage of hybridization with domain-dependent heuristics. Genetic algorithms are an optimization technique formulated during the early 1970s by John Holland [5]. The technique is useful for finding optimal or near-optimal solutions to combinatorial optimization problems that traditional methods fail to solve efficiently. Genetic algorithms use chromosomes to represent possible solutions to the problem and begin with a randomly generated population; the genes in a chromosome are coded as variables of the problem. These populations produce further generations following the principles of natural selection: crossover, mutation over many generations, and survival of the fittest. Usually, fitness values can be evaluated with some utility measure or benefit that is appropriate for the problem.
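As a concrete illustration of such a fitness (cost) function, the short Python sketch below scores a gapped alignment column by column using a sum-of-pairs rule. The weights are assumptions chosen for illustration; real biological scoring uses substitution matrices such as PAM or BLOSUM and more elaborate gap penalties.

```python
# A minimal sum-of-pairs scoring sketch for a candidate alignment
# (a list of equal-length, gap-padded sequences). Weights are illustrative.

MATCH, MISMATCH, GAP = 2, -1, -2   # assumed weights

def fitness(alignment):
    """Sum-of-pairs score over all columns and all sequence pairs."""
    score = 0
    n = len(alignment)
    for col in zip(*alignment):            # iterate over alignment columns
        for i in range(n):
            for j in range(i + 1, n):
                a, b = col[i], col[j]
                if a == "-" or b == "-":
                    score += GAP
                elif a == b:
                    score += MATCH
                else:
                    score += MISMATCH
    return score

print(fitness(["ACG-T", "AC-GT", "ACGGT"]))
```

Maximizing this score rewards matching symbols while penalizing gaps, which is exactly the trade-off described above.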


The genetic algorithms approach is based on the assumption that simulating an evolutionary process in a population of potential solutions can eventually evolve good solutions. Biological terms are conveniently used to describe this process: chromosomes are the potential solutions. Every chromosome is composed of several genes, the solution parameters. Many chromosomes form a population. Successive populations are referred to as generations. Crossover is the exchange of genes (solution parameters) between two chromosomes (solutions). Mutation is the random change of one or more genes in a chromosome. Offspring are the new chromosomes created from two parent chromosomes by crossover. The genetic algorithms process starts with an initial population composed of random chromosomes, which form the first generation. Crossover is used to combine genes from the existing chromosomes and create new ones. Then, the best chromosomes are selected to form the next generation. This selection is based on a fitness function, which assigns a fitness value to every chromosome. The ones with the best fitness values survive to produce offspring for the new generation, and the process is repeated until satisfactory solutions evolve. The main advantage of genetic algorithms over other optimization methods is that there is no need to provide a particular algorithm to solve a given problem; only a fitness function to evaluate the quality of different solutions is needed. Also, since it is an implicitly parallel technique, it can be implemented very effectively on powerful parallel computers to solve exceptionally demanding large-scale problems.
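The sketch below ties this process to the alignment problem: a chromosome is a set of gap positions per sequence, and a sum-of-pairs scorer like the one in the previous sketch (repeated here) acts as the fitness function. The sequences, population size, and operators are illustrative assumptions, not the exact scheme evaluated in this study.

```python
import random

SEQS = ["ACGTACG", "ACGACG", "AGTACG"]   # toy input sequences
ALN_LEN = 9                               # fixed alignment length (assumed)
POP, GENS = 30, 40                        # population size and generations

def random_chromosome():
    """One gene per sequence: the positions where gaps are inserted."""
    return [sorted(random.sample(range(ALN_LEN), ALN_LEN - len(s)))
            for s in SEQS]

def express(chrom):
    """Turn gap positions into gap-padded strings."""
    rows = []
    for s, gaps in zip(SEQS, chrom):
        it = iter(s)
        rows.append("".join("-" if i in gaps else next(it)
                            for i in range(ALN_LEN)))
    return rows

def fitness(chrom):
    """Sum-of-pairs column score (match +2, mismatch -1, gap -2)."""
    rows, score = express(chrom), 0
    for col in zip(*rows):
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                a, b = col[i], col[j]
                score += -2 if "-" in (a, b) else (2 if a == b else -1)
    return score

def crossover(p1, p2):
    """Exchange whole genes (per-sequence gap patterns) between parents."""
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(chrom):
    """Move one gap in one sequence to a new random position."""
    c = [list(g) for g in chrom]
    i = random.randrange(len(c))
    if c[i]:
        c[i][random.randrange(len(c[i]))] = random.randrange(ALN_LEN)
        if len(set(c[i])) != len(c[i]):   # duplicate position: resample gene
            c[i] = sorted(random.sample(range(ALN_LEN), len(c[i])))
        else:
            c[i].sort()
    return c

population = [random_chromosome() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                    # survival of the fittest
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("\n".join(express(best)), "\nscore:", fitness(best))
```

Even this toy version tends to drift toward alignments that line up the shared subsequences, illustrating how selection pressure alone, with no alignment-specific algorithm, improves the population.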

4. CONCLUSION
Multiple sequence alignment is very useful in many scientific fields, including biology. However, it belongs to the combinatorial optimization problems with exponential time complexity. Genetic algorithms are a fairly new optimization technique that is effective for this type of problem. In this study we describe the genetic algorithms methodology and demonstrate how it can be implemented to produce optimal or near-optimal solutions to the Multiple Sequence Alignment problem. Two different types of alignments are considered: alignments with and without gaps. In both cases, genetic algorithms produce reasonably good solutions using only a small amount of computer resources.
REFERENCES
[1] Carrillo H. and Lipman D., "The multiple sequence alignment problem in biology," SIAM J. Appl. Math., vol. 48, no. 5, pp. 1073-1082, October 1988.
[2] Altschul S.F., "Gap costs for multiple sequence alignment," J. Theoretical Biol., vol. 138, pp. 297-309, 1989.
[3] Holland J.H., "Genetic Algorithms," Scientific American, pp. 66-72, July 1992.
[4] Sankoff D. and Kruskal J.B., eds., Time Warps, String Edits, and Macromolecules: The Theory and Practice of Sequence Comparison, Addison-Wesley, 1983.
[5] Holland J.H., Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press, 1975.


SOCIAL SOFTWARE
Sharad Kumar Mishra
MCA Department, III Sem. E-mail: sharadmishra596@gmail.com

ABSTRACT


The origins of social software (from blogs to Facebook to instant messaging to wikis) are firmly based in the information technologies of the past few decades. This research bulletin explores the genesis of some of the current social software products, helps define common characteristics, describes how the software is being used in higher education, and examines the implications for activities in colleges and universities.

1. INTRODUCTION
This article is about computer software for social interaction, not about the study of social procedures from a computer science perspective. Social software encompasses a range of software systems that allow users to interact and share data. This computer-mediated communication has become very popular with social sites like MySpace, Facebook and Bebo, media sites like Flickr and YouTube, and commercial sites like Amazon.com and eBay. Many of these applications share characteristics like open APIs, service-oriented design and the ability to upload data and media [2]. The terms Web 2.0 and (for large-business applications) Enterprise 2.0 are also used to describe this style of software. The more specific terms collaborative software and groupware are usually applied narrowly to software that enables collaborative work. Distinctions among usage of the terms "social", "trusted" and "collaborative" lie in the applications or uses, not the tools themselves, although some tools are used only rarely for collaborative work.

2. KINDS OF TOOLS FOR ONLINE COMMUNICATION
Social software applications include communication tools and interactive tools. Communication tools typically handle the capturing, storing and presentation of communication, usually written but increasingly including audio and video as well. Interactive tools handle mediated interactions between a pair or group of users. They focus on establishing and maintaining a connection among users, facilitating the mechanics of conversation and talk.

3. INSTANT MESSAGING
An instant messaging application or client allows one to communicate with another person over a network in real time, in relative privacy. Popular, consumer-oriented clients include AOL Instant Messenger, Google Talk, ICQ, Meebo, MSN Messenger, Pidgin (formerly Gaim), Skype and Yahoo! Messenger. Instant messaging software designed for use in business includes IBM Lotus Sametime, XMPP and Microsoft Messenger. One can add friends to a contact or "buddy" list by entering the person's email address or messenger ID. If the person is online, their name will typically be listed as available for chat. Clicking on their name will activate a chat window with space to write to the other person, as well as read their reply.

4. TEXT CHAT
Internet Relay Chat (IRC) and other online chat technologies allow users to join chat rooms and communicate with many people at once, publicly. Users may join a pre-existing chat room or create a new one about any topic. Once inside, you may type messages that everyone else in the room can read, as well as respond to messages from others. Often there is a steady stream of people entering and leaving. Whether you are in another person's chat room or one you've created yourself, you are generally free to invite others online to join you in that room. Instant messaging facilitates both one-to-one communication and many-to-many interaction.

5. INTERNET FORUMS
Originally modeled after the real-world paradigm of bulletin boards from the world before the internet was born, internet forums allow users to post a "topic" for others to review. Other users can view the topic and post their own comments in a linear fashion, one after the other. Most forums are public, allowing anybody to sign up at any time. A few are private, gated communities where new members must pay a small fee to join, like the Something Awful Forums. Forums can contain many different categories in a hierarchy according to topics and subtopics. Other features include the ability to post images or files or to quote another user's post with special formatting in one's own post. Forums often grow in popularity until they can boast several thousand members posting replies to tens of thousands of topics continuously. There are various standards and claimants for the market leaders of each software category. Various add-ons may be available, including translation and spelling correction software, depending on the expertise of the operators of the bulletin board. In some industry areas, the bulletin board has its own commercially successful achievements: free and paid hardcopy magazines as well as professional and amateur sites. Current successful services have combined new tools with the older newsgroup and mailing list paradigm to produce hybrids like Yahoo! Groups and Google Groups.


Also, as a service catches on, it tends to adopt characteristics and tools of other services that compete. Over time, for example, wiki user pages have become social portals for individual users and may be used in place of other portal applications.

6. WIKIS
A wiki is a web page whose content can be edited by its visitors. Examples include Wikipedia, Wiktionary, the original Portland Pattern Repository wiki, MeatballWiki, CommunityWiki and Wikisource. For more detail on free and commercially available wiki systems, see Comparison of wiki software.

7. BLOGS
Blogs, short for web logs, are like online journals for a particular person. The owner posts a message periodically, allowing others to comment. Topics often include the owner's daily life, views on politics or a particular subject important to them. Blogs mean many things to different people, ranging from "online journal" to "easily updated personal website." While these definitions are technically correct, they fail to capture the power of blogs as social software. Beyond being a simple homepage or an online diary, some blogs allow comments on the entries, thereby creating a discussion forum. They also have blogrolls (i.e. links to other blogs which the owner reads or admires) and indicate their social relationship to those other bloggers using the XFN social relationship standard. Pingback and trackback allow one blog to notify another blog, creating an inter-blog conversation. Blogs engage readers and can build a virtual community around a particular person or interest. Examples include Slashdot, LiveJournal and BlogSpot. Blogging has also become fashionable in business settings, with companies using software such as IBM Lotus Connections.

8. SOCIAL NETWORK SERVICES
Social network services allow people to come together online around shared interests, hobbies or causes. For example, some sites provide dating services where users post personal profiles, locations, ages, gender, etc., and are able to search for a partner. Other services enable business networking (Ryze, XING and LinkedIn) and social event meetups (Meetup). Some large wikis have effectively become social network services by encouraging user pages and portals. Anyone can create their own social networking service using hosted offerings like Ning, grou.ps or rSitez, or more flexible, installable software like Elgg, BuddyPress, phpFox or Concursive's ConcourseConnect.

9. SOCIAL NETWORK SEARCH ENGINES
Social network search engines are a class of search engines that use social networks to organize, prioritize or filter search results. There are two subclasses of social network search engines: those that use explicit social networks and those that use implicit social networks. Explicit social network search engines allow people to find each other according to explicitly stated social relationships such as XFN social relationships. XHTML Friends Network, for example, allows people to share their relationships on their own sites, thus forming a decentralized, distributed online social network, in contrast to the centralized social network services listed in the previous section. Implicit social network search engines allow people to filter search results based upon classes of social networks they trust, such as a shared political viewpoint [3]. This was called an epistemic filter in the 1993 "State of the Future Report" from the American Committee for the United Nations University, which predicted that this would become the dominant means of search for most users. Lacking trustworthy explicit information about such viewpoints, this type of social network search engine mines the web to infer the topology of online social networks. For example, the NewsTrove search engine infers social networks from content (sites, blogs, pods and feeds) by examining, among other things, subject matter, link relationships and grammatical features.

10. CONCLUSION
It is clear that the use of social software in ARL member libraries has rapidly increased, from two institutions in 1996 to 63 institutions in early 2008. The range of social software applications has also diversified in that time span, from chat and instant messaging in 1996 to ten or more types in 2008. Accompanying this diversification, social software has also been streamlined to some extent. A decade ago, libraries implemented one, or perhaps two, applications. Today, libraries implement multiple applications as part of larger integrated tools, e.g., subject guides that are part wiki, part blog, part instant messaging, part social tagging, etc., and social networking sites that are part widget, part media sharing application, part instant messaging, etc. While the data in this survey offers a snapshot of the past, it also offers a glimpse of the future. Whatever the future holds, it is certain that ARL libraries will continue to offer and expand upon the social software offerings of today.

REFERENCES
[1] Yadav, Sid, "Facebook - The Complete Biography," Mashable: Social Networking News, August 25, 2006, http://mashable.com/2006/08/25/facebook-profile/, viewed July 18, 2008.
[2] Ronan, Jana, Chat Reference, Libraries Unlimited, 2003, p. 2.
[3] The WELL, Salon Media Group Inc., 101 Spear Street, Suite 203, San Francisco, CA 94105, http://www.well.com/aboutwell.html, viewed July 18, 2008.
[4] "Social Networking Timeline," Searcher 15, no. 7 (July 2007): 38.


TELEPORTATION
Vinakar Singh
MCA Department, III Sem. E-mail: vinakar_singh@gmail.com

ABSTRACT
In about 300 years, at the apex of human technology, teleportation will be possible. If you doubt that, read this article and you will understand. Scientists already know how to do it; they just don't have the technology yet.

1. INTRODUCTION
Teleportation is the transfer of matter from one point to another without the matter traversing the intervening space in material form. The word "teleportation" was coined in 1931 by the American writer Charles Fort, who joined the Greek prefix tele- (meaning distant) to the Latin verb portare (meaning to carry). The teleportation of humans, animals, inanimate objects, etc. has been depicted in many works of science fiction, but currently the teleportation of living or inanimate objects is considered to be beyond the capabilities of modern science.

2. TECHNICAL ASPECT
It's not the kind of teleportation you see on TV, where people teleport anywhere with mind powers. The actual teleportation device is a big scanner that people step into, where every one of their atoms is scanned: its type (H, C, O, etc.) and its exact position. A computer sends the information to the destination computer, which builds the person from, say, a barrel of oil by changing the form and position of its atoms. The original object is scanned in such a way as to extract all the information from it; this information is then transmitted to the receiving location and used to construct the replica, not necessarily from the actual material of the original, but perhaps from atoms of the same kinds, arranged in exactly the same pattern as the original. A teleportation machine would be like a fax machine, except that it would work on 3-dimensional objects as well as documents, and it would produce an exact copy rather than an approximate facsimile. A few science fiction writers consider teleporters that preserve the original, and the plot gets complicated when the original and teleported versions of the same person meet; but the more common kind of teleporter destroys the original, functioning as a super transportation device, not as a perfect replicator of souls and bodies.

In the past, the idea of teleportation was not taken very seriously by scientists, because it was thought to violate the uncertainty principle of quantum mechanics, which forbids any measuring or scanning process from extracting all the information in an atom or other object [1]. According to the uncertainty principle, the more accurately an object is scanned, the more it is disturbed by the scanning process, until one reaches a point where the object's original state has been completely disrupted, still without having extracted enough information to make a perfect replica. This sounds like a solid argument against teleportation: if one cannot extract enough information from an object to make a perfect copy, it would seem that a perfect copy cannot be made. But in 1993 a team of six scientists found a way to make an end run around this logic, using a celebrated and paradoxical feature of quantum mechanics known as the Einstein-Podolsky-Rosen effect; their scheme, however, would destroy the original in the process of scanning it.

3. PROBLEMS IN PRESENT TECHNOLOGY
Scientists can only teleport a few atoms these days, and in any case there would be a problem. The person who steps into the scanner has to be scanned and disintegrated very quickly; otherwise the person would die in the middle of the process. And even if it is quick, after the disintegration the person is dead, and another person exactly like the one that died appears at the destination with the exact same memories. It still wouldn't be the same person, and she wouldn't even be able to tell whether she is the original or not. The good thing is that you would be able to teleport objects!

4. CONCLUSION
Whatever people might think about teleportation, causality says that it is impossible to observe an effect before its cause: you cannot see something before it happens, and you cannot receive information before it has been sent.


REFERENCES
[1] Darling, David (2005). Teleportation: The Impossible Leap.
[2] Dash, Mike (2000). Borderlands: The Ultimate Exploration of the Unknown. Overlook Press. ISBN 0-87951-724-7.
[3] Fort, Charles (1941). The Books of Charles Fort.
[4] Graham, Danielle (2006, January 20). American Institute of Physics Conference Proceedings, 813, 1256.

