1. INTRODUCTION
People typically make friends with others who live or work close to them, such as neighbors or
colleagues. We call friends made in this traditional fashion G-friends, which stands for
geographical location-based friends, because they are influenced by the geographical distance between
each other. With the rapid advances in social networks, services such as Facebook, Twitter and
Google+ have provided us with revolutionary ways of making friends. According to Facebook statistics, a
user has an average of 130 friends, perhaps more than at any other time in history.
One challenge with existing social networking services is how to recommend a good friend to a user.
Most of them rely on pre-existing user relationships to pick friend candidates. For example, Facebook
relies on a social link analysis among those who already share common friends and recommends
symmetrical users as potential friends. Unfortunately, this approach may not be the most appropriate
based on recent sociology findings. According to these studies, the rules that group people together
include:
1) Habits or life style;
2) Attitudes;
3) Tastes;
4) Moral standards;
5) Economic level; and
6) People they already know.
Although rule #1 is probably the most intuitive, it is not widely used because users' life styles are difficult, if not
impossible, to capture through web actions. Rather, life styles are usually closely correlated with daily
routines and activities. Therefore, if we could gather information on users' daily routines and
activities, we could exploit it and recommend friends to people based on their similar life styles. This
recommendation mechanism can be deployed as a standalone app on smartphones or as an add-on to
existing social network frameworks. In both cases, Friendbook can help mobile phone users find
friends, either among strangers or within a certain group, as long as they share similar life styles.
In our everyday lives, we may have hundreds of activities,
which form meaningful sequences that shape our lives. In this paper, we use the word "activity" to
refer specifically to actions taken on the order of seconds, such as sitting, walking, or typing,
while we use the phrase "life style" to refer to higher-level abstractions of daily lives, such as office
work or shopping. For instance, the shopping life style mostly consists of the walking activity,
but may also contain the standing or sitting activities.
Our proposed solution is also motivated by the recent advances in smartphones, which have become
more and more popular in people's lives.
Venkatachalam, Nellore
QCET,
1.1 Motivation:
We call friends made in this traditional fashion G-friends, which stands for
geographical location-based friends, because they are influenced by the geographical distance between
each other. With the rapid advances in social networks, services such as Facebook, Twitter and
Google+ have provided us with revolutionary ways of making friends. According to Facebook statistics, a
user has an average of 130 friends, perhaps more than at any other time in history.
Most friend suggestion mechanisms rely on pre-existing user relationships to pick friend
candidates. Facebook relies on a social link analysis among those who already share common
friends and recommends symmetrical users as potential friends. Unfortunately, this approach may
not be the most appropriate based on recent sociology findings.
According to these studies, the rules that group people together include: 1) habits or life style;
2) attitudes; 3) tastes; 4) moral standards; 5) economic level; and 6) people they already know.
In our everyday lives, we may have hundreds of activities, which form meaningful sequences
that shape our lives. We use the word "activity" to refer specifically to actions taken on the order of
seconds, such as sitting, walking, or typing, while we use the phrase "life style" to refer to
higher-level abstractions of daily lives, such as office work or shopping. For instance, the shopping life
style mostly consists of the walking activity, but may also contain the standing or sitting
activities.
1.2 PROBLEM DEFINITION
Prior work has studied the problem of link recommendation in weblogs and similar social networks, and proposed an
approach that combines collaborative recommendation using the link structure of a social network with
content-based recommendation using mutually declared interests. Gou et al. [15] proposed a visual
system, SFViz, to support users in exploring and finding friends interactively in the context of interest,
and reported a case study using the system to explore the recommendation of friends based on people's
interests.
To the best of our knowledge, Friendbook is the first friend recommendation system to exploit
a user's life style information discovered from smartphone sensors.
Inspired by achievements in the field of text mining, we model the daily lives of users as life
documents and use a probabilistic topic model to extract users' life style information.
We propose a unique similarity metric to characterize the similarity of users in terms of life
styles, and then construct a friend-matching graph to recommend friends to users based on their
life styles.
We integrate a linear feedback mechanism that exploits users' feedback to improve
recommendation accuracy.
We conduct both small-scale experiments and large-scale simulations to evaluate the
performance of our system. Experimental results demonstrate its effectiveness.
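For illustration, the similarity metric above can be sketched as a cosine similarity between two users' life-style vectors (the vectors here stand in for hypothetical topic-model outputs; the Friendbook paper defines its own metric, so this is only a simplified stand-in):

```java
public class LifeStyleSimilarity {
    // Cosine similarity between two users' life-style vectors, where each
    // component is the weight of one life style (e.g., a topic proportion).
    public static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        if (na == 0 || nb == 0) return 0;   // no observed life styles
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

Identical life-style distributions score 1, disjoint ones score 0, which is the property a friend-matching graph needs for edge weights.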
2. LITERATURE SURVEY
2.1 Introduction:
EasyTracker: Automatic Transit Tracking, Mapping, and Arrival Time Prediction Using Smartphones
In order to facilitate the introduction of transit tracking and arrival time prediction in smaller
transit agencies, we investigate an automatic, smartphone-based system which we call EasyTracker.
To use EasyTracker, a transit agency must obtain smartphones, install an app, and place a phone in
each transit vehicle. Our goal is to require no other input. This level of automation is possible through
a set of algorithms that use GPS traces collected from instrumented transit vehicles to determine routes
served, locate stops, and infer schedules. In addition, online algorithms automatically determine the
route served by a given vehicle at a given time and predict its arrival time at upcoming stops.
We evaluate our algorithms on real datasets from two existing transit services. We demonstrate
our ability to accurately reconstruct routes and schedules, and compare our system's arrival time
prediction performance with the current state of the art for smaller transit operators: the official
schedule. Finally, we discuss our current prototype implementation and the steps required to take it
from a research prototype to a real system.
Incremental PageRank Computation on Evolving Graphs
In this paper, we propose a method to incrementally compute PageRank for a large graph that is
evolving. Our approach is quite general, and can be used to incrementally compute (on evolving
graphs) any metric that satisfies the first-order Markov property.
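As background for the incremental variant discussed above, a minimal non-incremental power-iteration PageRank can be sketched as follows (the graph, damping factor, and iteration count are illustrative choices, not taken from the cited paper):

```java
import java.util.Arrays;

public class PageRank {
    // Power-iteration PageRank on a small directed graph given as adjacency
    // lists: adj[i] lists the nodes that node i links to.
    public static double[] rank(int[][] adj, double damping, int iterations) {
        int n = adj.length;
        double[] pr = new double[n];
        Arrays.fill(pr, 1.0 / n);                    // uniform starting vector
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n);  // teleportation term
            for (int i = 0; i < n; i++) {
                if (adj[i].length == 0) {            // dangling node: spread evenly
                    for (int j = 0; j < n; j++) next[j] += damping * pr[i] / n;
                } else {
                    for (int j : adj[i]) next[j] += damping * pr[i] / adj[i].length;
                }
            }
            pr = next;
        }
        return pr;
    }
}
```

Because each iteration depends only on the previous rank vector, the metric satisfies the first-order Markov property mentioned above, which is what makes incremental recomputation on the changed portion of an evolving graph possible.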
Probabilistic Mining of Socio-Geographic Routines From Mobile Phone Data
There is relatively little work on the investigation of large-scale human data in terms of
multimodality for human activity discovery. In this paper, we suggest that human interaction data, or
human proximity, obtained by mobile phone Bluetooth sensor data, can be integrated with human
location data, obtained by mobile cell tower connections, to mine meaningful details about human
activities from large and noisy datasets.
Most friend suggestion mechanisms rely on pre-existing user relationships to pick friend
candidates. For example, Facebook relies on a social link analysis among those who already share
common friends and recommends symmetrical users as potential friends. The rules that group people
together include:
1) Habits or life style;
2) Attitudes;
3) Tastes;
4) Moral standards;
5) Economic level; and
6) People they already know.
Apparently, rule #3 and rule #6 are the mainstream factors considered by existing recommendation
systems.
We propose a unique similarity metric to characterize the similarity of life styles between users,
and calculate users' impact in terms of life styles with a friend-matching graph.
We integrate a linear feedback mechanism that exploits users' feedback to improve
recommendation accuracy.
2.5 ADVANTAGES OF PROPOSED SYSTEM:
Friendbook is the first friend recommendation system to exploit a user's life style information.
It uses a probabilistic topic model to extract users' life style information.
3. ANALYSIS
3.1 Introduction:
In our everyday lives, we may have hundreds of activities, which form meaningful sequences
that shape our lives. In this paper, we use the word "activity" to refer specifically to actions taken
on the order of seconds, such as sitting, walking, or typing, while we use the phrase "life style" to
refer to higher-level abstractions of daily lives, such as office work or shopping. For instance, the
shopping life style mostly consists of the walking activity, but may also contain the standing or
sitting activities.
To model daily lives properly, we draw an analogy between people's daily lives and
documents, as shown in Figure 1. Previous research on probabilistic topic models in text mining has
treated documents as mixtures of topics, and topics as mixtures of words. Inspired by this, we can
similarly treat our daily lives (or life documents) as mixtures of life styles (or topics), and each life style
as a mixture of activities (or words). In essence, we represent daily lives with life
documents, whose semantic meanings are reflected through their topics, which are life styles in our
study.
Just as words serve as the basis of documents, people's activities naturally serve as the
primitive vocabulary of these life documents. Our proposed solution is also motivated by the recent
advances in smartphones, which have become more and more popular in people's lives. These
smartphones (e.g., iPhone or Android-based smartphones) are equipped with a rich set of embedded
sensors, such as GPS, accelerometer, microphone, gyroscope, and camera.
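The document analogy above can be sketched as a bag of activities: the code below (an illustrative sketch; the activity names and the recognizer that would produce them are hypothetical) turns a day's activity sequence into the word counts that a topic model would consume as one "life document".

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LifeDocument {
    // Build a bag-of-words representation of a day's activity sequence.
    // Each recognized activity (e.g., "walking") becomes a word; the life
    // document is the resulting activity-frequency map.
    public static Map<String, Integer> bagOfActivities(String[] activities) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String a : activities) {
            counts.merge(a, 1, Integer::sum);   // increment the word count
        }
        return counts;
    }
}
```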
Thus, a smartphone is no longer simply a communication device, but also a powerful
environment-sensing platform from which we can extract rich context- and content-aware
information. From this perspective, smartphones serve as the ideal platform for sensing daily routines
from which people's life styles could be discovered. In spite of the powerful sensing capabilities of
smartphones, there are still multiple challenges in extracting users' life styles and recommending
potential friends based on their similarities.
Objectives
1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and to show
the correct direction to the management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data.
The goal of input design is to make data entry easier and error-free. The data entry screen is
designed in such a way that all data manipulations can be performed. It also provides record viewing
facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens.
Appropriate messages are provided as and when needed, so that the user is never left in confusion.
Thus the objective of input design is to create an input layout that is easy to follow.
Output Design
A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, results of processing are communicated to the users and to other
systems through outputs. In output design, it is determined how the information is to be displayed for
immediate need, as well as the hard copy output. It is the most important and direct source of information
for the user. Efficient and intelligent output design improves the system's relationship with the user and
helps decision-making.
1. Designing computer output should proceed in an organized, well thought out manner; the right
output must be developed while ensuring that each output element is designed so that people will find
the system easy and effective to use. When analysts design computer output, they should identify
the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create the document, report, or other format that contains information produced by the system.
The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status, or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
It is hard to change the design once it has been finalized, and on the other hand, producing a
design which does not cater to the needs of the client is of no use. The requirement
specification for any system can be broadly stated as given below:
The system should be able to interface with the existing system.
The system should be accurate.
The system should be better than the existing system.
The existing system is completely dependent on the user to perform all the duties.
Operating System : Windows 95/98/2000/XP/7
Web Server : Apache Tomcat 7.0/8.x
When running under a security manager, user requests may fail with a security exception.
The log message level for invalid URL parameters has been reduced from WARNING to
INFO.
Hanging Servlet 3 asynchronous requests, when using the APR based AJP connector, have
been fixed.
Apache Tomcat, often referred to as Tomcat, is an open-source web server developed by the
Apache Software Foundation (ASF). Tomcat implements several Java EE specifications,
including Java Servlet, JavaServer Pages (JSP), Java EL, and WebSocket, and provides a
"pure Java" HTTP web server environment for Java code to run in.
Tomcat is developed and maintained by an open community of developers under the auspices
of the Apache Software Foundation, and is released as open-source software under the
Apache License 2.0.
Davidson had initially hoped that the project would become open sourced and, since many
open source projects had O'Reilly books associated with them featuring an animal on the cover,
he wanted to name the project after an animal. He came up with Tomcat since he reasoned the
animal represented something that could fend for itself. Although the tomcat was already in use
for another O'Reilly title,[8] his wish to see an animal cover eventually came true when
O'Reilly published their Tomcat book with a snow leopard on the cover in 2003.
Tomcat 7.x implements the Servlet 3.0 and JSP 2.2 specifications.[7] It requires Java version
1.6, although previous versions have run on Java 1.1 through 1.5. Versions 5 through 6 saw
improvements in garbage collection, JSP parsing, performance and scalability. Native
wrappers, known as "Tomcat Native", are available for Microsoft Windows and Unix for
platform integration.
FRONT END
Java Server Pages (JSP) is a technology that helps software developers create dynamically generated
web pages based on HTML, XML, or other document types. Released in 1999 by Sun Microsystems,
JSP is similar to PHP and ASP, but it uses the Java programming language. To deploy and run Java
Server Pages, a compatible web server with a servlet container, such as Apache Tomcat or Jetty, is
required.
JSP may be viewed as a high-level abstraction of Java servlets. JSPs are translated into servlets at
runtime; each JSP servlet is cached and re-used until the original JSP is modified.
JSP can be used independently or as the view component of a server-side model-view-controller
design, normally with JavaBeans as the model and Java servlets (or a framework such as Apache
Struts) as the controller. This is a type of Model 2 architecture.
JSP allows Java code and certain pre-defined actions to be interleaved with static web markup content,
such as HTML, with the resulting page being compiled and executed on the server to deliver a
document. The compiled pages, as well as any dependent Java libraries, contain Java byte code rather
than machine code. Like any other Java program, they must be executed within a Java virtual machine
(JVM) that interacts with the server's host operating system to provide an abstract, platform-neutral
environment.
JSPs are usually used to deliver HTML and XML documents, but through the use of OutputStream,
they can deliver other types of data as well.
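As a minimal sketch of the mechanism described above (a hypothetical page, not taken from the system in this report), a JSP interleaves static markup with Java code; the container translates the whole page into a servlet on first request:

```jsp
<%-- hello.jsp: static HTML interleaved with a Java scriptlet and expression.
     The servlet container compiles this page into a servlet on first use. --%>
<%@ page contentType="text/html" %>
<html>
  <body>
    <% String visitor = request.getParameter("name"); %>
    <p>Hello, <%= (visitor == null) ? "guest" : visitor %>!</p>
  </body>
</html>
```

The implicit `request` object is supplied by the generated servlet, which is why no declaration for it appears in the page.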
SCRIPTS:
JAVASCRIPT
JavaScript is Netscape's cross-platform, object-based scripting language for client-server applications.
JavaScript is mostly used as a client-side scripting language. This means that JavaScript code can
be written into an HTML page. The script is sent to the browser when a user requests an HTML page
with JavaScript in it. JavaScript can also be used in contexts other than a web browser: Netscape created
server-side JavaScript as a CGI language that can do roughly the same as Perl or ASP.
Fortunately, the majority of browsers can handle JavaScript nowadays, but of course there are some
browsers which do not support some bits of script.
Types of JavaScript:
a. Navigator JavaScript, also called client-side JavaScript.
b. LiveWire JavaScript, also called server-side JavaScript.
Dynamic HTML pages are created using JavaScript,
which processes user input and maintains persistent data using special objects, files, and relational
databases. The browser interprets the JavaScript statements embedded in an HTML page. Netscape
Navigator 2.0 and Internet Explorer 3.0 and later versions recognize JavaScript.
SERVER SIDE SCRIPT
JSP makes it possible to produce web sites and applications in an open and standard way. JSP is based
on Java, an object-oriented language.
MySQL
MySQL (officially pronounced "My S-Q-L" and unofficially "My Sequel") is an
open-source relational database management system (RDBMS); in July 2013, it was the world's
second most widely used RDBMS, and the most widely used open-source client-server
RDBMS. It is named after co-founder Michael Widenius's daughter, My. MySQL was owned and sponsored
by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle Corporation. For
proprietary use, several paid editions are available and offer additional functionality.
DATABASE CONNECTIVITY
JDBC
Java Database Connectivity (JDBC) is an application programming interface (API) for the
programming language Java, that defines how a client may access a database. It is part of the Java
Standard Edition platform, from Oracle Corporation. It provides methods to query and update data in a
database, and is oriented towards relational databases.
The JDBC library includes APIs for each of the tasks commonly associated with database usage:
Java Applications
Java Applets
Java Servlets
Enterprise JavaBeans (EJBs)
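A minimal JDBC sketch of the query-and-update pattern described above, assuming a hypothetical MySQL database at localhost with a `users` table (the URL, credentials, and schema are placeholders); if no driver or server is available, the method simply returns an empty list:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class UserDao {
    // Query user names via JDBC, using try-with-resources so the connection,
    // statement, and result set are always closed. The URL, credentials, and
    // table are hypothetical placeholders for illustration only.
    public static List<String> findNames(String url, String user, String pass) {
        List<String> names = new ArrayList<>();
        String sql = "SELECT name FROM users WHERE active = ?";
        try (Connection con = DriverManager.getConnection(url, user, pass);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setBoolean(1, true);                 // bind the placeholder safely
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) names.add(rs.getString("name"));
            }
        } catch (SQLException e) {
            // No suitable driver or unreachable server: fall back to an empty result.
        }
        return names;
    }
}
```

PreparedStatement placeholders are used instead of string concatenation so that user input cannot alter the SQL.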
JAVA TECHNOLOGY
INTRODUCTION
Java technology is both a programming language and a platform.
THE JAVA PROGRAMMING LANGUAGE
The Java programming language is a high-level language that can be characterized by all of the
following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
With most programming languages, you either compile or interpret a program so you can run it on your computer.
The Java programming language is unusual in that a program is both compiled and interpreted. With the
compiler, you first translate a program into an intermediate language called Java byte codes,
the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and
runs each Java byte code instruction on the computer. Compilation happens just once; interpretation happens
each time the program is executed. The following figure illustrates how this works.
Every Java interpreter, whether it is a development tool or a web browser that can run applets,
is an implementation of the Java VM. Java byte codes make "write once, run anywhere" possible. You
can compile your program into byte codes on any platform that has a Java compiler. That means that
as long as a computer has a Java VM, the same program written in the Java programming language can
run on Windows 2000, a Solaris workstation, or an iMac.
You have already been introduced to the Java VM. It is the base for the Java platform and is
ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide many
useful capabilities, for example graphical user interface (GUI) widgets. The Java API is
grouped into libraries of related classes and interfaces; these libraries are known as packages.
The next section, How is Java Technology useful?, highlights what functionality some
of the packages in the Java API provide.
The following figure illustrates a program that is running on the Java platform. As the figure
shows, the Java API and the virtual machine insulate the program from the hardware.
The most common types of programs written in the Java programming language are
applets and applications. If you have surfed the Web, you are probably
already familiar with applets. An applet is a program that adheres to certain
conventions that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute,
entertaining applets for the Web. The general-purpose, high-level Java programming
language is also a powerful software platform. Using the generous API, you can
write many types of programs.
How does the API support all these kinds of programs? It does so with
packages of software components that provide a wide range of functionality. Every
full implementation of the Java platform gives you the following features:
The essentials: Objects, strings, threads, numbers, input and output, data structures,
system properties, date and time, and so on.
Networking: URLs, TCP (Transmission Control Protocol) and UDP (User Datagram
Protocol) sockets, and IP (Internet Protocol) addresses.
Internationalization: Help for writing programs that can be localized for users
worldwide. Programs can automatically adapt to specific locales and be displayed in the
appropriate language.
Security: Both low level and high level, including electronic signatures, public and
private key management, access control, and certificates.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration,
telephony, speech, animation, and more. The following figure depicts what is included in the
Java 2 SDK.
Get started quickly: Although the Java programming language is a powerful object-oriented
language, it's easy to learn, especially for programmers already familiar with C
or C++.
Write less code: Comparisons of program metrics (class counts, method counts, and so
on) suggest that a program written in the Java programming language can be four times
smaller than the same program in C++.
Write better code: The Java programming language encourages good coding practices,
and its garbage collection helps you avoid memory leaks. Its object orientation, its
JavaBeans component architecture, and its wide-ranging, easily extendible API let you
reuse other people's tested code and introduce fewer bugs.
Develop programs more quickly: Your development time may be as much as twice as
fast as writing the same program in C++. Why? You write fewer lines of code, and
Java is a simpler programming language than C++.
Avoid platform dependencies with 100% Pure Java: You can keep your program
portable by avoiding the use of libraries written in other languages. The 100% Pure
Java Product Certification Program has a repository of historical process manuals,
white papers, brochures, and similar materials online.
Write once, run anywhere: Because 100% Pure Java programs are compiled into
machine-independent byte codes, they run consistently on any Java platform.
Distribute software more easily: You can upgrade applets easily from a central server.
Applets take advantage of the feature of allowing new classes to be loaded on the fly,
without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application
developers and database systems providers. Before ODBC became a de facto standard for
Windows programs to interface with database systems, programmers had to use proprietary
languages for each database they wanted to connect to. Now, ODBC has made the choice of
the database system almost irrelevant from a coding perspective, which is as it should
be. Application developers have much more important things to worry about than the syntax
that is needed to port their program from one database to another when business needs
suddenly change. Through the ODBC Administrator in Control Panel, you can specify the
particular database that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it.
Each door will lead you to a particular database. For example, the data source named Sales
Figures might be a SQL Server database, whereas the Accounts Payable data source could refer
to an Access database. The physical database referred to by a data source can reside anywhere
on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather, they
are installed when you set up a separate database application, such as SQL Server Client or
Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a
file called ODBCINST.DLL. It is also possible to administer your ODBC data
sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit
version of this program, and each maintains a separate list of ODBC data sources.
JDBC
In an effort to set an independent database standard API for Java, Sun Microsystems created Java
Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that
provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved
through the use of "plug-in" database connectivity modules, or drivers. If a
database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC's framework on ODBC.
As you discovered earlier in this section, ODBC has widespread support on a variety of platforms.
Basing JDBC on ODBC allows vendors to bring JDBC drivers to market
much faster than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90-day public review that ended June 8,
1996. Because of user input, the final JDBC v1.0 specification was released soon after.
The remainder of this section will cover enough information about JDBC for you to understand what it is
about and how to use it effectively. This is by no means a complete overview of JDBC; that
would fill an entire book.
JDBC GOALS
Few software packages are designed without goals in mind. JDBC is one
whose many goals drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework
for building database applications in Java.
The goals that were set for JDBC are important. They give some insight as to
why certain classes and functionalities behave the way they do. The eight design goals for
JDBC are as follows:
1. SQL Level API: Achieving this goal allows future tool vendors to "generate" JDBC
code and to hide many of JDBC's complexities from the end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement
to be passed through it to the underlying database driver. This allows the connectivity module to
handle non-standard functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal
allows JDBC to use existing ODBC level drivers through a software interface.
This interface would translate JDBC calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers feel that they
should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt
that the design of JDBC should be very simple, allowing for only one method of completing a task
per mechanism. Allowing duplicate functionality only serves to create confusion for the users of
the API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; consequently, fewer errors
appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are simple
SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be simple to perform with
JDBC. However, more complex SQL statements should also be possible.
Finally, we decided to proceed with the implementation using Java Networking.
For dynamically updating the cache table, we chose the MS Access database.
Java has two things: a programming language and a platform.
Java is a high-level programming language that is all of the following:
Simple
Architecture-neutral
Object-oriented
Portable
Distributed
High-performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
Java is also unusual in that each Java program is both compiled and interpreted. With a
compiler, you translate a Java program into an intermediate language called Java byte codes,
the platform-independent code instructions that are passed to and run on the computer.
Compilation happens just once; interpretation occurs each time the program is executed.
The figure below illustrates how this works.
[Figure: a Java program is translated by the compiler into byte codes, which the interpreter then runs.]
You can think of Java byte codes as the machine code instructions for the Java Virtual
Machine (Java VM). Every Java interpreter, whether it's a Java development tool or a web
browser which can run Java applets, is an implementation of the Java VM. The Java VM
can also be implemented in hardware. Java byte codes help make "write once, run
anywhere" possible. You can compile your Java program into byte codes on any platform
that has a Java compiler. The byte codes can then be run on any implementation of the Java
VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.
Networking
1) TCP/IP stack
The TCP/IP stack is shorter than the OSI one:
3) UDP
UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents
of the datagram and port numbers. These are used to give a client/server model - see later.
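The port-number/client-server idea can be sketched with Java's datagram API. This is a loopback-only sketch: real UDP traffic may be lost, duplicated or reordered, which is exactly the "unreliable" property described above.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpEcho {
    // Send one datagram to a local receiver and return what arrived.
    // UDP adds port numbers and a checksum on top of IP; delivery is
    // not guaranteed in general, though loopback is reliable in practice.
    static String roundTrip(String message) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket();
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = message.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[512];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.receive(in);   // blocks until the datagram arrives
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```

Note that the sender addresses the receiver purely by (address, port); no connection is ever established.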
4) TCP
TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a
virtual circuit that two processes can use to communicate.
Internet addresses
In order to use a service, you must be able to find it. The Internet uses an address scheme for
machines so that they can be located. The address is a 32-bit integer which gives the IP
address. It combines a network ID and further addressing. The network ID falls into various
classes according to the size of the network address.
Network address
Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class
B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses
all 32.
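The classful scheme above can be sketched as a small check on the first octet; the ranges follow from the leading bits of each class (0xxxxxxx, 10xxxxxx, 110xxxxx, 1110xxxx). The addresses used are just illustrative examples.

```java
public class AddressClass {
    // Classful IPv4: the leading bits of the first octet select the class,
    // which fixes how many bits form the network ID (8, 16, or 24).
    static char classOf(String dotted) {
        int first = Integer.parseInt(dotted.split("\\.")[0]);
        if (first < 128) return 'A';   // 0xxxxxxx -> 8-bit network ID
        if (first < 192) return 'B';   // 10xxxxxx -> 16-bit network ID
        if (first < 224) return 'C';   // 110xxxxx -> 24-bit network ID
        return 'D';                    // 1110xxxx -> multicast
    }

    public static void main(String[] args) {
        System.out.println(classOf("10.0.0.1"));     // A
        System.out.println(classOf("172.16.5.4"));   // B
        System.out.println(classOf("192.168.1.1"));  // C
    }
}
```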
Subnet address
Internally, the UNIX network is divided into sub networks. Building 11 is currently on one
sub network and uses 10-bit addressing, allowing 1024 different hosts.
Host address
8 bits are finally used for host addresses within our subnet. This places a limit of 256
machines that can be on the subnet.
Total address
The 32-bit address is usually written as four integers separated by dots.
Sockets
A socket is created using the call socket. It returns an integer that is like a file descriptor. In fact, under Windows, this
handle can be used with Read File and Write File functions.
#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);
Here "family" will be AF_INET for IP communications, protocol will be zero, and type
will depend on whether TCP or UDP is used. Two processes wishing to communicate over a
network create a socket each. These are similar to two ends of a pipe - but the actual pipe
does not yet exist.
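The socket() call above is the C API; since the implementation uses Java networking, here is a minimal Java counterpart in which the two Socket objects are the two ends of the virtual circuit. Loopback and an OS-assigned port (port 0) are used purely for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpPair {
    // Create both ends of a TCP "virtual circuit" over loopback, send one
    // line from the active (client) end and return what the passive
    // (accepted) end reads.
    static String exchange(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0);   // bind/listen on a free port
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {         // the other end of the circuit
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(accepted.getInputStream()));
            out.println(message);
            return in.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange("ping"));
    }
}
```

Unlike the UDP case, the connection is established first (the "pipe" exists), and the stream delivers bytes reliably and in order.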
JFreeChart
JFreeChart is a free 100% Java chart library that makes it easy for developers to
display professional quality charts in their applications. JFreeChart is "open source" or,
more particularly, free software: it is distributed under the terms of the GNU Lesser
General Public Licence (LGPL), which permits use in proprietary applications. Areas of
ongoing development include:
1. Map Visualizations
Charts displaying values that relate to geographical areas. Some examples include: (a) population
density in each state of the United States, (b) income per capita for each country in Europe, (c) life
expectancy in each country of the world. The tasks in this project include sourcing freely
redistributable vector outlines for the countries of the world and for states/provinces in particular
countries (the USA in particular, but also other areas); creating an appropriate dataset interface
(plus default implementation) and a renderer, and integrating these with the existing XYPlot class
in JFreeChart; and testing, documenting, testing some more, and documenting some more.
2. Time Series Chart Interactivity
Implement a new (to JFreeChart) feature for interactive time series charts: display a separate
control that shows a small version of ALL the time series data, with a sliding "view" rectangle
that allows you to select the subset of the time series data to display in the main chart.
3. Dashboards
There is currently a lot of interest in dashboard displays. Create a flexible dashboard
mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars,
and lines/time series) and that can be delivered easily via both Java Web Start and an applet.
4. Property Editors
The property editor mechanism in JFreeChart only handles a small subset of the properties that
can be set for charts. Extend (or re-implement) this mechanism to provide greater end-user
control over the appearance of the chart.
3.2.3 HARDWARE REQUIREMENTS
System
Intel i3 processor
Intel intended the Core i3 as the new low end of the performance processor line from Intel, following
the retirement of the Core 2 brand. The first Core i3 processors were launched on January 7, 2010.
The first Nehalem based Core i3 was Clarkdale-based, with an integrated GPU and two cores. The
same processor is also available as Core i5 and Pentium, with slightly different configurations.
The Core i3-3xxM processors are based on Arrandale, the mobile version of the Clarkdale desktop
processor. They are similar to the Core i5-4xx series but running at lower clock speeds and without
Turbo Boost.[24] According to an Intel FAQ they do not support Error Correction Code (ECC)
memory.[25] According to motherboard manufacturer Supermicro, if a Core i3 processor is used with
a server chipset platform such as Intel 3400/3420/3450, the CPU supports ECC with UDIMM.[26]
When asked, Intel confirmed that, although the Intel 5 series chipset supports non-ECC memory only
with the Core i5 or i3 processors, using those processors on a motherboard with 3400 series chipsets
supports the ECC function of ECC memory.[27] A limited number of motherboards by other
companies also support ECC with Intel Core ix processors; the Asus P8B WS is an example, but it
does not support ECC memory under Windows non-server operating systems.
Hard Disk
500 GB.
A hard disk drive (HDD), hard disk, hard drive or fixed disk[b] is a data storage device used for
storing and retrieving digital information using one or more rigid ("hard") rapidly rotating disks
(platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a
moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order rather
than sequentially. HDDs retain stored data even when powered off.
Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position
into the modern era of servers and personal computers. More than 200 companies have produced HDD
units, though most current units are manufactured by Seagate, Toshiba and Western Digital.
Worldwide disk storage revenues were US $32 billion in 2013, down 3% from 2012.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit
prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes
(GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the
user because it is used by the file system and the computer operating system, and possibly inbuilt
redundancy for error correction and recovery. Performance is specified by the time required to move
the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to
move under the head (average latency, which is a function of the physical rotational speed in
revolutions per minute), and finally the speed at which the data is transmitted (data rate).
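The average-latency component described above follows directly from the rotational speed: on average, the desired sector is half a revolution away from the head. A small sketch (the rpm values are common examples, not a measured drive):

```java
public class DiskLatency {
    // Average rotational latency = time for half a revolution:
    // (60 / rpm) / 2 seconds, reported here in milliseconds.
    static double avgLatencyMs(int rpm) {
        return (60.0 / rpm) / 2.0 * 1000.0;
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", avgLatencyMs(7200));  // ~4.17 ms
        System.out.printf("%.2f%n", avgLatencyMs(5400));  // ~5.56 ms
    }
}
```

Total access time then adds the seek time (head movement) and the transfer time at the drive's data rate.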
Monitor
15" VGA Colour
A computer monitor or a computer display is an electronic visual display for computers. A monitor
usually comprises the display device, circuitry, casing, and power supply. The display device in
modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) or a flat panel
LED display, while older monitors used cathode ray tubes (CRTs). It can be connected to the
computer via VGA, DVI, HDMI, DisplayPort, Thunderbolt, LVDS (Low-voltage differential
signaling) or other proprietary connectors and signals.
Originally, computer monitors were used for data processing while television receivers were used for
entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data
processing and entertainment, while televisions have implemented some computer functionality. The
common aspect ratio of televisions and computer monitors has changed from 4:3 to 16:10 to 16:9.
As technology developed it was realized that the output of a CRT display was more flexible than a
panel of light bulbs and eventually, by giving control of what was displayed to the program itself, the
monitor itself became a powerful output device in its own right.
Mouse
Logitech
A computer mouse is a pointing device (hand control) that detects two-dimensional motion relative to
a surface. This motion is typically translated into the motion of a pointer on a display, which allows for
fine control of a graphical user interface.
Physically, a mouse consists of an object held in one's hand, with one or more
buttons. Mice often also feature other elements, such as touch surfaces and "wheels", which enable
additional control and dimensional input.
Different ways of operating the mouse cause specific things to happen in the GUI:
Click: pressing and releasing a button.
(Left) Single-click: clicking the main button.
(Left) Double-click: clicking the button two times in quick succession counts as a different gesture
than two separate single clicks.
(Left) Triple-click: clicking the button three times in quick succession.
Right-click: clicking the secondary button.
Middle-click: clicking the tertiary button.
Drag and drop: pressing and holding a button, then moving the mouse without releasing. (Using the
command "drag with the right mouse button" instead of just "drag" when one instructs a user to drag
an object while holding the right mouse button down instead of the more commonly used left mouse
button.)
Mouse button chording (a.k.a. Rocker navigation).
Combination of right-click then left-click.
Combination of left-click then right-click or keyboard letter.
Combination of left or right-click and the mouse wheel.
Clicking while holding down a modifier key.
RAM
2GB
Random-access memory (RAM /ræm/) is a form of computer data storage. A random-access memory
device allows data items to be accessed (read or written) in almost the same amount of time
irrespective of the physical location of data inside the memory. In contrast, with other direct-access
data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time
required to read and write data items varies significantly depending on their physical locations on the
recording medium, due to mechanical limitations such as media rotation speeds and arm movement
delays.
Today, random-access memory takes the form of integrated circuits. RAM is normally associated with
volatile types of memory (such as DRAM memory modules), where stored information is lost if power
is removed, although many efforts have been made to develop non-volatile RAM chips.[1] Other types
of non-volatile memory exist that allow random access for read operations, but either do not allow
write operations or have limitations on them. These include most types of ROM and a type of flash
memory called NOR-Flash.
Integrated-circuit RAM chips came into the market in the late 1960s, with the first commercially
available DRAM chip, the Intel 1103, introduced in October 1970.
The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In
SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM is
more expensive to produce, but is generally faster and requires less power than DRAM and, in modern
computers, is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor
and capacitor pair, which together comprise a DRAM memory cell. The capacitor holds a high or low
charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the
chip read the capacitor's state of charge or change it. As this form of memory is less expensive to
produce than static RAM, it is the predominant form of computer memory used in modern computers.
Friendbook adopts a client-server model where each client is a smartphone carried by a
user and the servers are data centers or clouds. On the client side, each smartphone can record data of
its user, perform real-time activity recognition and report the generated life documents to the servers. It
is worth noting that an offline data collection and training phase is needed to build an appropriate
activity classifier for real-time activity recognition on smartphones. We spent three months
collecting raw data from 8 volunteers to build a large training data set. As each user typically
generates around 50 MB of raw data each day, we chose MySQL as our low-level data storage
platform and Hadoop MapReduce as our computation infrastructure. After the activity classifier is
built, it is distributed to each user's smartphone, and activity recognition can then be performed in a
real-time manner. As a user continues to use Friendbook, he/she will accumulate more and more
activities in his/her life documents, based on which we can discover his/her life styles using a
probabilistic topic model.
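As a sketch of the life-document processing described above (the activity names and in-memory representation here are illustrative; the deployed system uses MySQL and Hadoop MapReduce), the bag-of-activities counts that a topic model consumes can be built as follows:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LifeDocument {
    // Bag-of-activities: count how often each recognized activity appears
    // in a user's life document (activity names are made-up examples).
    static Map<String, Integer> activityCounts(List<String> activities) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String a : activities) {
            counts.merge(a, 1, Integer::sum);   // increment, starting at 1
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> day = List.of(
                "walking", "working", "walking", "running", "working", "working");
        System.out.println(activityCounts(day));
    }
}
```

Each user's count vector plays the role of a document's word counts, which is exactly the input the LDA decomposition operates on.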
Flow chart:
4. DESIGN
4.1 Introduction
The System Design Document describes the design requirements, the operating environment, the
system and subsystem architecture, the files and database design, the input formats, the output
layouts, the human-machine interfaces, the detailed design, the processing logic, and the external
interfaces.
A DFD is also known as a bubble chart. A DFD may be used to represent a system at any level of
abstraction. DFDs may be partitioned into levels that represent increasing information flow and
functional detail.
[Use case diagram (Admin): List Users]
[Use case diagram (User): Search User, Send Request, View Request, Send Query, Send Feedback]
[Class diagram:
User (+User Name, +Pass Word, +Register): +Search User(), +Send Request(), +View Request(), +View Your Ranks(), +Send Query(), +Send Feedback(), +Recommend the Friend()
Admin (+User Name, +Pass Word): +Add Details(), +View Details(), +Add Groups(), +FromGroups(), +View Groups(), +List Users(), +View User FeedBacks(), +View User Query(), +Android Mobile Users(), +View All User Ranks(), +View Friend Match Graph()]
[Sequence diagram (Admin): lifelines Admin, Login, Details, Groups, List Users; messages 1: login(), 2: Add/View Details(), 3: Add/View/FromGroups(), 4: List Users()]
[Sequence diagram (User): lifelines User, Login, Search User, Request, Send Query, Send Feedback; messages 1: login(), 2: Search User(), 3: Send/View Request(), 5: Send Query(), 6: Send Feedback()]
[Activity diagram: Admin and User log in (new users register). The Admin adds/views details, adds/views/forms groups, lists users and views user feedbacks; the User searches users, sends requests, views requests, views ranks and sends queries; both end with Logout.]
G-friends, which stands for geographical location-based friends because they are influenced by the
geographical distances between each other. With the rapid advances in social networks, services such
as Facebook, Twitter and Google+ have provided us with revolutionary ways of making friends. According
to Facebook statistics, a user has an average of 130 friends, perhaps more than at any other time in
history. One challenge with existing social networking services is how to recommend a good friend to
a user.
Most of them rely on pre-existing user relationships to pick friend candidates. For example,
Facebook relies on a social link analysis among those who already share common friends and
recommends symmetrical users as potential friends. Unfortunately, this approach may not be the most
appropriate based on recent sociology findings. According to these studies, the rules to group people
together include: 1) habits or life style; 2) attitudes; 3) tastes; 4) moral standards; 5) economic level;
and 6) people they already know. Apparently, rules 3 and 6 are the mainstream factors considered
by existing recommendation systems. Rule 1, although probably the most intuitive, is not widely used
because users' life styles are difficult, if not impossible, to capture through web actions. Rather, life
styles are usually closely correlated with daily routines and activities.
We use the Expectation-Maximization (EM) method to solve the LDA decomposition, where the
E-step is used to estimate the free variational Dirichlet parameters of the
standard LDA model and the M-step is used to maximize the log likelihood of the activities under
these parameters. After the EM algorithm converges, we are able to calculate the decomposed
activity-topic matrix. Readers are referred to [10] for the details of the
decomposition approaches. It is worth noting that the matrix decomposition process can be
implemented more efficiently through incremental iteration. That is, when a user's life document
changes or a new user's life document is uploaded to the system, Friendbook can calculate the new life
style vectors for each user based on the previously derived life style vectors and the new life document.
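A hedged sketch of the variational EM updates for standard LDA (notation follows Blei et al. [10]: gamma and phi are the variational Dirichlet and multinomial parameters, alpha and beta the model parameters, w_n the n-th activity in a life document, and Psi the digamma function):

```latex
% E-step: iterate the variational updates until convergence
\phi_{ni} \;\propto\; \beta_{i w_n}\,
    \exp\!\Big(\Psi(\gamma_i) - \Psi\big(\textstyle\sum_{j=1}^{k}\gamma_j\big)\Big),
\qquad
\gamma_i \;=\; \alpha_i + \sum_{n=1}^{N} \phi_{ni}

% M-step: re-estimate the topic-activity distributions over all documents d
\beta_{ij} \;\propto\; \sum_{d}\sum_{n} \phi_{dni}\, w_{dn}^{j}
```

After convergence, the rows of beta give the activity-topic matrix from which each user's life style vector is read off.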
5.1 KEY FUNCTIONS
5.1.1 Input Screens
FORM: A form is a typical component of a web page. Its typical components may be text fields, text
areas, check boxes, radio buttons, push buttons, etc.
HTML allows us to place these components on the web page and send the desired information to the
destination server.
TEXT FIELD: A text field is a field in which a single line of text can be entered.
TEXT AREA: A text area is a field in which more than one line of text can be entered.
CHECK BOX: A check box is a component used to select one or more options from several
options.
RADIO BUTTON: A radio button is a component used to select only one option from
several options.
BUTTON: There are two types of buttons that can be created in HTML.
1. Submit button.
2. Reset button.
MENUS:
A menu can be created using <select> and <option> in the following manner:
<select>
<option>Provider</option>
<option> User </option>
<option> TA</option>
</select>
5.1.2 Output Screens
Functional Test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items: invalid input (identified classes of invalid
input must be rejected); functions (identified functions must be exercised); and output (identified
classes of application outputs must be exercised).
Features to Be Tested
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications (e.g.
components in a software system or, one step up, software applications at the company level)
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
QCET,
Code testing:
This examines the logic of the program. For example, the logic for updating various sample
data was tested with the sample files and directories and verified.
Specification Testing:
This executes the specification, stating what the program should do and how it should perform
under various conditions. Test cases for various situations and combinations of conditions in all the
modules are tested.
Unit testing:
In unit testing we test each module individually and then integrate it with the overall system. Unit
testing focuses verification effort on the smallest unit of software design: the module. This is also
known as module testing. Each module of the system is tested separately. This testing is carried out
during the programming stage itself. In this testing step each module is found to work satisfactorily
as regards the expected output from the module. There are also validation checks for fields. For
example, a validation check is done on the input given by the user to verify the validity of the
data entered. This makes it easy to find and debug errors in the system. Each module can be tested
using the following two strategies:
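As an illustration of the field-validation checks mentioned above, here is a unit-level sketch with a hypothetical validator (not the actual project code): both login fields must be non-empty before the module accepts them.

```java
public class ValidationTest {
    // Hypothetical validation check from the registration/login module:
    // both fields must be present and non-blank.
    static boolean isValidLogin(String userName, String password) {
        return userName != null && !userName.trim().isEmpty()
                && password != null && !password.trim().isEmpty();
    }

    public static void main(String[] args) {
        // Exercise the module with valid and invalid input.
        System.out.println(isValidLogin("alice", "secret"));  // true
        System.out.println(isValidLogin("", "secret"));       // false
        System.out.println(isValidLogin("bob", "   "));       // false
    }
}
```

Each such check can be tested in isolation before the module is integrated with the overall system.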
BLACK BOX TESTING:
Here are the generic steps followed to carry out any type of black box testing:
Initially, the requirements and specifications of the system are studied.
The tester chooses valid inputs (positive test scenario) to check whether the SUT processes them
correctly. Some invalid inputs (negative test scenario) are also chosen to verify that the SUT is able
to detect them.
The tester determines the expected outputs for all those inputs, constructs test cases with the
selected inputs, and executes them.
The actual outputs are compared with the expected outputs; defects, if any, are fixed and re-tested.
Types of black box testing:
Functional testing: this black box testing type is related to the functional requirements of a system.
Non-functional testing: this type of black box testing is not related to testing of a specific
functionality, but to non-functional requirements such as performance and usability.
Regression testing: regression testing is done after code fixes, upgrades or any other system
maintenance to check that the new code has not affected the existing code.
WHITE BOX TESTING:
White Box Testing is the testing of a software solution's internal coding and infrastructure. It
focuses primarily on strengthening security, the flow of inputs and outputs through the application, and
improving design and usability. White box testing is also known as clear, open, structural, and glass
box testing.
What do we verify in White Box Testing?
White box testing involves the testing of the software code for the following:
Internal security holes
Broken or poorly structured paths in the coding processes
The flow of specific inputs through the code
Expected output
The functionality of conditional loops
Testing of each statement, object and function on an individual basis
Testing
The testing can be done at the system, integration and unit levels of software development. One of the
basic goals of white box testing is to verify a working flow for an application. It involves testing a
series of predefined inputs against expected or desired outputs; when a specific input does not
result in the expected output, you have encountered a bug.
How do we perform White Box Testing?
To give you a simplified explanation of white box testing, we have divided it into
two basic steps. This is what testers do when testing an application using the white box testing
technique.
STEP 1) UNDERSTAND THE SOURCE CODE
The first thing a tester will often do is learn and understand the source code of the application. Since
white box testing involves the testing of the inner workings of an application, the tester must be very
knowledgeable in the programming languages used in the applications they are testing. Also, the
testing person must be highly aware of secure coding practices. Security is often one of the primary
objectives of testing software. The tester should be able to find security issues and prevent attacks
from hackers and naive users who might inject malicious code into the application either knowingly or
unknowingly.
STEP 2) CREATE TEST CASES AND EXECUTE
The second basic step in white box testing involves testing the application's source code for proper
flow and structure. One way is by writing more code to test the application's source code. The tester
will develop little tests for each process or series of processes in the application. This method requires
that the tester must have intimate knowledge of the code and is often done by the developer. Other
methods include manual testing, trial and error testing and the use of testing tools as we will explain
further on in this article.
System testing:
Once the individual module testing is completed, modules are assembled and integrated to perform as
a system. Top-down testing, which proceeds from upper-level to lower-level modules, was carried out
to check whether the entire system performs satisfactorily.
There are three main kinds of System testing:
1. Alpha Testing
2. Beta Testing
3. Acceptance Testing
Alpha Testing:
This refers to the system testing that is carried out by the test team within the Organization.
Beta Testing:
This refers to the system testing that is performed by a selected group of friendly customers.
Acceptance Testing:
This refers to the system testing that is performed by the customer to determine whether or not to
accept the delivery of the system.
Expected Result | Actual Result | Pass/Fail
Username and password fields must be filled | We do not allow users to log in with empty fields | Pass
Registration form fields must be filled with appropriate data | All the fields must be filled | Pass
Expected Result | Actual Result | Pass/Fail
Proper alignment of registration details: the details must be properly aligned on the page, to give a unique experience | Properly aligned | Pass
Appropriate alignment of messages: all the messages must be triggered when events happen | Messages triggered as expected | Pass
Expected Result | Actual Result | Pass/Fail
All the pages must have appropriate layouts with respect to the modules | User is comfortable with the page layouts | Pass
Navigation links in web pages must work appropriately | User is comfortable navigating through the pages | Pass
Purpose of the website should be clear to the user | User is able to grab the information he is looking for | Pass
Expected Result | Actual Result | Pass/Fail
User should be able to see the list of posts for moderation, with pagination to navigate | User sees the links to approve or disapprove each post | Pass
When a post is approved, the news must be updated in time | Updated news appears in the approved posts list | Pass
Moderator should be able to disapprove news with a reason | Users are notified of the disapproval with the reason | Pass
Fig: Table for Moderator Features to be tested
Expected Result | Actual Result | Pass/Fail
User should be able to provide more information related to the news, with the required fields filled accurately | All the fields must be filled | Pass
User should see the appropriate success message when moderation is done | User is notified of the success through a flash message | Pass
Users must provide images of the news (mandatory) and must be notified with an approval or disapproval message | Users are notified accordingly | Pass
ID | Description of Condition | Expected Results | Covered by Script
1 | Verification of a particular record | If a particular record already exists, a message is displayed | The {verify} procedure in every JSP file where a record is inserted via an interface
2 | Updating of a particular record | An invalid record must not be updated | Covered in all the JSP files where updates are made
3 | Validity of login | A user's validity is checked at login | Covered in the login procedure for the validity of a user in the system
Fig: Table for Acceptance Testing
Integration Testing:
Data can be lost across an interface; one module can have an adverse effect on another; and
sub-functions, when combined, may not produce the desired major function. Integration testing is the
systematic technique for uncovering errors within the interfaces. The testing was done with
sample data, and the developed system ran successfully for this sample data. The purpose of the
integration test is to assess the overall system performance.
Output testing:
After the validation testing, the next step is output testing. The output displayed
or generated by the system under consideration is tested by asking the user about the required
format. The output format on the screen was found to be correct, as the format was designed in the
system design phase according to the user's needs. Hence the output testing did not result in any
correction to the system.
Test plan:
The test-plan is basically a list of test cases that need to be run on the system. Some of the test
cases can be run independently for some components (report generation from the database, for
example, can be tested independently) and some of the test cases require the whole system to be ready
for their execution. It is better to test each component as and when it is ready before integrating the
components. It is important to note that the test cases cover all the aspects of the system (ie, all the
requirements stated in the RS document).
6.3 VALIDATION
We have mentioned all the test cases applicable for the application we are developing. They
must go through the test process and make sure they all pass. All these results will be recorded for the
metrics purposes. Validation checks that the product design satisfies or fits the intended use (high-level
checking), i.e., that the software meets the user requirements. This is done through dynamic testing and
other forms of review. Verification and validation are not the same thing, although they are often
confused. The difference between them can be expressed succinctly:
Verification: The process of evaluating software to determine whether the products of a given
development phase satisfy the conditions imposed at the start of that phase.
Validation: The process of evaluating software during or at the end of the development process to
determine whether it satisfies specified requirements. In other words, validation ensures that the
product actually meets the user's needs, and that the specifications were correct in the first place, while
verification is ensuring that the product has been built according to the requirements and design
specifications. Validation ensures that "you built the right thing". Verification ensures that "you built it
right". Validation confirms that the product, as provided, will fulfil its intended use. From testing
perspective:
Fault: a wrong or missing function in the code.
Failure: the manifestation of a fault during execution.
Malfunction: the system does not meet its specified functionality.
Within the modeling and simulation community, the definitions of validation, verification
and accreditation are similar:
and accreditation are similar:
Validation is the process of determining the degree to which a model, simulation, or federation of
models and simulations, and their associated data are accurate representations of the real world from
the perspective of the intended use(s).
Accreditation is the formal certification that a model or simulation is acceptable to be used for a
specific purpose.
Verification is the process of determining that a computer model, simulation, or federation of models
and simulations implementations and their associated data accurately represent the developer's
conceptual description and specifications.
explore the adaptation of the threshold for each edge and see whether it can better represent the
similarity relationship on the friend-matching graph. Finally, we plan to incorporate more sensors on
the mobile phones into the system and also utilize the information from wearable devices (e.g.,
Fitbit, iWatch, Google Glass, Nike+, and Galaxy Gear) to discover more interesting and meaningful life
styles. For example, we can incorporate the sensor data from Fitbit, which captures the user's
daily fitness infograph, and the user's places of interest from GPS traces to generate an infograph of
the user as a document. From the infograph, one can easily visualize a user's life style, which will
make the recommendation more meaningful.
Actually, we expect to incorporate Friendbook into existing social services (e.g.,
Facebook, Twitter, LinkedIn) so that Friendbook can utilize more information for life
discovery, which should improve the recommendation experience in the future.
As part of our future work, we could add this implementation to the framework to make Friendbook
scalable to large scale systems.
REFERENCES
[1] Amazon. http://www.amazon.com/.
[2] Facebook statistics. http://www.digitalbuzzblog.com/ facebook-statistics-stats-facts-2011/.
[3] Netflix. https://signup.netflix.com/.
[4] Rotten tomatoes. http://www.rottentomatoes.com/.
[5] G. R. Arce. Nonlinear Signal Processing: A Statistical Approach. John Wiley & Sons, 2005.
[6] B. Bahmani, A. Chowdhury, and A. Goel. Fast incremental and personalized pagerank. Proc. of
VLDB Endowment, volume 4, pages 173-184, 2010.
[7] J. Biagioni, T. Gerlich, T. Merrifield, and J. Eriksson. EasyTracker: Automatic Transit Tracking,
Mapping, and Arrival Time Prediction Using Smartphones. Proc. of SenSys, pages 68-81, 2011.
[8] L. Bian and H. Holtzman. Online friend recommendation through personality matching and
collaborative filtering. Proc. of UBICOMM, pages 230-235, 2011.
[9] C. M. Bishop. Pattern recognition and machine learning. Springer New York, 2006.
[10] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning
Research, 3:993-1022, 2003.
[11] P. Desikan, N. Pathak, J. Srivastava, and V. Kumar. Incremental page rank computation on
evolving graphs. Proc. of WWW, pages 1094-1095, 2005.
[12] N. Eagle and A. S. Pentland. Reality Mining: Sensing Complex Social Systems. Personal Ubiquitous Computing, 10(4):255-268, March 2006.
[13] K. Farrahi and D. Gatica-Perez. Probabilistic mining of sociogeographic routines from mobile
phone data. Selected Topics in Signal Processing, IEEE Journal of, 4(4):746-755, 2010.
[14] K. Farrahi and D. Gatica-Perez. Discovering Routines from Large-scale Human Locations using Probabilistic Topic Models. ACM Transactions on Intelligent Systems and Technology (TIST), 2(1), 2011.
[15] B. A. Frigyik, A. Kapila, and M. R. Gupta. Introduction to the Dirichlet distribution and related processes. Department of Electrical Engineering, University of Washington, UWEETR-2010-0006, 2010.
[16] A. Giddens. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford University Press, 1991.
[17] L. Gou, F. You, J. Guo, L. Wu, and X. L. Zhang. SFViz: Interest-based friends exploration and recommendation in social networks. Proc. of VINCI, page 15, 2011.
[18] W. H. Hsu, A. King, M. Paradesi, T. Pydimarri, and T. Weninger. Collaborative and structural
recommendation of friends using weblog-based social network analysis. Proc. of AAAI Spring
Symposium Series, 2006.
[19] T. Huynh, M. Fritz, and B. Schiele. Discovery of Activity Patterns using Topic Models. Proc. of UbiComp, 2008.
[20] J. Kwon and S. Kim. Friend recommendation method using physical and social context.
International Journal of Computer Science and Network Security, 10(11):116-120, 2010.
[21] J. Lester, T. Choudhury, N. Kern, G. Borriello, and B. Hannaford. A Hybrid
Discriminative/Generative Approach for Modeling Human Activities. Proc. of IJCAI, pages 766-772,
2005.
[22] Q. Li, J. A. Stankovic, M. A. Hanson, A. T. Barth, J. Lach, and G. Zhou. Accurate, Fast Fall
Detection Using Gyroscopes and Accelerometer-Derived Posture Information. Proc. of BSN, pages
138-143, 2009.
[23] E. Miluzzo, C. T. Cornelius, A. Ramaswamy, T. Choudhury, Z. Liu, and A. T. Campbell. Darwin
Phones: the Evolution of Sensing and Inference on Mobile Phones. Proc. of MobiSys, pages 5-20,
2010.
[24] E. Miluzzo, N. D. Lane, S. B. Eisenman, and A. T. Campbell. Cenceme-Injecting Sensing
Presence into Social Networking Applications. Proc. of EuroSSC, pages 1-28, October 2007.
[25] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report, Stanford InfoLab, 1999.