DISTRIBUTED REASONING
FOR SCALABLE MAP REDUCE
USING ONTOLOGIES
ABSTRACT
The large volume of Semantic Web data, together with the fast growth of ontology bases, poses a significant challenge in performing efficient and scalable reasoning. The resources of a single machine are no longer sufficient, so the reasoning process must be distributed across multiple machines to improve performance. To handle incremental updates efficiently, we construct transfer inference forests and effective assertion triples. We evaluated our system using very large real-world datasets (Bio2RDF, LLD, LDSR) and the LUBM synthetic benchmark, scaling up to 100 billion triples.
EXISTING SYSTEM
Existing MapReduce-based reasoners are bound by the general execution
constraint that all map tasks must complete before any reduce task is
executed.
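The map-before-reduce barrier can be illustrated with a minimal in-memory sketch (plain Java, no Hadoop; the class and method names are illustrative, not part of the proposed system). One RDFS-style rule is expressed as a map/reduce pair: every map output must be grouped by key before the first reduce call can run, so each rule application costs a full pass over the data.

```java
import java.util.*;
import java.util.stream.*;

// Minimal in-memory MapReduce round: all map work finishes and its
// output is grouped by key before the first reduce call can start.
public class MapReduceRound {
    // Map phase for one RDFS-style rule: join (x rdf:type C) with
    // (C rdfs:subClassOf D) on the shared class C used as the key.
    static Map<String, List<String>> mapPhase(List<String[]> triples) {
        Map<String, List<String>> grouped = new HashMap<>();
        for (String[] t : triples) {               // t = {subject, predicate, object}
            if (t[1].equals("rdf:type"))
                grouped.computeIfAbsent(t[2], k -> new ArrayList<>()).add("inst:" + t[0]);
            if (t[1].equals("rdfs:subClassOf"))
                grouped.computeIfAbsent(t[0], k -> new ArrayList<>()).add("super:" + t[2]);
        }
        return grouped;                            // barrier: grouping completes here
    }

    // Reduce phase: for each key, pair every instance with every superclass.
    static List<String[]> reducePhase(Map<String, List<String>> grouped) {
        List<String[]> derived = new ArrayList<>();
        for (List<String> vals : grouped.values()) {
            List<String> insts = vals.stream().filter(v -> v.startsWith("inst:"))
                    .map(v -> v.substring(5)).collect(Collectors.toList());
            List<String> supers = vals.stream().filter(v -> v.startsWith("super:"))
                    .map(v -> v.substring(6)).collect(Collectors.toList());
            for (String i : insts)
                for (String s : supers)
                    derived.add(new String[]{i, "rdf:type", s});  // derived triple
        }
        return derived;
    }

    public static void main(String[] args) {
        List<String[]> triples = List.of(
                new String[]{"alice", "rdf:type", "Student"},
                new String[]{"Student", "rdfs:subClassOf", "Person"});
        for (String[] t : reducePhase(mapPhase(triples)))
            System.out.println(String.join(" ", t));  // alice rdf:type Person
    }
}
```

Because the barrier sits between the two phases, deriving the full closure requires iterating such rounds until no new triples appear, which is what makes naive re-reasoning over updated data expensive.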
Disadvantages
The data volume of the RDF closure is ordinarily larger than the original
RDF data.
Storing the RDF closure therefore requires substantial space, and querying
it takes nontrivial time.
When the data volume increases or the ontology base is updated, these
methods must recompute the closure from scratch.
PROPOSED SYSTEM
The choice of MapReduce is motivated by the fact that it can limit data
exchange and alleviate load-balancing problems by dynamically scheduling jobs
on computing nodes.
To handle incremental RDF triples more efficiently, we present two novel
concepts: the transfer inference forest and effective assertion triples.
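A simplified, hedged illustration of the idea (all names are illustrative; the actual structures in the proposed system are not fully specified here): the rdfs:subClassOf hierarchy is modeled as a parent-pointer forest, so when a new assertion arrives its derived triples are obtained by one walk up the forest instead of re-running MapReduce reasoning rules. The sketch assumes each class has at most one direct superclass, which is a simplification.

```java
import java.util.*;

// Hedged sketch: a "transfer inference forest" modeled as a
// parent-pointer map over rdfs:subClassOf edges. Superclasses are
// recovered by walking toward the root, so a new assertion is handled
// incrementally without re-running rule passes over the whole dataset.
public class TransferInferenceForest {
    private final Map<String, String> parent = new HashMap<>();

    public void addSubClassOf(String sub, String sup) {
        parent.put(sub, sup);                      // one edge of the forest
    }

    // All superclasses of c, direct and transitive.
    public List<String> superClasses(String c) {
        List<String> out = new ArrayList<>();
        for (String p = parent.get(c); p != null; p = parent.get(p)) out.add(p);
        return out;
    }

    // Incremental step: a single new assertion (s rdf:type c) yields its
    // derived triples by one forest walk -- the new triple plus what it
    // transfers upward through the hierarchy.
    public List<String> derive(String s, String c) {
        List<String> derived = new ArrayList<>();
        derived.add(s + " rdf:type " + c);
        for (String sup : superClasses(c)) derived.add(s + " rdf:type " + sup);
        return derived;
    }

    public static void main(String[] args) {
        TransferInferenceForest tif = new TransferInferenceForest();
        tif.addSubClassOf("GradStudent", "Student");
        tif.addSubClassOf("Student", "Person");
        System.out.println(tif.derive("bob", "GradStudent"));
    }
}
```

The payoff is that an incremental update touches only the path from the asserted class to its root, not the full closure.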
Advantage
We leverage both the old and the new data to minimize the updating time and
reduce the reasoning time when facing big RDF datasets.
Benchmark : Bio2RDF
Proposed Algorithm : RSVM
SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS
Operating system : Windows XP/7
Coding Language : JAVA/J2EE
IDE / Tools Support : Netbeans 7.4
Implementation Tools : Hadoop
HARDWARE REQUIREMENTS
System :
Hard Disk : 160 GB
RAM : 2 GB
System Architecture
List of Modules
Data Preprocessing
Imputation Process
HDFS Upload
Data Upload
Job Execution
Evaluation
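As an example of what the Data Preprocessing module might do before the HDFS upload, here is a minimal N-Triples line parser (illustrative only; the document does not specify the module's actual implementation). It splits a line into subject, predicate, and object, strips angle brackets, and rejects malformed lines.

```java
import java.util.*;

// Illustrative preprocessing step: split an N-Triples line into its
// subject, predicate, and object fields, stripping angle brackets and
// the trailing "." terminator. Malformed lines yield an empty result.
public class TriplePreprocessor {
    public static Optional<String[]> parse(String line) {
        line = line.trim();
        if (!line.endsWith(".")) return Optional.empty();   // missing terminator
        line = line.substring(0, line.length() - 1).trim();
        String[] parts = line.split("\\s+", 3);             // object may contain spaces
        if (parts.length != 3) return Optional.empty();
        for (int i = 0; i < parts.length; i++) {
            String p = parts[i];
            if (p.startsWith("<") && p.endsWith(">"))
                parts[i] = p.substring(1, p.length() - 1);  // strip <...> from IRIs
        }
        return Optional.of(parts);
    }

    public static void main(String[] args) {
        parse("<http://ex.org/alice> <rdf:type> <http://ex.org/Student> .")
            .ifPresent(t -> System.out.println(String.join(" | ", t)));
    }
}
```

Cleaned triples in this fixed three-field form are straightforward to write out as text files for the HDFS Upload and Job Execution modules to consume.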