http://cs246.stanford.edu
1/8/2013
Descriptive Methods
Find human-interpretable patterns that describe the data
Predictive Methods
Use some variables to predict unknown or future values of other variables
Different cultures:
- Databases: large-scale data, simple queries
- Machine learning: small data, complex models
- Statistics: predictive models
To a DB person, data mining is an extreme form of analytic processing: queries that examine large amounts of data, where the result is the query answer.
[Venn diagram: Data Mining at the overlap of Statistics/AI, Machine Learning/Pattern Recognition, and Database systems]
This class overlaps with machine learning, statistics, artificial intelligence, and databases, but puts more stress on:
- Scalability (big data)
- Algorithms and architectures
- Automation for handling large data
Course topics:
- Graph data: PageRank, SimRank, Community Detection
- Infinite data: Filtering data streams, Queries on streams, Web advertising
- Machine learning: SVM, Decision Trees, Perceptron, kNN
- Apps: Clustering, Spam Detection
TAs:
We have 9 great TAs!
Sean Choi (Head TA), Sumit Arrawatia, Justin Chen, Dingyi Li, Anshul Mittal, Rose Marie Philip, Robi Robaszkiewicz, Le Yu, Tongda Zhang
Office hours:
Jure: Wednesdays 9-10am, Gates 418
See course website for TA office hours
For SCPD students we will use Google Hangout
We will post Google Hangout links on Piazza
Course website:
http://cs246.stanford.edu
Readings: the book Mining of Massive Datasets by Anand Rajaraman and Jeffrey Ullman, free online at http://i.stanford.edu/~ullman/mmds.html
Posted on the website:
- Lecture slides (at least 30 min before the lecture)
- Homeworks and solutions
- Readings
We will post course announcements to Piazza (make sure you check it regularly)
Auditors are welcome to sit in and audit the class
How to submit?
Upload code:
Put the code for 1 question into 1 file and submit at: http://snap.stanford.edu/submit/
Homework schedule:
Date        Out    In
01/08, Tue  HW0
01/10, Thu  HW1
01/15, Tue         HW0
01/24, Thu  HW2    HW1
02/07, Thu  HW3    HW2
02/21, Thu  HW4    HW3
03/07, Thu         HW4
3 recitation sessions:
Hadoop: Thurs. 1/10, 5:15-6:30pm
We prepared a virtual machine with Hadoop preinstalled
HW0 helps you write your first Hadoop program
Review of probability & stats: 1/17, 5:15-6:30pm
Review of linear algebra: 1/18, 5:15-6:30pm
All sessions will be held in Thornton 102, Thornton Center (Terman Annex)
Sessions will be video recorded!
InfoSeminar (CS545):
http://i.stanford.edu/infoseminar
Great industrial & academic speakers
Topics include data mining and large-scale data processing
Much of the course will be devoted to large-scale computing for data mining. Challenges include distributing computation across many machines and making distributed programs easy to write.
20+ billion web pages x 20KB = 400+ TB
One computer reads 30-35 MB/sec from disk, so it would take ~4 months just to read the web (a quick check below)
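A back-of-envelope check of that read-time claim; the numbers are the slide's, the script is only illustrative:

    pages = 20e9              # 20+ billion web pages
    page_size = 20e3          # 20 KB per page
    corpus_bytes = pages * page_size      # 4.0e14 bytes = 400+ TB
    read_rate = 35e6                      # ~35 MB/sec from one disk
    days = corpus_bytes / read_rate / 86400
    print(round(days))                    # ~132 days, i.e. about 4 months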
~1,000 hard drives just to store the web
It takes even more to do something useful with the data!
Today, a standard architecture for such problems is emerging:
- Cluster of commodity Linux nodes
- Commodity network (Ethernet) to connect them
Cluster architecture:
- 2-10 Gbps backbone between racks
- 1 Gbps between any pair of nodes in a rack
- Each rack contains 16-64 commodity nodes (CPU, memory, disk), connected by switches
In 2011 it was guesstimated that Google had 1M machines, http://bit.ly/Shh0RO
How do you distribute computation?
How can we make it easy to write distributed programs?
Machines fail:
- One server may stay up 3 years (1,000 days)
- If you have 1,000 servers, expect to lose one per day
- People estimated Google had ~1M machines in 2011, so at that scale 1,000 machines fail every day! (the arithmetic is spelled out below)
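The failure arithmetic spelled out, using the slide's estimates:

    machines = 1_000_000          # Google's estimated 2011 fleet
    uptime_days = 1_000           # one server stays up ~3 years
    print(machines / uptime_days) # ~1,000 machine failures per day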
Issue: copying data over a network takes time.
Idea: bring computation close to the data, and store files multiple times for reliability.
Map-reduce addresses these problems:
- Google's computational/data manipulation model
- An elegant way to work with big data
- Storage infrastructure: a distributed file system (Google: GFS, Hadoop: HDFS)
- Programming model: Map-Reduce
Problem: if nodes fail, how do we store data persistently?
Answer: a distributed file system. Typical usage pattern:
- Huge files (100s of GB to TB)
- Data is rarely updated in place
- Reads and appends are common
Chunk servers:
- File is split into contiguous chunks
- Typically each chunk is 16-64MB
- Each chunk is replicated (usually 2x or 3x)
- Try to keep replicas in different racks
Master node:
- a.k.a. Name Node in Hadoop's HDFS
- Stores metadata about where files are stored
- Might be replicated
Client library for file access:
- Talks to the master to find chunk servers
- Connects directly to chunk servers to access data
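To make the division of labor concrete, here is an illustrative sketch of a DFS read path; the names (master.lookup, server.read, CHUNK_SIZE) are ours, not the actual GFS or HDFS API, and reads spanning chunk boundaries are ignored:

    CHUNK_SIZE = 64 * 2**20   # 64 MB chunks (the typical upper end)

    def dfs_read(master, path, offset, length):
        # 1. Ask the master (name node) which chunk holds this byte range
        #    and where its replicas live; the master serves only metadata
        chunk_id, replicas = master.lookup(path, offset)
        # 2. Fetch the bytes directly from a chunk server; data never
        #    flows through the master
        for server in replicas:
            try:
                return server.read(chunk_id, offset % CHUNK_SIZE, length)
            except ConnectionError:
                continue   # replica down: fall back to the next copy
        raise IOError("all replicas unavailable")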
Reliable distributed file system:
- Data kept in chunks spread across machines
- Each chunk replicated on different machines: seamless recovery from disk or machine failure
[Figure: chunks C0-C5 and D0-D1 replicated across chunk servers 1 through N]
Bring computation directly to the data! Chunk servers also serve as compute servers.
Task: count the number of times each distinct word appears in a file.
Sample application: analyze web server logs to find popular URLs.
words(doc.txt) | sort | uniq -c
where words takes a file and outputs the words in it, one per line. This pipeline captures the essence of MapReduce.
Outline of a Map-Reduce computation:
- Sequentially read a lot of data
- Map: extract something you care about
- Group by key: sort and shuffle
- Reduce: aggregate, summarize, filter, or transform
- Write the result
The outline stays the same; Map and Reduce change to fit the problem.
[Diagram: Map tasks turn input chunks into (key, value) pairs; pairs with the same key are grouped; Reduce tasks process each group]
Word count walkthrough:
- Big document (input)
- Map: (The, 1), (crew, 1), (of, 1), (the, 1), (space, 1), (shuttle, 1), (Endeavor, 1), (recently, 1), ... as (key, value) pairs
- Group by key: (crew, 1), (crew, 1), (space, 1), (the, 1), (the, 1), (the, 1), (shuttle, 1), (recently, 1), ...
- Reduce: collect all values belonging to the key and output
map(key, value):
    // key: document name; value: text of the document
    for each word w in value:
        emit(w, 1)

reduce(key, values):
    // key: a word; values: an iterator over counts
    result = 0
    for each count v in values:
        result += v
    emit(key, result)
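The same logic as a small, self-contained Python simulation that runs on one machine; map_reduce and its helpers are our names for a toy driver, not a real MapReduce API, but they make the three phases explicit:

    from collections import defaultdict

    def map_fn(doc_name, text):
        # key: document name; value: text of the document
        for word in text.split():
            yield (word, 1)

    def reduce_fn(word, counts):
        # key: a word; values: the list of counts for that word
        yield (word, sum(counts))

    def map_reduce(inputs, map_fn, reduce_fn):
        groups = defaultdict(list)
        for key, value in inputs:             # Map phase
            for k, v in map_fn(key, value):
                groups[k].append(v)           # Group by key
        out = []
        for k, vs in sorted(groups.items()):  # Reduce phase
            out.extend(reduce_fn(k, vs))
        return out

    print(map_reduce([("doc", "the crew of the space shuttle")],
                     map_fn, reduce_fn))
    # [('crew', 1), ('of', 1), ('shuttle', 1), ('space', 1), ('the', 2)]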
The Map-Reduce environment takes care of:
- Partitioning the input data
- Scheduling the program's execution across a set of machines
- Performing the group-by-key step
- Handling machine failures
- Managing required inter-machine communication
Group by key:
- Collect all pairs with the same key (hash merge, shuffle, sort, partition)
Reduce:
- Collect all values belonging to the key and output
All phases are distributed with many tasks doing the work
Input and final output are stored on a distributed file system (FS):
- The scheduler tries to schedule map tasks close to the physical storage location of the input data
Intermediate results are stored on the local FS of Map and Reduce workers.
Output is often the input to another MapReduce task.
Often a Map task will produce many pairs of the form (k,v1), (k,v2), ... for the same key k (e.g., popular words in the word count example). We can save network time by pre-aggregating values in the mapper:
- combine(k, list(v1)) -> v2
- The combiner is usually the same as the reduce function
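For word count, a minimal sketch of combining inside the mapper; this is our illustration, and it is only valid because the reduce function (summation) is associative and commutative:

    from collections import Counter

    def map_with_combiner(doc_name, text):
        # Pre-aggregate counts inside the mapper, so only one
        # (word, partial_count) pair per distinct word crosses the network
        for word, count in Counter(text.split()).items():
            yield (word, count)

    # The reducer is unchanged: summing partial counts still yields
    # the correct totals.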
The system uses a default partition function: hash(key) mod R. Sometimes it is useful to override the hash function:
- E.g., hash(hostname(URL)) mod R ensures URLs from a host end up in the same output file
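A sketch of that custom partitioner; the function names are ours (in Hadoop this would be a Partitioner subclass), and a real system would use a stable hash rather than Python's per-run salted hash:

    from urllib.parse import urlparse

    def default_partition(key, R):
        return hash(key) % R          # default: spread keys uniformly

    def host_partition(url, R):
        # all URLs from the same host land on the same reducer,
        # hence in the same output file
        return hash(urlparse(url).netloc) % R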
Example: bytes per host
- Suppose we have a large web corpus; look at the metadata file, with lines of the form (URL, size, date, ...)
- For each host, find the total number of bytes, that is, the sum of the page sizes for all URLs from that particular host (a sketch follows below)
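One way to write this job, assuming tab-separated metadata lines (our assumption); map_fn and reduce_fn plug into the toy driver shown earlier:

    from urllib.parse import urlparse

    def map_fn(_, line):
        # line: "URL <tab> size <tab> date ..." from the metadata file
        url, size = line.split("\t")[:2]
        yield (urlparse(url).netloc, int(size))   # key = host

    def reduce_fn(host, sizes):
        yield (host, sum(sizes))                  # total bytes per host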
Other examples:
- Link analysis and graph processing
- Machine learning algorithms
Reduce: combine the counts.
Compute the natural join R(A,B) ⋈ S(B,C):
- R and S are each stored in files
- Tuples are pairs (a,b) or (b,c)
R:  A   B       S:  B   C       R ⋈ S:  A   C
    a1  b1          b2  c1              a3  c1
    a2  b1          b2  c2              a3  c2
    a3  b2          b3  c3              a4  c3
    a4  b3
Map processes turn each tuple R(a,b) into the key-value pair (b,(a,R)) and each tuple S(b,c) into (b,(c,S)), and send each pair with key b to Reduce process h(b). Each Reduce process matches all the pairs (b,(a,R)) with all (b,(c,S)) and outputs (a,b,c).
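A sketch of the join in the same toy-driver style as before; tagging each value with its relation of origin is standard, but the names are ours:

    def map_fn(relation, tuples):
        # relation: "R" or "S"; the shared B-value becomes the key
        for t in tuples:
            if relation == "R":
                a, b = t
                yield (b, ("R", a))
            else:
                b, c = t
                yield (b, ("S", c))

    def reduce_fn(b, tagged):
        # pair every R-side value with every S-side value sharing key b
        r_side = [x for tag, x in tagged if tag == "R"]
        s_side = [x for tag, x in tagged if tag == "S"]
        for a in r_side:
            for c in s_side:
                yield (a, b, c)

Feeding it the R and S tuples from the table above reproduces the rows (a3,b2,c1), (a3,b2,c2), (a4,b3,c3).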
In MapReduce we quantify the cost of an algorithm using:
1. Communication cost = total I/O of all processes
2. Elapsed communication cost = max of I/O along any path
3. (Elapsed) computation cost: analogous, but count only the running time of processes
Note that here big-O notation is not the most useful (adding more machines is always an option).
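As a worked instance of measure 1 for the join above; the tuple counts are invented for illustration, and the final result is left out of the tally because it typically feeds another MapReduce job:

    r, s = 1_000_000, 2_000_000   # tuples in R and S (assumed sizes)
    map_input = r + s             # every tuple is read once by a mapper
    reduce_input = r + s          # every key-value pair is shipped to one reducer
    communication_cost = map_input + reduce_input   # O(r + s)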
Total cost tells what you pay in rent to your friendly neighborhood cloud.
Elapsed cost is wall-clock time using parallelism.
We put a limit s on the amount of input or output that any one process can have. s could be:
- What fits in main memory
- What fits on local disk
We are going to pick k and the number of Map processes so that the I/O limit s is respected.
With proper indexes, computation cost is linear in the input + output size, so computation cost is like communication cost.
Implementations:
- Google: not available outside Google
- Hadoop: an open-source implementation in Java; uses HDFS for stable storage; download: http://lucene.apache.org/hadoop/
- Aster Data: cluster-optimized SQL database that also implements MapReduce
Amazon's Elastic Compute Cloud (EC2): Aster Data and Hadoop can both be run on EC2. For CS341 (offered next quarter), Amazon will provide free access for the class.
Jeffrey Dean and Sanjay Ghemawat: MapReduce: Simplified Data Processing on Large Clusters
http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System
http://labs.google.com/papers/gfs.html
Hadoop Wiki:
- Introduction
- Getting Started
- Map/Reduce Overview
- Eclipse Environment
Javadoc: http://lucene.apache.org/hadoop/docs/api/
Nightly builds: http://people.apache.org/dist/lucene/hadoop/nightly/
Version control: http://lucene.apache.org/hadoop/version_control.html
How MapReduce relates to prior work:
- Programming model inspired by functional language primitives
- Partitioning/shuffling similar to many large-scale sorting systems (NOW-Sort ['97])
- Re-execution for fault tolerance
- Locality optimization has parallels with Active Disks/Diamond work
- Backup tasks similar to Eager Scheduling in the Charlotte system
- Dynamic load balancing solves a similar problem as River's distributed queues (River ['99])