
Linear Regression Homework

COS 703

Keelia Altheimer-Bienemy
April 11, 2014

Isotonic Regression:
=== Run information ===
Scheme: weka.classifiers.functions.IsotonicRegression
Relation: Linear Regression Datas
Instances: 10
Attributes: 2
x - years experience
y - salary (in $1000s)
Test mode: evaluate on training data
=== Classifier model (full training set) ===
Isotonic regression
Based on attribute: y - salary (in $1000s)
prediction: 1
cut point: 25
prediction: 3
cut point: 39.5
prediction: 6
cut point: 50
prediction: 8
cut point: 58
prediction: 10
cut point: 68
prediction: 13
cut point: 77.5
prediction: 16
cut point: 86.5
prediction: 21
Time taken to build model: 0.02 seconds
=== Evaluation on training set ===
Time taken to test model on training data: 0 seconds
=== Summary ===
Correlation coefficient                  0.9972
Mean absolute error                      0.2
Root mean squared error                  0.4472
Relative absolute error                  4.065 %
Root relative squared error              7.465 %
[Master Plot of the Isotonic Regression output; highlighted instance 7: x - years experience = 11.0, y - salary (in $1000s) = 59.0]
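The model above is a non-decreasing step function that maps salary to a prediction of the class attribute (x - years experience). The sketch below evaluates that step function. It assumes each printed prediction holds for salaries below the following cut point and that the last prediction holds above the final cut point, which is my reading of Weka's output format; the exact boundary handling is not verified, and the class and method names are illustrative, not Weka code.

// Minimal sketch (not Weka code) of the isotonic step function printed above.
// Cut points and predictions are copied from the model output; boundary
// handling exactly at a cut point is an assumption.
public class IsotonicStep {
    static final double[] CUTS  = {25, 39.5, 50, 58, 68, 77.5, 86.5};
    static final double[] PREDS = {1, 3, 6, 8, 10, 13, 16, 21};

    // Predicted years of experience for a salary given in $1000s.
    static double isotonicPredict(double salary) {
        int i = 0;
        while (i < CUTS.length && salary >= CUTS[i]) {
            i++;
        }
        return PREDS[i];
    }

    public static void main(String[] args) {
        // Instance 7 from the master plot: salary 59.0 falls between the cut
        // points 58 and 68, so the model predicts 10 (actual value: 11.0).
        System.out.println(isotonicPredict(59.0));
    }
}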

Linear Regression:
=== Run information ===
Scheme: weka.classifiers.functions.LinearRegression -S 0 -R 1.0E-8
Relation: Linear Regression Datas
Instances: 10
Attributes: 2
x - years experience
y - salary (in $1000s)
Test mode: evaluate on training data
=== Classifier model (full training set) ===
Linear Regression Model
x - years experience =
0.2671 * y - salary (in $1000s) +
-5.7001
Time taken to build model: 0.05 seconds
=== Evaluation on training set ===
Time taken to test model on training data: 0 seconds
=== Summary ===
Correlation coefficient                  0.9721
Mean absolute error                      1.17
Root mean squared error                  1.4045
Relative absolute error                 23.7814 %
Root relative squared error             23.4449 %
Total Number of Instances               10
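The fitted line predicts years of experience from salary. As a quick check, instance 7 from the master plot (salary 59.0) gives 0.2671 * 59.0 - 5.7001, roughly 10.06 years, against an actual value of 11.0, which is in line with the error figures above. The sketch below simply evaluates that line; the class and method names are illustrative, not Weka code.

// Minimal sketch applying the linear model printed above:
// x (years experience) = 0.2671 * y (salary in $1000s) + (-5.7001)
public class LinearModel {
    static double predictYears(double salary) {
        return 0.2671 * salary - 5.7001;
    }

    public static void main(String[] args) {
        // Instance 7 from the master plot: salary 59.0, actual experience 11.0.
        System.out.println(predictYears(59.0)); // approx. 10.06
    }
}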

Multilayer Perceptron:
=== Run information ===
Scheme: weka.classifiers.functions.MultilayerPerceptron -L 0.3 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H a
Relation: Linear Regression Datas
Instances: 10
Attributes: 2
x - years experience
y - salary (in $1000s)
Test mode: evaluate on training data
=== Classifier model (full training set) ===
Linear Node 0
    Inputs                          Weights
    Threshold                       2.046407295607268
    Node 1                          -3.1560514620656384
Sigmoid Node 1
    Inputs                          Weights
    Threshold                       1.006826160067298
    Attrib y - salary (in $1000s)   -1.6559471582428547
Class
    Input
    Node 0
Time taken to build model: 0.06 seconds
=== Evaluation on training set ===
Time taken to test model on training data: 0 seconds
=== Summary ===
Correlation coefficient                  0.9826
Mean absolute error                      1.0945
Root mean squared error                  1.2769
Relative absolute error                 22.2467 %
Root relative squared error             21.3149 %
Total Number of Instances               10
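The printed weights describe a network with one sigmoid hidden node fed by the salary attribute and one linear output node for the class. The sketch below shows the forward pass implied by those weights. Two caveats: Weka's MultilayerPerceptron normalizes numeric attributes and the numeric class internally and does not print those scaling constants, so this cannot reproduce the exact predictions; and treating each threshold as an additive bias is an assumption about how Weka applies it.

// Sketch of the forward pass implied by the printed MLP weights (not Weka code).
// CAUTION: Weka scales the attribute and class values internally; those
// constants are not part of the printed model, so this shows the structure of
// the computation only. Thresholds are treated here as additive biases.
public class MlpForward {
    static double sigmoid(double v) {
        return 1.0 / (1.0 + Math.exp(-v));
    }

    // salaryNorm: the salary attribute after Weka's internal normalization.
    static double forward(double salaryNorm) {
        // Sigmoid Node 1: threshold 1.0068..., weight on salary -1.6559...
        double node1 = sigmoid(1.006826160067298
                + (-1.6559471582428547) * salaryNorm);
        // Linear Node 0 (class output): threshold 2.0464..., weight on Node 1 -3.1560...
        return 2.046407295607268 + (-3.1560514620656384) * node1;
    }
}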

RBF Regressor:
=== Run information ===
Scheme: weka.classifiers.functions.RBFRegressor -N 2 -R 0.01 -L 1.0E-6 -C 2 -P 1 -E 1 -S 1
Relation: Linear Regression Datas
Instances: 10
Attributes: 2
x - years experience
y - salary (in $1000s)
Test mode: evaluate on training data
=== Classifier model (full training set) ===
Output weight: 0.4497789072429531
Unit center: 1.0613443734572008
Unit scale: 0.13141576270523955
Output weight: -0.5706402558335077
Unit center: -0.02290388765437227
Unit scale: 0.3689409027192465
Bias weight: 0.5884918753923721
Time taken to build model: 0.05 seconds
=== Evaluation on training set ===
Time taken to test model on training data: 0 seconds
=== Summary ===
Correlation coefficient                  0.9886
Mean absolute error                      0.8105
Root mean squared error                  0.9062
Relative absolute error                 16.4738 %
Root relative squared error             15.1267 %
Total Number of Instances               10
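RBFRegressor models the class as a bias plus a weighted sum of radial basis functions centered at the printed unit centers. The sketch below shows that structure using the common Gaussian form exp(-d^2 / (2 * scale^2)); the exact kernel parameterization and the classifier's internal scaling of the input are assumptions that should be checked against the RBFRegressor documentation, so this is illustrative rather than a way to reproduce the exact predictions.

// Sketch of the prediction structure implied by the printed RBFRegressor
// parameters (not Weka code). ASSUMPTIONS: Gaussian basis
// exp(-(d*d) / (2*scale*scale)) and an internally scaled input value.
public class RbfForward {
    static final double[] WEIGHTS = {0.4497789072429531, -0.5706402558335077};
    static final double[] CENTERS = {1.0613443734572008, -0.02290388765437227};
    static final double[] SCALES  = {0.13141576270523955, 0.3689409027192465};
    static final double   BIAS    = 0.5884918753923721;

    // salaryStd: the salary attribute after the classifier's internal scaling.
    static double predict(double salaryStd) {
        double out = BIAS;
        for (int i = 0; i < WEIGHTS.length; i++) {
            double d = salaryStd - CENTERS[i];
            out += WEIGHTS[i] * Math.exp(-(d * d) / (2.0 * SCALES[i] * SCALES[i]));
        }
        return out;
    }
}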

The Linear Regression model and the Simple Regression model produced the same output, so I included the Isotonic Regression, Linear Regression, Multilayer Perceptron, and RBF Regressor models. Of the four, the Isotonic Regression model shows the most significant difference in output, as illustrated in the graph of the model outputs.
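For reference, all four runs above can be reproduced through the Weka Java API instead of the Explorer. The sketch below is minimal and makes a few assumptions: the data set is saved as an ARFF file (salary.arff is a placeholder name), the class attribute is x - years experience (inferred from the model output, which predicts x from y), and the RBFRegressor classifier's package (RBFNetwork) is installed.

import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.core.Instances;
import weka.core.Utils;
import weka.core.converters.ConverterUtils.DataSource;

public class RunSchemes {
    public static void main(String[] args) throws Exception {
        // "salary.arff" is a placeholder; the actual data set is the
        // 10-instance experience/salary relation used above.
        Instances data = DataSource.read("salary.arff");
        data.setClassIndex(0); // class = x - years experience (inferred from the output)

        String[][] schemes = {
            {"weka.classifiers.functions.IsotonicRegression", ""},
            {"weka.classifiers.functions.LinearRegression", "-S 0 -R 1.0E-8"},
            {"weka.classifiers.functions.MultilayerPerceptron",
             "-L 0.3 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H a"},
            {"weka.classifiers.functions.RBFRegressor",
             "-N 2 -R 0.01 -L 1.0E-6 -C 2 -P 1 -E 1 -S 1"}
        };

        for (String[] s : schemes) {
            Classifier cls = AbstractClassifier.forName(s[0], Utils.splitOptions(s[1]));
            cls.buildClassifier(data);
            // Matches "Test mode: evaluate on training data" above.
            Evaluation eval = new Evaluation(data);
            eval.evaluateModel(cls, data);
            System.out.println(s[0]);
            System.out.println(eval.toSummaryString());
        }
    }
}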
