Author | Description | Date submitted | Error rate (%) |
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Sun Jan 12 13:49:39 2014 | 17.6 |
K.L. | Single layer neural net, 100 training rounds, learning rate = .01. | Tue Jan 7 15:36:37 2014 | 17.7 |
Kiwis | AdaBoost with decision trees. Number of iterations: 200. Max depth for tree: 3. | Sun Jan 12 16:01:54 2014 | 18.0 |
bchou | AdaBoost on Binary Decision Stumps. 150 rounds of Boosting | Fri Jan 3 07:09:15 2014 | 18.1 |
Anna Ren (aren) and Sunny Xu (ziyangxu), Sunnanna | Adaboost using 150 rounds of boosting and decision stumps as a weak learner | Thu Jan 9 22:39:29 2014 | 18.1 |
Mr. Blobby | AdaBoost (150 rounds) with decision stumps | Fri Jan 10 20:22:29 2014 | 18.1 |
Green Gmoney Choi | This is an implementation of the AdaBoost algorithm with decision stumps. | Sun Jan 12 15:10:36 2014 | 18.1 |
Rocky | AdaBoost algorithm with decision stumps as the weak learner, T=150 | Mon Jan 13 21:17:55 2014 | 18.1 |
R.A.B. | Adaboost on decision stumps 150 rounds | Tue Jan 14 03:15:28 2014 | 18.1 |
CC | AdaBoost with Neural Networks as the learner. Uses the fraction of the data with the highest weights to form the hypothesis in a given round | Tue Jan 14 11:13:03 2014 | 18.1 |
weezy | Implements AdaBoost using decision stumps as a weak learner and running for 1000 rounds of boosting. | Wed Jan 8 23:17:50 2014 | 18.2 |
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 00:20:25 2014 | 18.2 |
CC | null | Thu Jan 9 14:40:01 2014 | 18.2 |
S1 | An implementation of AdaBoost, with decision stumps as the weak learner for the algorithm and 500 rounds of boosting. | Thu Jan 9 15:22:39 2014 | 18.2 |
SAJE | ADABOOST | Thu Jan 9 15:35:46 2014 | 18.2 |
ytterbium | AdaBoost with decision stumps. (1000 rounds) | Thu Jan 9 20:00:48 2014 | 18.2 |
Cam Porter | A version of the AdaBoost learning algorithm that uses decision stumps as a weak learning base. | Thu Jan 9 22:47:38 2014 | 18.2 |
me | Adaboost using decision stumps and 400 rounds of boosting. | Thu Jan 9 23:18:51 2014 | 18.2 |
dmmckenn_pthorpe | Implements Adaboost with 1,000 rounds of boosting with decision stumps as the weak learner. | Thu Jan 9 23:32:58 2014 | 18.2 |
Mr. Blobby | AdaBoost (1000 rounds) with decision stumps | Fri Jan 10 20:11:34 2014 | 18.2 |
R.A.B. | Adaboost on decision stumps 1000 rounds | Tue Jan 14 02:04:14 2014 | 18.2 |
ebp | Adaboost with decision stumps minimizing smoothed weighted training error, 100 rounds of boosting. | Mon Jan 6 21:02:21 2014 | 18.3 |
dlackey | This is an implementation of AdaBoost that uses 175 rounds of boosting. The weak learning algorithm used is a decision stump that directly minimizes the weighted training error. | Wed Jan 8 14:10:54 2014 | 18.3 |
Caligula | An implementation of AdaBoost with decision stumps and 800 rounds of boosting. | Wed Jan 8 19:46:23 2014 | 18.3 |
Macrame | Adaboost, decision stumps, 250 rounds | Tue Jan 14 01:55:44 2014 | 18.3 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 200 rounds of boosting. | Sat Dec 28 16:13:58 2013 | 18.4 |
anon5 | An implementation of the AdaBoost algorithm using decision stumps as the learner with 200 rounds of boosting | Thu Jan 2 15:28:13 2014 | 18.4 |
CTTT | Adaboost + Decision Stumps (200 rounds). | Mon Jan 6 20:44:55 2014 | 18.4 |
Mickey Mouse | An implementation of AdaBoost with Decision Stump as the weak learner and 200 rounds of boosting | Tue Jan 7 15:44:27 2014 | 18.4 |
lolz | Adaboost with decision stumps as the weak learner algorithm (k = 200) | Thu Jan 9 00:39:23 2014 | 18.4 |
SAJE | ADABOOST | Thu Jan 9 15:32:14 2014 | 18.4 |
Chuck and Larry | Perceptron neural network | Thu Jan 9 18:00:04 2014 | 18.4 |
Chuck and Larry | AdaBooooooost!!! using binary decision stumps with 200 rounds of boosting | Thu Jan 9 18:04:16 2014 | 18.4 |
Charliezsc | Adaboost with Decision Stumps | Thu Jan 9 18:21:10 2014 | 18.4 |
Wumi | AdaBoost with Decision Stump and 200 boosts | Thu Jan 9 20:52:31 2014 | 18.4 |
Mr. Blobby | AdaBoost (200 rounds) with decision stumps | Thu Jan 9 23:32:26 2014 | 18.4 |
jabreezy | Adaboost with decision stumps (boosted 200 rounds) | Thu Jan 9 23:49:49 2014 | 18.4 |
hi | an implementation of AdaBoost woo hoo | Thu Jan 9 23:57:10 2014 | 18.4 |
Marius | AdaBoost using decision stumps as the weak-learning algorithms. It is run for 200 iterations. | Sun Jan 12 14:09:03 2014 | 18.4 |
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 14:22:07 2014 | 18.4 |
Kiwis | AdaBoost with decision trees. Number of iterations: 100. Max depth for tree: 4. | Sun Jan 12 16:18:27 2014 | 18.4 |
Hello! | AdaBoost with Decision Stumps | Sun Jan 12 23:21:29 2014 | 18.4 |
Supahaka | AdaBoost with Decision Stumps with 500 rounds of boosting. | Tue Jan 14 01:07:47 2014 | 18.4 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 300 rounds of boosting. | Tue Jan 14 12:07:17 2014 | 18.4 |
Jordan | The Adaboost Algorithm with 2000 Decision Stumps | Sun Jan 5 15:21:07 2014 | 18.5 |
Jameh | A learning algorithm using Adaboost along with decision stumps to determine a classifier to use on future test cases, given a BinaryDataSet and a number of rounds of boosting. | Tue Jan 7 22:52:25 2014 | 18.5 |
BPM | I implemented AdaBoost with binary decision stumps and 100 rounds of boosting. | Wed Jan 8 22:42:32 2014 | 18.5 |
CC | AdaBoost with Decision Stump for 5000 rounds. | Thu Jan 9 14:49:58 2014 | 18.5 |
SAJE | ADABOOST | Thu Jan 9 15:27:21 2014 | 18.5 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 150 | Sat Jan 11 13:56:33 2014 | 18.5 |
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sun Jan 12 04:25:11 2014 | 18.5 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 101 rounds of boosting. | Mon Jan 13 20:42:40 2014 | 18.5 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 100 rounds of boosting. | Tue Jan 14 12:12:07 2014 | 18.5 |
hb | AdaBoost, basic decision stumps | Tue Jan 14 15:14:42 2014 | 18.5 |
bfang | Boosting with decision stumps (100 rounds) | Fri Jan 3 21:10:37 2014 | 18.6 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 250 | Thu Jan 9 23:27:29 2014 | 18.6 |
CC | AdaBoost with Decision Stump for 50 rounds. | Mon Jan 13 21:31:50 2014 | 18.6 |
CC | AdaBoost with Decision Stump for 80 rounds. | Tue Jan 14 10:51:50 2014 | 18.6 |
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 5) and 200 rounds of boosting. | Tue Jan 14 12:38:05 2014 | 18.6 |
SAJE | ADABOOST | Thu Jan 9 15:43:16 2014 | 18.7 |
SAJE | ADABOOST 2 | Thu Jan 9 15:48:58 2014 | 18.7 |
B&Y | We use the voted-perceptron algorithm. It runs repeatedly on each training set until it finds a prediction vector which is correct on all examples. We keep track of the survival times for each new prediction vector. These weights help us make a final binary prediction using a weighted majority vote. | Thu Jan 9 23:14:49 2014 | 18.7 |
CC | AdaBoost with Neural Networks as the learner. Uses the fraction of the data with the highest weights to form the hypothesis in a given round | Mon Jan 13 16:07:10 2014 | 18.7 |
Sunnanna | single-layer feedforward neural net using a logistic function | Mon Jan 13 16:18:07 2014 | 18.7 |
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:33:16 2014 | 18.7 |
Ravi Tandon | Implementation of Adaboost, using decision stump as weak learning algorithm. | Tue Dec 31 02:31:23 2013 | 18.8 |
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Thu Jan 9 19:13:02 2014 | 18.8 |
Mike Honcho | Adaboost implementation | Thu Jan 9 21:47:00 2014 | 18.8 |
Shaheed Chagani | AdaBoost | Mon Jan 13 21:31:38 2014 | 18.8 |
Hello! | AdaBoost with Naive Bayes(200) | Tue Jan 14 14:19:36 2014 | 18.8 |
bfang | Boosting with decision stumps and early stopping | Sun Dec 29 23:30:12 2013 | 18.9 |
Tiny Wings | AdaBoost with decision tree algorithm as weak learner (maximum depth of decision trees = 5, chi-square pruning significance level = 0.01, # of AdaBoost rounds = 200) | Mon Jan 6 05:07:34 2014 | 18.9 |
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Thu Jan 9 20:27:28 2014 | 18.9 |
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:23:34 2014 | 18.9 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 21:22:24 2014 | 18.9 |
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 4) and 200 rounds of boosting. | Tue Jan 14 12:29:29 2014 | 18.9 |
Janie Gu | AdaBoost algorithm with decision trees as the weak learner (with a random subset of training examples selected each round by resampling). | Mon Jan 6 15:26:52 2014 | 19.0 |
Kiwis | AdaBoost with decision stumps and 80 iterations. | Fri Jan 10 07:39:57 2014 | 19.0 |
Epic Harbors | Adaboost with decision stumps as the weak learner and 250 rounds of boosting | Thu Jan 9 15:00:12 2014 | 19.1 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 20 | Sat Jan 11 13:50:02 2014 | 19.1 |
Andra Constantinescu and Bar Shabtai | Random Forest with number of trees, N and M optimized for each dataset! | Tue Jan 14 07:12:37 2014 | 19.1 |
B&Y | We use the voted-perceptron algorithm. It runs repeatedly on each training set until it finds a prediction vector which is correct on all examples. We keep track of the survival times for each new prediction vector. These weights help us make a final binary prediction using a weighted majority vote. | Tue Jan 14 12:25:17 2014 | 19.1 |
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Tue Jan 14 16:38:21 2014 | 19.1 |
Mike Honcho 500 | Adaboost implementation | Tue Jan 14 12:28:11 2014 | 19.2 |
Andra Constantinescu and Bar Shabtai | AdaBoost on a single-layer neural network. The neural classifier takes binary input and loops through all training examples to update the weights of each attribute. Number of boosting rounds optimized for the data set (here 2) | Tue Jan 14 16:20:19 2014 | 19.2 |
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sat Jan 4 06:55:07 2014 | 19.3 |
Supahaka | AdaBoost with Decision Stumps with 100000 rounds of boosting. | Mon Jan 13 00:38:32 2014 | 19.3 |
bcfour | A Naive Bayes approach to classification. | Sat Jan 4 22:28:10 2014 | 19.4 |
Solving From Classifier | The Naive Bayes algorithm executes the maximum-likelihood parameter learning problem and uses the learned parameters (obtained from observed attribute values) to find the maximum-likelihood naive Bayes hypothesis. | Tue Jan 7 13:55:17 2014 | 19.4 |
Mickey Mouse | An implementation of Naive Bayes | Tue Jan 7 15:50:34 2014 | 19.4 |
Lil Thug | A simple decision tree algorithm with chi-squared pruning. | Wed Jan 8 15:12:38 2014 | 19.4 |
bcfour,jkwok | Naive Bayes with standard Laplacian correction | Thu Jan 9 13:03:05 2014 | 19.4 |
dmmckenn_pthorpe | Implements Naive Bayes using discretization as opposed to continuous values. | Thu Jan 9 21:21:27 2014 | 19.4 |
Hello! | Naive Bayes | Thu Jan 9 23:05:21 2014 | 19.4 |
Sunnanna | Naive Bayes Algorithm using a maximum-likelihood estimator | Mon Jan 13 18:54:30 2014 | 19.4 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Stumps | Mon Jan 13 21:24:34 2014 | 19.4 |
Andra Constantinescu and Bar Shabtai | AdaBoost with decision stump as the weak learner. Number of iterations of AdaBoost optimized per example. | Tue Jan 14 02:55:45 2014 | 19.4 |
vluu | An attempt at AdaBoost with Naive Bayes | Thu Dec 26 21:36:21 2013 | 19.5 |
Kiwis | AdaBoost with decision trees. Number of iterations: 200. Max depth for tree: 1. | Sun Jan 12 15:46:23 2014 | 19.5 |
Jordan | Adaboost to create a new feature space, then KNN | Sun Jan 5 21:17:16 2014 | 19.6 |
Wafflepocalypse | A random forest classifier with 1001 trees. | Thu Jan 9 04:53:45 2014 | 19.6 |
Aaron Doll | This is an implementation of the random forests with m=1, 400 trees | Sat Jan 11 02:40:37 2014 | 19.6 |
haoyu | Random Forest with Decision Tree | Fri Dec 27 00:48:21 2013 | 19.7 |
Dr. Steve Brule (For Your Health) | Neural Network. | Thu Jan 9 17:28:18 2014 | 19.7 |
Mr. Blobby | AdaBoost (200 rounds) with decision trees (depth limit of 5) | Sun Jan 12 06:13:26 2014 | 19.7 |
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Mon Jan 13 15:35:01 2014 | 19.7 |
Tauriel | RandomForest w/ DecisionTrees | Sat Jan 4 16:30:35 2014 | 19.8 |
L.M. | K-nearest | Thu Jan 9 03:40:10 2014 | 19.8 |
S1 | Random forests with decision trees (500 trees). | Sat Jan 11 20:12:48 2014 | 19.8 |
Solving From Classifier | The Naive Bayes algorithm using a binary representation as opposed to a discrete representation. | Sat Jan 11 21:42:28 2014 | 19.8 |
Solving From Classifier | The Naive Bayes algorithm using a binary representation at times and a discrete representation at other times. | Sat Jan 11 21:58:17 2014 | 19.8 |
corgi2.0 | AdaBoost 150 w/ basic decision stumps | Sat Jan 11 23:14:44 2014 | 19.9 |
God | Implements naive Bayes algorithm. | Tue Jan 14 00:19:12 2014 | 19.9 |
Nihar the God | Uses Adaboost with decision stumps as weak learners and then uses 150 rounds of boosting | Tue Jan 14 15:07:58 2014 | 19.9 |
Dr. Steve Brule (For Your Health) | Neural Network trained for 100 epochs. | Tue Jan 14 15:40:58 2014 | 19.9 |
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Thu Jan 9 11:30:41 2014 | 20.0 |
Aaron Doll | This is an implementation of the random forest algorithm | Fri Jan 10 14:50:20 2014 | 20.0 |
Tauriel | RandomForest w/ DecisionTrees pruned at significance level 0.95 | Sun Jan 12 15:51:47 2014 | 20.0 |
Fanny | A voted perceptron algorithm (epoch = 10) | Sun Jan 12 22:58:47 2014 | 20.0 |
Fanny | A voted perceptron algorithm (epoch = 30) | Wed Jan 8 13:14:07 2014 | 20.1 |
R.A.B. | K nearest neighbors with k = 20 | Thu Jan 9 22:26:05 2014 | 20.1 |
Aaron Doll | This is an implementation of the random forests with m=1, 400 trees | Fri Jan 10 22:18:09 2014 | 20.1 |
qshen | An implementation of AdaBoost that uses a weak learner that chooses the decision stump that minimizes the weighted training error and is iterated 500 times. | Mon Dec 30 13:58:28 2013 | 20.2 |
DH | AdaBoost with decision trees | Sun Jan 5 16:28:21 2014 | 20.2 |
Tiny Wings | Decision tree with chi-square pruning (pruning significance level = 0.01) | Mon Jan 6 05:05:36 2014 | 20.2 |
finn&jake | Knn, K=40, Euclidean distance for numeric and standardized distance for discrete variables; majority vote for nearest neighbors. | Thu Jan 9 14:13:25 2014 | 20.2 |
Tauriel | AdaBoost w/ DecisionTree | Sun Jan 12 14:11:25 2014 | 20.2 |
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha and epochs optimized for each dataset. | Tue Jan 14 02:29:11 2014 | 20.2 |
Nihar the God | Uses Adaboost with decision stumps as weak learners and then uses 200 rounds of boosting | Tue Jan 14 15:12:20 2014 | 20.3 |
NY | Random forest with 500 iterations | Wed Jan 8 16:53:40 2014 | 20.4 |
Dr. Roberto | Single layer Neural Net run for 100 epochs with a learning rate of 0.01 | Thu Jan 9 13:42:15 2014 | 20.4 |
bchou | Nearest 7-neighbors | Fri Jan 3 20:02:09 2014 | 20.8 |
Dr. Roberto | AdaBoost with 100 rounds of a Single Layer Neural Net run for 10 epochs with a varying learning rate of around 0.01 | Thu Jan 9 14:51:08 2014 | 20.8 |
Kiwis | AdaBoost with decision stumps and 150 iterations. | Thu Jan 9 07:10:52 2014 | 20.9 |
weezy | Implements a k-Nearest Neighbor algorithm with k = 15. | Thu Jan 9 20:02:27 2014 | 20.9 |
George and Katie | A simple implementation of decision trees as per R&N. | Thu Jan 9 21:01:56 2014 | 20.9 |
Sunnanna | nearest neighbors algorithm with k = 7 | Mon Jan 13 14:49:40 2014 | 20.9 |
Rocky | Bagging algorithm with single layer neural network as the weak learner | Tue Jan 14 15:01:01 2014 | 21.0 |
anon5 | An implementation of a decision-tree-learning algorithm with pruning | Fri Jan 3 23:11:54 2014 | 21.1 |
T.C. | Multi-layered Neural Net, 200 iterations, .1 learning rate | Thu Jan 9 03:05:38 2014 | 21.1 |
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 21:22:55 2014 | 21.1 |
Fanny | A voted perceptron algorithm (epoch = 10) | Mon Jan 13 20:46:33 2014 | 21.1 |
LK | AdaBoost using decision stump | Fri Jan 3 04:28:24 2014 | 21.2 |
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:58:55 2014 | 21.3 |
NY | Bagged Decision Trees with 500 trees | Wed Jan 8 16:58:45 2014 | 21.3 |
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Wed Jan 8 23:01:13 2014 | 21.3 |
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 5000 iterations. The learning rate was set to 0.001 and the layers (from input to output) contain [ 105 51 1 ] units, including a bias unit for each non-output layer. | Tue Jan 14 12:55:42 2014 | 21.3 |
LK | Bagging with AdaBoost that uses decision stumps | Mon Jan 6 08:57:03 2014 | 21.4 |
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:43:15 2014 | 21.4 |
Gewang | Predicts the classification label based on the k nearest neighbors | Thu Jan 2 13:12:19 2014 | 21.5 |
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Thu Jan 2 22:55:08 2014 | 21.5 |
dericc, sigatapu | 200-iteration AdaBoost with Decision Trees | Mon Jan 13 16:45:48 2014 | 21.5 |
S1 | An implementation of AdaBoost, with pruned decision trees as the weak learner for the algorithm and 500 rounds of boosting. | Mon Jan 13 16:46:25 2014 | 21.5 |
R.A.B. | k-NN with K = 21 and votes weighted by the inverse of distance | Tue Jan 14 00:03:23 2014 | 21.5 |
LK | Bagging with AdaBoost that uses decision stumps | Tue Jan 14 12:29:37 2014 | 21.5 |
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Sat Jan 4 15:01:33 2014 | 21.6 |
Katie and George | An implementation of the (voted) perceptron algorithm run for 25 epochs. | Thu Jan 9 20:47:01 2014 | 21.6 |
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha = 0.1. Very nice, I like! | Thu Jan 9 21:00:47 2014 | 21.6 |
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:36:12 2014 | 21.7 |
akdote | Naive Bayes Algorithm | Thu Jan 9 21:10:07 2014 | 21.7 |
hb | KNN with L2 distance, k empirically set after cross validation | Tue Jan 14 14:50:46 2014 | 21.7 |
Gewang | A very simple learning algorithm that, on each test example, predicts the classification based on the k nearest neighbors during training | Sun Dec 29 11:16:57 2013 | 21.8 |
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Thu Jan 9 23:48:40 2014 | 21.8 |
anon5 | An implementation of the AdaBoost algorithm using decision trees with pruning as the learner with 200 rounds of boosting | Fri Jan 3 20:00:02 2014 | 21.9 |
anon5 | An implementation of the AdaBoost algorithm using vanilla decision trees as the learner with 200 rounds of boosting | Fri Jan 3 20:15:30 2014 | 21.9 |
hb | KNN with L2 distance, k empirically set after cross validation | Thu Jan 9 22:52:35 2014 | 21.9 |
weezy | Implements a k-Nearest Neighbor algorithm with k = 27. | Sat Jan 11 00:38:42 2014 | 21.9 |
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Sun Jan 12 16:29:37 2014 | 22.0 |
sabard | A decision tree learning algorithm with chi squared pruning. | Tue Jan 14 05:27:30 2014 | 22.1 |
Mike Honcho 100 | Adaboost implementation | Tue Jan 14 12:25:16 2014 | 22.1 |
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Fri Jan 10 00:47:46 2014 | 22.2 |
David H | Random Forest with 500 trees | Sun Jan 5 20:16:10 2014 | 22.3 |
Bob Dondero | Adaboost (200 rounds) with weak learner as a decision tree (max depth 5) and chi-squared pruning (1%). | Thu Jan 9 20:54:23 2014 | 22.3 |
David Hammer | bagging (using binary decision trees) | Sun Jan 5 12:29:43 2014 | 22.4 |
Ravi Tandon | This algorithm is the implementation of the bootstrap aggregation algorithm. | Mon Dec 23 18:40:44 2013 | 22.5 |
The Whitman Whale | Nearest neighbor classification with 17 neighbors and manhattan distance | Thu Jan 9 22:48:28 2014 | 22.5 |
Mike Honcho | Adaboost implementation | Tue Jan 14 12:23:10 2014 | 22.5 |
akdote | Naive Bayes Algorithm | Thu Jan 9 23:47:14 2014 | 22.6 |
Bob Dondero | A decision tree learning algorithm using information gain and chi-squared pruning. | Thu Jan 9 20:39:27 2014 | 22.7 |
Katie and George | An implementation of the (voted) perceptron algorithm run for 100 epochs. | Mon Jan 6 21:04:33 2014 | 22.8 |
K.L. | AdaBoost run with 1000 iterations. | Tue Jan 7 15:33:41 2014 | 22.8 |
Rocky | Nearest Neighbors with weighted vote (weight inversely proportional to distance), Manhattan distance for normalized attributes; linear scan of all examples to find the K nearest neighbors (not good for very large training sets) | Thu Jan 9 10:11:26 2014 | 22.8 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 07:30:13 2014 | 22.8 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 21:53:03 2014 | 22.8 |
AFC | An (initial) implementation of K nearest neighbors with K = sqrt(number of training samples). | Thu Jan 2 16:52:21 2014 | 22.9 |
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Sat Jan 11 01:33:37 2014 | 22.9 |
bclam | Single-layer Neural Network - 2000 epochs, Learning rate of 0.01 | Tue Jan 14 07:12:01 2014 | 23.0 |
vvspr | An implementation of the Naive Bayes Algorithm | Tue Dec 31 13:34:32 2013 | 23.1 |
bfang | Bagging with decision trees | Wed Jan 1 11:30:09 2014 | 23.1 |
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 150 | Sun Jan 12 22:08:45 2014 | 23.1 |
sabard | A decision tree learning algorithm with chi squared pruning (5%). | Tue Jan 14 15:32:46 2014 | 23.1 |
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.01, running for 1000 epochs. | Mon Jan 13 22:06:10 2014 | 23.3 |
Aaron Doll | Decision tree with reduced error pruning. | Thu Dec 26 16:45:48 2013 | 23.4 |
Mickey Mouse | An Implementation of Voted Perceptron algorithm with 200 epochs | Wed Jan 8 15:52:53 2014 | 23.5 |
NY | Purifies training set for decision tree (pruning alternative) | Wed Jan 8 17:03:23 2014 | 23.6 |
mdrjr | This is an implementation of k nearest neighbors. I've played around with both k and the distance function. | Thu Jan 9 20:04:54 2014 | 23.6 |
Rocky | Bagging algorithm with single layer neural network as the weak learner | Thu Jan 9 20:49:05 2014 | 23.6 |
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 250 | Sun Jan 12 20:27:00 2014 | 23.6 |
lilt | An implementation of 10-nearest neighbors | Thu Jan 9 00:23:16 2014 | 23.7 |
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Mon Jan 6 20:47:22 2014 | 23.8 |
Katie and George | An implementation of the (voted) perceptron algorithm run for 25 epochs. | Thu Jan 9 20:08:04 2014 | 23.8 |
CookieMonster | This is a nearest-neighbor classifier which takes a majority vote from the k nearest points in feature space using Euclidean Distance. | Thu Jan 9 22:57:24 2014 | 23.8 |
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:19:35 2014 | 23.8 |
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:59:40 2014 | 23.8 |
hp | Bagging Decision Stumps | Sat Jan 4 03:42:00 2014 | 23.9 |
LK | Bagging with decision stump! | Tue Dec 31 08:04:32 2013 | 24.0 |
CTTT | A decision stump weak learner. | Mon Jan 6 22:58:42 2014 | 24.0 |
K.L. | Decision Stumps | Tue Jan 7 15:29:40 2014 | 24.0 |
Mickey Mouse | An implementation of Decision Stump Algorithm | Tue Jan 7 15:47:47 2014 | 24.0 |
Squirtle | An implementation of the vanilla decision stumps classifier | Thu Jan 9 14:27:10 2014 | 24.0 |
Charliezsc | Decision Stump without boosting | Thu Jan 9 19:19:26 2014 | 24.0 |
Charliezsc | Bagging with Decision Stumps (200 weak learners and half bootstrap samples) | Thu Jan 9 20:08:14 2014 | 24.0 |
Charliezsc | Bagging with Decision Stumps (200 weak learners and 2 percent bootstrap samples) | Thu Jan 9 20:49:51 2014 | 24.0 |
dmmckenn_pthorpe | Basic Stumps Implementation | Thu Jan 9 21:12:17 2014 | 24.0 |
corgi | basic decision stump | Thu Jan 9 23:35:40 2014 | 24.0 |
PandaBear | Adaboost on decision stumps, 1000 rounds | Mon Jan 13 22:01:39 2014 | 24.0 |
PandaBear | Adaboost on decision stumps, 500 rounds | Tue Jan 14 11:02:10 2014 | 24.0 |
Mike Honcho 10 | Adaboost implementation | Tue Jan 14 12:26:40 2014 | 24.0 |
God | Implements Basic Decision Stumps and chooses the one which performs the best | Tue Jan 14 14:35:39 2014 | 24.0 |
Learner | Implementation of Adaboost with decision stumps as the weak learner. | Tue Jan 14 15:17:01 2014 | 24.0 |
Learner | Implementation of Adaboost with decision stumps as the weak learner. 100 rounds of boosting. | Tue Jan 14 15:33:13 2014 | 24.0 |
sabard | Decision Stump weak learning algorithm to be used with AdaBoost | Tue Jan 14 15:39:42 2014 | 24.0 |
Aaron Doll | This is an implementation of the random forest algorithm | Thu Jan 9 18:17:46 2014 | 24.1 |
Boar Ciphers | Implements a single-layer neural network with 100 epochs and a 0.001 learning rate | Tue Jan 14 10:53:40 2014 | 24.2 |
JS | Single-layer Neural Network, 100 epochs, Learning Rate = 0.001 | Tue Jan 14 15:33:47 2014 | 24.2 |
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:02:40 2014 | 24.2 |
Ameera and David | Decision Tree Learning algorithm implementation. | Sun Jan 12 22:04:52 2014 | 24.5 |
CTTT | Decision Tree Algorithm with Chi-Squared Pre-Pruning | Mon Jan 6 20:48:45 2014 | 24.6 |
Catherine Wu and Yan Wu | AdaBoost using random sampling and Decision Trees | Wed Jan 8 22:51:13 2014 | 24.7 |
anon5 | An implementation of vanilla decision-tree-learning | Fri Jan 3 23:15:26 2014 | 24.8 |
Stephen McDonald | A K-nearest neighbours algorithm that predicts a test example by taking a majority vote of the k nearest neighbours, as measured by Manhattan distance (k is set to 1 for this trial). Additionally, this algorithm first converts the attribute types to numeric and normalizes each attribute to have zero mean and unit variance. | Wed Jan 8 18:29:05 2014 | 25.0 |
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Thu Jan 9 23:13:29 2014 | 25.1 |
Hello! | AdaBoost with Naive Bayes | Tue Jan 7 15:09:21 2014 | 25.2 |
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:40:11 2014 | 25.3 |
NY | Decision Tree | Sun Jan 12 15:34:37 2014 | 25.4 |
Shaheed Chagani | Naive Bayes Classifier | Mon Jan 13 14:01:58 2014 | 25.7 |
Khoa | An algorithm that classifies. | Tue Jan 14 14:16:06 2014 | 25.7 |
AFC | An implementation of K nearest neighbors with empirically optimized K values. | Thu Jan 2 20:59:59 2014 | 26.0 |
asdf | A single perceptron (using a logistic threshold) with a learning rate of 0.001 and 100 epochs of training. | Thu Jan 9 23:08:08 2014 | 26.7 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Sat Jan 11 18:34:42 2014 | 26.7 |
Shaheed Chagani | Naive Bayes Classifier | Wed Dec 18 07:43:08 2013 | 28.0 |
Bob Dondero | Adaboost (200 rounds) with weak learner as a decision tree (max depth 5) and chi-squared pruning (1%) | Fri Jan 10 23:04:11 2014 | 28.4 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 100 rounds of boosting. | Sun Jan 12 11:32:16 2014 | 29.4 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 150 rounds of boosting. | Sun Jan 12 20:29:33 2014 | 29.4 |
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 13:35:33 2014 | 30.2 |
CAPSLOCK | A mostly vanilla decision tree. Uses some cool data structures though. | Thu Jan 9 22:57:33 2014 | 30.3 |
EC | Neural Net | Tue Jan 7 18:40:44 2014 | 32.0 |
ebp and Wafflepocalypse | Adaboost on random forests of 30 trees, sampling .65 of the weighted training data with replacement for each hypothesis, 100 rounds of boosting. | Sat Jan 11 03:58:16 2014 | 32.9 |
corgi3.0 | basic decision tree | Tue Jan 14 02:39:38 2014 | 33.3 |
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 04:16:18 2014 | 33.3 |
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 12:52:05 2014 | 33.3 |
corgi5.0 | decision tree with chi squared pruning | Tue Jan 14 14:26:38 2014 | 33.3 |
corgi3.0 | decision tree, discrete attribute splitting | Tue Jan 14 12:55:01 2014 | 33.5 |
0108 | Adaboost with decision stump as weak learner | Wed Jan 8 18:35:33 2014 | 34.1 |
T.C. | Multi-layered Neural Net, 100 iterations, .1 learning rate | Wed Jan 8 04:38:21 2014 | 36.3 |
anon | AdaBoost (using shallow binary decision trees as weak learner) | Sun Jan 5 12:34:32 2014 | 36.5 |
Jake Barnes | Multiple layer artificial neural network (5 hidden nodes) with 125 rounds of training. Learning rate is 0.1 | Mon Jan 13 17:03:22 2014 | 36.9 |
Shaheed Chagani | Naive Bayes Classifier | Sun Jan 12 22:41:12 2014 | 37.5 |
ebp and Wafflepocalypse | Adaboost on random forests of 30 trees, sampling .65 of the weighted training data with replacement for each hypothesis, 150 rounds of boosting. | Sat Jan 11 02:46:53 2014 | 38.0 |
EC | Neural Net | Tue Jan 7 18:57:03 2014 | 39.0 |
Igor Zabukovec | SVM | Thu Jan 9 15:14:08 2014 | 39.0 |
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 3000 iterations. The learning rate was set to 0.1 and the layers (from input to output) contain [ 105 4 1 ] units, including a bias unit for each non-output layer. | Thu Jan 9 23:20:33 2014 | 39.0 |
bclam | Single-layer Neural Network - 10000 epochs, Learning rate of 0.05 | Tue Jan 14 07:20:25 2014 | 39.0 |
sabard | A decision tree learning algorithm. | Thu Jan 9 23:04:47 2014 | 40.0 |
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Thu Jan 9 15:24:10 2014 | 40.2 |
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Tue Jan 14 16:01:11 2014 | 42.0 |
bclam | Single-layer Neural Network - 1000 epochs, Learning rate of 0.05 | Wed Jan 8 22:16:33 2014 | 42.2 |
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Fri Jan 10 04:11:05 2014 | 45.1 |
JS | Single-layer Neural Network, 200 epochs, Learning Rate = 0.01 | Thu Jan 9 15:36:09 2014 | 46.1 |
EC | Neural Net | Tue Jan 7 18:50:00 2014 | 46.4 |
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Fri Jan 10 12:43:09 2014 | 46.4 |
Glenn | Backpropagation performed on a neural network with 0 hidden layers for 100 iterations. The learning rate was set to 0.01 and the layers (from input to output) contain [ 105 1 ] units, including a bias unit for each non-output layer. | Tue Jan 14 13:06:55 2014 | 46.5 |
null | null | Fri Jan 3 23:54:58 2014 | 46.9 |
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Sat Jan 11 01:24:41 2014 | 48.4 |
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.1 | Sun Jan 12 20:44:33 2014 | 49.4 |
Valya | Implements neural nets, much like the algorithm used in W6, with alpha = 0.1 | Thu Jan 9 14:20:29 2014 | 49.5 |
Benjamin Chen | A Naive Bayes approach to classification (fill this out more) | Sat Jan 4 22:17:00 2014 | 49.6 |
bcfour | A Naive Bayes approach to classification. | Sat Jan 4 22:19:02 2014 | 49.6 |
lilt | a decision stump implementation | Mon Jan 13 15:03:39 2014 | 49.6 |
Joshua A. Zimmer | A working (hopefully) attempt at a learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Mon Jan 20 16:17:10 2014 | 49.6 |
null | null | Thu Jan 2 21:29:49 2014 | 50.4 |
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Wed Jan 8 16:47:50 2014 | 50.4 |
Khoa | An algo based on decision stumps | Thu Jan 9 23:57:24 2014 | 50.4 |
Learner | Implementation of Adaboost with decision stumps as the weak learner. | Fri Jan 10 02:38:34 2014 | 50.4 |
Guessing | Minimally outputs a result by applying a random function. | Sat Jan 11 15:26:17 2014 | 50.4 |
Dr. Steve Brule (For Your Health) | Neural Network. | Sun Jan 12 19:58:48 2014 | 50.4 |
PandaBear | Adaboost on decision stumps, 1000 rounds | Thu Jan 9 18:11:53 2014 | 55.9 |
Estranged Egomaniac | AdaBoost with decision stumps. 250 rounds of boosting. | Sun Jan 12 13:54:21 2014 | 59.9 |
CC | AdaBoost with Neural Networks as the learner. Uses the fraction of the data with the highest weights to form the hypothesis in a given round | Tue Jan 14 11:41:07 2014 | 77.1 |
Solving From Classifier | The Naive Bayes algorithm executes the maximum-likelihood parameter learning problem and uses the learned parameters (obtained from observed attribute values) to find the maximum-likelihood naive Bayes hypothesis. | Tue Jan 7 13:49:21 2014 | 80.6 |
Marius | AdaBoost using decision stumps as the weak-learning algorithms. It is run for 200 iterations. | Thu Jan 9 20:13:44 2014 | 81.6 |
Table generated: Mon Jan 20 16:17:13 2014
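Most submissions above are variations on one recipe: AdaBoost with decision stumps chosen to minimize the weighted training error. A minimal sketch of that recipe follows (illustrative Python, not any particular submission's code; labels are assumed to be in {-1, +1} and attributes numeric):

```python
import numpy as np

def best_stump(X, y, w):
    """Pick the (feature, threshold, polarity) decision stump that
    minimizes the weighted training error, as many entries describe."""
    n, d = X.shape
    best = (0, 0.0, 1, 1.0)  # feature index, threshold, polarity, error
    for j in range(d):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=50):
    """AdaBoost with decision stumps as the weak learner."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # uniform example weights to start
    stumps, alphas = [], []
    for _ in range(rounds):
        j, t, pol, err = best_stump(X, y, w)
        err = max(err, 1e-12)        # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= t, pol, -pol)
        w *= np.exp(-alpha * y * pred)   # upweight the mistakes
        w /= w.sum()
        stumps.append((j, t, pol))
        alphas.append(alpha)
    def classify(Xq):
        total = np.zeros(len(Xq))
        for (j, t, pol), a in zip(stumps, alphas):
            total += a * np.where(Xq[:, j] <= t, pol, -pol)
        return np.where(total >= 0, 1, -1)
    return classify
```

The round counts in the table (50 to 100000) only change the `rounds` argument; the 18-19% cluster at 150-1000 rounds suggests the dataset, not the round count, set the floor.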
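Several entries (ASapp, Stephen McDonald, weezy) use k-nearest neighbors after normalizing each attribute to zero mean and unit variance. A sketch of that pattern, with the usual Euclidean distance and majority vote (again illustrative, not any submission's actual code):

```python
import numpy as np

def knn_classifier(X_train, y_train, k=5):
    """k-nearest-neighbor classifier: z-score each attribute using the
    training mean and standard deviation, then classify a query by a
    majority vote over the k nearest training points (L2 distance)."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0              # guard constant attributes
    Z = (X_train - mu) / sigma
    def classify(x):
        z = (np.asarray(x, dtype=float) - mu) / sigma
        dists = np.linalg.norm(Z - z, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        return vals[np.argmax(counts)]
    return classify
```

Normalization matters here: without it, attributes on large scales dominate the distance, which is likely why the normalized entries sit near the top of the kNN cluster.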
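The single-layer neural net entries (bfang, K.L., Valya, Dr. Roberto) describe the same loop: a logistic output unit trained by cycling over the examples for a fixed number of epochs with learning rate alpha. A hedged sketch of that loop, assuming 0/1 labels and a squared-error gradient through the logistic unit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_single_layer(X, y, epochs=100, alpha=0.01):
    """Single-layer neural net: one logistic output unit trained by
    stochastic gradient descent, sweeping the training set each epoch."""
    n, d = X.shape
    w = np.zeros(d + 1)                    # last weight is the bias
    Xb = np.hstack([X, np.ones((n, 1))])   # append a constant bias input
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):          # y expected in {0, 1}
            out = sigmoid(w @ xi)
            # gradient of squared error through the logistic unit
            w += alpha * (yi - out) * out * (1 - out) * xi
    return w

def predict(w, x):
    x = np.append(np.asarray(x, dtype=float), 1.0)
    return 1 if sigmoid(w @ x) >= 0.5 else 0
```

The epoch counts and alphas in the table (80 epochs at 0.01 up to 10000 epochs at 0.05) are just the `epochs` and `alpha` arguments here; the wide spread of error rates for near-identical settings hints that initialization and input encoding mattered as much as the hyperparameters.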