COS402 Program P6 results for census, sorted by date submitted


Author | Description | Date submitted | % Error rate
Shaheed Chagani | Naive Bayes Classifier | Wed Dec 18 07:43:08 2013 | 28.0
Ravi Tandon | This algorithm is the implementation of the bootstrap aggregation algorithm. | Mon Dec 23 18:40:44 2013 | 22.5
Aaron Doll | Decision tree with reduced error pruning. | Thu Dec 26 16:45:48 2013 | 23.4
vluu | An attempt at AdaBoost with Naive Bayes | Thu Dec 26 21:36:21 2013 | 19.5
haoyu | Random Forest with Decision Tree | Fri Dec 27 00:48:21 2013 | 19.7
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 200 rounds of boosting. | Sat Dec 28 16:13:58 2013 | 18.4
Gewang | A very simple learning algorithm that, on each test example, predicts the classification based on the k nearest neighbors during training | Sun Dec 29 11:16:57 2013 | 21.8
bfang | Boosting with decision stumps and early stopping | Sun Dec 29 23:30:12 2013 | 18.9
qshen | An implementation of AdaBoost that uses a weak learner that chooses the decision stump that minimizes the weighted training error and is iterated 500 times. | Mon Dec 30 13:58:28 2013 | 20.2
Ravi Tandon | Implementation of Adaboost, using decision stump as weak learning algorithm. | Tue Dec 31 02:31:23 2013 | 18.8
LK | Bagging with decision stump! | Tue Dec 31 08:04:32 2013 | 24.0
vvspr | An implementation of the Naive Bayes Algorithm | Tue Dec 31 13:34:32 2013 | 23.1
bfang | Bagging with decision trees | Wed Jan 1 11:30:09 2014 | 23.1
Gewang | Predicts the classification label based on the k nearest neighbors | Thu Jan 2 13:12:19 2014 | 21.5
anon5 | An implementation of the AdaBoost algorithm using decision stumps as the learner with 200 rounds of boosting | Thu Jan 2 15:28:13 2014 | 18.4
AFC | An (initial) implementation of K nearest neighbors with K = sqrt(number of training samples). | Thu Jan 2 16:52:21 2014 | 22.9
AFC | An implementation of K nearest neighbors with empirically optimized K values. | Thu Jan 2 20:59:59 2014 | 26.0
null | null | Thu Jan 2 21:29:49 2014 | 50.4
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Thu Jan 2 22:55:08 2014 | 21.5
LK | AdaBoost using decision stump | Fri Jan 3 04:28:24 2014 | 21.2
bchou | AdaBoost on Binary Decision Stumps. 150 rounds of Boosting | Fri Jan 3 07:09:15 2014 | 18.1
anon5 | An implementation of the AdaBoost algorithm using decision trees with pruning as the learner with 200 rounds of boosting | Fri Jan 3 20:00:02 2014 | 21.9
bchou | Nearest 7-neighbors | Fri Jan 3 20:02:09 2014 | 20.8
anon5 | An implementation of the AdaBoost algorithm using vanilla decision trees as the learner with 200 rounds of boosting | Fri Jan 3 20:15:30 2014 | 21.9
bfang | Boosting with decision stumps (100 rounds) | Fri Jan 3 21:10:37 2014 | 18.6
anon5 | An implementation of a decision-tree-learning algorithm with pruning | Fri Jan 3 23:11:54 2014 | 21.1
anon5 | An implementation of vanilla decision-tree-learning | Fri Jan 3 23:15:26 2014 | 24.8
null | null | Fri Jan 3 23:54:58 2014 | 46.9
hp | Bagging Decision Stumps | Sat Jan 4 03:42:00 2014 | 23.9
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sat Jan 4 06:55:07 2014 | 19.3
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Sat Jan 4 15:01:33 2014 | 21.6
Tauriel | RandomForest w/ DecisionTrees | Sat Jan 4 16:30:35 2014 | 19.8
Benjamin Chen | A Naive Bayes approach to classification (fill this out more) | Sat Jan 4 22:17:00 2014 | 49.6
bcfour | A Naive Bayes approach to classification. | Sat Jan 4 22:19:02 2014 | 49.6
bcfour | A Naive Bayes approach to classification. | Sat Jan 4 22:28:10 2014 | 19.4
David Hammer | bagging (using binary decision trees) | Sun Jan 5 12:29:43 2014 | 22.4
anon | AdaBoost (using shallow binary decision trees as weak learner) | Sun Jan 5 12:34:32 2014 | 36.5
Jordan | The Adaboost Algorithm with 2000 Decision Stumps | Sun Jan 5 15:21:07 2014 | 18.5
DH | AdaBoost with decision trees | Sun Jan 5 16:28:21 2014 | 20.2
David H | Random Forest with 500 trees | Sun Jan 5 20:16:10 2014 | 22.3
Jordan | Adaboost to create a new feature space, then KNN | Sun Jan 5 21:17:16 2014 | 19.6
Tiny Wings | Decision tree with chi-square pruning (pruning significance level = 0.01) | Mon Jan 6 05:05:36 2014 | 20.2
Tiny Wings | AdaBoost with decision tree algorithm as weak learner (maximum depth of decision trees = 5, chi-square pruning significance level = 0.01, # of AdaBoost rounds = 200) | Mon Jan 6 05:07:34 2014 | 18.9
LK | Bagging with AdaBoost that uses decision stumps | Mon Jan 6 08:57:03 2014 | 21.4
Janie Gu | AdaBoost algorithm with decision trees as the weak learner (with a random subset of training examples selected each round by resampling). | Mon Jan 6 15:26:52 2014 | 19.0
CTTT | Adaboost + Decision Stumps (200 rounds). | Mon Jan 6 20:44:55 2014 | 18.4
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Mon Jan 6 20:47:22 2014 | 23.8
CTTT | Decision Tree Algorithm with Chi-Squared Pre-Pruning | Mon Jan 6 20:48:45 2014 | 24.6
ebp | Adaboost with decision stumps minimizing smoothed weighted training error, 100 rounds of boosting. | Mon Jan 6 21:02:21 2014 | 18.3
Katie and George | An implementation of the (voted) perceptron algorithm run for 100 epochs. | Mon Jan 6 21:04:33 2014 | 22.8
CTTT | A decision stump weak learner. | Mon Jan 6 22:58:42 2014 | 24.0
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:36:12 2014 | 21.7
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:43:15 2014 | 21.4
Gewang | Predicts the classification label based on the k nearest neighbors | Tue Jan 7 11:58:55 2014 | 21.3
Solving From Classifier | The Naive Bayes algorithm executes the maximum-likelihood parameter learning problem and uses the learned parameters (obtained from observed attribute values) to find the maximum-likelihood naive Bayes hypothesis. | Tue Jan 7 13:49:21 2014 | 80.6
Solving From Classifier | The Naive Bayes algorithm executes the maximum-likelihood parameter learning problem and uses the learned parameters (obtained from observed attribute values) to find the maximum-likelihood naive Bayes hypothesis. | Tue Jan 7 13:55:17 2014 | 19.4
Hello! | AdaBoost with Naive Bayes | Tue Jan 7 15:09:21 2014 | 25.2
K.L. | Decision Stumps | Tue Jan 7 15:29:40 2014 | 24.0
K.L. | AdaBoost run with 1000 iterations. | Tue Jan 7 15:33:41 2014 | 22.8
K.L. | Single layer neural net, 100 training rounds, learning rate = .01. | Tue Jan 7 15:36:37 2014 | 17.7
Mickey Mouse | An implementation of AdaBoost with Decision Stump as the weak learner and 200 rounds of boosting | Tue Jan 7 15:44:27 2014 | 18.4
Mickey Mouse | An implementation of Decision Stump Algorithm | Tue Jan 7 15:47:47 2014 | 24.0
Mickey Mouse | An implementation of Naive Bayes | Tue Jan 7 15:50:34 2014 | 19.4
EC | Neural Net | Tue Jan 7 18:40:44 2014 | 32.0
EC | Neural Net | Tue Jan 7 18:50:00 2014 | 46.4
EC | Neural Net | Tue Jan 7 18:57:03 2014 | 39.0
Jameh | A learning algorithm using Adaboost along with decision stumps to determine a classifier to use in future test cases. Give a BinaryDataSet and number of rounds for boosting. | Tue Jan 7 22:52:25 2014 | 18.5
T.C. | Multi-layered Neural Net, 100 iterations, .1 learning rate | Wed Jan 8 04:38:21 2014 | 36.3
Fanny | A voted perceptron algorithm (epoch = 30) | Wed Jan 8 13:14:07 2014 | 20.1
dlackey | This is an implementation of AdaBoost that uses 175 rounds of boosting. The weak learning algorithm used is a decision stump that directly minimizes the weighted training error. | Wed Jan 8 14:10:54 2014 | 18.3
Lil Thug | A simple decision tree algorithm with chi-squared pruning. | Wed Jan 8 15:12:38 2014 | 19.4
Mickey Mouse | An Implementation of Voted Perceptron algorithm with 200 epochs | Wed Jan 8 15:52:53 2014 | 23.5
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Wed Jan 8 16:47:50 2014 | 50.4
NY | Random forest with 500 iterations | Wed Jan 8 16:53:40 2014 | 20.4
NY | Bagged Decision Trees with 500 trees | Wed Jan 8 16:58:45 2014 | 21.3
NY | Purifies training set for decision tree (pruning alternative) | Wed Jan 8 17:03:23 2014 | 23.6
Stephen McDonald | A K-nearest neighbours algorithm that predicts a test example by taking a majority vote of the k nearest neighbours, as measured by Manhattan distance (k is set to 1 for this trial). Additionally, this algorithm first converts the attribute types to numeric and normalizes each attribute to have zero mean and unit variance. | Wed Jan 8 18:29:05 2014 | 25.0
0108 | Adaboost with decision stump as weak learner | Wed Jan 8 18:35:33 2014 | 34.1
Caligula | An implementation of AdaBoost with decision stumps and 800 rounds of boosting. | Wed Jan 8 19:46:23 2014 | 18.3
bclam | Single-layer Neural Network - 1000 epochs, Learning rate of 0.05 | Wed Jan 8 22:16:33 2014 | 42.2
BPM | I implemented AdaBoost with binary decision stumps and 100 rounds of boosting. | Wed Jan 8 22:42:32 2014 | 18.5
Catherine Wu and Yan Wu | AdaBoost using random sampling and Decision Trees | Wed Jan 8 22:51:13 2014 | 24.7
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Wed Jan 8 23:01:13 2014 | 21.3
weezy | Implements AdaBoost using decision stumps as a weak learner and running for 1000 rounds of boosting. | Wed Jan 8 23:17:50 2014 | 18.2
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 00:20:25 2014 | 18.2
lilt | An implementation of 10-nearest neighbors | Thu Jan 9 00:23:16 2014 | 23.7
lolz | Adaboost with decision stumps as the weak learner algorithm (k = 200) | Thu Jan 9 00:39:23 2014 | 18.4
T.C. | Multi-layered Neural Net, 200 iterations, .1 learning rate | Thu Jan 9 03:05:38 2014 | 21.1
L.M. | K-nearest | Thu Jan 9 03:40:10 2014 | 19.8
Wafflepocalypse | A random forest classifier with 1001 trees. | Thu Jan 9 04:53:45 2014 | 19.6
Kiwis | AdaBoost with decision stumps and 150 iterations. | Thu Jan 9 07:10:52 2014 | 20.9
Rocky | Nearest Neighbors with weighted vote (weight is inversely proportional to distance), Manhattan distance for normalized attributes, linear scan over all examples to find the K nearest neighbors (not good for very large training sets) | Thu Jan 9 10:11:26 2014 | 22.8
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Thu Jan 9 11:30:41 2014 | 20.0
bcfour, jkwok | Naive Bayes with standard Laplacian correction | Thu Jan 9 13:03:05 2014 | 19.4
Dr. Roberto | Single layer Neural Net run for 100 epochs with a learning value of 0.01 | Thu Jan 9 13:42:15 2014 | 20.4
finn&jake | Knn, K=40, Euclidean distance for numeric and standardized distance for discrete variables; majority vote for nearest neighbors. | Thu Jan 9 14:13:25 2014 | 20.2
Valya | Implements neural nets, much like the algorithm used in W6, with alpha = 0.1 | Thu Jan 9 14:20:29 2014 | 49.5
Squirtle | An implementation of the vanilla decision stumps classifier | Thu Jan 9 14:27:10 2014 | 24.0
CC | null | Thu Jan 9 14:40:01 2014 | 18.2
CC | AdaBoost with Decision Stump for 5000 rounds. | Thu Jan 9 14:49:58 2014 | 18.5
Dr. Roberto | ADABoost with 100 rounds of Single Layer Neural Net run for 10 epochs with a varying learning value of around 0.01 | Thu Jan 9 14:51:08 2014 | 20.8
Epic Harbors | Adaboost with decision stumps as the weak learner and 250 rounds of boosting | Thu Jan 9 15:00:12 2014 | 19.1
Igor Zabukovec | SVM | Thu Jan 9 15:14:08 2014 | 39.0
S1 | An implementation of AdaBoost, with decision stumps as the weak learner for the algorithm and 500 rounds of boosting. | Thu Jan 9 15:22:39 2014 | 18.2
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Thu Jan 9 15:24:10 2014 | 40.2
SAJE | ADABOOST | Thu Jan 9 15:27:21 2014 | 18.5
SAJE | ADABOOST | Thu Jan 9 15:32:14 2014 | 18.4
SAJE | ADABOOST | Thu Jan 9 15:35:46 2014 | 18.2
JS | Single-layer Neural Network, 200 epochs, Learning Rate = 0.01 | Thu Jan 9 15:36:09 2014 | 46.1
SAJE | ADABOOST | Thu Jan 9 15:43:16 2014 | 18.7
SAJE | ADABOOST 2 | Thu Jan 9 15:48:58 2014 | 18.7
Dr. Steve Brule (For Your Health) | Neural Network. | Thu Jan 9 17:28:18 2014 | 19.7
Chuck and Larry | Perceptron neural network | Thu Jan 9 18:00:04 2014 | 18.4
Chuck and Larry | AdaBooooooost!!! using binary decision stumps with 200 rounds of boosting | Thu Jan 9 18:04:16 2014 | 18.4
PandaBear | Adaboost on decision stumps, 1000 rounds | Thu Jan 9 18:11:53 2014 | 55.9
Aaron Doll | This is an implementation of the random forest algorithm | Thu Jan 9 18:17:46 2014 | 24.1
Charliezsc | Adaboost with Decision Stumps | Thu Jan 9 18:21:10 2014 | 18.4
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Thu Jan 9 19:13:02 2014 | 18.8
Charliezsc | Decision Stump without boosting | Thu Jan 9 19:19:26 2014 | 24.0
ytterbium | AdaBoost with decision stumps. (1000 rounds) | Thu Jan 9 20:00:48 2014 | 18.2
weezy | Implements a k-Nearest Neighbor algorithm with k = 15. | Thu Jan 9 20:02:27 2014 | 20.9
mdrjr | This is an implementation of k nearest neighbors. I've played around with both k and the distance function. | Thu Jan 9 20:04:54 2014 | 23.6
Katie and George | An implementation of the (voted) perceptron algorithm run for 25 epochs. | Thu Jan 9 20:08:04 2014 | 23.8
Charliezsc | Bagging with Decision Stumps (200 weak learners and half bootstrap samples) | Thu Jan 9 20:08:14 2014 | 24.0
Marius | AdaBoost using decision stumps as the weak-learning algorithms. It is run for 200 iterations. | Thu Jan 9 20:13:44 2014 | 81.6
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Thu Jan 9 20:27:28 2014 | 18.9
Bob Dondero | A decision tree learning algorithm using information gain and chi-squared pruning. | Thu Jan 9 20:39:27 2014 | 22.7
Katie and George | An implementation of the (voted) perceptron algorithm run for 25 epochs. | Thu Jan 9 20:47:01 2014 | 21.6
Rocky | Bagging algorithm with single layer neural network as the weak learner | Thu Jan 9 20:49:05 2014 | 23.6
Charliezsc | Bagging with Decision Stumps (200 weak learners and 2 percent bootstrap samples) | Thu Jan 9 20:49:51 2014 | 24.0
Wumi | AdaBoost with Decision Stump and 200 boosts | Thu Jan 9 20:52:31 2014 | 18.4
Bob Dondero | Adaboost (200 rounds) with weak learner as a decision tree (max depth 5) and chi-squared pruning (1%). | Thu Jan 9 20:54:23 2014 | 22.3
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha = 0.1. Very nice, I like! | Thu Jan 9 21:00:47 2014 | 21.6
George and Katie | A simple implementation of decision trees as per R&N. | Thu Jan 9 21:01:56 2014 | 20.9
akdote | Naive Bayes Algorithm | Thu Jan 9 21:10:07 2014 | 21.7
dmmckenn_pthorpe | Basic Stumps Implementation | Thu Jan 9 21:12:17 2014 | 24.0
dmmckenn_pthorpe | Implements Naive Bayes using discretization as opposed to continuous values. | Thu Jan 9 21:21:27 2014 | 19.4
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 21:22:55 2014 | 21.1
Mike Honcho | Adaboost implementation | Thu Jan 9 21:47:00 2014 | 18.8
R.A.B. | K nearest neighbors with k = 20 | Thu Jan 9 22:26:05 2014 | 20.1
Anna Ren (aren) and Sunny Xu (ziyangxu) (Sunnanna) | Adaboost using 150 rounds of boosting and decision stumps as a weak learner | Thu Jan 9 22:39:29 2014 | 18.1
Cam Porter | A version of the AdaBoost learning algorithm that uses decision stumps as a weak learning base. | Thu Jan 9 22:47:38 2014 | 18.2
The Whitman Whale | Nearest neighbor classification with 17 neighbors and Manhattan distance | Thu Jan 9 22:48:28 2014 | 22.5
hb | KNN with L2 distance, k empirically set after cross validation | Thu Jan 9 22:52:35 2014 | 21.9
CookieMonster | This is a nearest-neighbor classifier which takes a majority vote from the k nearest points in feature space using Euclidean Distance. | Thu Jan 9 22:57:24 2014 | 23.8
CAPSLOCK | A mostly vanilla decision tree. Uses some cool data structures though. | Thu Jan 9 22:57:33 2014 | 30.3
sabard | A decision tree learning algorithm. | Thu Jan 9 23:04:47 2014 | 40.0
Hello! | Naive Bayes | Thu Jan 9 23:05:21 2014 | 19.4
asdf | A single perceptron (using a logistic threshold) with a learning rate of 0.001 and 100 epochs of training. | Thu Jan 9 23:08:08 2014 | 26.7
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Thu Jan 9 23:13:29 2014 | 25.1
B&Y | We use the voted-perceptron algorithm. It runs repeatedly on each training set until it finds a prediction vector which is correct on all examples. We keep track of the survival times for each new prediction vector. These weights help us make a final binary prediction using a weighted majority vote. | Thu Jan 9 23:14:49 2014 | 18.7
me | Adaboost using decision stumps and 400 rounds of boosting. | Thu Jan 9 23:18:51 2014 | 18.2
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 3000 iterations. The learning rate was set to 0.1 and the layers (from input to output) contain [105 4 1] units, including a bias unit for each non-output layer. | Thu Jan 9 23:20:33 2014 | 39.0
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 250 | Thu Jan 9 23:27:29 2014 | 18.6
Mr. Blobby | AdaBoost (200 rounds) with decision stumps | Thu Jan 9 23:32:26 2014 | 18.4
dmmckenn_pthorpe | Implements Adaboost with 1,000 rounds of boosting with decision stumps as the weak learner. | Thu Jan 9 23:32:58 2014 | 18.2
corgi | basic decision stump | Thu Jan 9 23:35:40 2014 | 24.0
akdote | Naive Bayes Algorithm | Thu Jan 9 23:47:14 2014 | 22.6
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Thu Jan 9 23:48:40 2014 | 21.8
jabreezy | Adaboost with decision stumps (boosted 200 rounds) | Thu Jan 9 23:49:49 2014 | 18.4
hi | an implementation of AdaBoost woo hoo | Thu Jan 9 23:57:10 2014 | 18.4
Khoa | An algorithm based on decision stumps | Thu Jan 9 23:57:24 2014 | 50.4
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Fri Jan 10 00:47:46 2014 | 22.2
Learner | Implementation of Adaboost with decision stumps as the weak learner. | Fri Jan 10 02:38:34 2014 | 50.4
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Fri Jan 10 04:11:05 2014 | 45.1
Kiwis | AdaBoost with decision stumps and 80 iterations. | Fri Jan 10 07:39:57 2014 | 19.0
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Fri Jan 10 12:43:09 2014 | 46.4
Aaron Doll | This is an implementation of the random forest algorithm | Fri Jan 10 14:50:20 2014 | 20.0
Mr. Blobby | AdaBoost (1000 rounds) with decision stumps | Fri Jan 10 20:11:34 2014 | 18.2
Mr. Blobby | AdaBoost (150 rounds) with decision stumps | Fri Jan 10 20:22:29 2014 | 18.1
Aaron Doll | This is an implementation of random forests with m=1, 400 trees | Fri Jan 10 22:18:09 2014 | 20.1
Bob Dondero | Adaboost (200 rounds) with weak learner as a decision tree (max depth 5) and chi-squared pruning (1%) | Fri Jan 10 23:04:11 2014 | 28.4
weezy | Implements a k-Nearest Neighbor algorithm with k = 27. | Sat Jan 11 00:38:42 2014 | 21.9
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Sat Jan 11 01:24:41 2014 | 48.4
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Sat Jan 11 01:33:37 2014 | 22.9
Aaron Doll | This is an implementation of random forests with m=1, 400 trees | Sat Jan 11 02:40:37 2014 | 19.6
ebp and Wafflepocalypse | Adaboost on random forests of 30 trees, sampling .65 of the weighted training data with replacement for each hypothesis, 150 rounds of boosting. | Sat Jan 11 02:46:53 2014 | 38.0
ebp and Wafflepocalypse | Adaboost on random forests of 30 trees, sampling .65 of the weighted training data with replacement for each hypothesis, 100 rounds of boosting. | Sat Jan 11 03:58:16 2014 | 32.9
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 20 | Sat Jan 11 13:50:02 2014 | 19.1
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 150 | Sat Jan 11 13:56:33 2014 | 18.5
Guessing | Minimally outputs a result by applying a random function. | Sat Jan 11 15:26:17 2014 | 50.4
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Sat Jan 11 18:34:42 2014 | 26.7
S1 | Random forests with decision trees (500 trees). | Sat Jan 11 20:12:48 2014 | 19.8
Solving From Classifier | The Naive Bayes algorithm using a binary representation as opposed to a discrete representation. | Sat Jan 11 21:42:28 2014 | 19.8
Solving From Classifier | The Naive Bayes algorithm using a binary representation at times and a discrete representation at other times. | Sat Jan 11 21:58:17 2014 | 19.8
corgi2.0 | AdaBoost 150 w/ basic decision stumps | Sat Jan 11 23:14:44 2014 | 19.9
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sun Jan 12 04:25:11 2014 | 18.5
Mr. Blobby | AdaBoost (200 rounds) with decision trees (depth limit of 5) | Sun Jan 12 06:13:26 2014 | 19.7
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 100 rounds of boosting. | Sun Jan 12 11:32:16 2014 | 29.4
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 13:35:33 2014 | 30.2
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Sun Jan 12 13:49:39 2014 | 17.6
Estranged Egomaniac | AdaBoost with decision stumps. 250 rounds of boosting. | Sun Jan 12 13:54:21 2014 | 59.9
Marius | AdaBoost using decision stumps as the weak-learning algorithms. It is run for 200 iterations. | Sun Jan 12 14:09:03 2014 | 18.4
Tauriel | AdaBoost w/ DecisionTree | Sun Jan 12 14:11:25 2014 | 20.2
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:19:35 2014 | 23.8
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 14:22:07 2014 | 18.4
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:59:40 2014 | 23.8
Green Gmoney Choi | This is an implementation of the AdaBoost algorithm with decision stumps. | Sun Jan 12 15:10:36 2014 | 18.1
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:23:34 2014 | 18.9
NY | Decision Tree | Sun Jan 12 15:34:37 2014 | 25.4
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:40:11 2014 | 25.3
Kiwis | AdaBoost with decision trees. Number of iterations: 200. Max depth for tree: 1. | Sun Jan 12 15:46:23 2014 | 19.5
Tauriel | RandomForest w/ DecisionTrees pruned at significance level 0.95 | Sun Jan 12 15:51:47 2014 | 20.0
Kiwis | AdaBoost with decision trees. Number of iterations: 200. Max depth for tree: 3. | Sun Jan 12 16:01:54 2014 | 18.0
Kiwis | AdaBoost with decision trees. Number of iterations: 100. Max depth for tree: 4. | Sun Jan 12 16:18:27 2014 | 18.4
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Sun Jan 12 16:29:37 2014 | 22.0
Dr. Steve Brule (For Your Health) | Neural Network. | Sun Jan 12 19:58:48 2014 | 50.4
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 250 | Sun Jan 12 20:27:00 2014 | 23.6
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 150 rounds of boosting. | Sun Jan 12 20:29:33 2014 | 29.4
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.1 | Sun Jan 12 20:44:33 2014 | 49.4
Ameera and David | Decision Tree Learning algorithm implementation. | Sun Jan 12 22:04:52 2014 | 24.5
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 150 | Sun Jan 12 22:08:45 2014 | 23.1
Shaheed Chagani | Naive Bayes Classifier | Sun Jan 12 22:41:12 2014 | 37.5
Fanny | A voted perceptron algorithm (epoch = 10) | Sun Jan 12 22:58:47 2014 | 20.0
Hello! | AdaBoost with Decision Stumps | Sun Jan 12 23:21:29 2014 | 18.4
Supahaka | AdaBoost with Decision Stumps with 100000 rounds of boosting. | Mon Jan 13 00:38:32 2014 | 19.3
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 07:30:13 2014 | 22.8
Shaheed Chagani | Naive Bayes Classifier | Mon Jan 13 14:01:58 2014 | 25.7
Sunnanna | nearest neighbors algorithm with k = 7 | Mon Jan 13 14:49:40 2014 | 20.9
lilt | a decision stump implementation | Mon Jan 13 15:03:39 2014 | 49.6
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Mon Jan 13 15:35:01 2014 | 19.7
CC | AdaBoost with Neural Networks as the learner. Uses the percentage with the highest weights of the data to make the hypothesis on a given round | Mon Jan 13 16:07:10 2014 | 18.7
Sunnanna | single-layer feedforward neural net using logistic function | Mon Jan 13 16:18:07 2014 | 18.7
dericc, sigatapu | 200-iteration AdaBoost with Decision Trees | Mon Jan 13 16:45:48 2014 | 21.5
S1 | An implementation of AdaBoost, with pruned decision trees as the weak learner for the algorithm and 500 rounds of boosting. | Mon Jan 13 16:46:25 2014 | 21.5
Jake Barnes | Multiple layer artificial neural network (5 hidden nodes) with 125 rounds of training. Learning rate is 0.1 | Mon Jan 13 17:03:22 2014 | 36.9
Sunnanna | Naive Bayes Algorithm using maximum likelihood estimator | Mon Jan 13 18:54:30 2014 | 19.4
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 101 rounds of boosting. | Mon Jan 13 20:42:40 2014 | 18.5
Fanny | A voted perceptron algorithm (epoch = 10) | Mon Jan 13 20:46:33 2014 | 21.1
Rocky | AdaBoost algorithm with the decision stumps as the weak learner, T=150 | Mon Jan 13 21:17:55 2014 | 18.1
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 21:22:24 2014 | 18.9
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Stumps | Mon Jan 13 21:24:34 2014 | 19.4
Shaheed Chagani | AdaBoost | Mon Jan 13 21:31:38 2014 | 18.8
CC | AdaBoost with Decision Stump for 50 rounds. | Mon Jan 13 21:31:50 2014 | 18.6
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 21:53:03 2014 | 22.8
PandaBear | Adaboost on decision stumps, 1000 rounds | Mon Jan 13 22:01:39 2014 | 24.0
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.01, running for 1000 epochs. | Mon Jan 13 22:06:10 2014 | 23.3
R.A.B. | k-NN with K = 21 and votes weighted by the inverse of distance | Tue Jan 14 00:03:23 2014 | 21.5
God | Implements naive Bayes algorithm. | Tue Jan 14 00:19:12 2014 | 19.9
Supahaka | AdaBoost with Decision Stumps with 500 rounds of boosting. | Tue Jan 14 01:07:47 2014 | 18.4
Macrame | Adaboost, decision stumps, 250 rounds | Tue Jan 14 01:55:44 2014 | 18.3
R.A.B. | Adaboost on decision stumps, 1000 rounds | Tue Jan 14 02:04:14 2014 | 18.2
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha and epochs optimized for each dataset. | Tue Jan 14 02:29:11 2014 | 20.2
corgi3.0 | basic decision tree | Tue Jan 14 02:39:38 2014 | 33.3
Andra Constantinescu and Bar Shabtai | AdaBoost with decision stump as the weak learner. Number of iterations of AdaBoost optimized per example. | Tue Jan 14 02:55:45 2014 | 19.4
R.A.B. | Adaboost on decision stumps, 150 rounds | Tue Jan 14 03:15:28 2014 | 18.1
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 04:16:18 2014 | 33.3
sabard | A decision tree learning algorithm with chi squared pruning. | Tue Jan 14 05:27:30 2014 | 22.1
bclam | Single-layer Neural Network - 2000 epochs, Learning rate of 0.01 | Tue Jan 14 07:12:01 2014 | 23.0
Andra Constantinescu and Bar Shabtai | Random Forest with number of trees, N and M optimized for each dataset! | Tue Jan 14 07:12:37 2014 | 19.1
bclam | Single-layer Neural Network - 10000 epochs, Learning rate of 0.05 | Tue Jan 14 07:20:25 2014 | 39.0
CC | AdaBoost with Decision Stump for 80 rounds. | Tue Jan 14 10:51:50 2014 | 18.6
Boar Ciphers | Implements a single-layer neural network with 100 epochs and a 0.001 learning rate | Tue Jan 14 10:53:40 2014 | 24.2
PandaBear | Adaboost on decision stumps, 500 rounds | Tue Jan 14 11:02:10 2014 | 24.0
CC | AdaBoost with Neural Networks as the learner. Uses the percentage with the highest weights of the data to make the hypothesis on a given round | Tue Jan 14 11:13:03 2014 | 18.1
CC | AdaBoost with Neural Networks as the learner. Uses the percentage with the highest weights of the data to make the hypothesis on a given round | Tue Jan 14 11:41:07 2014 | 77.1
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 300 rounds of boosting. | Tue Jan 14 12:07:17 2014 | 18.4
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 100 rounds of boosting. | Tue Jan 14 12:12:07 2014 | 18.5
Mike Honcho | Adaboost implementation | Tue Jan 14 12:23:10 2014 | 22.5
Mike Honcho 100 | Adaboost implementation | Tue Jan 14 12:25:16 2014 | 22.1
B&Y | We use the voted-perceptron algorithm. It runs repeatedly on each training set until it finds a prediction vector which is correct on all examples. We keep track of the survival times for each new prediction vector. These weights help us make a final binary prediction using a weighted majority vote. | Tue Jan 14 12:25:17 2014 | 19.1
Mike Honcho 10 | Adaboost implementation | Tue Jan 14 12:26:40 2014 | 24.0
Mike Honcho 500 | Adaboost implementation | Tue Jan 14 12:28:11 2014 | 19.2
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 4) and 200 rounds of boosting. | Tue Jan 14 12:29:29 2014 | 18.9
LK | Bagging with AdaBoost that uses decision stumps | Tue Jan 14 12:29:37 2014 | 21.5
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 5) and 200 rounds of boosting. | Tue Jan 14 12:38:05 2014 | 18.6
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 12:52:05 2014 | 33.3
corgi3.0 | decision tree, discrete attribute splitting | Tue Jan 14 12:55:01 2014 | 33.5
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 5000 iterations. The learning rate was set to 0.001 and the layers (from input to output) contain [105 51 1] units, including a bias unit for each non-output layer. | Tue Jan 14 12:55:42 2014 | 21.3
Glenn | Backpropagation performed on a neural network with 0 hidden layers for 100 iterations. The learning rate was set to 0.01 and the layers (from input to output) contain [105 1] units, including a bias unit for each non-output layer. | Tue Jan 14 13:06:55 2014 | 46.5
Khoa | An algorithm that classifies. | Tue Jan 14 14:16:06 2014 | 25.7
Hello! | AdaBoost with Naive Bayes (200) | Tue Jan 14 14:19:36 2014 | 18.8
corgi5.0 | decision tree with chi squared pruning | Tue Jan 14 14:26:38 2014 | 33.3
God | Implements Basic Decision Stumps and chooses the one which performs the best | Tue Jan 14 14:35:39 2014 | 24.0
hb | KNN with L2 distance, k empirically set after cross validation | Tue Jan 14 14:50:46 2014 | 21.7
Rocky | Bagging algorithm with single layer neural network as the weak learner | Tue Jan 14 15:01:01 2014 | 21.0
Nihar the God | Uses Adaboost with decision stumps as weak learners and then uses 150 rounds of boosting | Tue Jan 14 15:07:58 2014 | 19.9
Nihar the God | Uses Adaboost with decision stumps as weak learners and then uses 200 rounds of boosting | Tue Jan 14 15:12:20 2014 | 20.3
hb | AdaBoost, basic decision stumps | Tue Jan 14 15:14:42 2014 | 18.5
Learner | Implementation of Adaboost with decision stumps as the weak learner. | Tue Jan 14 15:17:01 2014 | 24.0
sabard | A decision tree learning algorithm with chi squared pruning (5%). | Tue Jan 14 15:32:46 2014 | 23.1
Learner | Implementation of Adaboost with decision stumps as the weak learner. 100 rounds of boosting. | Tue Jan 14 15:33:13 2014 | 24.0
JS | Single-layer Neural Network, 100 epochs, Learning Rate = 0.001 | Tue Jan 14 15:33:47 2014 | 24.2
sabard | Decision Stump weak learning algorithm to be used with AdaBoost | Tue Jan 14 15:39:42 2014 | 24.0
Dr. Steve Brule (For Your Health) | Neural Network trained for 100 epochs. | Tue Jan 14 15:40:58 2014 | 19.9
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Tue Jan 14 16:01:11 2014 | 42.0
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:02:40 2014 | 24.2
Andra Constantinescu and Bar Shabtai | AdaBoost on a single layer neural network. The neural classifier takes binary input and loops through all training examples to update the weights of each attribute. Number of boosting rounds optimized for data set (here 2) | Tue Jan 14 16:20:19 2014 | 19.2
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:33:16 2014 | 18.7
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Tue Jan 14 16:38:21 2014 | 19.1
Joshua A. Zimmer | A working (hopefully) attempt at a learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Mon Jan 20 16:17:10 2014 | 49.6

Table generated: Mon Jan 20 16:17:13 2014
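
For reference: most of the lowest-error entries above (roughly 18-19% on census) describe the same scheme, AdaBoost with decision stumps, where each round picks the stump minimizing the weighted training error. Below is a minimal sketch in Java of that scheme on 0/1-valued attributes and labels. It is an illustration of the technique the entries describe, not any team's submission, and the plain-array interface is an assumption in place of the P6 BinaryDataSet class.

import java.util.Arrays;

/** Minimal AdaBoost with decision stumps over 0/1 attributes and 0/1 labels.
 *  A sketch of the scheme many entries above describe, not any team's code. */
public class AdaBoostStumps {
    private final int[] attr;      // attribute tested by stump t
    private final int[] sign;      // +1: predict label = attribute; -1: flipped
    private final double[] alpha;  // vote weight of stump t

    public AdaBoostStumps(int[][] x, int[] y, int rounds) {
        int m = x.length, n = x[0].length;
        attr = new int[rounds]; sign = new int[rounds]; alpha = new double[rounds];
        double[] w = new double[m];
        Arrays.fill(w, 1.0 / m);                       // uniform initial weights
        for (int t = 0; t < rounds; t++) {
            double bestErr = Double.MAX_VALUE;         // pick the stump with the
            for (int a = 0; a < n; a++) {              // least weighted error
                double err = 0;                        // error of "label = x[a]"
                for (int i = 0; i < m; i++)
                    if (x[i][a] != y[i]) err += w[i];
                if (err < bestErr)     { bestErr = err;     attr[t] = a; sign[t] = +1; }
                if (1 - err < bestErr) { bestErr = 1 - err; attr[t] = a; sign[t] = -1; }
            }
            alpha[t] = 0.5 * Math.log((1 - bestErr) / Math.max(bestErr, 1e-12));
            double z = 0;                              // reweight and renormalize
            for (int i = 0; i < m; i++) {
                boolean correct = predictOne(t, x[i]) == y[i];
                w[i] *= Math.exp(correct ? -alpha[t] : alpha[t]);
                z += w[i];
            }
            for (int i = 0; i < m; i++) w[i] /= z;
        }
    }

    private int predictOne(int t, int[] ex) {          // one stump's prediction
        int p = ex[attr[t]];
        return sign[t] == 1 ? p : 1 - p;
    }

    public int predict(int[] ex) {                     // weighted vote of all stumps
        double s = 0;
        for (int t = 0; t < alpha.length; t++)
            s += alpha[t] * (predictOne(t, ex) == 1 ? 1 : -1);
        return s >= 0 ? 1 : 0;
    }
}

The table's AdaBoost entries differ mainly in the number of rounds (from 50 up to 100000) and in what replaces the stump as the weak learner: pruned or depth-limited decision trees, Naive Bayes, kNN, or small neural nets.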
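
The B&Y rows describe the voted perceptron (Freund and Schapire) in enough detail to reconstruct it: each intermediate prediction vector is retired with its survival time, and the final classifier is a survival-weighted majority vote over all of them. A minimal sketch, assuming +/-1 labels and real-valued feature vectors:

import java.util.ArrayList;
import java.util.List;

/** Minimal voted perceptron along the lines of the B&Y description:
 *  keep every prediction vector with its survival count, then take a
 *  weighted majority vote. Labels are +1/-1. Not any team's code. */
public class VotedPerceptron {
    private final List<double[]> vectors = new ArrayList<>();
    private final List<Integer> survival = new ArrayList<>();

    public VotedPerceptron(double[][] x, int[] y, int epochs) {
        double[] v = new double[x[0].length];
        int c = 0;
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < x.length; i++) {
                if (y[i] * dot(v, x[i]) > 0) {
                    c++;                       // v survives this example
                } else {
                    vectors.add(v.clone());    // retire v with its survival time
                    survival.add(c);
                    for (int j = 0; j < v.length; j++) v[j] += y[i] * x[i][j];
                    c = 1;
                }
            }
        }
        vectors.add(v.clone()); survival.add(c);  // keep the final vector too
    }

    /** Survival-weighted majority vote over all stored prediction vectors. */
    public int predict(double[] ex) {
        double s = 0;
        for (int k = 0; k < vectors.size(); k++)
            s += survival.get(k) * Math.signum(dot(vectors.get(k), ex));
        return s >= 0 ? 1 : -1;
    }

    private static double dot(double[] a, double[] b) {
        double s = 0;
        for (int j = 0; j < a.length; j++) s += a[j] * b[j];
        return s;
    }
}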
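
Several nearest-neighbor rows (Stephen McDonald, ASapp, Rocky) share the same preprocessing step: normalize each attribute to zero mean and unit variance before measuring distance, then take a majority vote of the k closest training examples found by linear scan. A minimal sketch of that variant with Manhattan distance; k, the distance function, and any vote weighting are the knobs those entries tune:

import java.util.Arrays;

/** Minimal k-nearest-neighbors with per-attribute z-score normalization and
 *  Manhattan distance, as several entries above describe. Not any team's code. */
public class NearestNeighbors {
    private final double[][] x;        // normalized training examples
    private final int[] y;             // labels, 0 or 1
    private final double[] mean, std;  // per-attribute normalization stats
    private final int k;

    public NearestNeighbors(double[][] train, int[] labels, int k) {
        this.k = k;
        int m = train.length, n = train[0].length;
        mean = new double[n]; std = new double[n];
        for (int j = 0; j < n; j++) {
            for (double[] ex : train) mean[j] += ex[j] / m;
            for (double[] ex : train) std[j] += (ex[j] - mean[j]) * (ex[j] - mean[j]) / m;
            std[j] = Math.max(Math.sqrt(std[j]), 1e-12);   // guard constant attributes
        }
        x = new double[m][];
        for (int i = 0; i < m; i++) x[i] = normalize(train[i]);
        y = labels;
    }

    private double[] normalize(double[] ex) {
        double[] z = new double[ex.length];
        for (int j = 0; j < ex.length; j++) z[j] = (ex[j] - mean[j]) / std[j];
        return z;
    }

    public int predict(double[] ex) {
        double[] q = normalize(ex);
        double[][] dist = new double[x.length][2];         // {distance, label}
        for (int i = 0; i < x.length; i++) {
            double d = 0;                                  // Manhattan distance
            for (int j = 0; j < q.length; j++) d += Math.abs(x[i][j] - q[j]);
            dist[i][0] = d; dist[i][1] = y[i];
        }
        Arrays.sort(dist, (a, b) -> Double.compare(a[0], b[0]));
        int ones = 0;                                      // majority vote of k nearest
        for (int i = 0; i < k; i++) ones += (int) dist[i][1];
        return 2 * ones > k ? 1 : 0;
    }
}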