By the end of 2019, I finally managed to wrap up my third R package YAP ( ) that implements the Probabilistic Neural Network (Specht, 1990) for N-category pattern recognition with N > 2. Similar to the GRNN, the PNN shares the same benefits of instantaneous training, a simple structure, and global convergence.

Below is a demonstration showing how to use the YAP package and a comparison between the multinomial regression and the PNN. As shown below, both approaches delivered very comparable predictive performance. In this particular example, the PNN even performed slightly better in terms of the cross-entropy for a separate testing dataset.

# FIT A MULTINOMIAL REGRESSION AS A BENCHMARK #
yap::logl(y_pred = predict(m1, newdata = X, type = "prob"), y_true = yap::dummies(Y))

n2 <- yap::pnn.fit(X, Y, sigma = parm$best$sigma)
yap::logl(y_pred = yap::pnn.predict(n2, X), y_true = yap::dummies(Y))

A major criticism of the binning algorithm, as well as of the WoE transformation, is that the use of binned predictors will decrease the model's predictive power due to the loss of data granularity after the WoE transformation. While talk is cheap, I would use the example below to show that using the monotonic binning algorithm to pre-process predictors in a GRNN is actually able to alleviate the over-fitting and to improve the prediction accuracy for the hold-out sample. First of all, the whole dataset was split into halves, e.g. one as the training sample and the other as the hold-out sample.
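For readers who want to see what a Specht-style PNN does under the hood, here is a minimal, language-agnostic sketch in Python. This is not the YAP implementation, and all function names below are my own for illustration: the model simply stores the training patterns per class (hence the "instantaneous training"), scores new points with Gaussian Parzen-window density estimates, and is evaluated with cross-entropy on a 50/50 training/hold-out split, mirroring the evaluation approach above.

```python
import numpy as np

def pnn_fit(X, y, sigma=0.5):
    """A PNN has no iterative training: it just stores the patterns per class."""
    classes = np.unique(y)
    return {"classes": classes,
            "patterns": {c: X[y == c] for c in classes},
            "sigma": sigma}

def pnn_predict(model, X):
    """Class probabilities from Gaussian Parzen-window density estimates."""
    s = model["sigma"]
    dens = []
    for c in model["classes"]:
        P = model["patterns"][c]  # stored patterns for class c, shape (n_c, d)
        # squared distance from every query row to every stored pattern
        d2 = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
        # average kernel response = class-conditional density estimate
        dens.append(np.exp(-d2 / (2 * s ** 2)).mean(axis=1))
    S = np.stack(dens, axis=1)
    return S / S.sum(axis=1, keepdims=True)  # normalize to probabilities

def log_loss(p, y):
    """Multiclass cross-entropy against integer labels 0..K-1."""
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# three overlapping 2-d Gaussian clusters, one per class (N = 3 > 2)
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(m, 0.6, size=(60, 2)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)

# split the whole dataset into halves: a training and a hold-out sample
idx = rng.permutation(len(y))
tr, ho = idx[:90], idx[90:]
model = pnn_fit(X[tr], y[tr], sigma=0.5)
print("train log-loss:  ", round(log_loss(pnn_predict(model, X[tr]), y[tr]), 4))
print("holdout log-loss:", round(log_loss(pnn_predict(model, X[ho]), y[ho]), 4))
```

In a real application the smoothing parameter sigma would be tuned, e.g. by cross-validation, just as the `parm$best$sigma` in the YAP snippet above suggests.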