Comments on Datalligence: Fraud Prediction - Decision Trees & Support Vector Machines (Classification)

Anonymous (2008-12-22 19:42):
Have you considered random forests?

Will Dwinnell (2008-12-17 15:12):
Bhupendra wrote: "But an issue with SVM and NN is that they overfit the data too much."

Could you expand on this, or provide references? Is there some reason that early stopping (in the case of neural networks) or constraining the number of hidden nodes does not address the over-fitting issue?

-Will Dwinnell
Data Mining in MATLAB: http://matlabdatamining.blogspot.com/

Anonymous (2008-12-11 15:14):
Oracle Data Mining, release 10.2, introduced a new variation of the Support Vector Machine algorithm called 1-class SVM. 1-class SVMs are specially designed for fraud and anomaly detection when you lack examples of the "rare events". In those cases, you can do various tricks with stratified samples, ROC, and/or a cost matrix for false positive/negative costs, but all of these struggle.

1-class SVMs work on the principle of learning what is considered "normal", e.g. expenses, phone calls, employees, etc.
If your training data contains examples of the rare events, remove them before building the 1-class SVM model. Applying the 1-class SVM model then scores each record on the likelihood that it is "abnormal". Oracle Data Mining's 1-class SVM can also mine transactional (nested) data, unstructured data (i.e. text), and star schema data. You can read more in the documentation available online at http://www.oracle.com/technology/products/bi/odm/index.html and in the OTN web site tech info posted at:

Hope this helps!

cb

Datalligence (2008-12-01 23:23):
Yup bhupen, SVM & ANN would be a more appropriate comparison, but ODM (10g) doesn't have ANN :-)
I tested the models on a few more datasets; performance dropped in both cases, much more so in the case of DT.

Have heard a lot about MS SQL Server DM, would love to check it out. Yeah, you can be sure that the 2 biggies are going to be major players in the DM market soon, 'coz the world's data reside in their systems!

Datalligence (2008-11-27 06:23):
sandro: SPSS has 4 algos for Decision Trees. I built and tested a DT model on the same dataset with the same inputs (same transformation) using CHAID as the tree-growing criterion. The accuracy was in the 50's range.
I will check the other 3 algos in SPSS and let you know if I have time :-)

Bhupendra (2008-11-26 03:37):
Great article. Thanks for the details.

I have two pieces of feedback on the article.

1. I am not surprised that SVM outperformed DT, as it almost always does. Neural networks would have been a better comparison. But an issue with SVM and NN is that they overfit the data too much. It will be interesting to see whether you get similar results on in-time and out-of-time validation data sets. I have always seen a significant drop in performance for SVM models.

2. It is really nice to see that Oracle's tools provide so many facilities. I worked with Microsoft SQL Server Analysis Services for a week and was impressed with their tool too. With the biggies joining this market, it is going to be interesting.

-- Bhupendra

Datalligence (2008-11-25 23:03):
You are absolutely right, jonathan. This post was meant to be a comparison of the 2 techniques at a very, very basic level (I mentioned that too).

Any model's performance has to be judged on a whole lot of parameters: true positives/negatives, false positives/negatives, misclassification costs (if the info is available), gains/lift charts...

Also, in a majority of fraud prediction problems, the emphasis is on true positives, as the cost of fraud typically tends to be higher (I'm generalizing here!) than the time/cost/inconvenience arising from the false positives.

jonathan polon (2008-11-25 21:42):
I don't necessarily agree that SVM outperforms decision trees. While SVM correctly classified a larger number of the fraudulent cases, it also had a greater number of false positives (non-fraud classified as fraud). The better model depends on the relative costs of misclassifying fraud as non-fraud and misclassifying non-fraud as fraud. Alternatively, you can adjust some settings (like the prior probabilities or the misclassification matrix on the trees) until both methods correctly predict the same number of fraudulent cases; the better model will then be the one with fewer false positives.

Sandro Saitta (2008-11-25 11:12):
Thanks for the details, Romakanta. Did you obtain the same kind of results with Decision Trees (using SPSS AnswerTree) as your current results (72%)?
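Jonathan's point above, that the "better" model depends on the relative misclassification costs, can be sketched with a small cost function over the confusion matrix. The prediction counts and the 10:1 cost ratio below are made-up numbers for illustration, not the post's actual results:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical predictions from two models on the same test set
# (1 = fraud, 0 = non-fraud); counts are invented for illustration.
y_true = [1] * 100 + [0] * 900

# Model A (DT-like): fewer frauds caught, but fewer false alarms.
y_pred_a = [1] * 60 + [0] * 40 + [1] * 30 + [0] * 870
# Model B (SVM-like): more frauds caught, but more false alarms.
y_pred_b = [1] * 80 + [0] * 20 + [1] * 120 + [0] * 780

def total_cost(y_true, y_pred, cost_fn=10.0, cost_fp=1.0):
    """Weigh missed frauds (false negatives) against false alarms."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fn * cost_fn + fp * cost_fp

# The ranking of the two models flips with the assumed cost ratio.
print(total_cost(y_true, y_pred_a), total_cost(y_true, y_pred_b))
```

With a missed fraud costed at 10x a false alarm, the SVM-like model comes out cheaper; at a 1:1 ratio the ranking flips, which is exactly why neither model is "better" until the costs are pinned down.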