Open Science Research Excellence

Open Science Index

Commenced in January 2007 · Frequency: Monthly · Edition: International · Publications Count: 30465


A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Abstract:
Learning from very large datasets is a significant challenge for most current data mining and machine learning algorithms. MicroRNA (miRNA) expression data are among the large, non-coding genomic datasets derived from genome sequences. In this paper, a hybrid method for classifying miRNA data is proposed. Because of the variety of cancer types and the large number of genes, analyzing miRNA datasets is a challenging problem: the number of features is high relative to the number of samples, and the class distribution is imbalanced. A feature selection step is therefore used to retain the features that best discriminate between classes and to eliminate uninformative ones. A Convolutional Neural Network (CNN) classifier is then applied to classify cancer types, with a Genetic Algorithm used to find optimized CNN hyper-parameters. To speed up classification by the CNN, a Graphics Processing Unit (GPU) is recommended for evaluating the underlying matrix operations in parallel. The proposed method is tested on a real-world dataset of 8,129 patients, 29 different tumor types, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
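The abstract does not specify which feature-selection criterion is used, but a common choice for this kind of high-dimensional, small-sample genomic data is a per-feature discriminant ratio. The sketch below, on synthetic two-class data, ranks features by a Fisher-style ratio (squared mean difference over summed variances) and keeps the top-k; the function names and the criterion are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of a filter-style feature-selection step:
# score each feature independently, then keep the k best-scoring ones.
import numpy as np

def fisher_ratio(X, y):
    """Score each feature by (mean difference)^2 / (sum of class variances)."""
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        num = (a.mean() - b.mean()) ** 2
        den = a.var() + b.var() + 1e-12   # guard against zero variance
        scores[j] = num / den
    return scores

def select_top_k(X, y, k):
    """Return indices of the k highest-scoring features."""
    return np.argsort(fisher_ratio(X, y))[::-1][:k]

# Synthetic demo: 100 samples, 20 features, feature 3 made discriminative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.array([0] * 50 + [1] * 50)
X[y == 1, 3] += 5.0                       # shift class 1 on feature 3
top = select_top_k(X, y, 5)
```

On the real miRNA data the same ranking would be applied to the 1,046 biomarker columns before the CNN stage, shrinking the input dimensionality and discarding uninformative features.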
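The Genetic Algorithm step searches over CNN hyper-parameters. A minimal sketch of that loop is below; the hyper-parameter grid, the operator choices (truncation selection, uniform crossover, per-gene mutation), and the toy fitness function are all assumptions for illustration — in the paper the fitness would be the validation accuracy of a CNN trained with the candidate settings, which is too expensive to run here.

```python
# Hedged sketch of a GA over discrete CNN hyper-parameter choices.
import random

SPACE = {                       # illustrative grid, not the paper's
    "filters":     [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "lr":          [1e-2, 1e-3, 1e-4],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Stand-in for cross-validated CNN accuracy under `ind`.
    return (ind["filters"] / 128
            - abs(ind["kernel_size"] - 5) * 0.1
            - abs(ind["lr"] - 1e-3) * 10)

def crossover(a, b):
    # Uniform crossover: each gene copied from a random parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    # Each gene resampled from its grid with probability `rate`.
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in ind.items()}

def evolve(generations=30, pop_size=12, seed=0):
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Each fitness evaluation in the real pipeline trains a CNN, which is why the abstract recommends a GPU: the convolutions and matrix multiplications inside every candidate network can be evaluated in parallel.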