Feature Reinforcement using Autoencoders

  • Devesh Raj
  • Rishabh Goyal
Keywords: Deep Learning, Feature Engineering, Data Representation, Autoencoders

Abstract

Cardiovascular disease (CVD) is the leading cause of death globally: more people die annually from CVDs than from any other cause. People with cardiovascular disease, or those at high cardiovascular risk, need early detection and management through counselling and medicines, as appropriate. Early detection of CVDs requires expert judgement and public awareness, and this is where data analytics can help by predicting cardiovascular cases beforehand, supporting informed decisions that are faster, more accurate and made at a much earlier stage. The dataset used is the Cleveland Heart Disease database from the UCI machine learning repository. It is divided into five classes, with 0 corresponding to the absence of disease and 1, 2, 3 and 4 corresponding to grades of heart disease; for this work it has been bifurcated into absence (0) and presence (1, 2, 3 and 4) of heart disease. Using medical attributes such as age, sex, blood pressure, cholesterol and sugar level, classifiers can predict the probability of a patient having heart disease. There is no dearth of classification techniques, but feature engineering and data representation are the crux of the pre-modelling activity; done efficiently, they make the model more robust and accurate. We introduce a feature reinforcement technique based on artificial neural network (MLP) autoencoders: the features are first represented in an abstracted form by an MLP autoencoder, and the input features are then reinforced with these abstracted features. This exhaustively captures the latent structure in the input features, making the feature representation more robust and resilient. We have tested the technique on the Cleveland Heart Disease dataset, and the results obtained with it showed higher accuracy than those obtained with the input features alone.
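
The pipeline implied by the abstract can be sketched in a few steps: train an MLP autoencoder on the input features, take its bottleneck encoding as the abstracted representation, concatenate that encoding with the original features, and feed the reinforced feature matrix to a standard classifier. The sketch below is a minimal illustration of that idea, assuming the Cleveland data are already loaded into `X` (13 medical attributes) and `y` (0 = absence, 1 = presence); the layer sizes, the `encoding_dim` value and the choice of logistic regression as the downstream classifier are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of feature reinforcement with an MLP autoencoder.
# Assumes X (n_samples x 13 medical attributes) and y (0/1 labels) are loaded.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from tensorflow import keras

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

n_features = X_train_s.shape[1]
encoding_dim = 6  # assumed bottleneck size, not taken from the paper

# MLP autoencoder: learn an abstracted (compressed) representation of the inputs
inputs = keras.Input(shape=(n_features,))
encoded = keras.layers.Dense(encoding_dim, activation="relu")(inputs)
decoded = keras.layers.Dense(n_features, activation="linear")(encoded)
autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train_s, X_train_s, epochs=100, batch_size=16, verbose=0)

# Feature reinforcement: concatenate the original features with the encoded features
Z_train = np.hstack([X_train_s, encoder.predict(X_train_s, verbose=0)])
Z_test = np.hstack([X_test_s, encoder.predict(X_test_s, verbose=0)])

# Train the same classifier on raw vs. reinforced features and compare accuracy
clf_raw = LogisticRegression(max_iter=1000).fit(X_train_s, y_train)
clf_reinforced = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print("raw features:       ", accuracy_score(y_test, clf_raw.predict(X_test_s)))
print("reinforced features:", accuracy_score(y_test, clf_reinforced.predict(Z_test)))
```

The key design choice is that the encoded features are appended to, rather than substituted for, the original attributes, so the classifier sees both the raw measurements and their learned abstraction.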

Published
2018-12-10
How to Cite
Raj, D., & Goyal, R. (2018). Feature Reinforcement using Autoencoders. Asian Journal For Convergence In Technology (AJCT) ISSN -2350-1146, 4(3). Retrieved from https://asianssr.org/index.php/ajct/article/view/696
