Comparative Analysis of Long Short-Term Memory Architecture for Text Classification


Moh Fajar Abdillah(1); Kusnawi Kusnawi(2*);

(1) Universitas Amikom Yogyakarta
(2) Universitas Amikom Yogyakarta
(*) Corresponding Author

  

Abstract


Text classification, a part of natural language processing (NLP), is the grouping of text objects based on characteristics that indicate similarity between one document and another. One of the methods used for text classification is Long Short-Term Memory (LSTM). The performance of the LSTM method is influenced by several factors, such as the dataset, the architecture, and the tools used to classify the text. In this study, the researchers analyse the effect of the number of layers in the LSTM architecture on the performance of the LSTM method. The research uses IMDB movie review data totalling 50,000 reviews, consisting of positively labelled, negatively labelled, and unlabelled data. The IMDB movie review data go through the following stages: data collection, data pre-processing, conversion to numerical format, text embedding using the pre-trained word embedding model FastText, training and testing of the classification model using LSTM, and finally validation and testing of the model to obtain the results of this research. The results show that the one-layer LSTM architecture achieves the best accuracy compared with the two-layer and three-layer LSTM, with training and testing accuracies of 0.856 and 0.867, respectively. The training and testing accuracies of the two-layer LSTM are 0.846 and 0.854, while those of the three-layer LSTM are 0.848 and 0.864.
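
For illustration, the following minimal Python (Keras) sketch builds the kind of model the abstract describes: an IMDB sentiment classifier whose number of stacked LSTM layers can be set to one, two, or three. It is not the authors' code; the vocabulary size, sequence length, LSTM units, and training schedule are assumptions, and it trains its own embedding layer instead of loading the pre-trained FastText vectors used in the study.

# Illustrative sketch only (not the authors' implementation). Hyperparameters
# below are assumptions; the paper embeds tokens with pre-trained FastText
# vectors, whereas this sketch trains an embedding layer from scratch.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 200         # assumed review length after padding
EMBED_DIM = 300       # FastText pre-trained vectors are 300-dimensional

def build_lstm_classifier(num_lstm_layers: int) -> keras.Model:
    """Binary text classifier with `num_lstm_layers` stacked LSTM layers."""
    model = keras.Sequential([layers.Embedding(VOCAB_SIZE, EMBED_DIM)])
    for i in range(num_lstm_layers):
        # Every layer except the last returns full sequences so layers can be stacked.
        model.add(layers.LSTM(64, return_sequences=(i < num_lstm_layers - 1)))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Keras ships a pre-tokenised IMDB review dataset (25,000 train / 25,000 test);
    # the paper instead pre-processes the raw reviews and embeds them with FastText.
    (x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=VOCAB_SIZE)
    x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
    x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

    for depth in (1, 2, 3):
        model = build_lstm_classifier(depth)
        model.fit(x_train, y_train, epochs=2, batch_size=128,
                  validation_split=0.1, verbose=0)
        _, test_acc = model.evaluate(x_test, y_test, verbose=0)
        print(f"{depth}-layer LSTM test accuracy: {test_acc:.3f}")

The loop over depths mirrors the paper's comparison of one-, two-, and three-layer architectures; with this kind of setup, swapping the embedding layer's weights for FastText vectors would bring the sketch closer to the described pipeline.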


Keywords


FastText; LSTM; NLP; Text Classification

  
  


Digital Object Identifier

https://doi.org/10.33096/ilkom.v15i3.1906.455-464
  






Copyright (c) 2023 Moh Fajar Abdillah, Kusnawi Kusnawi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.