Abstractive Text Summarization using Pre-Trained Language Model "Text-to-Text Transfer Transformer (T5)"


Qurrota A’yuna Itsnaini (1); Mardhiya Hayaty (2*); Andriyan Dwi Putra (3); Nidal A.M. Jabari (4)

(1) Universitas Amikom Yogyakarta
(2) Universitas Amikom Yogyakarta
(3) Universitas Amikom Yogyakarta
(4) Palestine Technical University Kadoorie
(*) Corresponding Author

  

Abstract


Automatic Text Summarization (ATS) applies text-processing technology to help humans produce summaries or key points of large numbers of documents. We use Indonesian as the object language because NLP research resources for Indonesian are still limited. This paper utilizes a pre-trained language model (PLM) based on the Transformer architecture, namely T5 (Text-to-Text Transfer Transformer), which has previously been pre-trained on a large dataset. Evaluation in this study is measured by comparing ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores between the reference summary and the model summary. Experiments fine-tuning the pre-trained t5-base model (220M parameters) on an Indonesian news dataset yielded relatively high ROUGE values, namely ROUGE-1 = 0.68, ROUGE-2 = 0.61, and ROUGE-L = 0.65. Although the evaluation scores are good, the resulting model has not yet achieved satisfactory results: in terms of abstraction, it does not work optimally. We also found several errors in the reference summaries of the dataset used.
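The following minimal sketch (Python, using the HuggingFace transformers and rouge-score libraries) illustrates the kind of setup described above: generate a summary for one Indonesian news article with a T5 checkpoint and compare it with the reference summary via ROUGE-1, ROUGE-2, and ROUGE-L. It is not the authors' exact pipeline; the checkpoint name, generation parameters, and placeholder texts are illustrative assumptions, and in the study the t5-base model (220M parameters) was first fine-tuned on the Indonesian news dataset before this inference step.

# Minimal sketch (not the authors' exact pipeline): summarize one Indonesian
# news article with a T5 checkpoint and score it against the reference summary
# with ROUGE. Checkpoint name, generation settings, and texts are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from rouge_score import rouge_scorer

MODEL_NAME = "t5-base"  # 220M parameters; assumed already fine-tuned on Indonesian news

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

article = "..."            # placeholder: an Indonesian news article
reference_summary = "..."  # placeholder: its gold (reference) summary

# T5 is a text-to-text model, so the summarization task is signalled with a text prefix.
inputs = tokenizer("summarize: " + article,
                   return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=150, num_beams=4,
                             no_repeat_ngram_size=3, early_stopping=True)
model_summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# ROUGE-1, ROUGE-2, and ROUGE-L F1 between the reference and the model summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)
scores = scorer.score(reference_summary, model_summary)
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.2f}")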


Keywords


Automatic Text Summarization, Transformer, Pre-Trained Model, T5, ROUGE

  
  


Digital Object Identifier

https://doi.org/10.33096/ilkom.v15i1.1532.124-131
  





Copyright (c) 2023 Qurrota A’yuna Itsnaini, Mardhiya Hayaty, Andriyan Dwi Putra, Nidal A.M Jabari

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.