Abstractive Text Summarization using Pre-Trained Language Model "Text-to-Text Transfer Transformer (T5)"
| Dublin Core | PKP Metadata Items | Metadata for this Document |
| --- | --- | --- |
| 1. Title | Title of document | Abstractive Text Summarization using Pre-Trained Language Model "Text-to-Text Transfer Transformer (T5)" |
| 2. Creator | Author's name, affiliation, country | Qurrota A’yuna Itsnaini; Universitas Amikom Yogyakarta; Indonesia |
| 2. Creator | Author's name, affiliation, country | Mardhiya Hayaty; Universitas Amikom Yogyakarta; Indonesia |
| 2. Creator | Author's name, affiliation, country | Andriyan Dwi Putra; Universitas Amikom Yogyakarta; Indonesia |
| 2. Creator | Author's name, affiliation, country | Nidal A.M. Jabari; Palestine Technical University Kadoorie; Palestinian Territory, Occupied |
| 3. Subject | Discipline(s) | |
| 3. Subject | Keyword(s) | Automatic Text Summarization, Transformer, Pre-Trained Model, T5, ROUGE |
| 4. Description | Abstract | Automatic Text Summarization (ATS) is an application of text-processing technology that assists humans in producing a summary, or the key points, of documents in large quantities. We use Indonesian as the object language because few NLP research resources exist for Indonesian. This paper utilized a PLTM (Pre-Trained Language Model) built on the transformer architecture, namely T5 (Text-to-Text Transfer Transformer), which was previously pre-trained on a large dataset. Evaluation in this study was measured by comparing ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores between the reference summary and the model summary. Experiments fine-tuning the pre-trained t5-base model (220M parameters) on an Indonesian news dataset yielded relatively high ROUGE values, namely ROUGE-1 = 0.68, ROUGE-2 = 0.61, and ROUGE-L = 0.65. Although the evaluation scores were good, the resulting model was not fully satisfactory because, in terms of abstraction, it did not perform optimally. We also found several errors in the reference summaries of the dataset used. |
| 5. Publisher | Organizing agency, location | Prodi Teknik Informatika FIK Universitas Muslim Indonesia |
| 6. Contributor | Sponsor(s) | |
| 7. Date | (YYYY-MM-DD) | 2023-04-07 |
| 8. Type | Status & genre | Peer-reviewed Article |
| 8. Type | Type | |
| 9. Format | File format | |
| 10. Identifier | Uniform Resource Identifier | https://jurnal.fikom.umi.ac.id/index.php/ILKOM/article/view/1532 |
| 10. Identifier | Digital Object Identifier (DOI) | https://doi.org/10.33096/ilkom.v15i1.1532.124-131 |
| 11. Source | Title; vol., no. (year) | ILKOM Jurnal Ilmiah; Vol 15, No 1 (2023) |
| 12. Language | English=en | en |
| 13. Relation | Supp. Files | |
| 14. Coverage | Geo-spatial location, chronological period, research sample (gender, age, etc.) | |
| 15. Rights | Copyright and permissions | Copyright (c) 2023 Qurrota A’yuna Itsnaini, Mardhiya Hayaty, Andriyan Dwi Putra, Nidal A.M. Jabari. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. |
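
For illustration, the following is a minimal sketch of the summarize-and-evaluate pipeline the abstract describes, not the authors' published code. It assumes the Hugging Face `transformers` and Google `rouge-score` libraries; the `t5-base` checkpoint name, the placeholder texts, and the generation settings are assumptions for illustration, since this record does not include the authors' fine-tuned Indonesian model.

```python
# Sketch (not the authors' code): summarize a news article with a T5
# checkpoint, then score the output against the reference summary with
# ROUGE-1/2/L, the metrics reported in the abstract.
from transformers import T5ForConditionalGeneration, T5Tokenizer
from rouge_score import rouge_scorer

MODEL_NAME = "t5-base"  # assumption: the 220M-parameter base checkpoint

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

article = "..."            # placeholder: an Indonesian news article
reference_summary = "..."  # placeholder: its human-written reference summary

# T5 is a text-to-text model, so the task is signalled with a text prefix.
inputs = tokenizer(
    "summarize: " + article,
    return_tensors="pt",
    max_length=512,
    truncation=True,
)

# Beam search is a common decoding choice for summarization.
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    max_length=150,
    early_stopping=True,
)
model_summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Compare the model summary against the reference summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
scores = scorer.score(reference_summary, model_summary)
for metric, result in scores.items():
    print(f"{metric}: F1 = {result.fmeasure:.2f}")
```

The sketch prints F1 scores; the abstract does not state whether the reported ROUGE-1 = 0.68, ROUGE-2 = 0.61, and ROUGE-L = 0.65 are recall or F1 values, so the choice of `fmeasure` here is an assumption.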