Identifying Optimal Baseline Variant of Unsupervised Term Weighting in Question Classification Based on Bloom Taxonomy

  • Anbuselvan Sangodiah Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia
  • Tham Jee San Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia
  • Yong Tien Fui Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia
  • Lim Ean Heng Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia
  • Ramesh Kumar Ayyasamy Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia
  • Norazira A Jalil Department of Information System, Universiti Tunku Abdul Rahman, Kampar, Malaysia


Examination is one of the common ways to evaluate students’ cognitive levels in higher education institutions. Exam questions are labeled manually by educators in accordance with the cognitive domain of Bloom’s taxonomy (BT). To ease the burden on educators, several past research works have proposed automated question classification based on Bloom’s taxonomy using machine learning techniques. Feature selection, feature extraction, and term weighting are common ways to improve the accuracy of question classification. The term weighting methods commonly used in past work are unsupervised, namely TF and TF-IDF. There are several variants of TF and TF-IDF, and the most optimal variant has yet to be identified in the context of question classification based on BT. Therefore, this paper studies the TF, TF-IDF, and normalized TF-IDF variants and identifies the optimal variant that can enhance exam question classification accuracy. To investigate the variants, two different classifiers were used: Support Vector Machine (SVM) and Naïve Bayes. The average accuracies achieved by the TF-IDF and normalized TF-IDF variants using the SVM classifier were 64.3% and 72.4% respectively, while with the Naïve Bayes classifier the average accuracies for TF-IDF and normalized TF-IDF were 61.9% and 63.0% respectively. Generally, the normalized TF-IDF variants outperformed the TF and TF-IDF variants in both accuracy and F1-measure. Further statistical analysis using the t-test and the Wilcoxon signed-rank test also shows that the differences in accuracy between normalized TF-IDF and TF, TF-IDF are significant. The findings from this study show that the Normalized TF-IDF3 variant recorded the highest accuracy of 74.0% among the normalized TF-IDF variants. Also, the differences in accuracy between Normalized TF-IDF3 and the other normalized variants are generally significant; thus, the optimal variant is Normalized TF-IDF3.
Therefore, the Normalized TF-IDF3 variant is important for benchmarking purposes and can be compared against other term weighting techniques in future work.
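The unsupervised weighting schemes compared in the abstract can be illustrated in a few lines. The sketch below shows classic TF-IDF and one common normalization (cosine/L2); the paper's specific variants, including Normalized TF-IDF3, are not defined in this excerpt, so the toy corpus and the choice of L2 normalization are assumptions for illustration only, not the authors' exact formulas.

```python
import math
from collections import Counter

# Hypothetical exam-question corpus (illustrative only).
docs = [
    "define the term operating system",
    "compare paging and segmentation",
    "design an algorithm to sort a list",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency: number of documents containing each term.
df = Counter()
for toks in tokenized:
    df.update(set(toks))

def tf_idf(toks):
    """Classic TF-IDF: raw term frequency times log(N / df)."""
    tf = Counter(toks)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

def normalized_tf_idf(toks):
    """TF-IDF with cosine (L2) normalization -- one common
    normalized variant; the paper's 'Normalized TF-IDF3' formula
    is not given in this excerpt."""
    w = tf_idf(toks)
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items()}

vec = normalized_tf_idf(tokenized[0])
# After L2 normalization the question vector has unit length,
# which removes the effect of question length on the classifier.
print(round(sum(v * v for v in vec.values()), 6))
```

Normalization matters here because exam questions vary in length; without it, longer questions dominate the feature space regardless of which cognitive level they test.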



How to Cite
Sangodiah, A., Jee San, T., Tien Fui, Y., Ean Heng, L., Ayyasamy, R.K. and A Jalil, N. 2022. Identifying Optimal Baseline Variant of Unsupervised Term Weighting in Question Classification Based on Bloom Taxonomy. MENDEL. 28, 1 (Jun. 2022), 8-22.