    The Effect of Hyperparameter Optimization on the Estimation of Performance Metrics in Network Traffic Prediction using the Gradient Boosting Machine Model

    Full text: 240.3 KB
    Date: 2023-06
    Authors: Mbelwa, Jimmy; Agbinya, Johnson; Mwita, Machoke; Sam, Anael
    Abstract
    Information and Communication Technology (ICT) has changed the way we communicate and access information, resulting in the generation of large volumes of heterogeneous data. Network traffic constantly increases in velocity, veracity, and volume as we enter the era of big data, and traffic classification and intrusion detection are essential for the early detection and identification of unwanted traffic. Machine Learning (ML) has recently taken center stage in accurate network traffic classification; however, in most cases model hyperparameters are not optimized. In this study, a gradient boosting machine was trained with different hyperparameter configurations: interaction depth, number of trees, learning rate, and sampling. Data were collected through an experimental setup using Sophos firewall and Cisco router data loggers, and analyzed with R version 4.2.0 in the RStudio Integrated Development Environment. The dataset was split into two partitions, with 70% used for training the model and 30% for testing. At a learning rate of 0.1, an interaction depth of 14, and 2500 trees, the model achieved its highest performance, with an accuracy of 0.93 and an R-squared of 0.87, compared to 0.90 and 0.85 before optimization; the same configuration attained the minimum classification error of 0.07, down from 0.10 before optimization. After tuning, the model showed improved accuracy, R-squared, and mean decrease in Gini coefficients for more than 8 features, together with lower classification error, root mean square error, logarithmic loss, and mean square error.
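    The abstract's setup (70/30 train/test split, gradient boosting with a tuned learning rate, interaction depth, tree count, and sampling fraction) can be sketched as follows. This is an illustrative analogue, not the authors' code: the study used R's gbm package, while this sketch uses scikit-learn's GradientBoostingClassifier on synthetic data standing in for the firewall/router traffic logs, with smaller tree counts so it runs quickly.

    ```python
    # Illustrative sketch (assumption: not the paper's actual pipeline).
    # scikit-learn's max_depth plays the role of gbm's interaction.depth.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the collected network-traffic features.
    X, y = make_classification(n_samples=1000, n_features=8, random_state=42)

    # 70/30 train/test split, as in the study.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, random_state=42
    )

    # The paper's best configuration was learning rate 0.1, interaction
    # depth 14, and 2500 trees; smaller values are used here purely so the
    # sketch runs fast. subsample < 1.0 corresponds to the sampling
    # hyperparameter the study tuned.
    model = GradientBoostingClassifier(
        learning_rate=0.1, max_depth=3, n_estimators=200,
        subsample=0.8, random_state=42,
    )
    model.fit(X_train, y_train)
    print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
    ```

    In practice, each candidate configuration would be evaluated this way on the held-out 30% split, and the configuration with the lowest classification error retained, which is how the study arrived at its reported optimum.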
    URI
    https://doi.org/10.48084/etasr.5548
    https://dspace.nm-aist.ac.tz/handle/20.500.12479/2388
    Collections
    • Research Articles [CoCSE]

    Nelson Mandela-AIST copyright © 2021 DuraSpace. Theme by Atmire NV.
     

     
