Online architecture for predicting live video transcoding resources

Pekka Pääkkönen*, Antti Heikkinen, Tommi Aihkisalo

*Corresponding author for this work

    Research output: Contribution to journal › Article › Scientific › peer-review

    13 Citations (Scopus)

    Abstract

    End users increasingly stream video from live broadcasters (via YouTube Live, Twitch, etc.). Adaptive live video streaming is realised by transcoding the original video content into different representations. Managing transcoding resources creates costs for the service provider, because transcoding is a CPU-intensive task. Additionally, the content must be transcoded in real time on the allocated resources in order to provide satisfying Quality of Service. The contribution of this paper is the validation of an online architecture for enabling live video transcoding with Docker in a Kubernetes-based cloud environment. In particular, the focus is on online cloud resource allocation, evaluated through experiments in several configurations. The results indicate that a Random Forest regressor provided the best overall performance in terms of prediction accuracy for transcoding speed and CPU consumption on resources, and in the number of realised transcoding tasks. Reinforcement Learning provided lower performance and required more training effort.
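    The abstract's best-performing approach, a Random Forest regressor predicting transcoding behaviour from resource configuration, can be sketched as follows. This is an illustrative example only, not the paper's implementation: the feature set (allocated vCPUs, output bitrate, frame height) and the synthetic training data are assumptions made for the sketch.

    ```python
    # Sketch: Random Forest regressor predicting live-transcoding speed
    # (multiples of real time) from hypothetical resource/stream features.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical features: allocated vCPUs, output bitrate (kbit/s), frame height (px).
    vcpus = rng.integers(1, 9, n)
    bitrate = rng.uniform(500, 6000, n)
    height = rng.choice([360, 480, 720, 1080], n)
    X = np.column_stack([vcpus, bitrate, height])
    # Synthetic target: speed rises with CPU allocation, falls with
    # bitrate and resolution, plus measurement noise.
    y = 0.5 * vcpus - 0.0002 * bitrate - 0.001 * height + 2.0 + rng.normal(0, 0.1, n)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)

    # Predict speed for a candidate allocation; a value >= 1.0 would indicate
    # the configuration can keep up with real-time transcoding.
    pred = model.predict([[4, 3000, 720]])[0]
    ```

    In an online setting such a model would be retrained periodically from measurements collected during live transcoding, and its predictions used to decide how many vCPUs to allocate per transcoding task.
    
    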

    Original language: English
    Article number: 9
    Journal: Journal of Cloud Computing
    Volume: 8
    Issue number: 1
    DOIs
    Publication status: Published - 2019
    MoE publication type: A1 Journal article-refereed

    Keywords

    • Cassandra
    • Docker
    • FFmpeg
    • Gym
    • Rancher
    • Random Forest
    • Reinforcement learning
    • RL-Keras
