International Journal of Electrical and Computer Engineering (IJECE)
Vol. 14, No. 4, August 2024, pp. 4445~4455
ISSN: 2088-8708, DOI: 10.11591/ijece.v14i4.pp4445-4455
Journal homepage: http://ijece.iaescore.com

Efficient offloading and task scheduling in internet of things-cloud-fog environment

Marwa Gamal 1, Samar Awad 1, Rehab F. Abdel-Kader 2, Khaled Abd El Salam 1
1 Electrical Engineering Department, Faculty of Engineering, Suez Canal University, Ismailia, Egypt
2 Electrical Engineering Department, Faculty of Engineering, Port Said University, Port Said, Egypt

Article history: Received Nov 16, 2023; Revised Apr 25, 2024; Accepted May 12, 2024

ABSTRACT

Efficient offloading and scientific task scheduling are crucial for managing computational tasks in research environments. This involves determining the optimal location for executing a workflow task and allocating the task to computing resources to optimize performance. The challenge is to minimize completion time, energy consumption, and cost simultaneously. This study proposes three offloading methods: latency-centric offloading (LCO) for delay-sensitive applications; energy-based offloading (EBO) for energy saving; and efficient offloading (EO) for balanced task distribution across tiers. Scheduling in this paper uses a genetic algorithm (GA) with a weighted-sum objective function considering makespan, cost, and energy for the internet of things (IoT)-fog-cloud environment. Comparative studies involving the Montage, CyberShake, and Epigenomics workflows indicate that LCO excels in makespan and cost but ranks lowest in energy efficiency. EBO excels in energy efficiency, aligning closely with the base method. EO competes effectively with the base method in makespan and cost but consumes more energy. This research enables the selection of the most suitable method based on the type of application and its prioritization of makespan, energy, or cost.
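The weighted-sum objective that guides the GA scheduler can be sketched as follows. This is a minimal illustration only: the function name `weighted_fitness`, the weight values, and the normalization bounds are assumptions for demonstration, not the paper's exact formulation.

```python
# Hedged sketch of a weighted-sum GA fitness combining makespan, cost,
# and energy. Each objective is normalized by an assumed upper bound so
# the three terms are comparable; lower fitness is better.

def weighted_fitness(makespan, cost, energy,
                     w_time=0.4, w_cost=0.3, w_energy=0.3,
                     max_time=1.0, max_cost=1.0, max_energy=1.0):
    """Return the scalarized objective for one candidate schedule.

    Weights should sum to 1; shifting weight toward w_time, w_cost, or
    w_energy biases the GA toward makespan-, cost-, or energy-optimal
    schedules, mirroring the LCO/EBO/EO priorities described above.
    """
    return (w_time * (makespan / max_time)
            + w_cost * (cost / max_cost)
            + w_energy * (energy / max_energy))
```

In a GA loop, this value would serve as the (minimized) fitness of each chromosome, i.e., each candidate task-to-resource assignment.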
Keywords: cloud computing; fog computing; offloading strategy; scientific workflow; task scheduling

This is an open access article under the CC BY-SA license.

Corresponding Author: Samar Awad
Department of Electrical Engineering, Faculty of Engineering, Suez Canal University
Alfrosia street, Ismailia, Egypt
Email: Samar_Awad@eng.suez.edu.eg

1. INTRODUCTION

A scientific workflow outlines a process to achieve a scientific goal, defined by tasks and their interdependencies [1]. Dependencies arise at various stages, dictating the order in which tasks must be executed to achieve the scientific goal [2]. Scientific workflows commonly use directed acyclic graphs (DAGs) to model these dependencies, with tasks as nodes and dependencies as edges [3]. The evolution of computationally and data-intensive methods in the natural sciences has driven the creation of scientific workflows designed to automate repetitive computational tasks [4]. Initially, scientific workflows were primarily deployed on distributed systems [5], [6] and on high-performance computing (HPC) [7]. During this period, the focus centered on treating systems and applications as opaque entities, emphasizing distributed resource management and workload execution [6]. Recently, there has been a shift toward using cloud computing infrastructure to conduct scientific workflows [8]–[10]. Unlike distributed systems, cloud computing operates on a client-server architecture, centrally utilizing resources with a pay-as-you-go model [11]. Cloud solutions may not consistently meet quality of service (QoS) and quality of experience (QoE) requirements for certain latency-sensitive internet of things (IoT) applications due to distance and connectivity issues. This led to fog computing, which extends cloud resources closer to IoT devices. Fog devices process functions offloaded from cloud servers, improving performance [12]. Advancements in networking technology,