(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 14, No. 1, 2023

Three on Three Optimizer: A New Metaheuristic with Three Guided Searches and Three Random Searches

Purba Daru Kusuma, Ashri Dinimaharawati
Computer Engineering, Telkom University, Bandung, Indonesia

Abstract—This paper presents a new swarm-intelligence-based metaheuristic called the three on three optimizer (TOTO). The name reflects its novel mechanism of combining multiple searches in a single metaheuristic: three guided searches and three random searches. The three guided searches are a search toward the global best solution, a search in which the global best solution moves away from the corresponding agent, and a search based on the interaction between the corresponding agent and a randomly selected agent. The three random searches are a local search around the corresponding agent, a local search around the global best solution, and a global search across the entire search space. TOTO is challenged to solve the classic set of 23 benchmark functions as a theoretical optimization problem and the portfolio optimization problem as a real-world optimization problem. The portfolio consists of 13 bank stocks from the Kompas 100 index. The results indicate that TOTO performs well in solving the classic 23 functions. TOTO finds the global optimal solution of eleven functions and is superior to five new metaheuristics in solving 17 functions. These metaheuristics are the grey wolf optimizer (GWO), marine predator algorithm (MPA), mixed leader-based optimizer (MLBO), golden search optimizer (GSO), and guided pelican algorithm (GPA). TOTO is better than GWO, MPA, MLBO, GSO, and GPA in solving 22, 21, 21, 19, and 15 functions, respectively. This indicates that TOTO is powerful in solving high-dimension unimodal, multimodal, and fixed-dimension multimodal problems.
TOTO performs as the second-best metaheuristic in solving the portfolio optimization problem.

Keywords—Optimization; metaheuristic; swarm intelligence; portfolio optimization; Kompas 100; bank

I. INTRODUCTION

Many real-world problems can be seen as optimization problems. This stems from the human tendency to pursue objectives as efficiently as possible while inevitably facing certain limitations or constraints. This situation mirrors the structure of optimization work. In general, an optimization problem is constructed from two elements: an objective and constraints. Many candidate solutions exist in the solution space, but some are better than others; the single best one is called the global optimal solution. The objective of optimization can be minimization or maximization. In minimization, the global optimal solution is the solution with the lowest value. Parameters commonly minimized include delay [1], total order completion time [2], idle time [2], tardiness and maintenance costs [3], project duration [4], energy consumption [5], transmission losses [6], and so on. In maximization, on the other hand, the global optimal solution is the solution with the highest value. Parameters commonly maximized include profit [7], voltage stability [6], revenue [8], service level [9], and so on.

Two approaches can be chosen to solve an optimization problem: mathematical methods and metaheuristics [10]. A mathematical method is robust in solving simple optimization problems and guarantees finding the global optimal solution. However, mathematical or deterministic methods often fail to solve complex optimization problems, such as non-convex or multimodal problems. Moreover, mathematical methods are less flexible in handling the variety of real-world optimization problems [11]. Metaheuristics, on the other hand, are widely used in many optimization applications.
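To make the contrast concrete, the sketch below shows a minimal population-based metaheuristic minimizing a benchmark objective. It is an illustrative skeleton only, not the TOTO algorithm: the sphere objective, the guided step toward the best agent, the 0.1 perturbation scale, and the greedy acceptance rule are all assumptions chosen for this example.

```python
import random

def sphere(x):
    """Unimodal benchmark objective f(x) = sum(x_i^2); global optimum 0 at the origin."""
    return sum(v * v for v in x)

def simple_metaheuristic(objective, dim=5, pop_size=20, iterations=200,
                         lower=-10.0, upper=10.0, seed=0):
    """Generic population-based minimization loop (illustrative, not TOTO).

    Each iteration moves every agent toward the best solution found so far
    (a guided search) and adds a small random perturbation (a random search),
    clamping agents to the feasible bounds.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        for i, agent in enumerate(pop):
            candidate = [
                min(upper, max(lower,
                    a + rng.random() * (b - a)          # guided step toward the best
                      + 0.1 * rng.uniform(-1.0, 1.0)))  # random local perturbation
                for a, b in zip(agent, best)
            ]
            if objective(candidate) < objective(agent):  # greedy acceptance
                pop[i] = candidate
        best = min(pop + [best], key=objective)
    return best

best = simple_metaheuristic(sphere)
print(sphere(best))  # typically near 0, but the global optimum is not guaranteed
```

The loop illustrates the approximate nature of metaheuristics discussed above: only a small fraction of the solution space is sampled, so the result is good but not provably optimal.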
Metaheuristics have several advantages. First, they are flexible enough to be applied to various problems because they focus only on the objectives and constraints [10]. Second, they can run in environments with limited computational resources because of their approximate approach, in which not all possible solutions are traced [12]. This approximate approach comes with the consequence that a metaheuristic does not guarantee finding the global optimal solution.

Hundreds of metaheuristics have been developed in recent decades, and many more have appeared in recent years. Many of them use metaphors for their names. Many of these recent metaheuristics were inspired by animal behavior, such as the butterfly optimization algorithm (BOA) [13], chameleon swarm algorithm (CSA) [14], coati optimization algorithm (COA) [15], Komodo mlipir algorithm (KMA) [16], northern goshawk optimizer (NGO) [17], raccoon optimization algorithm (ROA) [18], marine predator algorithm (MPA) [19], Tasmanian devil optimizer (TDO) [11], snake optimizer (SO) [10], white shark optimizer (WSO) [20], guided pelican algorithm (GPA) [21], and so on. Some metaheuristics were inspired by plants, such as the tunicate swarm algorithm (TSA) [22], flower pollination algorithm (FPA) [23], and so on. Some metaheuristics are named after the references used in their guided search, such as the three influential member-based optimizer (TIMBO) [24], mixed leader-based optimizer (MLBO) [25], multi-leader optimizer (MLO) [26], random selected leader-based optimizer (RLSBO) [27], and so on. Some metaheuristics were inspired by human activities, such as the stochastic paint optimizer (SPO) [28], modified social forces algorithm (MSFA) [29], driving training-based optimizer (DTBO) [30], and so on. Some metaheuristics were free from metaphor and named based on