Generalized Damped Newton Algorithms in Nonsmooth Optimization with Applications to Lasso Problems

Pham Duy Khanh*, Boris S. Mordukhovich†, Vo Thanh Phat‡, Dat Ba Tran§

February 8, 2021

Abstract. The paper proposes and develops new globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems. These algorithms are based on the theory and calculations of second-order subdifferentials of nonsmooth functions, employing the machinery of second-order variational analysis and generalized differentiation. First we develop a globally superlinearly convergent damped Newton-type algorithm for the class of continuously differentiable functions with Lipschitzian gradients, which are nonsmooth of second order. Then we design such a globally convergent algorithm to solve a class of nonsmooth convex composite problems with extended-real-valued cost functions, which typically arise in machine learning and statistics. Finally, the obtained algorithmic developments and justifications are applied to solving a major class of Lasso problems with detailed numerical implementations. We present the results of numerical experiments and compare the performance of our main algorithm applied to Lasso problems with that achieved by other first-order and second-order methods.

Key words. Variational analysis and nonsmooth optimization, damped Newton methods, global convergence, tilt stability of minimizers, superlinear convergence, Lasso problems

Mathematics Subject Classification (2000) 90C31, 49J52, 49J53

1 Introduction

This paper is mainly devoted to the design, justification, and applications of globally convergent Newton-type algorithms for solving problems of nonsmooth (of the first or second order) optimization in finite-dimensional spaces.
Considering the unconstrained optimization problem

    minimize ϕ(x) subject to x ∈ IR^n    (1.1)

with a continuously differentiable (C^1-smooth) cost function ϕ : IR^n → IR, recall that one of the most natural approaches to solving (1.1) globally is by using line search methods; see, e.g., [20, 32, 52]. Given a starting point x^0 ∈ IR^n, such methods construct an iterative procedure of the form

    x^{k+1} := x^k + τ_k d^k for all k ∈ IN := {1, 2, ...},    (1.2)

where τ_k ≥ 0 is a step size at iteration k, and where d^k ≠ 0 is a search direction. The precise choice of d^k and τ_k at each iteration in (1.2) distinguishes one algorithm from another. The main goal of line search methods is to construct a sequence of iterates {x^k} such that the corresponding sequence

---
* Department of Mathematics, Ho Chi Minh City University of Education, Ho Chi Minh City, Vietnam. E-mail: pdkhanh182@gmail.com
† Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mail: aa1086@wayne.edu. Research of this author was partly supported by the US National Science Foundation under grants DMS-1512846 and DMS-1808978, by the US Air Force Office of Scientific Research under grant #15RT0462, and by the Australian Research Council under Discovery Project DP-190100555.
‡ Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mail: phatvt@wayne.edu. Research of this author was partly supported by the US National Science Foundation under grants DMS-1512846 and DMS-1808978, and by the US Air Force Office of Scientific Research under grant #15RT0462.
§ Department of Mathematics, Wayne State University, Detroit, Michigan, USA. E-mail: tranbadat@wayne.edu. Research of this author was partly supported by the US National Science Foundation under grant DMS-1808978.

arXiv:2101.10555v2 [math.OC] 5 Feb 2021
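The generic line-search scheme (1.2) can be sketched in a few lines of code. The sketch below is illustrative only and is not the paper's method: it uses the steepest-descent direction d^k = −∇ϕ(x^k) and an Armijo backtracking rule for τ_k as one standard way to choose the pair (d^k, τ_k); all function names, tolerances, and parameter values are assumptions made for the example.

```python
import numpy as np

def backtracking_step(phi, grad, x, d, tau0=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: shrink tau until a sufficient-decrease test holds.
    Parameter values (tau0, beta, c) are conventional illustrative choices."""
    tau = tau0
    g = grad(x)
    while phi(x + tau * d) > phi(x) + c * tau * (g @ d):
        tau *= beta
    return tau

def line_search_method(phi, grad, x0, tol=1e-8, max_iter=1000):
    """Generic scheme (1.2): x^{k+1} = x^k + tau_k d^k, stopping when the
    gradient is small.  Here d^k = -grad(x^k) for illustration; Newton-type
    methods of the kind developed in the paper choose d^k differently."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g                                  # search direction d^k != 0
        tau = backtracking_step(phi, grad, x, d)  # step size tau_k >= 0
        x = x + tau * d
    return x
```

For instance, applied to the strongly convex quadratic ϕ(x) = ½‖x‖², whose unique minimizer is the origin, the iteration drives the gradient norm below the tolerance.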