Asian Journal of Applied Sciences (ISSN: 2321-0893), Volume 02, Issue 06, December 2014 (www.ajouronline.com)

Selection of Variables in Quantile Regression (Linear Lasso - Goal Programming)

Neveen Sayed Ahmed, Elham Abdul-Razik Ismail
Faculty of Commerce, Al-Azhar University (Girls' Branch), Egypt
Corresponding author's email: alazhar_statistic {at} hotmail.com
______________________________________________________________________________

ABSTRACT---- Quantile regression is a statistical technique intended to estimate, and conduct inference about, conditional quantile functions. Koenker and Bassett (1978) introduced quantile regression, which models conditional quantiles as functions of predictors. Quantile regression gives complete information about the relationship between the response variable and the covariates over the entire conditional distribution, and it makes no distributional assumption about the error term in the model. This study evaluates the performance of three methods: two linear programming Lasso methods and one goal programming method. The three methods are used to select the best subset of variables and to estimate the parameters of the quantile regression equation under four error distributions, with two sample sizes and two parameter settings for each error distribution. The study found that the estimated risk (ER) and the relative estimated risk (RER) produced by the goal programming method are smaller than those of the two Lasso methods.

Keywords--- Quantile regression, linear Lasso, selection of variables, goal programming, estimated risk, relative estimated risk.
_________________________________________________________________________________________________

1. INTRODUCTION

Koenker and Bassett (1978) introduced quantile regression, which models conditional quantiles as functions of predictors.
The quantile regression model is a natural extension of the linear regression model. Quantile regression studies the relationship between the response variable and the independent variables at any quantile of the conditional distribution function. It can give complete information about that relationship over the entire conditional distribution, and it makes no distributional assumption about the error term in the model. Quantile regression is very useful for visualizing changes in the conditional distribution of longitudinal data sets over time, and its estimates are reliable in the presence of extreme outliers. Quantile regression also goes beyond the location-shift model to determine the effect of covariates on the shape and scale of the entire response distribution; the spacing of the quantile lines indicates whether the distribution is skewed to the right or to the left. Just as classical linear regression estimates models for the conditional mean function, quantile regression offers a mechanism for estimating models for the conditional median function and for the full range of other conditional quantile functions.

Variable selection is an important problem in quantile regression when the number of predictor variables is large, and in linear regression it is a problem of great practical importance. There are various methods for subset selection and various selection criteria. While there is no clear consensus on which method is best or which criterion is most appropriate, there is general agreement that an effective method is needed. The primary purpose of this research is to provide a review of the concepts and methods associated with variable selection in the linear regression model.
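To make the estimation idea above concrete, the following sketch fits a conditional quantile by minimizing the check (pinball) loss, written as a linear program and solved with scipy. This is a generic illustration, not the paper's own implementation; the function name, simulated data, and solver choice are my assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Estimate coefficients of the conditional tau-quantile by minimizing
    the check (pinball) loss, posed as a linear program:
        min  sum_i [tau * u_i + (1 - tau) * v_i]
        s.t. X @ beta + u - v = y,   u >= 0, v >= 0, beta free,
    where u_i and v_i are the positive and negative parts of residual i.
    """
    n, p = X.shape
    # variables: [beta (p, free), u (n, >=0), v (n, >=0)]
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Simulated example: y = 1 + 2x + symmetric noise, so median (tau = 0.5)
# regression should recover an intercept near 1 and a slope near 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 200)
y = 1 + 2 * x + rng.normal(0, 0.5, 200)
X = np.column_stack([np.ones(200), x])
beta_median = quantile_regression(X, y, tau=0.5)
```

Fitting the same data at several values of tau (e.g. 0.25, 0.5, 0.75) produces the family of quantile lines whose spacing reveals skewness, as described above.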
Miller (1990) gives several reasons for using only a subset of the available predictor variables: to estimate or predict at a lower cost by reducing the number of variables on which data are to be collected; to predict more accurately by eliminating uninformative variables, which increase the complexity and the number of parameters of the model; to describe multivariate data sets parsimoniously; and to estimate regression coefficients with smaller standard errors (particularly when some of the predictors are highly correlated).

Lasso penalization, or regularization, methods are useful tools for estimating quantile regression parameters and selecting variables. The Lasso reduces the variability of the estimates by shrinking the coefficients, and at the same time it produces interpretable models by shrinking some coefficients to exactly zero. The key strength of the Lasso lies in its ability to perform simultaneous parameter estimation and variable selection. Norm penalties are used in the Lasso for variable selection. Least squares (L2) and least absolute deviation (L1) regression are useful methods for robust
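The paper's specific Lasso formulations are not reproduced here, but as a generic illustration of how an L1 penalty yields simultaneous estimation and variable selection in quantile regression, the check-loss linear program can be extended by splitting each slope into positive and negative parts. The function name, penalty level, and simulated data below are my assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lasso_quantile_regression(X, y, tau, lam):
    """Quantile regression with an L1 (Lasso) penalty on the slopes,
    still solvable as a linear program.  Each slope b is split as
    b = b_plus - b_minus (both >= 0), so |b| = b_plus + b_minus.
    The intercept (first column of X) is left unpenalized:
        min sum_i [tau*u_i + (1-tau)*v_i] + lam * sum_j (b_plus_j + b_minus_j)
    """
    n, p = X.shape      # column 0 of X is the intercept
    k = p - 1           # number of penalized slopes
    # variables: [b0 (free), b_plus (k), b_minus (k), u (n), v (n)]
    c = np.concatenate([[0.0], lam * np.ones(2 * k),
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X[:, :1], X[:, 1:], -X[:, 1:], np.eye(n), -np.eye(n)])
    bounds = [(None, None)] + [(0, None)] * (2 * k + 2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    z = res.x
    return np.concatenate([[z[0]], z[1:1 + k] - z[1 + k:1 + 2 * k]])

# Simulated example: only the first two of five predictors matter, so the
# penalty should shrink the three irrelevant slopes to (or very near) zero.
rng = np.random.default_rng(1)
n = 200
Z = rng.normal(size=(n, 5))
y = 1 + 2 * Z[:, 0] - 1.5 * Z[:, 1] + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), Z])
beta = lasso_quantile_regression(X, y, tau=0.5, lam=5.0)
```

Because the L1 penalty enters the LP linearly, larger values of lam drive more slopes exactly to zero, which is the selection mechanism the text describes.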