
Parameter Estimation and Outlier Detection Based on Nonconvex Penalized Regression
Authors: Zhang Zunhao, Peng Dingtao*, Su Yanyan (School of Mathematics and Statistics, Guizhou University, Guiyang, Guizhou)
Abstract: Based on optimization theory, this paper presents a method for parameter estimation and outlier detection in multiple linear regression models. First, a regression model combining the Huber loss function with an ℓ0 penalty is established; to facilitate its solution, the ℓ0 penalty is further relaxed to the capped-ℓ1 penalty. Second, the directional stationary points of the relaxed problem are characterized, and the equivalence between the original and relaxed problems is established. Finally, a smoothing model of the relaxed problem is proposed, and the consistency between the stationary points of the smoothing model and those of the relaxed problem is proved.
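To make the formulation concrete, the following is a minimal sketch of a model consistent with the abstract, assuming the standard mean-shift representation of outliers; the exact objective, penalty placement, and parameter names in the paper may differ, and the symbols h_δ, γ, λ, ν are introduced here only for illustration.

% Sketch (assumption): Huber loss on residuals plus an ℓ0 penalty on the
% mean-shift vector γ, whose nonzero entries flag outlying observations.
\[
\min_{\beta\in\mathbb{R}^p,\;\gamma\in\mathbb{R}^n}\;
\sum_{i=1}^{n} h_\delta\bigl(y_i - x_i^{\top}\beta - \gamma_i\bigr)
\;+\;\lambda\,\|\gamma\|_0,
\qquad
h_\delta(t)=
\begin{cases}
\tfrac{1}{2}t^2, & |t|\le\delta,\\[2pt]
\delta|t|-\tfrac{1}{2}\delta^2, & |t|>\delta,
\end{cases}
\]

where λ > 0 controls how many observations may be declared outliers. The capped-ℓ1 relaxation referred to in the abstract would then replace the ℓ0 term by

\[
\|\gamma\|_0 \;\approx\; \sum_{i=1}^{n}\frac{\min\bigl(|\gamma_i|,\nu\bigr)}{\nu},
\qquad \nu>0,
\]

which coincides with the ℓ0 count whenever every nonzero |γ_i| is at least ν; the smoothing model mentioned in the abstract would further smooth the kinks of the map t ↦ min(|t|, ν) so that standard gradient-based methods apply.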
Citation: Zhang Zunhao, Peng Dingtao, Su Yanyan. Parameter Estimation and Outlier Detection Based on Nonconvex Penalized Regression [J]. Advances in Applied Mathematics, 2022, 11(12): 9081-9095.
