Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. Yuchen Zhang, Lin Xiao; JMLR 18(84):1−42, 2017. Abstract: We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate ...
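For context, the saddle-point reformulation this abstract refers to is the standard conjugate-dual rewriting of regularized ERM (a sketch; the paper's exact notation may differ). With losses $\phi_i$, data vectors $a_i$, and regularizer $g$:

\[
\min_{w \in \mathbb{R}^d} \ \frac{1}{n}\sum_{i=1}^{n} \phi_i(a_i^\top w) + g(w)
\;=\;
\min_{w \in \mathbb{R}^d} \ \max_{\alpha \in \mathbb{R}^n} \ \frac{1}{n}\sum_{i=1}^{n} \bigl(\alpha_i\, a_i^\top w - \phi_i^*(\alpha_i)\bigr) + g(w),
\]

where $\phi_i^*$ is the convex conjugate of $\phi_i$. A stochastic primal-dual coordinate scheme then alternates updates of randomly chosen dual coordinates $\alpha_i$ with proximal updates of the primal variable $w$.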
Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu, Eric P. Xing; JMLR 18(82):1−37, 2017. Abstract: Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel ...
Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M. Blei; JMLR 18(14):1−45, 2017. Abstract: Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally ...
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:478-487, 2016. Abstract: Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously ...
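The core of DEC is a soft cluster assignment based on a Student's t-kernel between embedded points and cluster centers, sharpened into a self-training target distribution. A minimal NumPy sketch of these two quantities (function names are illustrative, not from the paper's code):

```python
import numpy as np

def soft_assign(z, mu, alpha=1.0):
    """Soft assignment q_ij: Student's t similarity between embeddings z
    (shape n x d) and cluster centers mu (shape k x d), normalized per point."""
    # squared Euclidean distances, shape (n, k)
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p_ij = q_ij^2 / f_j (f_j = per-cluster soft frequency),
    renormalized per point; used as the self-training target in the KL loss."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

Training then minimizes KL(P || Q) with respect to both the encoder parameters (through `z`) and the centers `mu`.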
A General Framework for Constrained Bayesian Optimization using Information-based Search. José Miguel Hernández-Lobato, Michael A. Gelbart, Ryan P. Adams, Matthew W. Hoffman, Zoubin Ghahramani; JMLR 17(160):1−53, 2016. Abstract: We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently ...
Mingkui Tan, Ivor W. Tsang, Li Wang; JMLR 15(40):1371−1429, 2014. Abstract: In this paper, we present a new adaptive feature scaling scheme for ultrahigh-dimensional feature selection on Big Data, and then reformulate it as a convex semi-infinite programming (SIP) problem. To address the SIP, we propose an efficient feature generating paradigm. Different from traditional gradient-based approaches that ...