My Publications

Cevher, V., Vu, B. and Yurtsever, A. (2019), "Inertial Three-Operator Splitting Method and Applications" . arXiv:1904.12980.
Abstract: We introduce an inertial variant of the forward-Douglas-Rachford splitting and analyze its convergence. We specify an instance of the proposed method to the three-composite convex minimization template. We provide practical guidance on the selection of the inertial parameter based on the adaptive starting idea. Finally, we illustrate the practical performance of our method in various machine learning applications.
BibTeX:
@techreport{Cevher2019-InertialTOS,
  author = {Cevher, V. and Vu, B.C. and Yurtsever, A.},
  title = {Inertial Three-Operator Splitting Method and Applications},
  school = {arXiv:1904.12980},
  year = {2019},
  url = {https://arxiv.org/pdf/1904.12980.pdf}
}
Ding, L., Yurtsever, A., Cevher, V., Tropp, J. and Udell, M. (2019), "An Optimal-Storage Approach to Semidefinite Programming using Approximate Complementarity" . arXiv:1902.03373.
Abstract: This paper develops a new storage-optimal algorithm that provably solves generic semidefinite programs (SDPs) in standard form. This method is particularly effective for weakly constrained SDPs. The key idea is to formulate an approximate complementarity principle: Given an approximate solution to the dual SDP, the primal SDP has an approximate solution whose range is contained in the eigenspace with small eigenvalues of the dual slack matrix. For weakly constrained SDPs, this eigenspace has very low dimension, so this observation significantly reduces the search space for the primal solution.
This result suggests an algorithmic strategy that can be implemented with minimal storage:
(1) Solve the dual SDP approximately;
(2) compress the primal SDP to the eigenspace with small eigenvalues of the dual slack matrix;
(3) solve the compressed primal SDP.
The paper also provides numerical experiments showing that this approach is successful for a range of interesting large-scale SDPs.
BibTeX:
@techreport{Ding2019-ComplementarySlacknessSDP,
  author = {Ding, L. and Yurtsever, A. and Cevher, V. and Tropp, J. and Udell, M.},
  title = {An Optimal-Storage Approach to Semidefinite Programming using Approximate Complementarity},
  school = {arXiv:1902.03373},
  year = {2019},
  url = {https://arxiv.org/pdf/1902.03373.pdf}
}
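For illustration, here is a minimal NumPy sketch of the compression idea in step (2) above: given an (approximate) dual slack matrix, keep only the eigenspace spanned by its smallest eigenvalues and search for the primal solution there. The function name, the fixed cut-off r, and the toy data are my own assumptions, not the authors' implementation:
import numpy as np

def small_eigenspace(S, r):
    # Orthonormal basis V (n x r) for the eigenspace of the dual slack matrix S
    # spanned by the eigenvectors with the r smallest eigenvalues; the compressed
    # primal variable is then an r x r matrix Y, with X approximated by V @ Y @ V.T.
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
    return eigvecs[:, :r]

# toy usage with a random symmetric positive-semidefinite "slack" matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
S = A @ A.T
V = small_eigenspace(S, r=5)
print(V.shape)   # (50, 5)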
Locatello, F., Yurtsever, A., Fercoq, O. and Cevher, V. (2019), "Stochastic Conditional Gradient Method for Composite Convex Minimization" , In Advances in Neural Information Processing Systems 32 (NeurIPS).
Abstract: A broad class of convex optimization problems can be formulated as a semidefinite program (SDP): minimization of a convex function over the positive-semidefinite cone subject to some affine constraints. The majority of classical SDP solvers are designed for the deterministic setting where problem data is readily available. In this setting, generalized conditional gradient methods (aka Frank-Wolfe-type methods) provide scalable solutions by leveraging the so-called linear minimization oracle instead of the projection onto the semidefinite cone. Most problems in machine learning and modern engineering applications, however, contain some degree of stochasticity. In this work, we propose the first conditional-gradient-type method for solving stochastic optimization problems under affine constraints. Our method guarantees $O(k^{-1/3})$ convergence rate in expectation on the objective residual and $O(k^{-5/12})$ on the feasibility gap.
BibTeX:
@conference{Locatello2019-StochasticHomotopyCGM,
  author = {Locatello, F. and Yurtsever, A. and Fercoq, O. and Cevher, V.},
  title = {Stochastic Conditional Gradient Method for Composite Convex Minimization},
  booktitle = {Advances in Neural Information Processing Systems 32 (NeurIPS)},
  year = {2019},
  url = {https://papers.nips.cc/paper/9572-stochastic-frank-wolfe-for-composite-convex-minimization.pdf}
}
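As a rough illustration of the algorithmic template (not the exact method or step-size schedule analyzed in the paper), a generic stochastic Frank-Wolfe loop averages noisy gradients and replaces projections with calls to a linear minimization oracle; the averaging weight and toy problem below are assumptions:
import numpy as np

def stochastic_frank_wolfe(grad_sample, lmo, x0, iters=500):
    # Generic stochastic conditional gradient loop: average noisy gradients,
    # call the linear minimization oracle, take a convex-combination step.
    x, d = x0.copy(), np.zeros_like(x0)
    for k in range(1, iters + 1):
        rho = 1.0 / k ** (2.0 / 3.0)           # gradient-averaging weight (illustrative)
        d = (1 - rho) * d + rho * grad_sample(x)
        s = lmo(d)                             # argmin over the domain of <d, s>
        gamma = 2.0 / (k + 2)                  # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# toy problem: minimize E[0.5 * ||x - b||^2] over the probability simplex
rng = np.random.default_rng(0)
b = rng.random(10); b /= b.sum()
grad_sample = lambda x: x - b + 0.05 * rng.standard_normal(10)   # noisy gradient
lmo = lambda d: np.eye(10)[np.argmin(d)]                         # simplex vertex
x = stochastic_frank_wolfe(grad_sample, lmo, np.full(10, 0.1))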
Tropp, J., Yurtsever, A., Udell, M. and Cevher, V. (2019), "Streaming Low-Rank Matrix Approximation with an Application to Scientific Simulation" , SIAM Journal on Scientific Computing. Vol. 41 (4) , pp. A2430-A2463.
Abstract: This paper argues that randomized linear sketching is a natural tool for on-the-fly compression of data matrices that arise from large-scale scientific simulations and data collection. The technical contribution consists in a new algorithm for constructing an accurate low-rank approximation of a matrix from streaming data. This method is accompanied by an a priori analysis that allows the user to set algorithm parameters with confidence and an a posteriori error estimator that allows the user to validate the quality of the reconstructed matrix. In comparison to previous techniques, the new method achieves smaller relative approximation errors and is less sensitive to parameter choices. As concrete applications, the paper outlines how the algorithm can be used to compress a Navier--Stokes simulation and a sea surface temperature dataset.
BibTeX:
@article{Tropp2019-SketchingScientificSimulation,
  author = {Tropp, J.A. and Yurtsever, A. and Udell, M. and Cevher, V.},
  title = {Streaming Low-Rank Matrix Approximation with an Application to Scientific Simulation},
  journal = {SIAM Journal on Scientific Computing},
  year = {2019},
  volume = {41},
  number = {4},
  pages = {A2430--A2463},
  url = {https://arxiv.org/pdf/1902.08651.pdf}
}
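The basic sketch-and-reconstruct idea can be shown in a few lines of NumPy (a stripped-down sketch under assumed parameter choices; the paper's algorithm adds refinements, principled parameter rules, and the a posteriori error estimator):
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 200, 150, 10
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-r test matrix

# sketch sizes (illustrative; the paper gives principled choices)
k, l = 2 * r + 1, 4 * r + 3
Omega = rng.standard_normal((n, k))
Psi = rng.standard_normal((l, m))

# both sketches depend linearly on A, so streaming linear updates to A
# can be folded into Y and W on the fly without ever storing A
Y = A @ Omega        # range sketch
W = Psi @ A          # co-range sketch

# reconstruction: A is approximated by Q @ X, with Q a basis for the range sketch
Q, _ = np.linalg.qr(Y)
X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]
A_hat = Q @ X
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))   # small relative error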
Yurtsever, A., Fercoq, O. and Cevher, V. (2019), "A Conditional Gradient-Based Augmented Lagrangian Framework" , In Proc. 36th Int. Conf. Machine Learning (ICML).
Abstract: This paper considers a generic convex minimization template with affine constraints over a compact domain, which covers key semidefinite programming applications. The existing conditional gradient methods either do not apply to our template or are too slow in practice. To this end, we propose a new conditional gradient method, based on a unified treatment of smoothing and augmented Lagrangian frameworks. The proposed method maintains favorable properties of the classical conditional gradient method, such as cheap linear minimization oracle calls and sparse representation of the decision variable. We prove an $O(k^{-1/2})$ convergence rate for our method in the objective residual and the feasibility gap. This rate is essentially the same as that of the state-of-the-art CG-type methods for our problem template, but the proposed method is arguably superior in practice compared to existing methods in various applications.
BibTeX:
@conference{Yurtsever2019-Cgal,
  author = {Yurtsever, A. and Fercoq, O. and Cevher, V.},
  title = {A Conditional Gradient-Based Augmented Lagrangian Framework},
  booktitle = {Proc. 36th Int. Conf. Machine Learning (ICML)},
  year = {2019},
  url = {http://proceedings.mlr.press/v97/yurtsever19a/yurtsever19a.pdf}
}
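Schematically, a conditional-gradient augmented Lagrangian iteration of the kind described above takes a Frank-Wolfe step on the augmented Lagrangian and a dual ascent step on the constraint residual. The penalty and step-size schedules below are illustrative placeholders, not the rules proved in the paper:
import numpy as np

def cg_augmented_lagrangian(grad_f, lmo, A, b, x0, iters=200, beta0=1.0):
    # Schematic loop for min f(x) s.t. A x = b over a compact domain accessed
    # through a linear minimization oracle (LMO).
    x, y = x0.copy(), np.zeros_like(b)
    for k in range(1, iters + 1):
        beta = beta0 * np.sqrt(k + 1)                     # growing penalty (illustrative)
        v = grad_f(x) + A.T @ (y + beta * (A @ x - b))    # gradient of the augmented Lagrangian
        s = lmo(v)                                        # cheap LMO call
        gamma = 2.0 / (k + 1)
        x = (1 - gamma) * x + gamma * s                   # Frank-Wolfe step
        y = y + (A @ x - b) / np.sqrt(k + 1)              # damped dual update (illustrative)
    return x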
Yurtsever, A., Tropp, J., Fercoq, O., Udell, M. and Cevher, V. (2019), "Scalable Semidefinite Programming" . arXiv:1912.02949.
Abstract: Semidefinite programming (SDP) is a powerful framework from convex optimization that has striking potential for data science applications. This paper develops a provably correct algorithm for solving large SDP problems by economizing on both the storage and the arithmetic costs. Numerical evidence shows that the method is effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment. Running on a laptop, the algorithm can handle SDP instances where the matrix variable has over $10^{13}$ entries.
BibTeX:
@techreport{Yurtsever2019-ScalableSDP,
  author = {Yurtsever, A. and Tropp, J. and Fercoq, O. and Udell, M. and Cevher, V.},
  title = {Scalable Semidefinite Programming},
  school = {arXiv:1912.02949},
  year = {2019},
  url = {https://arxiv.org/pdf/1912.02949.pdf}
}
Yurtsever, A., Sra, S. and Cevher, V. (2019), "Conditional Gradient Method via Stochastic Path-Integrated Differential Estimators" , In Proc. 36th Int. Conf. Machine Learning (ICML).
Abstract: We propose a class of variance-reduced stochastic conditional gradient methods. By adopting the recent stochastic path-integrated differential estimator technique (SPIDER) of Fang et al. (2018) for the classical Frank-Wolfe (FW) method, we introduce SPIDER-FW for finite-sum minimization as well as the more general expectation minimization problems. SPIDER-FW enjoys superior complexity guarantees in the non-convex setting, while matching the best known FW variants in the convex case. We also extend our framework à la conditional gradient sliding (CGS) of Lan & Zhou (2016), and propose SPIDER-CGS.
BibTeX:
@conference{Yurtsever2019-SpiderFW,
  author = {Yurtsever, A. and Sra, S. and Cevher, V.},
  title = {Conditional Gradient Method via Stochastic Path-Integrated Differential Estimators},
  booktitle = {Proc. 36th Int. Conf. Machine Learning (ICML)},
  year = {2019},
  url = {http://proceedings.mlr.press/v97/yurtsever19b/yurtsever19b.pdf}
}
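The estimator at the core of SPIDER-FW can be sketched as follows for a finite-sum objective: restart with a full gradient every so often, correct the estimate with stochastic gradient differences in between, and feed the result to Frank-Wolfe steps. Epoch length, step sizes, and names are illustrative assumptions:
import numpy as np

def spider_fw(grad_full, grad_i, n, lmo, x0, epochs=10, period=20, seed=0):
    # grad_full(x): full gradient; grad_i(i, x): gradient of the i-th summand.
    rng = np.random.default_rng(seed)
    x, x_prev, d = x0.copy(), x0.copy(), np.zeros_like(x0)
    for t in range(epochs * period):
        if t % period == 0:
            d = grad_full(x)                              # periodic full-gradient restart
        else:
            i = int(rng.integers(n))
            d = d + grad_i(i, x) - grad_i(i, x_prev)      # path-integrated correction
        x_prev = x
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * lmo(d)              # Frank-Wolfe step
    return x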
Vu, B., Alacaoglu, A., Sahin, M., Yurtsever, A. and Cevher, V. (2018), "A First-Order Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints" , In Modern Trends in Nonconvex Optimization for Machine Learning (ICML Workshop).
Abstract: We consider a canonical nonlinear-constrained nonconvex problem with broad applications in machine learning, theoretical computer science, and signal processing. We propose a simple primal-dual splitting scheme that provably converges to a stationary point of the nonconvex problem. We achieve this desideratum via an adaptive and inexact augmented Lagrangian method. The new algorithm features a slow $O(1/)$ convergence rate, which is counteracted by its cheap per-iteration complexity. We provide numerical evidence on large-scale machine learning problems, modeled typically via semidefinite relaxations.
BibTeX:
@conference{Bang2018-NonconvexAugmentedLagrangian,
  author = {Vu, B.C. and Alacaoglu, A. and Sahin, M.F. and Yurtsever, A. and Cevher, V.},
  title = {A First-Order Augmented Lagrangian Framework for Nonconvex Optimization with Nonlinear Constraints},
  booktitle = {Modern Trends in Nonconvex Optimization for Machine Learning (ICML Workshop)},
  year = {2018},
  url = {https://drive.google.com/file/d/1r0pM67WKxqeh0ojJNaAJeq_9VKOEv4C7/preview}
}
Cevher, V., Vu, B. and Yurtsever, A. (2018), "Stochastic Forward Douglas-Rachford Splitting Method for Monotone Inclusions" , In Large-Scale and Distributed Optimization. Springer International Publishing.
Abstract: We propose a stochastic Forward-Douglas-Rachford Splitting framework for finding a zero point of the sum of three maximally monotone operators in real separable Hilbert space, where one of the operators is cocoercive. We characterize the rate of convergence in expectation in the case of strongly monotone operators. We provide guidance on step-size sequences that achieve this rate, even if the strong convexity parameter is unknown.
BibTeX:
@incollection{Cevher2018-StochasticFDR,
  author = {Cevher, V. and Vu, B.C. and Yurtsever, A.},
  editor = {Giselsson, P. and Rantzer, A.},
  title = {Stochastic Forward Douglas-Rachford Splitting Method for Monotone Inclusions},
  booktitle = {Large-Scale and Distributed Optimization},
  publisher = {Springer International Publishing},
  year = {2018}
}
Hsieh, Y.-P., Kao, Y.-C., Mahabadi, R.K., Yurtsever, A., Kyrillidis, A. and Cevher, V. (2018), "A Non-Euclidean Gradient Descent Framework for Non-Convex Matrix Factorization" , IEEE Transactions on Signal Processing. Vol. 66 (22) , pp. 5917-5926.
Abstract: We study convex optimization problems that feature low-rank matrix solutions. In such scenarios, non-convex methods offer significant advantages over convex methods due to their lower space complexity, as well as faster convergence in practice. Under mild assumptions, these methods feature global convergence guarantees.


In this paper, we extend the results on this matter by following a different path. We derive a non-Euclidean optimization framework in the non-convex setting that takes nonlinear gradient steps on the factors. Our framework makes it possible to further exploit the underlying problem structures, such as sparsity or low-rankness on the factorized domain, or better dimensional dependence of the smoothness parameters of the objectives. We prove that the non-Euclidean methods enjoy the same rigorous guarantees as their Euclidean counterparts, under appropriate assumptions. Numerical evidence with Fourier Ptychography and FastText applications, using real data, shows that our approach can enhance solution quality, as well as convergence speed, over the standard non-convex approaches.
BibTeX:
@article{Hsieh2018-NonEuclideanFactorization,
  author = {Hsieh, Y.-P. and Kao, Y.-C. and Mahabadi, R.K. and Yurtsever, A. and Kyrillidis, A. and Cevher, V.},
  title = {A Non-Euclidean Gradient Descent Framework for Non-Convex Matrix Factorization},
  journal = {IEEE Transactions on Signal Processing},
  year = {2018},
  volume = {66},
  number = {22},
  pages = {5917--5926},
  url = {https://infoscience.epfl.ch/record/231191/files/matrix_fact.pdf}
}
Levy, K., Yurtsever, A. and Cevher, V. (2018), "Online Adaptive Methods, Universality and Acceleration" , In Advances in Neural Information Processing Systems 31 (NeurIPS).
Abstract: We present a novel method for convex unconstrained optimization that, without any modifications, ensures: (i) accelerated convergence rate for smooth objectives, (ii) standard convergence rate in the general (non-smooth) setting, and (iii) standard convergence rate in the stochastic optimization setting. To the best of our knowledge, this is the first method that simultaneously applies to all of the above settings.

At the heart of our method is an adaptive learning rate rule that employs importance weights, in the spirit of adaptive online learning algorithms, combined with an update that linearly couples two sequences. An empirical examination of our method demonstrates its applicability to the above-mentioned scenarios and corroborates our theoretical findings.
BibTeX:
@conference{Levy2018-Accelegrad,
  author = {Levy, K.Y. and Yurtsever, A. and Cevher, V.},
  title = {Online Adaptive Methods, Universality and Acceleration},
  booktitle = {Advances in Neural Information Processing Systems 31 (NeurIPS)},
  year = {2018},
  url = {https://papers.nips.cc/paper/7885-online-adaptive-methods-universality-and-acceleration.pdf}
}
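An unconstrained sketch of the two ingredients named above, an adaptive learning rate built from importance-weighted gradient norms and linear coupling of two iterate sequences, might look as follows; the specific weights, coupling factors, and constants are assumptions for illustration, not the paper's exact algorithm:
import numpy as np

def adaptive_coupled_descent(grad, x0, iters=500, D=10.0, eps=1e-8):
    y, z = x0.copy(), x0.copy()
    acc = eps
    for t in range(iters):
        alpha = max(1.0, 0.25 * (t + 1))       # importance weight (assumed form)
        tau = 1.0 / alpha
        x = tau * z + (1 - tau) * y            # linear coupling of the two sequences
        g = grad(x)
        acc += (alpha * np.linalg.norm(g)) ** 2
        eta = D / np.sqrt(acc)                 # adaptive, importance-weighted learning rate
        z = z - alpha * eta * g                # "aggressive" sequence
        y = x - eta * g                        # "conservative" gradient-step sequence
    return y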
Yurtsever, A., Fercoq, O., Locatello, F. and Cevher, V. (2018), "A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming" , In Proc. 35th Int. Conf. Machine Learning (ICML).
Abstract: We propose a conditional gradient framework for a composite convex minimization template with broad applications. Our approach combines smoothing and homotopy techniques under the CGM framework, and provably achieves the optimal $O(1/k)$ convergence rate. We demonstrate that the same rate holds if the linear subproblems are solved approximately with additive or multiplicative error. In contrast with the relevant work, we are able to characterize the convergence when the non-smooth term is an indicator function. Specific applications of our framework include non-smooth minimization, semidefinite programming, and minimization with linear inclusion constraints over a compact domain. Numerical evidence demonstrates the benefits of our framework.
BibTeX:
@conference{Yurtsever2018-HomotopyCGM,
  author = {Yurtsever, A. and Fercoq, O. and Locatello, F. and Cevher, V.},
  title = {A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming},
  booktitle = {Proc. 35th Int. Conf. Machine Learning (ICML)},
  year = {2018},
  url = {http://proceedings.mlr.press/v80/yurtsever18a/yurtsever18a.pdf}
}
Tropp, J., Yurtsever, A., Udell, M. and Cevher, V. (2017), "Fixed-rank approximation of a positive-semidefinite matrix from streaming data" , In Advances in Neural Information Processing Systems 30 (NeurIPS).
Abstract: Several important applications, such as streaming PCA and semidefinite programming, involve a large-scale positive-semidefinite (psd) matrix that is presented as a sequence of linear updates. Because of storage limitations, it may only be possible to retain a sketch of the psd matrix. This paper develops a new algorithm for fixed-rank psd approximation from a sketch. The approach combines the Nyström approximation with a novel mechanism for rank truncation. Theoretical analysis establishes that the proposed method can achieve any prescribed relative error in the Schatten 1-norm and that it exploits the spectral decay of the input matrix. Computer experiments show that the proposed method dominates alternative techniques for fixed-rank psd matrix approximation across a wide range of examples.
BibTeX:
@conference{Tropp2017-NystromSketch,
  author = {Tropp, J.A. and Yurtsever, A. and Udell, M. and Cevher, V.},
  title = {Fixed-rank approximation of a positive-semidefinite matrix from streaming data},
  booktitle = {Advances in Neural Information Processing Systems 30 (NeurIPS)},
  year = {2017},
  url = {http://papers.nips.cc/paper/6722-fixed-rank-approximation-of-a-positive-semidefinite-matrix-from-streaming-data.pdf}
}
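A compact NumPy rendition of the Nyström-plus-truncation idea (a simplified reading of the procedure, with an eps-scaled stability shift and toy parameters of my own choosing, not the paper's reference code):
import numpy as np

def fixed_rank_psd_approx(Y, Omega, r):
    # Y = A @ Omega is the sketch of a psd matrix A; return a rank-r psd approximation.
    nu = np.sqrt(Y.shape[0]) * np.finfo(Y.dtype).eps * np.linalg.norm(Y)   # small shift
    Y_nu = Y + nu * Omega
    B = Omega.T @ Y_nu
    L = np.linalg.cholesky((B + B.T) / 2)          # symmetrize before factoring
    E = np.linalg.solve(L, Y_nu.T).T               # E @ E.T = Y_nu @ inv(B) @ Y_nu.T
    U, s, _ = np.linalg.svd(E, full_matrices=False)
    lam = np.maximum(s[:r] ** 2 - nu, 0.0)         # undo the shift, truncate to rank r
    return (U[:, :r] * lam) @ U[:, :r].T

# toy usage
rng = np.random.default_rng(2)
n, r, k = 100, 5, 12
G = rng.standard_normal((n, r))
A = G @ G.T                                        # rank-r psd test matrix
Omega = rng.standard_normal((n, k))
A_hat = fixed_rank_psd_approx(A @ Omega, Omega, r)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))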
Tropp, J., Yurtsever, A., Udell, M. and Cevher, V. (2017), "Practical sketching algorithms for low-rank matrix approximation" , SIAM Journal on Matrix Analysis and Applications. Vol. 38 (4) , pp. 1454-1485.
Abstract: This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
BibTeX:
@article{Tropp2017-PracticalSketch,
  author = {Tropp, J.A. and Yurtsever, A. and Udell, M. and Cevher, V.},
  title = {Practical sketching algorithms for low-rank matrix approximation},
  journal = {SIAM Journal on Matrix Analysis and Applications},
  year = {2017},
  volume = {38},
  number = {4},
  pages = {1454--1485},
  url = {https://arxiv.org/pdf/1609.00048.pdf}
}
Tropp, J., Yurtsever, A., Udell, M. and Cevher, V. (2017), "Randomized single-view algorithms for low-rank matrix approximation" . ACM Report 2017-01, Caltech.
Abstract: This paper develops a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by computer experiments.
BibTeX:
@techreport{Tropp2017-RandomizedSingleView,
  author = {Tropp, J.A. and Yurtsever, A. and Udell, M. and Cevher, V.},
  title = {Randomized single-view algorithms for low-rank matrix approximation},
  school = {ACM Report 2017-01, Caltech},
  year = {2017},
  url = {https://authors.library.caltech.edu/74347/1/ACM_TR_2017_01.pdf}
}
Yurtsever, A., Udell, M., Tropp, J. and Cevher, V. (2017), "Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage" , In Proc. 20th Int. Conf. Artificial Intelligence and Statistics (AISTATS).
Abstract: This paper concerns a fundamental class of convex matrix optimization problems. It presents the first algorithm that uses optimal storage and provably computes a low-rank approximation of a solution. In particular, when all solutions have low rank, the algorithm converges to a solution. This algorithm, SketchyCGM, modifies a standard convex optimization scheme, the conditional gradient method, to store only a small randomized sketch of the matrix variable. After the optimization terminates, the algorithm extracts a low-rank approximation of the solution from the sketch. In contrast to nonconvex heuristics, the guarantees for SketchyCGM do not rely on statistical models for the problem data. Numerical work demonstrates the benefits of SketchyCGM over heuristics.
BibTeX:
@conference{Yurtsever2016-SketchyDecision,
  author = {Yurtsever, A. and Udell, M. and Tropp, J.A. and Cevher, V.},
  title = {Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage},
  booktitle = {Proc. 20th Int. Conf. Artificial Intelligence and Statistics (AISTATS)},
  year = {2017},
  url = {https://arxiv.org/pdf/1702.06838.pdf}
}
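The storage trick can be seen in miniature: each conditional-gradient update of the matrix variable is rank-one, so it can be applied directly to two small linear sketches instead of to the full matrix (dimensions and names below are illustrative, not the paper's notation):
import numpy as np

rng = np.random.default_rng(5)
n, k, l = 500, 15, 31
Omega, Psi = rng.standard_normal((n, k)), rng.standard_normal((l, n))
Y, W = np.zeros((n, k)), np.zeros((l, n))     # sketches of the never-stored matrix variable X

# one conditional-gradient update X <- (1 - gamma) X + gamma * eta * v v^T,
# applied only to the sketches (v would come from the linear minimization oracle)
gamma, eta = 0.1, 1.0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
Y = (1 - gamma) * Y + gamma * eta * np.outer(v, v @ Omega)
W = (1 - gamma) * W + gamma * eta * np.outer(Psi @ v, v)
# after the iterations finish, a low-rank approximation of the solution is recovered from (Y, W)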
Odor, G., Li, Y.-H., Yurtsever, A., Hsieh, Y.-P., Tran-Dinh, Q., El Halabi, M. and Cevher, V. (2016), "Frank-Wolfe Works for Non-Lipschitz Continuous Gradient Objectives: Scalable Poisson Phase Retrieval" , In 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Abstract: We study a phase retrieval problem in the Poisson noise model. Motivated by the PhaseLift approach, we approximate the maximum-likelihood estimator by solving a convex program with a nuclear norm constraint. While the Frank-Wolfe algorithm, together with the Lanczos method, can efficiently deal with nuclear norm constraints, our objective function does not have a Lipschitz continuous gradient, and hence existing convergence guarantees for the Frank-Wolfe algorithm do not apply. In this paper, we show that the Frank-Wolfe algorithm works for the Poisson phase retrieval problem, and has a global convergence rate of $O(1/t)$, where t is the iteration counter. We provide a rigorous theoretical guarantee and illustrative numerical results.
BibTeX:
@conference{Odor2016-PoissonPhaseRetrieval,
  author = {Odor, G. and Li, Y.-H. and Yurtsever, A. and Hsieh, Y.-P. and Tran-Dinh, Q. and El~Halabi, M. and Cevher, V.},
  title = {Frank-Wolfe Works for Non-Lipschitz Continuous Gradient Objectives: Scalable Poisson Phase Retrieval},
  booktitle = {41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year = {2016},
  url = {https://arxiv.org/pdf/1602.00724.pdf}
}
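The key computational primitive here, the Frank-Wolfe linear minimization oracle over a nuclear-norm ball, needs only the leading singular pair of the gradient. Below it is written with a dense SVD for clarity, whereas at scale only this pair would be computed (e.g. with a Lanczos-type method, as mentioned in the abstract); the variable names and the single illustrative step are assumptions:
import numpy as np

def lmo_nuclear_ball(G, kappa):
    # argmin of <G, X> over {X : ||X||_* <= kappa} is -kappa * u1 v1^T,
    # built from the leading singular pair (u1, v1) of the gradient G.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return -kappa * np.outer(U[:, 0], Vt[0, :])

# one illustrative Frank-Wolfe step on a matrix variable
rng = np.random.default_rng(3)
X = np.zeros((30, 30))
G = rng.standard_normal((30, 30))      # stand-in for the gradient of the objective at X
gamma, kappa = 2.0 / (0 + 2), 5.0      # step size at iteration k = 0
X = (1 - gamma) * X + gamma * lmo_nuclear_ball(G, kappa)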
Yurtsever, A., Vu, B. and Cevher, V. (2016), "Stochastic Three-Composite Convex Minimization" , In Advances in Neural Information Processing Systems 29 (NeurIPS).
Abstract: We propose a stochastic optimization method for the minimization of the sum of three convex functions, one of which has a Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in the setting where it is computationally advantageous to process the smooth term in the decomposition with its stochastic gradient estimate and the other two functions separately with their proximal operators, such as doubly regularized empirical risk minimization problems. We prove the convergence characterization of the proposed algorithm in expectation under the standard assumptions for the stochastic gradient estimate of the smooth term. Our method operates in the primal space and can be considered as a stochastic extension of the three-operator splitting method. Numerical evidence supports the effectiveness of our method in real-world problems.
BibTeX:
@conference{Yurtsever2016-StochasticThreeOperator,
  author = {Yurtsever, A. and Vu, B.C. and Cevher, V.},
  title = {Stochastic Three-Composite Convex Minimization},
  booktitle = {Advances in Neural Information Processing Systems 29 (NeurIPS)},
  year = {2016},
  url = {http://papers.nips.cc/paper/6127-stochastic-three-composite-convex-minimization.pdf}
}
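A schematic of a stochastic three-operator-splitting step in the spirit described above: the smooth term enters through a stochastic gradient and the other two terms through their proximal operators (the step-size handling and the toy doubly regularized problem are my own simplifications, not the paper's exact scheme):
import numpy as np

def stochastic_three_composite(grad_f, prox_g, prox_h, x0, gamma=0.1, iters=2000):
    # min f(x) + g(x) + h(x): f smooth (stochastic gradient), g and h prox-friendly.
    u = x0.copy()
    for _ in range(iters):
        xg = prox_g(u, gamma)
        xh = prox_h(2 * xg - u - gamma * grad_f(xg), gamma)
        u = u + xh - xg
    return xg

# toy doubly regularized problem: 0.5 * E||x - b||^2 + lam * ||x||_1 + indicator of [0, 1]^n
rng = np.random.default_rng(4)
n, lam = 20, 0.05
b = rng.random(n)
grad_f = lambda x: x - b + 0.05 * rng.standard_normal(n)                   # stochastic gradient
prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - lam * g, 0.0)    # soft-thresholding
prox_h = lambda v, g: np.clip(v, 0.0, 1.0)                                 # projection onto the box
x = stochastic_three_composite(grad_f, prox_g, prox_h, np.zeros(n))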
Yurtsever, A., Hsieh, Y.-P. and Cevher, V. (2015), "Scalable Convex Methods for Phase Retrieval" , In 6th IEEE Int. Workshop Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP).
Abstract: This paper describes scalable convex optimization methods for phase retrieval. The main characteristics of these methods are the cheap per-iteration complexity and the low-memory footprint. With a variant of the original PhaseLift formulation, we first illustrate how to leverage the scalable Frank-Wolfe (FW) method (also known as the conditional gradient algorithm), which requires a tuning parameter. We demonstrate that we can estimate the tuning parameter of the FW algorithm directly from the measurements, with rigorous theoretical guarantees. We then illustrate numerically that recent advances in universal primal-dual convex optimization methods offer significant scalability improvements over the FW method, by recovering full HD resolution color images from their quadratic measurements.
BibTeX:
@conference{Yurtsever2015-ScalablePhaseRetrieval,
  author = {Yurtsever, A. and Hsieh, Y.-P. and Cevher, V.},
  title = {Scalable Convex Methods for Phase Retrieval},
  booktitle = {6th IEEE Int. Workshop Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)},
  year = {2015},
  url = {https://infoscience.epfl.ch/record/212914/files/ScalableConvexMethodsForPhaseRetrieval.pdf}
}
Yurtsever, A., Tran-Dinh, Q. and Cevher, V. (2015), "A Universal Primal-Dual Convex Optimization Framework" , In Advances in Neural Information Processing Systems 28 (NeurIPS).
Abstract: We propose a new primal-dual algorithmic framework for a prototypical constrained convex optimization template. The algorithmic instances of our framework are universal since they can automatically adapt to the unknown Hölder continuity degree and constant within the dual formulation. They are also guaranteed to have optimal convergence rates in the objective residual and the feasibility gap for each Hölder smoothness degree. In contrast to existing primal-dual algorithms, our framework avoids the proximity operator of the objective function. We instead leverage computationally cheaper, Fenchel-type operators, which are the main workhorses of the generalized conditional gradient (GCG)-type methods. In contrast to the GCG-type methods, our framework does not require the objective function to be differentiable, and can also process additional general linear inclusion constraints, while guaranteeing the convergence rate on the primal problem.
BibTeX:
@conference{Yurtsever2015-UniversalPrimalDual,
  author = {Yurtsever, A. and Tran-Dinh, Q. and Cevher, V.},
  title = {A Universal Primal-Dual Convex Optimization Framework},
  booktitle = {Advances in Neural Information Processing Systems 28 (NeurIPS)},
  year = {2015},
  url = {http://papers.nips.cc/paper/5826-a-universal-primal-dual-convex-optimization-framework.pdf}
}
Please find the list ordered by citation count on my Google Scholar page.