Implementation of Convex Optimization algorithms
Updated Jul 27, 2018 - Python
Convex Optimization Algorithms
R package for SGD inference
MCPy is a Python library for McCormick relaxations with subgradients, useful for prototyping and testing new convex relaxation and global optimization algorithms.
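As a minimal sketch of the idea behind McCormick relaxations (not MCPy's actual API — the function name and interface here are illustrative), the standard convex/concave envelope of a bilinear term x*y over a box can be written as:

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Standard McCormick envelope of the bilinear term x*y
    on the box [xl, xu] x [yl, yu].

    Returns (convex underestimator, concave overestimator) at (x, y);
    the true product x*y always lies between the two.
    """
    # The two supporting planes whose max is the convex underestimator.
    under = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    # The two supporting planes whose min is the concave overestimator.
    over = min(xu * y + x * yl - xu * yl,
               xl * y + x * yu - xl * yu)
    return under, over
```

For example, at (0.5, 0.5) on the unit box the envelope brackets the true value 0.25 between 0.0 and 0.5.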
A reproduction of Learning Efficient Convolutional Networks through Network Slimming
Dual-based procedure and subgradient method implementations
Minimax NMF
Nonlinear topology identification using deep learning. Sparsity (lasso) is enforced on the sensor connections; the resulting non-convex, non-differentiable objective is minimized with a subgradient descent algorithm.
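The core technique named here — subgradient descent on a lasso-regularized, non-differentiable objective — can be sketched on a simple least-squares loss (this is a generic illustration, not the repository's actual model):

```python
import numpy as np

def lasso_subgradient_descent(X, y, lam=0.1, step=0.01, iters=500):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by subgradient descent.

    The L1 term is non-differentiable at zero; sign(w) (taken as 0
    where w_i = 0, which np.sign does) is a valid subgradient of it.
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad_smooth = X.T @ (X @ w - y)   # gradient of the quadratic loss
        subgrad_l1 = lam * np.sign(w)     # subgradient of lam*||w||_1
        w -= step * (grad_smooth + subgrad_l1)
    return w
```

With a small, well-conditioned problem and a fixed step size the iterates settle near the lasso solution, which sits slightly shrunk toward zero relative to the least-squares fit.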
Implementation and brief comparison of first-order and proximal gradient methods, including their convergence rates
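A minimal sketch of the proximal gradient method for the lasso (ISTA), as a point of comparison with plain subgradient descent (generic illustration, not this repository's code):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam=0.1, step=0.01, iters=500):
    """Proximal gradient (ISTA) for 0.5*||Xw - y||^2 + lam*||w||_1.

    Each iteration takes a gradient step on the smooth part, then
    applies the prox of the L1 term. This achieves an O(1/k) rate,
    versus O(1/sqrt(k)) for plain subgradient descent.
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)
    return w
```

The prox step is what distinguishes this family: it handles the non-smooth term exactly instead of subdifferentiating it, which is why proximal methods typically converge faster on composite objectives.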
Subgradient methods for Multicommodity Network Design
Optimization comprises methods for finding global or local optima of discrete or continuous objectives, from evolutionary to swarm-based algorithms.
Solving a quadratic programming problem with a subgradient optimizer
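One common way to apply a (sub)gradient scheme to a QP is the projected subgradient method; the sketch below assumes a box-constrained quadratic (the setup is illustrative, not taken from the repository):

```python
import numpy as np

def projected_subgradient_qp(Q, c, lo, hi, step=0.05, iters=1000):
    """Minimize 0.5*x'Qx + c'x subject to lo <= x <= hi.

    The quadratic objective is smooth, so its only subgradient is the
    gradient Qx + c; each step is followed by projection (clipping)
    onto the box constraints.
    """
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        g = Q @ x + c                        # (sub)gradient of the quadratic
        x = np.clip(x - step * g, lo, hi)    # projected subgradient step
    return x
```

For instance, with Q = 2I and c = (-2, -2) the unconstrained minimizer is (1, 1); adding the box [0, 0.5]^2 pins the solution to the boundary at (0.5, 0.5).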
In this work, we consider learning sparse models in the large-scale setting, where the number of samples and the feature dimension can grow to millions or billions. Two immediate issues arise under such challenging scenarios: (i) computational cost; (ii) memory overhead.