A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Page not found. Your pixels are in another canvas. Read more
About me Read more
This is a page not in the main menu. Read more
Published:
In this post, we review the basic policy gradient algorithm for deep reinforcement learning and the actor-critic algorithm. Most of the content is derived from CS 285 at UC Berkeley. Read more
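As a quick reference (standard form, in my own notation; the post's derivation may differ), the vanilla policy gradient estimator is
$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T}\nabla_\theta \log \pi_\theta(a_{i,t}\mid s_{i,t})\,\hat{A}_{i,t},$$
and the actor-critic variant replaces the Monte Carlo return inside $\hat{A}_{i,t}$ with a bootstrapped estimate such as $r(s_{i,t},a_{i,t}) + \gamma \hat{V}(s_{i,t+1}) - \hat{V}(s_{i,t})$, where $\hat{V}$ is a learned critic.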
Published:
In this post, we will continue our discussion of mirror descent. We will present a variant of mirror descent: lazy mirror descent, also known as Nesterov's dual averaging. Read more
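For reference, a standard statement of lazy mirror descent / dual averaging (my notation; the post may use a slightly different form): with mirror map $\Phi$, subgradients $g_t \in \partial f(x_t)$, and $z_1 = 0$,
$$z_{t+1} = z_t - \eta\, g_t, \qquad x_{t+1} \in \operatorname*{arg\,min}_{x\in\mathcal{X}} \big\{\Phi(x) - \langle z_{t+1}, x\rangle\big\},$$
so each iterate is recomputed from the accumulated dual variable $z_{t+1}$ rather than from the previous primal point.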
Published:
In this post, we describe a new geometry-dependent algorithm that relies on a different set of assumptions. The algorithm is called conditional gradient descent, also known as Frank-Wolfe. Read more
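A minimal sketch of the update (standard Frank-Wolfe form over a compact convex set $\mathcal{X}$; the notation is mine):
$$s_t \in \operatorname*{arg\,min}_{s\in\mathcal{X}} \langle \nabla f(x_t), s\rangle, \qquad x_{t+1} = (1-\gamma_t)\,x_t + \gamma_t\, s_t, \qquad \gamma_t = \tfrac{2}{t+2}.$$
The method only calls a linear minimization oracle over $\mathcal{X}$, so it avoids projections entirely.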
Published:
In this post, we will introduce the Mirror Descent algorithm for solving convex optimization problems. Read more
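For reference (standard form; the post's notation may differ), the mirror descent update with mirror map $\Phi$ and Bregman divergence $D_\Phi$ is
$$x_{t+1} = \operatorname*{arg\,min}_{x\in\mathcal{X}} \big\{\eta\,\langle g_t, x\rangle + D_\Phi(x, x_t)\big\}, \qquad g_t \in \partial f(x_t),$$
which reduces to projected (sub)gradient descent when $\Phi(x) = \tfrac{1}{2}\|x\|_2^2$.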
Published:
In this post, we will continue our analysis of gradient descent. Unlike the previous post, we will not assume that the function is smooth; we will only assume that the function is convex and Lipschitz continuous. Read more
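The classical guarantee in this setting (the standard textbook bound, not necessarily the post's exact constants) is that projected subgradient descent on a convex, $L$-Lipschitz $f$ with step size $\eta = R/(L\sqrt{T})$ satisfies
$$f\!\Big(\frac{1}{T}\sum_{t=1}^{T} x_t\Big) - f(x^\star) \le \frac{R\,L}{\sqrt{T}}, \qquad \text{where } \|x_1 - x^\star\|_2 \le R.$$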
Published:
In this post, we will review the most basic and most intuitive optimization method: the gradient descent method. Read more
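For completeness, the update in question is simply (standard form, my notation)
$$x_{t+1} = x_t - \eta\,\nabla f(x_t),$$
where $\eta > 0$ is a step size chosen according to the smoothness or Lipschitz assumptions placed on $f$.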
Published:
Recently, I found an interesting course taught by Prof. Yin Tat Lee at UW. The course is called `Theory of Optimization and Continuous Algorithms', and the lecture notes are available on the course homepage (uw-cse535-winter19). As a great fan of optimization theory and algorithm design, I will follow this course and write a series of blog posts to record my study of it. Most of the material in this series will follow the lecture notes of the course and the optimization book Convex Optimization: Algorithms and Complexity by Sebastien Bubeck. Since this is the first post about this course, I will present the preliminaries of optimization theory and some basic knowledge about convex optimization, including some basic properties of convex functions. Read more
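Since the preliminaries include basic properties of convex functions, the defining inequality is worth recording here (standard definition): a function $f$ is convex if
$$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y) \qquad \text{for all } x, y \text{ and } \lambda \in [0,1],$$
and, when $f$ is differentiable, this is equivalent to the first-order condition $f(y) \ge f(x) + \langle \nabla f(x), y - x\rangle$.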
Short description of portfolio item number 1
Read more
Short description of portfolio item number 2
Read more
Published in International Frontiers of Algorithmics Workshop, 2019
Download here
Published in The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2019
Download here
Published in AAAI Conference on Artificial Intelligence, 2019
Download here
Published in AAAI Conference on Artificial Intelligence, 2019
Download here
Published in , 2019
Download here
Published in International Conference on Machine Learning, 2020
Download here
Published in The Conference on Uncertainty in Artificial Intelligence, 2021
Download here
Published in , 2021
Download here
Published in Conference on Neural Information Processing Systems, 2022
Download here
Published in Conference on Neural Information Processing Systems, 2022
Download here
Published in Conference on Neural Information Processing Systems, 2022
Download here
Published in International Conference on Machine Learning, 2023
Download here
Published in SIAM Journal on Mathematics of Data Science, 2023
Download here
Published in Conference on Empirical Methods in Natural Language Processing, 2023
Download here
Published in International Conference on Machine Learning, 2024
Download here
Published in Conference on Neural Information Processing Systems, 2024
Download here
Published in Conference on Neural Information Processing Systems, 2024
Download here
Published:
In this talk, I presented my work with Prof. Wei Chen @MSRA on our paper Stochastic One-Sided Full-Information Bandit. The paper can be downloaded here. Read more
Published:
In this talk, I presented my work with Prof. Wei Chen @MSRA on our paper Online Second Price Auction with Semi-bandit Feedback Under the Non-Stationary Setting. Because of the virus outbreak in China, I could not attend the AAAI main conference in person, so I gave my oral presentation remotely. The paper can be downloaded here. The slides are available here. Read more
Published:
In this talk, I presented my work with Zhize Li @KAUST and Peter Richtárik on our paper FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning. Read more
Published:
In this talk, I gave a 5-minute presentation of our paper BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. You can watch the talk online here. Read more
Published:
In this talk, I gave a 5-minute presentation of our paper Coresets for Vertical Federated Learning: Regularized Linear Regression and K-Means Clustering. You can watch the talk online here. Read more
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.