Wenlin Chen's Homepage
Bio: Wenlin Chen is currently a research scientist at Facebook, working on machine learning. He obtained his PhD from Washington University in St. Louis, where he worked with Prof. Yixin Chen (primary advisor) and Prof. Kilian Weinberger (now at Cornell) on machine learning and its applications. Before joining WashU, he received his bachelor's degree in computer science from the University of Science and Technology of China (USTC) in 2011. In 2015, he was an intern at Facebook AI Research (FAIR), doing research on deep learning with large output spaces, with applications to language modeling. In 2014, he was an intern at Google Research, where he contributed to the development of a large-scale recommender system. From 2010 to 2011, as a senior undergraduate, he also did research on machine learning and Internet advertising at Microsoft Research Asia (MSRA). Wenlin received the Best Student Paper Runner-Up Award at KDD 2014 and a Best Paper Award nomination at ICDM 2013.
Machine Learning, Data Mining, and Optimization, with particular interests in deep learning and large-scale machine learning.
W. Chen, Y. Chen, and K. Weinberger, Filtered Search for Submodular Maximization with Controllable Approximation Bounds, Proc. the 18th International Conference on Artificial Intelligence and Statistics (AISTATS-15), 2015.
W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen, Compressing Convolutional Neural Networks in Frequency Domain, arXiv preprint arXiv:1506.04449, 2015. (PDF)
Z. Cui, W. Chen, Y. He and Y. Chen, Optimal Action Extraction for Random Forests and Boosted Trees, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-15), 2015.
Q. Zhou, W. Chen, S. Song, J. Gardner, K. Weinberger, Y. Chen, A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing, Proc. AAAI Conference on Artificial Intelligence (AAAI-15), 2015.
W. Chen, Y. Chen, and D. Levine, A Unifying Learning Framework for Building Artificial Game-Playing Agents, Annals of Mathematics and Artificial Intelligence.
Y. Wang, W. Chen, K. Heard, M. Kollef, T. Bailey, Z. Cui, Y. He, C. Lu, and Y. Chen, Mortality Prediction in ICUs Using A Novel Time-Slicing Cox Regression Method, Proc. American Medical Informatics Annual Fall Symposium (AMIA-15), 2015. Distinguished Paper Award
Y. He, Y. Mao, W. Chen, and Y. Chen, Nonlinear Metric Learning with Kernel Density Estimation, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2015.
W. Chen, Y. Chen, and K. Weinberger, Fast Flux Discriminant for Large-Scale Sparse Nonlinear Classification, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-14), 2014. (PDF) Best Student Paper Runner-Up Award
M. Kusner, W. Chen, Q. Zhou, E. Xu, K. Weinberger and Y. Chen, Feature-Cost Sensitive Learning with Submodular Trees of Classifiers. Proc. AAAI Conference on Artificial Intelligence (AAAI-14), 2014. (PDF)
W. Chen, Y. Chen, Y. Mao, and B. Guo, Density-Based Logistic Regression, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-13), 2013. (PDF)
W. Chen, K. Weinberger, and Y. Chen, Maximum Variance Correction with Application to A* Search, Proc. International Conference on Machine Learning (ICML-13), 2013. (PDF)
W. Chen, Y. Chen, K. Weinberger, Q. Lu, and X. Chen, Goal-Oriented Euclidean Heuristics with Manifold Learning, Proc. AAAI Conference on Artificial Intelligence (AAAI-13), 2013. (PDF)
Q. Lu, W. Chen, Y. Chen, K. Weinberger, and X. Chen, Utilizing Landmarks in Euclidean Heuristics for Optimal Planning, Late-Breaking Track, Proc. AAAI Conference on Artificial Intelligence (AAAI-13), 2013. (PDF)
Y. He, W. Chen, Y. Mao, and Y. Chen, Kernel Density Metric Learning, Proc. IEEE International Conference on Data Mining (ICDM-13), 2013. (PDF) Nomination for Best Paper Award
Y. Mao, W. Chen, Y. Chen, C. Lu, M. Kollef, and T. Bailey, An Integrated Data Mining Approach to Real-time Clinical Monitoring and Deterioration Warning, Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-12), 2012. (PDF)
Program Committee: ICML 2016, KDD 2016, AISTATS 2016, NIPS 2015, KDD 2015, AAAI 2015, AISTATS 2015, NIPS 2014, IJCAI 2013
Journal Reviewer: Machine Learning Journal, JMLR, TPAMI, TKDE, Neurocomputing, TIST, JAIR, AMAI
Code for the paper "Compressing Neural Networks with the Hashing Trick" (ICML 2015)
Maximum Variance Correction (MVC) finds large-scale feasible solutions to Maximum Variance Unfolding (MVU) by post-processing embeddings from any manifold learning algorithm. It increases the scale of MVU embeddings by several orders of magnitude and parallelizes naturally.
FFD (Fast Flux Discriminant) is an interpretable machine learning method for large-scale nonlinear classification that supports mixed-type data and feature sparsity.
Email: wenlinchen AT wustl DOT edu
Office: Bryan Hall #422
Mailing Address: Campus Box 1045, One Brookings Drive, St. Louis, MO 63130