MCNN (Time-series classification by deep learning)
A novel end-to-end neural network model, Multi-Scale Convolutional Neural Networks (MCNN) incorporates feature extraction and classification in a single framework. Through a multi-branch layer and learnable convolutional layers, MCNN automatically extracts features at different scales and frequencies, yielding superior feature representations. Source code released.
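A minimal numpy sketch of the multi-scale idea: each branch down-samples the series to a different scale, convolves it with shared filters, and global-max-pools the result. The scales, filter sizes, and pooling here are illustrative choices, not the released implementation.

```python
import numpy as np

def downsample(x, k):
    """Average consecutive windows of length k (one multi-scale branch)."""
    n = len(x) // k
    return x[:n * k].reshape(n, k).mean(axis=1)

def conv1d_valid(x, w):
    """'Valid'-mode 1-D convolution (cross-correlation) of x with filter w."""
    m = len(w)
    return np.array([x[i:i + m] @ w for i in range(len(x) - m + 1)])

def multiscale_features(x, filters, scales=(1, 2, 4)):
    """Concatenate max-pooled conv responses from each down-sampled branch."""
    feats = []
    for s in scales:
        branch = downsample(x, s)
        for w in filters:
            feats.append(conv1d_valid(branch, w).max())  # global max pooling
    return np.array(feats)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
filters = [rng.standard_normal(5) for _ in range(3)]
feats = multiscale_features(series, filters)
print(feats.shape)  # one feature per (scale, filter) pair: (9,)
```

In the full model the filters are trained end-to-end with the classifier; here they are random to keep the sketch self-contained.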


CENN (Deep learning with categorical features)
Neural Networks with Categorical Feature Embedding for Classification and Visualization (CENN) is capable of directly handling both numerical and categorical features as well as providing visual insights on feature similarities. At its core, CENN learns a numerical embedding for each category of a categorical feature, based on which we can visualize all categories in the embedding space and extract knowledge of similarity between categories. Source code released.
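A toy sketch of the embedding idea, with a hypothetical categorical feature ("color") and a 2-D embedding table. In CENN the table is learned by backpropagation; here it is random initialization only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: one categorical feature with 3 categories,
# each mapped to a learnable 2-D embedding vector.
categories = {"red": 0, "green": 1, "blue": 2}
emb = rng.standard_normal((len(categories), 2))  # trained jointly in CENN

def encode(numeric, color):
    """Concatenate numeric features with the category's embedding vector."""
    return np.concatenate([numeric, emb[categories[color]]])

x = encode(np.array([0.5, -1.2]), "green")
print(x.shape)  # (4,) -- 2 numeric dims + 2 embedding dims

# After training, distances in the embedding space reveal category similarity:
d = np.linalg.norm(emb[categories["red"]] - emb[categories["blue"]])
```

Plotting the learned 2-D embeddings directly gives the visualization of category similarity the description mentions.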


FreshNets (Deep learning with compressing convolutional NN)
Frequency-Sensitive Hashed Nets (FreshNets) compresses large-scale convolutional neural networks. It exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Source code released.
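A simplified sketch of the frequency-sensitive idea: transform a convolutional filter with a 2-D DCT, then hash its coefficients into shared buckets, giving low frequencies (which carry more filter energy) more buckets than high frequencies. Bucket counts and the first-come initialization of shared values are illustrative assumptions; the released system trains the shared weights.

```python
import numpy as np
import zlib

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def compress_filter(f, low_buckets=8, high_buckets=2, seed=0):
    """Hash the DCT coefficients of a conv filter into shared buckets,
    with more buckets for low frequencies than high ones."""
    n = f.shape[0]
    D = dct_matrix(n)
    F = D @ f @ D.T                      # 2-D DCT of the filter
    shared, idx = {}, np.empty_like(F, dtype=object)
    for i in range(n):
        for j in range(n):
            band = "lo" if i + j < n else "hi"
            b = low_buckets if band == "lo" else high_buckets
            key = (band, zlib.crc32(f"{seed}:{i},{j}".encode()) % b)
            shared.setdefault(key, F[i, j])  # shared value (trained in FreshNets)
            idx[i, j] = key
    Fq = np.array([[shared[idx[i, j]] for j in range(n)] for i in range(n)])
    return D.T @ Fq @ D, len(shared)     # reconstructed filter, #stored weights

rng = np.random.default_rng(3)
f = rng.standard_normal((5, 5))
f_hat, n_stored = compress_filter(f)
print(f_hat.shape, n_stored)  # a 5x5 filter stored with at most 10 shared weights
```

A 5x5 filter (25 weights) is thus represented by at most 10 stored values, and the split of buckets across frequency bands is the knob that trades accuracy for compression.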


HashedNets (Deep learning with compression)
As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. HashedNets is a novel network architecture that reduces and limits the memory overhead of neural networks by compressing them with a hashing technique. Source code released.
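The hashing trick can be sketched in a few lines: a layer's virtual weight matrix is never stored; instead each (row, column) position is hashed to one of K real weights, so memory is O(K) regardless of layer size. The layer sizes, hash function, and ReLU below are illustrative assumptions.

```python
import numpy as np
import zlib

def h(i, j, buckets, seed):
    """Deterministic hash of a (row, col) index pair into one of `buckets` slots."""
    return zlib.crc32(f"{seed}:{i},{j}".encode()) % buckets

class HashedLayer:
    """A dense layer whose virtual n_out-by-n_in weight matrix shares k real weights."""
    def __init__(self, n_in, n_out, k, seed=0):
        self.w = np.random.default_rng(seed).standard_normal(k) * 0.1
        self.idx = np.array([[h(i, j, k, seed) for j in range(n_in)]
                             for i in range(n_out)])

    def forward(self, x):
        W = self.w[self.idx]          # expand shared weights into the virtual matrix
        return np.maximum(W @ x, 0)  # ReLU activation

layer = HashedLayer(n_in=64, n_out=32, k=50)  # 2048 virtual weights, only 50 stored
y = layer.forward(np.ones(64))
print(y.shape)  # (32,)
```

Because the hash is deterministic, the index map need not be stored at all: it can be recomputed on the fly, which is what makes the memory savings real.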


Fast Flux Discriminant (FFD) (Interpretable machine learning)
FFD is a novel and general approach to large-scale nonlinear classification. The main idea is to map the data to a new feature space based on kernel smoothing. A linear discriminative model is then learned to optimize the feature weights. It offers excellent scalability, accuracy, interpretability, and sparsity. Source code released.
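A one-dimensional sketch of the kernel-smoothing step: estimate P(y=1 | x) with a Gaussian kernel over the training points, producing a smoothed feature that a linear model can then weight. The bandwidth, data, and single-feature setup are simplifying assumptions, not FFD's full construction.

```python
import numpy as np

def kernel_feature(x_train, y_train, x_query, h=0.5):
    """Kernel-smoothed estimate of P(y=1 | x) along one feature dimension
    (a simplified stand-in for FFD's kernel-smoothing feature map)."""
    w = np.exp(-((x_query[:, None] - x_train[None, :]) ** 2) / (2 * h * h))
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(2)
x0 = rng.normal(-1, 1, 100)   # class-0 samples centred at -1
x1 = rng.normal(+1, 1, 100)   # class-1 samples centred at +1
xs = np.concatenate([x0, x1])
ys = np.concatenate([np.zeros(100), np.ones(100)])

p = kernel_feature(xs, ys, np.array([-2.0, 0.0, 2.0]))
print(np.round(p, 2))  # estimated P(y=1) rises with x
```

Because each smoothed feature is itself a probability estimate, the learned linear weights over these features remain directly interpretable, which is the source of FFD's interpretability claim.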


Density-based Logistic Regression (DLR)
DLR is a novel general classification model with the potential to achieve state-of-the-art classification accuracy at superior speed and scalability. Source code released.


Maximum Variance Correction (MVC)
Bridging machine learning and AI search, MVC is a large-scale manifold learning algorithm that learns an embedding of a state-space graph. The Euclidean distance in the embedded space provides memory- and time-efficient admissible heuristics for A* search. Its decomposition-based optimization approach gives unprecedented scalability to admissible manifold learning. Source code released.
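The heuristic idea can be shown on a toy graph: if every edge cost is at least the Euclidean gap between its endpoints' embeddings, the straight-line distance to the goal never overestimates and is therefore admissible for A*. The four-state graph, coordinates, and costs below are made up for illustration; MVC learns such embeddings for large state spaces.

```python
import heapq
import numpy as np

# States embedded in the plane; each edge cost >= the Euclidean gap between
# its endpoints, so straight-line distance to the goal is admissible.
emb = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
edges = {"A": [("B", 1.2), ("C", 1.5)], "B": [("D", 1.5)], "C": [("D", 1.0)]}

def h(s, goal):
    """Euclidean distance in the embedded space."""
    return float(np.linalg.norm(np.subtract(emb[s], emb[goal])))

def astar(start, goal):
    """Textbook A* returning the optimal path cost, guided by h."""
    frontier = [(h(start, goal), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if s == goal:
            return g
        for t, c in edges.get(s, []):
            if g + c < best.get(t, float("inf")):
                best[t] = g + c
                heapq.heappush(frontier, (g + c + h(t, goal), g + c, t))
    return None

print(astar("A", "D"))  # 2.5, via A -> C -> D
```

Storing only a low-dimensional coordinate per state, rather than precomputed distances, is what makes the heuristic memory-efficient at scale.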



A web platform for drafting metabolic models from the KEGG database, based on our nonlinear optimization research. Built in collaboration with Dr. Yinjie Tang's group. See below for a demo video.




A novel SAT encoding for classical planning, which uses the SAS+ formalism instead of STRIPS. Source code released.

A SAT-based planner that optimizes action-cost preferences instead of makespan. Source code released.

A STRIPS planner that searches directly in the domain transition graphs instead of the traditional state-space graph. Source code released.

A Python interface to IPOPT, an efficient nonlinear constrained optimization solver. The interface also enables direct manipulation of AMPL .nl files. Source code released.

A PDDL2.2 domain/problem generator for a challenging workflow planning problem arising from mobile computing. Source code released.

A general-purpose high-precision Conditional Random Field (CRF) optimization solver. Source code released.

An optimal STRIPS planner based on SAT solving and long-distance mutual exclusion. Source code released.

Long-distance mutual exclusion for STRIPS planning. Source code released.