SEAL (Link prediction by graph neural networks)
SEAL is a novel link prediction framework that automatically learns heuristics suited to the specific network. It extracts a local subgraph around each target link as its data representation, and learns a function mapping subgraph patterns to link existence with a graph neural network (GNN). SEAL can also learn from node embeddings and node attributes as information additional to the subgraph patterns. It is a general tool for link prediction and has shown superior performance to existing methods. Source code released.
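As a minimal sketch of the subgraph-extraction step (illustrative only, not SEAL's implementation: the function name, adjacency-dict format, and hop count are assumptions), one could collect the h-hop neighborhood around a candidate link before handing it to a GNN classifier:

```python
from collections import deque

def enclosing_subgraph(adj, u, v, hops=1):
    """Collect the h-hop enclosing subgraph around candidate link (u, v)."""
    seen = {u, v}
    frontier = deque([(u, 0), (v, 0)])   # BFS from both endpoints
    while frontier:
        n, d = frontier.popleft()
        if d == hops:
            continue
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                frontier.append((m, d + 1))
    # induced edges among the collected nodes (undirected, stored once)
    edges = {(a, b) for a in seen for b in adj.get(a, ()) if b in seen and a < b}
    return seen, edges
```

SEAL then feeds such extracted subgraphs (with node labels/attributes attached) to a GNN that scores link existence.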
MCNN (Time-series classification by deep learning)
MCNN is a novel end-to-end neural network model that incorporates feature extraction and classification in a single framework. Leveraging a multi-branch layer and learnable convolutional layers, MCNN automatically extracts features at different scales and frequencies, leading to superior feature representations. Source code released.
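A rough sketch of the multi-branch idea (the function name and parameters are hypothetical, not MCNN's code): each input series is duplicated into down-sampled copies (multi-scale) and moving-average smoothed copies (multi-frequency) before the convolutional layers:

```python
import numpy as np

def multi_scale_branches(x, scales=(1, 2, 4), smooth=(3, 5)):
    """Build MCNN-style input branches: identity, down-sampled, and smoothed copies."""
    branches = [x[::s] for s in scales]           # multi-scale: down-sampling
    for w in smooth:                              # multi-frequency: moving average
        kernel = np.ones(w) / w
        branches.append(np.convolve(x, kernel, mode="valid"))
    return branches
```

Each branch would then pass through its own learnable convolutional layer before the extracted features are merged for classification.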
CENN (Classification and visualization with categorical feature embedding)
Neural Networks with Categorical Feature Embedding for Classification and Visualization (CENN) directly handles both numerical and categorical features while providing visual insight into feature similarities. At its core, CENN learns a numerical embedding for each category of a categorical feature, based on which all categories can be visualized in the embedding space and knowledge of the similarities between categories extracted. Source code released.
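A minimal sketch of the embedding idea, assuming a toy "color" feature (the names, dimensions, and random initialization are illustrative; in CENN the embedding is learned jointly with the network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a "color" feature with 3 categories, embedded in 2-D.
categories = {"red": 0, "green": 1, "blue": 2}
embedding = rng.normal(size=(len(categories), 2))  # learned during training in CENN

def encode(numeric, color):
    """Concatenate numeric features with the category's learned embedding vector."""
    return np.concatenate([numeric, embedding[categories[color]]])

x = encode(np.array([0.5, 1.2]), "green")
# x holds the 2 numeric features followed by the 2-D embedding of "green"
```

After training, plotting the rows of `embedding` places similar categories near each other, which is the source of CENN's visual insight.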
FreshNets (Deep learning with compressing convolutional NN)
Frequency-Sensitive Hashed Nets (FreshNets) compresses large-scale convolutional neural networks. It exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption.
Source code released.
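FreshNets operates in a frequency domain; as a stand-alone illustration of why that exposes redundancy (this sketch uses a plain FFT truncation, not FreshNets' actual DCT-based hashing scheme), a smooth convolutional filter concentrates its energy in a few low-frequency coefficients:

```python
import numpy as np

filt = np.outer(np.hanning(8), np.hanning(8))   # a smooth 8x8 "filter"
F = np.fft.rfft2(filt)                          # frequency-domain representation
mask = np.zeros_like(F)
mask[:3, :3] = 1                                # keep low positive frequencies
mask[-2:, :3] = 1                               # ...and their negative counterparts
approx = np.fft.irfft2(F * mask, s=filt.shape)  # reconstruct from the kept few
err = np.linalg.norm(filt - approx) / np.linalg.norm(filt)
# err stays small even though most frequency coefficients were discarded
```

Because most of a smooth filter's information lives in few frequency coefficients, sharing or dropping the rest costs little accuracy, which is the redundancy FreshNets exploits.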
HashedNets (Compressing neural networks by hashing)
As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. HashedNets is a novel network architecture that reduces and limits the memory overhead of neural networks by compressing them with a hashing technique.
Source code released.
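A minimal sketch of the hashing idea (illustrative only: the helper name is hypothetical, and `md5` stands in for the fast hash an efficient implementation would use): a large "virtual" weight matrix is backed by a small array of real weights, with each entry's position hashed to a shared slot:

```python
import hashlib
import numpy as np

def hashed_weight(real_weights, i, j, layer=0):
    """Look up virtual weight W[i, j] by hashing its index into a small shared array."""
    h = hashlib.md5(f"{layer}:{i}:{j}".encode()).digest()
    idx = int.from_bytes(h[:8], "little") % len(real_weights)
    return real_weights[idx]

K = 16                                    # compressed budget: 16 real weights
real = np.random.default_rng(1).normal(size=K)
# A "virtual" 100x100 layer (10,000 entries) backed by only K parameters:
W = np.array([[hashed_weight(real, i, j) for j in range(100)] for i in range(100)])
```

Only the K real weights are stored and trained; the hash function reproduces any virtual entry on demand, so memory no longer grows with the virtual layer size.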
FFD (Large-scale nonlinear classification)
FFD is a novel and general approach to large-scale nonlinear classification. The main idea is to map the data to a new feature space based on kernel smoothing; a linear discriminative model is then learned to optimize the feature weights. FFD offers excellent scalability, accuracy, interpretability, and sparsity. Source code released.
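A rough sketch of a kernel-smoothing feature map (the function name, Gaussian kernel, landmark points, and bandwidth here are illustrative assumptions, not FFD's actual formulation):

```python
import numpy as np

def kernel_features(X, landmarks, bandwidth=1.0):
    """Map each sample to smoothed similarities against a set of landmark points."""
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

# Hypothetical usage: 2-D points mapped through 3 landmarks; a linear
# discriminative model would then be trained on the new features Phi.
X = np.array([[0.0, 0.0], [1.0, 1.0]])
L = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
Phi = kernel_features(X, L)   # shape (2, 3)
```

Because the final model is linear in the smoothed features, its weights remain interpretable and can be driven sparse, in line with the properties claimed above.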
MVC (Manifold learning for admissible A* heuristics)
Bridging machine learning and AI search, MVC is a large-scale manifold learning algorithm that learns an embedding of a state-space graph. The Euclidean distance in the embedded space provides memory- and time-efficient admissible heuristics for A* search. Its decomposition-based optimization approach gives unprecedented scalability to admissible manifold learning. Source code released.
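A minimal sketch of how an embedding yields an A* heuristic (the toy graph, the 2-D coordinates standing in for the learned embedding, and the function names are illustrative): h(n) is the Euclidean distance between the embeddings of n and the goal, which is admissible whenever embedded distances lower-bound true path costs:

```python
import heapq

def astar(adj, coords, start, goal):
    """A* search where h(n) is Euclidean distance in an embedded space."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    frontier = [(h(start), 0.0, start)]     # (f = g + h, g, node)
    best = {start: 0.0}                     # cheapest g found so far
    while frontier:
        f, g, n = heapq.heappop(frontier)
        if n == goal:
            return g
        for m, cost in adj[n]:
            ng = g + cost
            if ng < best.get(m, float("inf")):
                best[m] = ng
                heapq.heappush(frontier, (ng + h(m), ng, m))
    return float("inf")
```

Storing one low-dimensional coordinate per state makes the heuristic cheap in both memory and lookup time, which is the payoff of embedding the state-space graph.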