
Bridging Ordinary-Label Learning and Complementary-Label Learning

2020

Unlike ordinary supervised pattern recognition, in a newly proposed framework, namely complementary-label learning, each label specifies one class that the pattern does not belong to. In this paper, we propose the natural generalization of learning from an ordinary label and a complementary label, specifically focused on one-versus-all and pairwis...


Operationally meaningful representations of physical systems in neural networks

2020

To make progress in science, we often build abstract representations of physical systems that meaningfully encode information about the systems. The representations learnt by most current machine learning techniques reflect statistical structure present in the training data; however, these methods do not allow us to specify explicit and operation...


Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU

2020

Signatory is a library for calculating signature and logsignature transforms and related functionality. The focus is on making this functionality available for use in machine learning, and as such it includes features such as GPU support and backpropagation. To our knowledge it is the first publicly available GPU-capable library for these operatio...


Few-Shot Learning as Domain Adaptation: Algorithm and Analysis

2020

To recognize unseen classes with only a few samples, few-shot learning (FSL) uses prior knowledge learned from the seen classes. A major challenge for FSL is that the distribution of the unseen classes is different from that of those seen, resulting in poor generalization even when a model is meta-trained on the seen classes. This class-differe...


Explainable Deep Convolutional Candlestick Learner

2020

Candlesticks are graphical representations of price movements for a given period. Traders can discover the trend of an asset by looking at the candlestick patterns. Although deep convolutional neural networks have achieved great success in recognizing candlestick patterns, their reasoning hides inside a black box. Traders cannot ma...


Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes

2020

How should one combine noisy information from diverse sources to make an inference about an objective ground truth? This frequently recurring, normative question lies at the core of statistics, machine learning, policy-making, and everyday life. It has been called "combining forecasts", "meta-analysis", "ensembling", and the "MLE approach to voti...


The Two-Pass Softmax Algorithm

2020

The softmax (also called softargmax) function is widely used in machine learning models to normalize real-valued scores into a probability distribution. To avoid floating-point overflow, the softmax function is conventionally implemented in three passes: the first pass to compute the normalization constant, and two other passes to compute outputs...
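For context, the conventional overflow-safe three-pass scheme this abstract refers to can be sketched as follows (a minimal illustration, not the paper's two-pass algorithm; the function name is ours):

```python
import math

def softmax_three_pass(scores):
    # Pass 1: find the maximum score, subtracted later for numerical stability.
    m = max(scores)
    # Pass 2: compute the normalization constant sum(exp(x - m)).
    norm = sum(math.exp(x - m) for x in scores)
    # Pass 3: compute each normalized output exp(x - m) / norm.
    return [math.exp(x - m) / norm for x in scores]
```

Subtracting the maximum keeps every exponent non-positive, so `math.exp` never overflows even for large scores.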


f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation

2020

Deep neural networks have become a mainstream approach to interactive segmentation. As we show in our experiments, while for some images a trained network provides an accurate segmentation result with just a few clicks, for some unknown objects it cannot achieve a satisfactory result even with a large amount of user input. Recently proposed backpropag...


Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

2020

Bayesian optimization (BO) is a popular approach to optimize expensive-to-evaluate black-box functions. A significant challenge in BO is to scale to high-dimensional parameter spaces while retaining sample efficiency. A solution considered in existing literature is to embed the high-dimensional space in a lower-dimensional manifold, often via a r...


A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings

2020

This paper presents a new technique for creating monolingual and cross-lingual meta-embeddings. Our method integrates multiple word embeddings created from complementary techniques, textual sources, knowledge bases and languages. Existing word vectors are projected to a common semantic space using linear transformations and averaging. With our me...
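The projection-then-average step this abstract describes can be sketched roughly as follows (a hypothetical illustration with caller-supplied projection matrices; in the paper the linear transformations are learned):

```python
def meta_embed(vectors, projections):
    """Project each source word vector into a shared space, then average.

    vectors:     list of word vectors, one per source embedding space.
    projections: list of linear maps (row-major matrices), one per source,
                 each mapping its source space into the common space.
    """
    projected = []
    for v, P in zip(vectors, projections):
        # Apply the linear transformation P to vector v.
        projected.append([sum(P[i][j] * v[j] for j in range(len(v)))
                          for i in range(len(P))])
    # Average the projected vectors component-wise to form the meta-embedding.
    dim = len(projected[0])
    return [sum(p[i] for p in projected) / len(projected) for i in range(dim)]
```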