April 2, 2020

3265 words 16 mins read

Paper Group ANR 279

Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data. High–Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality. Incremental Learning In Online Scenario. DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift. EGGS: A Flexible Approach to Relational Modeling of Social Net …

Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data

Title Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data
Authors Kyoung-Woon On, Eun-Sol Kim, Yu-Jung Heo, Byoung-Tak Zhang
Abstract Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e. first-order Markovian dependency. However, most sequential data, such as videos, have complex dependency structures that imply variable-length semantic flows and their compositions, which are hard to capture with conventional methods. Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to frames of the video and their dependencies, respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph forms via a parameterized kernel with graph-cut and a message passing framework. We evaluate the proposed method on two different tasks for video understanding: video theme classification (YouTube-8M dataset) and video question answering (TVQA dataset). The experimental results show that our model efficiently learns the semantic compositional structure of video data. Furthermore, our model achieves the highest performance in comparison to other baseline methods.
Tasks Video Understanding
Published 2020-01-17
URL https://arxiv.org/abs/2001.07613v1
PDF https://arxiv.org/pdf/2001.07613v1.pdf
PWC https://paperswithcode.com/paper/cut-based-graph-learning-networks-to-discover
Repo
Framework
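
As a rough illustration of how video frames can be treated as graph nodes updated by message passing, here is a minimal sketch. The top-k cosine-similarity kernel below is only a stand-in for the CB-GLNs' learned parameterized kernel and graph-cut procedure, which the abstract does not spell out.

```python
import numpy as np

def message_passing(frame_feats, top_k=4):
    """One round of message passing over a frame-similarity graph.

    frame_feats: (T, D) array of per-frame features. Each frame is linked to
    its top_k most similar frames; this similarity kernel is only a stand-in
    for the learned parameterized kernel and graph-cut used by the CB-GLNs.
    """
    T = frame_feats.shape[0]
    normed = frame_feats / (np.linalg.norm(frame_feats, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T                       # cosine similarity between frames
    adj = np.zeros_like(sim)
    for i in range(T):
        nbrs = np.argsort(sim[i])[-(top_k + 1):]  # top-k neighbors plus self
        adj[i, nbrs] = sim[i, nbrs]
    adj = np.clip(adj, 0.0, None)                 # keep only nonnegative weights
    adj /= adj.sum(axis=1, keepdims=True)         # row-normalize
    return adj @ frame_feats                      # aggregated frame representations

frames = np.random.randn(30, 128)                 # 30 frames, 128-d features
print(message_passing(frames).shape)              # (30, 128)
```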

High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality

Title High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality
Authors Alexander N. Gorban, Valery A. Makarov, Ivan Y. Tyukin
Abstract High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the “curse of dimensionality” states: many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the “blessing of dimensionality”, has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental tradeoff between complexity and simplicity in high dimensional spaces. Here we present a brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.04959v1
PDF https://arxiv.org/pdf/2001.04959v1.pdf
PWC https://paperswithcode.com/paper/high-dimensional-brain-in-a-high-dimensional
Repo
Framework
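
One of the simple geometric regularities behind the blessing of dimensionality is easy to check numerically: independent random directions in high-dimensional space are nearly orthogonal with high probability. The short sketch below (not taken from the paper) demonstrates this concentration effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=2000):
    """Average |cosine| between pairs of independent random vectors in R^dim."""
    x = rng.standard_normal((n_pairs, dim))
    y = rng.standard_normal((n_pairs, dim))
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    return np.abs(cos).mean()

for d in (3, 30, 300, 3000):
    print(d, round(mean_abs_cosine(d), 4))
# The average |cosine| shrinks roughly like 1/sqrt(d): random directions become
# nearly orthogonal as the dimension grows, one of the simple geometric
# regularities reviewed under the "blessing of dimensionality".
```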

Incremental Learning In Online Scenario

Title Incremental Learning In Online Scenario
Authors Jiangpeng He, Runyu Mao, Zeman Shao, Fengqing Zhu
Abstract Modern deep learning approaches have achieved great success in many vision applications by training a model using all available task-specific data. However, there are two major obstacles making it challenging to implement for real-life applications: (1) Learning new classes makes the trained model quickly forget the knowledge of old classes, which is referred to as catastrophic forgetting. (2) As new observations of old classes come sequentially over time, the distribution may change in unforeseen ways, making performance degrade dramatically on future data, which is referred to as concept drift. Current state-of-the-art incremental learning methods require a long time to train the model whenever new classes are added, and none of them takes into consideration new observations of old classes. In this paper, we propose an incremental learning framework that can work in the challenging online learning scenario and handle both new class data and new observations of old classes. We address problem (1) in online mode by introducing a modified cross-distillation loss together with a two-step learning technique. Our method outperforms current state-of-the-art offline incremental learning methods on the CIFAR-100 and ImageNet-1000 (ILSVRC 2012) datasets under the same experimental protocol but in an online scenario. We also provide a simple yet effective method to mitigate problem (2) by updating the exemplar set using the features of each new observation of old classes, and we demonstrate a real-life application of online food image classification based on our complete framework using the Food-101 dataset.
Tasks Image Classification
Published 2020-03-30
URL https://arxiv.org/abs/2003.13191v1
PDF https://arxiv.org/pdf/2003.13191v1.pdf
PWC https://paperswithcode.com/paper/incremental-learning-in-online-scenario
Repo
Framework
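
The exact modified cross-distillation loss is not given in the abstract; the sketch below shows a generic cross-distillation-style objective in PyTorch, combining cross-entropy on the current classifier with a temperature-scaled distillation term toward a frozen old model, to make the idea concrete. All names and hyperparameters here are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_distillation_sketch(new_logits, old_logits, targets, n_old, T=2.0, alpha=0.5):
    """Generic cross-distillation-style loss (not the paper's exact formulation):
    cross-entropy over all current classes, plus a temperature-scaled KL term
    keeping the first n_old outputs close to the frozen old model."""
    ce = F.cross_entropy(new_logits, targets)
    old_probs = F.softmax(old_logits[:, :n_old] / T, dim=1)
    new_log_probs = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(new_log_probs, old_probs, reduction="batchmean") * T * T
    return alpha * kd + (1.0 - alpha) * ce

new_logits = torch.randn(8, 12)    # current model: 12 classes seen so far
old_logits = torch.randn(8, 10)    # frozen old model: the 10 previously seen classes
labels = torch.randint(0, 12, (8,))
print(cross_distillation_sketch(new_logits, old_logits, labels, n_old=10))
```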

DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift

Title DriftSurf: A Risk-competitive Learning Algorithm under Concept Drift
Authors Ashraf Tahmasbi, Ellango Jothimurugesan, Srikanta Tirthapura, Phillip B. Gibbons
Abstract When learning from streaming data, a change in the data distribution, also known as concept drift, can render a previously learned model inaccurate and require training a new model. We present an adaptive learning algorithm that extends previous drift-detection-based methods by incorporating drift detection into a broader stable-state/reactive-state process. The advantage of our approach is that we can use aggressive drift detection in the stable state to achieve a high detection rate, but mitigate the false positive rate of standalone drift detection via a reactive state that reacts quickly to true drifts while eliminating most false positives. The algorithm is generic in its base learner and can be applied across a variety of supervised learning problems. Our theoretical analysis shows that the risk of the algorithm is competitive with that of an algorithm with oracle knowledge of when (abrupt) drifts occur. Experiments on synthetic and real datasets with concept drifts confirm our theoretical analysis.
Tasks
Published 2020-03-13
URL https://arxiv.org/abs/2003.06508v1
PDF https://arxiv.org/pdf/2003.06508v1.pdf
PWC https://paperswithcode.com/paper/driftsurf-a-risk-competitive-learning
Repo
Framework
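
To make the stable-state/reactive-state idea concrete, here is a toy state-machine sketch: an aggressive drift alarm only opens a trial window, and the model is replaced only if a candidate trained after the alarm keeps winning during that window. This illustrates the general mechanism, not the DriftSurf algorithm or its risk guarantees.

```python
class StableReactiveDetector:
    """Toy stable/reactive wrapper around an aggressive drift test.

    In the stable state, any alarm moves us to a reactive state for `window`
    steps; we switch models only if the new (reactive) model keeps winning
    during that window, which filters out false alarms."""
    def __init__(self, window=10):
        self.window = window
        self.state = "stable"
        self.counter = 0
        self.reactive_wins = 0

    def update(self, drift_alarm, reactive_better):
        if self.state == "stable":
            if drift_alarm:
                self.state, self.counter, self.reactive_wins = "reactive", 0, 0
            return "keep_old_model"
        # Reactive state: track how often the candidate model wins.
        self.counter += 1
        self.reactive_wins += int(reactive_better)
        if self.counter >= self.window:
            self.state = "stable"
            if self.reactive_wins > self.window // 2:
                return "switch_to_new_model"   # alarm looks like a true drift
            return "keep_old_model"            # likely a false positive
        return "undecided"

detector = StableReactiveDetector(window=5)
for alarm, better in [(False, False), (True, False), (False, True),
                      (False, True), (False, True), (False, True), (False, False)]:
    print(detector.update(alarm, better))
```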

EGGS: A Flexible Approach to Relational Modeling of Social Network Spam

Title EGGS: A Flexible Approach to Relational Modeling of Social Network Spam
Authors Jonathan Brophy, Daniel Lowd
Abstract Social networking websites face a constant barrage of spam, unwanted messages that distract, annoy, and even defraud honest users. These messages tend to be very short, making them difficult to identify in isolation. Furthermore, spammers disguise their messages to look legitimate, tricking users into clicking on links and tricking spam filters into tolerating their malicious behavior. Thus, some spam filters examine relational structure in the domain, such as connections among users and messages, to better identify deceptive content. However, even when it is used, relational structure is often exploited in an incomplete or ad hoc manner. In this paper, we present Extended Group-based Graphical models for Spam (EGGS), a general-purpose method for classifying spam in online social networks. Rather than labeling each message independently, we group related messages together when they have the same author, the same content, or other domain-specific connections. To reason about related messages, we combine two popular methods: stacked graphical learning (SGL) and probabilistic graphical models (PGM). Both methods capture the idea that messages are more likely to be spammy when related messages are also spammy, but they do so in different ways; SGL uses sequential classifier predictions and PGMs use probabilistic inference. We apply our method to four different social network domains. EGGS is more accurate than an independent model in most experimental settings, especially when the correct label is uncertain. For the PGM implementation, we compare Markov logic networks to probabilistic soft logic and find that both work well with neither one dominating, and the combination of SGL and PGMs usually performs better than either on its own.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.04909v2
PDF https://arxiv.org/pdf/2001.04909v2.pdf
PWC https://paperswithcode.com/paper/eggs-a-flexible-approach-to-relational
Repo
Framework
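
A minimal sketch of the stacked-graphical-learning half of the approach: a base classifier produces spam probabilities, the mean probability of related messages (here, messages sharing a group id such as an author) is appended as a relational feature, and a second-stage classifier is trained on the augmented features. The PGM half (Markov logic networks / probabilistic soft logic) is not shown, and the data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def add_relational_feature(X, groups, base_probs):
    """Append, for each message, the mean spam probability of the *other*
    messages in its group (e.g., same author) -- a simple stand-in for the
    relational features used in stacked graphical learning."""
    rel = np.full(len(X), 0.5)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        for i in idx:
            others = idx[idx != i]
            if len(others):
                rel[i] = base_probs[others].mean()
    return np.column_stack([X, rel])

# Synthetic toy data: 2 content features, a group id per message, spam labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
groups = rng.integers(0, 20, size=200)
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)

base = LogisticRegression().fit(X, y)                 # independent model
probs = base.predict_proba(X)[:, 1]
X_rel = add_relational_feature(X, groups, probs)
stacked = LogisticRegression().fit(X_rel, y)          # second-stage model
print(base.score(X, y), stacked.score(X_rel, y))
```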

Exploring Benefits of Transfer Learning in Neural Machine Translation

Title Exploring Benefits of Transfer Learning in Neural Machine Translation
Authors Tom Kocmi
Abstract Neural machine translation is known to require large numbers of parallel training sentences, which generally prevent it from excelling on low-resource language pairs. This thesis explores the use of cross-lingual transfer learning on neural networks as a way of addressing the lack of resources. We propose several transfer learning approaches to reuse a model pretrained on a high-resource language pair. We pay particular attention to the simplicity of the techniques. We study two scenarios: (a) when we reuse the high-resource model without any prior modifications to its training process and (b) when we can prepare the first-stage high-resource model for transfer learning in advance. For the former scenario, we present a proof-of-concept method that reuses a model trained by other researchers. In the latter scenario, we present a method that reaches even larger improvements in translation performance. Apart from the proposed techniques, we focus on an in-depth analysis of transfer learning techniques and try to shed some light on where the transfer learning improvements come from. We show how our techniques address specific problems of low-resource languages and are suitable even for high-resource transfer learning. We evaluate the potential drawbacks and behavior by studying transfer learning in various situations, for example, under artificially damaged training corpora, or with various model parts fixed.
Tasks Cross-Lingual Transfer, Machine Translation, Transfer Learning
Published 2020-01-06
URL https://arxiv.org/abs/2001.01622v1
PDF https://arxiv.org/pdf/2001.01622v1.pdf
PWC https://paperswithcode.com/paper/exploring-benefits-of-transfer-learning-in
Repo
Framework
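
A generic warm-start sketch of the first scenario (reusing an already-trained high-resource model): copy every parameter whose name and shape match into the child model and continue training on the low-resource pair. The vocabulary handling that the thesis analyzes in depth is deliberately omitted, and the checkpoint format is assumed to be a plain PyTorch state dict.

```python
import torch

def warm_start(child_model: torch.nn.Module, parent_ckpt_path: str) -> torch.nn.Module:
    """Generic warm-start sketch: initialize a low-resource ("child") model from a
    high-resource ("parent") checkpoint and keep training. Assumes the checkpoint
    is a plain state dict; vocabulary and embedding handling are omitted."""
    parent_state = torch.load(parent_ckpt_path, map_location="cpu")
    own_state = child_model.state_dict()
    # Copy only tensors whose names and shapes match; everything else (e.g.
    # embeddings for a new vocabulary) keeps its fresh initialization.
    compatible = {k: v for k, v in parent_state.items()
                  if k in own_state and own_state[k].shape == v.shape}
    own_state.update(compatible)
    child_model.load_state_dict(own_state)
    return child_model
```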

VaPar Synth – A Variational Parametric Model for Audio Synthesis

Title VaPar Synth – A Variational Parametric Model for Audio Synthesis
Authors Krishna Subramani, Preeti Rao, Alexandre D’Hooge
Abstract With the advent of data-driven statistical modeling and abundant computing power, researchers are turning increasingly to deep learning for audio synthesis. These methods try to model audio signals directly in the time or frequency domain. In the interest of more flexible control over the generated sound, it could be more useful to work with a parametric representation of the signal that corresponds more directly to musical attributes such as pitch, dynamics and timbre. We present VaPar Synth - a Variational Parametric Synthesizer which utilizes a conditional variational autoencoder (CVAE) trained on a suitable parametric representation. We demonstrate our proposed model’s capabilities via the reconstruction and generation of instrumental tones with flexible control over their pitch.
Tasks
Published 2020-03-30
URL https://arxiv.org/abs/2004.00001v1
PDF https://arxiv.org/pdf/2004.00001v1.pdf
PWC https://paperswithcode.com/paper/vapar-synth-a-variational-parametric-model
Repo
Framework
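
A minimal conditional-VAE sketch in PyTorch, conditioning a small encoder/decoder on pitch, to illustrate the kind of model the abstract describes. The layer sizes, the parametric input representation, and the conditioning scheme are placeholders, not the actual VaPar Synth architecture.

```python
import torch
import torch.nn as nn

class ToyCVAE(nn.Module):
    """Minimal conditional VAE over a parametric frame (e.g., harmonic amplitudes),
    conditioned on pitch. Sizes and conditioning are placeholders, not VaPar Synth."""
    def __init__(self, n_params=60, n_cond=1, n_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_params + n_cond, 128), nn.ReLU())
        self.mu = nn.Linear(128, n_latent)
        self.logvar = nn.Linear(128, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent + n_cond, 128), nn.ReLU(),
                                 nn.Linear(128, n_params))

    def forward(self, x, cond):
        h = self.enc(torch.cat([x, cond], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(torch.cat([z, cond], dim=1))
        return recon, mu, logvar

model = ToyCVAE()
x = torch.randn(4, 60)                  # parametric representation of 4 frames
pitch = torch.full((4, 1), 0.44)        # conditioning variable (pitch, crudely scaled)
recon, mu, logvar = model(x, pitch)
print(recon.shape)                      # torch.Size([4, 60])
```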

AI outperformed every dermatologist: Improved dermoscopic melanoma diagnosis through customizing batch logic and loss function in an optimized Deep CNN architecture

Title AI outperformed every dermatologist: Improved dermoscopic melanoma diagnosis through customizing batch logic and loss function in an optimized Deep CNN architecture
Authors Cong Tri Pham, Mai Chi Luong, Dung Van Hoang, Antoine Doucet
Abstract Melanoma, one of the most dangerous types of skin cancer, results in a very high mortality rate. Early detection and resection are two key points for a successful cure. Recent research has used artificial intelligence to classify melanoma and nevus and to compare the assessment of these algorithms with that of dermatologists. However, an imbalance of sensitivity and specificity measures affected the performance of existing models. This study proposes a method using deep convolutional neural networks aiming to detect melanoma as a binary classification problem. It involves three key features, namely customized batch logic, a customized loss function and reformed fully connected layers. The training dataset is kept up to date and includes 17,302 images of melanoma and nevus; this is the largest such dataset by far. The model's performance is compared to that of 157 dermatologists from 12 university hospitals in Germany based on the MClass-D dataset. The model outperformed all 157 dermatologists and achieved state-of-the-art performance, with an AUC of 94.4%, sensitivity of 85.0% and specificity of 95.0% using a prediction threshold of 0.5 on the MClass-D dataset of 100 dermoscopic images. Moreover, a threshold of 0.40858 gave the most balanced measures compared to other studies and is promising for application to medical diagnosis, with sensitivity of 90.0% and specificity of 93.8%.
Tasks Medical Diagnosis
Published 2020-03-05
URL https://arxiv.org/abs/2003.02597v1
PDF https://arxiv.org/pdf/2003.02597v1.pdf
PWC https://paperswithcode.com/paper/ai-outperformed-every-dermatologist-improved
Repo
Framework
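
The abstract's comparison of thresholds 0.5 and 0.40858 comes down to trading sensitivity against specificity at a fixed score cutoff. The sketch below computes both measures at a chosen threshold on toy scores (not the MClass-D predictions).

```python
import numpy as np

def sensitivity_specificity(y_true, scores, threshold):
    """Sensitivity (melanoma recall) and specificity (nevus recall) of a binary
    classifier at a given decision threshold."""
    pred = (scores >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fn = np.sum((pred == 0) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels and scores for illustration only.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 100)
scores = np.clip(0.6 * y + rng.normal(0.2, 0.25, 100), 0.0, 1.0)
for t in (0.5, 0.40858):
    sens, spec = sensitivity_specificity(y, scores, t)
    print(f"threshold={t}: sensitivity={sens:.3f}, specificity={spec:.3f}")
```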

Reinforcement Learning for Molecular Design Guided by Quantum Mechanics

Title Reinforcement Learning for Molecular Design Guided by Quantum Mechanics
Authors Gregor N. C. Simm, Robert Pinsler, José Miguel Hernández-Lobato
Abstract Automating molecular design using deep reinforcement learning (RL) holds the promise of accelerating the discovery of new chemical compounds. A limitation of existing approaches is that they work with molecular graphs and thus ignore the location of atoms in space, which restricts them to 1) generating single organic molecules and 2) heuristic reward functions. To address this, we present a novel RL formulation for molecular design in Cartesian coordinates, thereby extending the class of molecules that can be built. Our reward function is directly based on fundamental physical properties such as the energy, which we approximate via fast quantum-chemical methods. To enable progress towards de-novo molecular design, we introduce MolGym, an RL environment comprising several challenging molecular design tasks along with baselines. In our experiments, we show that our agent can efficiently learn to solve these tasks from scratch by working in a translation and rotation invariant state-action space.
Tasks
Published 2020-02-18
URL https://arxiv.org/abs/2002.07717v1
PDF https://arxiv.org/pdf/2002.07717v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-for-molecular-design
Repo
Framework
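
A toy sketch of the reward idea: when the agent places an atom in Cartesian coordinates, reward it with the negative change in total energy. A pairwise Lennard-Jones potential stands in for the fast quantum-chemical calculations used in the paper, and this is not the MolGym environment API.

```python
import numpy as np

def lj_energy(positions, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones energy -- a cheap stand-in for the fast
    quantum-chemical energy calculations the paper uses for its reward."""
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = np.linalg.norm(positions[i] - positions[j])
            e += 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

def placement_reward(positions, new_pos):
    """Reward for an RL step that places one atom at Cartesian coordinates
    new_pos: the negative change in total energy caused by the placement."""
    before = lj_energy(positions)
    after = lj_energy(np.vstack([positions, new_pos]))
    return -(after - before)

atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
print(placement_reward(atoms, np.array([0.55, 0.95, 0.0])))  # > 0: energy decreased
```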

Ensemble Forecasting of Monthly Electricity Demand using Pattern Similarity-based Methods

Title Ensemble Forecasting of Monthly Electricity Demand using Pattern Similarity-based Methods
Authors Paweł Pełka, Grzegorz Dudek
Abstract This work presents ensemble forecasting of monthly electricity demand using pattern similarity-based forecasting methods (PSFMs). The PSFMs applied in this study include the $k$-nearest neighbor model, fuzzy neighborhood model, kernel regression model, and general regression neural network. An integral part of PSFMs is a time series representation using patterns of time series sequences. The pattern representation unifies the input and output data by filtering out the trend and equalizing the variance. Two types of ensembles are created: heterogeneous and homogeneous. The former consists of base models of different types, while the latter consists of a single type of base model. Five strategies are used for controlling the diversity of members in the homogeneous approach. The diversity is generated using different subsets of training data, different subsets of features, randomly disrupted input and output variables, and randomly disrupted model parameters. An empirical illustration applies the ensemble models, as well as the individual PSFMs for comparison, to monthly electricity demand forecasting for 35 European countries.
Tasks Time Series
Published 2020-03-29
URL https://arxiv.org/abs/2004.00426v1
PDF https://arxiv.org/pdf/2004.00426v1.pdf
PWC https://paperswithcode.com/paper/ensemble-forecasting-of-monthly-electricity
Repo
Framework
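
A much-simplified pattern-similarity forecast in the spirit of the PSFMs described above: yearly patterns are normalized to remove trend and variance, the k most similar historical input patterns are found, and the patterns that followed them are averaged and de-normalized. The actual models (fuzzy neighborhood, kernel regression, GRNN) and the ensemble strategies are not reproduced here.

```python
import numpy as np

def knn_pattern_forecast(monthly_series, horizon=12, k=3):
    """Toy pattern-similarity forecast: normalize yearly patterns to strip trend
    and variance, find the k historical input patterns most similar to the most
    recent one, average the patterns that followed them, and de-normalize."""
    series = np.asarray(monthly_series, dtype=float)
    patterns = series.reshape(-1, horizon)

    def normalize(p):
        return (p - p.mean()) / (p.std() + 1e-8)

    query = normalize(patterns[-1])
    inputs = np.array([normalize(p) for p in patterns[:-2]])    # historical inputs
    outputs = np.array([normalize(p) for p in patterns[1:-1]])  # what followed them
    nearest = np.argsort(np.linalg.norm(inputs - query, axis=1))[:k]
    forecast_pattern = outputs[nearest].mean(axis=0)
    # Restore the level and spread of the most recent year.
    return forecast_pattern * patterns[-1].std() + patterns[-1].mean()

months = np.arange(10 * 12)
demand = 100 + 10 * np.sin(2 * np.pi * months / 12) + 0.5 * months  # toy monthly demand
print(knn_pattern_forecast(demand).round(1))
```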

Identification of primary and collateral tracks in stuttered speech

Title Identification of primary and collateral tracks in stuttered speech
Authors Rachid Riad, Anne-Catherine Bachoud-Lévi, Frank Rudzicz, Emmanuel Dupoux
Abstract Disfluent speech has previously been addressed from two main perspectives: the clinical perspective, focusing on diagnosis, and the Natural Language Processing (NLP) perspective, aiming at modeling these events and detecting them for downstream tasks. In addition, previous works often used different metrics depending on whether the input features are text or speech, making it difficult to compare the different contributions. Here, we introduce a new evaluation framework for disfluency detection, inspired by the clinical and NLP perspectives together with the theory of performance from Clark (1996), which distinguishes between primary and collateral tracks. We introduce a novel forced-aligned disfluency dataset from a corpus of semi-directed interviews, and present baseline results directly comparing the performance of text-based features (word and span information) and speech-based features (acoustic-prosodic information). Finally, we introduce new audio features inspired by the word-based span features. We show experimentally that using these features outperforms the baselines for speech-based predictions on the present dataset.
Tasks
Published 2020-03-02
URL https://arxiv.org/abs/2003.01018v1
PDF https://arxiv.org/pdf/2003.01018v1.pdf
PWC https://paperswithcode.com/paper/identification-of-primary-and-collateral
Repo
Framework

SOIC: Semantic Online Initialization and Calibration for LiDAR and Camera

Title SOIC: Semantic Online Initialization and Calibration for LiDAR and Camera
Authors Weimin Wang, Shohei Nobuhara, Ryosuke Nakamura, Ken Sakurada
Abstract This paper presents a novel semantic-based online extrinsic calibration approach, SOIC (so, I see), for Light Detection and Ranging (LiDAR) and camera sensors. Previous online calibration methods usually need prior knowledge of rough initial values for optimization. The proposed approach removes this limitation by converting the initialization problem to a Perspective-n-Point (PnP) problem with the introduction of semantic centroids (SCs). The closed-form solution of this PnP problem has been well researched and can be found with existing PnP methods. Since the semantic centroid of the point cloud usually does not accurately match that of the corresponding image, the accuracy of the parameters is not improved even after a nonlinear refinement process. Thus, a cost function based on the constraint of the correspondence between semantic elements from both point cloud and image data is formulated. Subsequently, optimal extrinsic parameters are estimated by minimizing the cost function. We evaluate the proposed method with either ground-truth or predicted semantics on the KITTI dataset. Experimental results and comparisons with the baseline method verify the feasibility of the initialization strategy and the accuracy of the calibration approach. In addition, we release the source code at https://github.com/--/SOIC.
Tasks Calibration
Published 2020-03-09
URL https://arxiv.org/abs/2003.04260v1
PDF https://arxiv.org/pdf/2003.04260v1.pdf
PWC https://paperswithcode.com/paper/soic-semantic-online-initialization-and
Repo
Framework
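
A rough sketch of the initialization step using OpenCV: treat per-class semantic centroids as 3D-2D correspondences and solve a PnP problem for an initial LiDAR-to-camera pose. The correspondences below are fabricated by projecting made-up 3D centroids with a known pose, so the recovered pose should roughly match it; the semantic segmentation and the later cost-function refinement are not shown.

```python
import numpy as np
import cv2

# All numbers below are made up for illustration.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                       # camera intrinsics
centroids_3d = np.array([[2.0, 0.5, 8.0], [-1.5, 0.2, 10.0], [0.5, -0.8, 6.0],
                         [3.0, 1.0, 12.0], [-2.0, -1.0, 9.0], [1.0, 2.0, 7.0]])

# Fabricate 2D semantic centroids by projecting with a known "ground-truth" pose.
rvec_gt = np.array([[0.05], [0.10], [0.0]])
tvec_gt = np.array([[0.2], [-0.1], [0.3]])
centroids_2d, _ = cv2.projectPoints(centroids_3d, rvec_gt, tvec_gt, K, None)

# Initialization as a PnP problem on the centroid correspondences.
ok, rvec, tvec = cv2.solvePnP(centroids_3d, centroids_2d.reshape(-1, 2), K, None)
print(ok, rvec.ravel(), tvec.ravel())   # should roughly recover the pose above
```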

Persistent spectral based machine learning (PerSpect ML) for drug design

Title Persistent spectral based machine learning (PerSpect ML) for drug design
Authors Zhenyu Meng, Kelin Xia
Abstract In this paper, we propose persistent spectral based machine learning (PerSpect ML) models for drug design. Persistent spectral models, including the persistent spectral graph, persistent spectral simplicial complex and persistent spectral hypergraph, are proposed based on spectral graph theory, spectral simplicial complex theory and spectral hypergraph theory, respectively. Different from all previous spectral models, a filtration process, as proposed in persistent homology, is introduced to generate multiscale spectral models. More specifically, from the filtration process, a series of nested topological representations, i.e., graphs, simplicial complexes, and hypergraphs, can be systematically generated and their spectral information can be obtained. Persistent spectral variables are defined as functions of spectral variables over the filtration value. Mathematically, the persistent multiplicity (of zero eigenvalues) is exactly the persistent Betti number (or Betti curve). We consider 11 persistent spectral variables and use them as the features for machine learning models in protein-ligand binding affinity prediction. We systematically test our models on the three most commonly used databases, including PDBbind-2007, PDBbind-2013 and PDBbind-2016. Our results on all these databases are better than those of all existing models, as far as we know. This demonstrates the great power of our PerSpect ML in molecular data analysis and drug design.
Tasks
Published 2020-02-03
URL https://arxiv.org/abs/2002.00582v1
PDF https://arxiv.org/pdf/2002.00582v1.pdf
PWC https://paperswithcode.com/paper/persistent-spectral-based-machine-learning
Repo
Framework
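
A small sketch of the 0-dimensional piece of a persistent spectral model: connect points within a growing filtration radius, compute the graph Laplacian spectrum at each radius, and read off the multiplicity of the zero eigenvalue (the number of connected components, i.e. Betti-0). The higher-order simplicial-complex and hypergraph Laplacians used in the paper are not shown.

```python
import numpy as np

def laplacian_spectrum(points, radius):
    """Eigenvalues of the graph Laplacian of the graph connecting points within
    `radius` -- the 0-dimensional piece of a persistent spectral model. The
    multiplicity of the zero eigenvalue equals the number of connected
    components (Betti-0)."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = ((dists <= radius) & (dists > 0)).astype(float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
for r in (1.2, 2.0, 8.0):            # increasing filtration value
    spectrum = laplacian_spectrum(pts, r)
    zero_mult = int(np.sum(spectrum < 1e-9))
    print(f"radius={r}: zero multiplicity (Betti-0)={zero_mult}, "
          f"spectrum={np.round(spectrum, 2)}")
```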

Counterexamples to “The Blessings of Multiple Causes” by Wang and Blei

Title Counterexamples to “The Blessings of Multiple Causes” by Wang and Blei
Authors Elizabeth L. Ogburn, Ilya Shpitser, Eric J. Tchetgen Tchetgen
Abstract This brief note is meant to complement our previous comment on “The Blessings of Multiple Causes” by Wang and Blei (2019). We provide a more succinct and transparent explanation of the fact that the deconfounder does not control for multi-cause confounding. The argument given in Wang and Blei (2019) makes two mistakes: (1) attempting to infer independence conditional on one variable from independence conditional on a different, unrelated variable, and (2) attempting to infer joint independence from pairwise independence. We give two simple counterexamples to the deconfounder claim.
Tasks
Published 2020-01-17
URL https://arxiv.org/abs/2001.06555v1
PDF https://arxiv.org/pdf/2001.06555v1.pdf
PWC https://paperswithcode.com/paper/counterexamples-to-the-blessings-of-multiple
Repo
Framework
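
The second mistake, inferring joint independence from pairwise independence, can be illustrated with the classic XOR example (a generic illustration, not one of the paper's counterexamples): with X and Y fair coins and Z = X XOR Y, every pair is independent, yet the three variables are not jointly independent.

```python
import numpy as np

# Generic illustration (not the paper's counterexample): pairwise independence
# does not imply joint independence. X, Y are fair coins and Z = X XOR Y.
rng = np.random.default_rng(0)
n = 200_000
X = rng.integers(0, 2, n)
Y = rng.integers(0, 2, n)
Z = X ^ Y

# Each pair is independent: for Bernoulli variables, zero correlation
# is equivalent to independence.
print(np.corrcoef(X, Z)[0, 1], np.corrcoef(Y, Z)[0, 1])   # both approx 0

# But jointly they are dependent: P(X=1, Y=1, Z=1) = 0, while the product
# P(X=1) P(Y=1) P(Z=1) = 1/8.
print(np.mean((X == 1) & (Y == 1) & (Z == 1)))            # 0.0
```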

State Representation and Polyomino Placement for the Game Patchwork

Title State Representation and Polyomino Placement for the Game Patchwork
Authors Mikael Zayenz Lagerkvist
Abstract Modern board games are a rich source of entertainment for many people, but they also contain interesting and challenging structures for game-playing research and for implementing game-playing agents. This paper studies the game Patchwork, a two-player strategy game using polyomino tile drafting and placement. The core polyomino placement mechanic is implemented in a constraint model using regular constraints, extending and improving the model in (Lagerkvist, Pesant, 2008) with: explicit rotation handling; optional placements; and new constraints for resource usage. Crucial for implementing good game-playing agents is having good heuristics for guiding the search when faced with large branching factors. This paper divides placing tiles into two parts: a policy used for placing parts and an evaluation used to select among different placements. Policies are designed based on the classical packing literature as well as common standard constraint programming heuristics. For evaluation, global propagation guided regret is introduced, choosing placements based on not ruling out later placements. Extensive evaluations are performed, showing the importance of using a good evaluation and that the proposed global propagation guided regret is a very effective guide.
Tasks Board Games
Published 2020-01-13
URL https://arxiv.org/abs/2001.04233v1
PDF https://arxiv.org/pdf/2001.04233v1.pdf
PWC https://paperswithcode.com/paper/state-representation-and-polyomino-placement
Repo
Framework
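
A brute-force sketch of the core placement feasibility test: enumerate rotations of a polyomino and every board offset where it does not overlap already-covered cells. The paper encodes this mechanic with regular constraints in a constraint programming model; the sketch below only mirrors the feasibility check, not the CP model or the regret-based evaluation.

```python
import numpy as np

def rotations(tile):
    """All distinct 90-degree rotations of a polyomino given as a 0/1 array."""
    out, t = [], np.array(tile)
    for _ in range(4):
        if not any(np.array_equal(t, o) for o in out):
            out.append(t.copy())
        t = np.rot90(t)
    return out

def legal_placements(board, tile):
    """Enumerate (rotation index, row, col) placements that do not overlap
    already-covered cells -- a brute-force version of the feasibility test that
    the paper encodes with regular constraints."""
    placements = []
    for ri, t in enumerate(rotations(tile)):
        h, w = t.shape
        for r in range(board.shape[0] - h + 1):
            for c in range(board.shape[1] - w + 1):
                if np.all(board[r:r + h, c:c + w] + t <= 1):
                    placements.append((ri, r, c))
    return placements

board = np.zeros((9, 9), dtype=int)
board[0:3, 0:3] = 1                       # cells already covered by earlier tiles
l_tile = [[1, 0], [1, 0], [1, 1]]         # an L-shaped piece
print(len(legal_placements(board, l_tile)))
```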