January 29, 2020

3089 words 15 mins read

Paper Group ANR 653

A Machine-Learning Approach for Earthquake Magnitude Estimation. An Intelligent Monitoring System of Vehicles on Highway Traffic. Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices. Stochastic Linear Optimization with Adversarial Corruption. Unsupervised Singing Voice Conversion. Solving Inverse Problems b …

A Machine-Learning Approach for Earthquake Magnitude Estimation

Title A Machine-Learning Approach for Earthquake Magnitude Estimation
Authors S. Mostafa Mousavi, Gregory C. Beroza
Abstract In this study we develop a single-station deep-learning approach for fast and reliable estimation of earthquake magnitude directly from raw waveforms. We design a regressor composed of convolutional and recurrent neural networks that is insensitive to data normalization, so waveform amplitude information can be utilized during training. Our network can predict earthquake magnitudes with an average error close to zero and a standard deviation of ~0.2 based on single-station waveforms without instrument response correction. We test the network for both local and duration magnitude scales and show that station-based learning can be an effective approach for improving performance. The proposed approach has a variety of potential applications, from routine earthquake monitoring to early warning systems.
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.05975v1
PDF https://arxiv.org/pdf/1911.05975v1.pdf
PWC https://paperswithcode.com/paper/a-machine-learning-approach-for-earthquake
Repo
Framework
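
The abstract above describes a regressor that combines convolutional and recurrent layers and maps raw single-station waveforms to a magnitude. Below is a minimal PyTorch sketch of that kind of architecture; the layer widths, the three-component 100 Hz / 30 s input window, and every other detail are illustrative assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class MagnitudeRegressor(nn.Module):
    """CNN + RNN regressor mapping a raw 3-component waveform to a magnitude."""

    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        # Convolutional front end extracts local waveform features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Recurrent layer aggregates the feature sequence over time.
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar magnitude estimate

    def forward(self, x):              # x: (batch, 3, n_samples)
        feats = self.conv(x)           # (batch, 64, n_frames)
        feats = feats.transpose(1, 2)  # (batch, n_frames, 64)
        _, (h, _) = self.rnn(feats)    # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

# Example: a batch of 3-component, 100 Hz, 30 s waveforms (shapes are illustrative).
waveforms = torch.randn(8, 3, 3000)
magnitudes = MagnitudeRegressor()(waveforms)   # shape (8,)
```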

An Intelligent Monitoring System of Vehicles on Highway Traffic

Title An Intelligent Monitoring System of Vehicles on Highway Traffic
Authors Sulaiman Khan, Hazrat Ali, Zia Ullah, Mohammad Farhad Bulbul
Abstract Vehicle speed monitoring and management on highways is a critical problem in this modern age of growing technology and population. Poor management results in frequent traffic jams, traffic rule violations and fatal road accidents. Using traditional techniques such as RADAR, LIDAR and LASER to address this problem is time-consuming, expensive and tedious. This paper presents an efficient framework to produce a simple, cost-efficient and intelligent system for vehicle speed monitoring. The proposed method uses an HD (High Definition) camera mounted on the roadside, either on a pole or on a traffic signal, for recording video frames. On the basis of these frames, a vehicle can be tracked using the radius growing method, and its speed can be calculated from the vehicle mask and its displacement in consecutive frames. The method uses pattern recognition, digital image processing and mathematical techniques for vehicle detection, tracking and speed calculation. The validity of the proposed model is demonstrated by testing it on different highways.
Tasks
Published 2019-05-27
URL https://arxiv.org/abs/1905.10982v1
PDF https://arxiv.org/pdf/1905.10982v1.pdf
PWC https://paperswithcode.com/paper/an-intelligent-monitoring-system-of-vehicles
Repo
Framework
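
The core quantity in the abstract above is vehicle speed computed from the displacement of the tracked vehicle mask across consecutive frames. The following sketch shows that conversion from pixel displacement to speed; the frame rate, the metres-per-pixel calibration and the centroid-based tracking stand-in are assumptions, not details from the paper.

```python
def estimate_speed_kmh(centroids_px, fps=30.0, metres_per_pixel=0.05):
    """Estimate vehicle speed from the tracked mask centroid in consecutive frames.

    centroids_px: list of (x, y) pixel positions of the vehicle mask, one per frame.
    fps and metres_per_pixel are illustrative calibration values, not from the paper.
    """
    if len(centroids_px) < 2:
        return 0.0
    total_px = 0.0
    for (x0, y0), (x1, y1) in zip(centroids_px, centroids_px[1:]):
        total_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # pixel displacement -> metres travelled -> metres per second -> km/h
    metres = total_px * metres_per_pixel
    seconds = (len(centroids_px) - 1) / fps
    return (metres / seconds) * 3.6

# Example: a vehicle moving ~20 px per frame at 30 fps with 5 cm/px is ~108 km/h.
print(estimate_speed_kmh([(100 + 20 * i, 240) for i in range(10)]))
```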

Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices

Title Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Authors Jaouad Mourtada
Abstract The first part of this paper is devoted to the decision-theoretic analysis of random-design linear prediction. It is known that, under boundedness constraints on the response (and thus on regression coefficients), the minimax excess risk scales, up to constants, as $\sigma^2 d / n$ in dimension $d$ with $n$ samples and noise $\sigma^2$. Here, we study the expected excess risk with respect to the full linear class. We show that the ordinary least squares estimator is exactly minimax optimal in the well-specified case for every distribution of covariates. Further, we express the minimax risk in terms of the distribution of \emph{statistical leverage scores} of individual samples. We deduce a precise minimax lower bound of $\sigma^2d/(n-d+1)$ for general covariate distribution, which nearly matches the risk for Gaussian design. We then obtain nonasymptotic upper bounds on the minimax risk for covariates that satisfy a “small ball”-type regularity condition, which scale as $(1+o(1))\sigma^2d/n$ as $d=o(n)$, both in the well-specified and misspecified cases. Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices around $0$. We establish a lower bound on this lower tail, valid for any distribution in dimension $d \geq 2$, together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayesian technique for controlling empirical processes, and extends an analysis of Oliveira devoted to a different part of the lower tail. Equivalently, our upper bound shows that the operator norm of the inverse sample covariance matrix has bounded $L^q$ norm up to $q \asymp n$, and our lower bound implies that this exponent is unimprovable. Finally, we show that the regularity condition naturally holds for independent coordinates.
Tasks
Published 2019-12-23
URL https://arxiv.org/abs/1912.10754v2
PDF https://arxiv.org/pdf/1912.10754v2.pdf
PWC https://paperswithcode.com/paper/exact-minimax-risk-for-linear-least-squares
Repo
Framework
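
For readers skimming the entry, the two headline rates quoted in the abstract can be written in display form as follows; the notation (R for the risk and f* for the best linear predictor) is ours, not the paper's.

```latex
% Minimax lower bound for random-design least squares, as stated in the abstract:
\[
  \inf_{\hat f}\ \sup_{P}\ \mathbb{E}\big[ R(\hat f) - R(f^{\ast}) \big]
  \;\ge\; \frac{\sigma^2 d}{n - d + 1},
\]
% and, under the ``small ball'' regularity condition, the minimax risk scales as
\[
  (1 + o(1))\,\frac{\sigma^2 d}{n} \qquad \text{for } d = o(n).
\]
```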

Stochastic Linear Optimization with Adversarial Corruption

Title Stochastic Linear Optimization with Adversarial Corruption
Authors Yingkai Li, Edmund Y. Lou, Liren Shan
Abstract We extend the model of stochastic bandits with adversarial corruption (Lykouris et al., 2018) to the stochastic linear optimization problem (Dani et al., 2008). Our algorithm is agnostic to the amount of corruption chosen by the adaptive adversary. The regret of the algorithm only increases linearly in the amount of corruption. Our algorithm uses the Löwner-John ellipsoid for exploration and divides the time horizon into epochs of exponentially increasing size to limit the influence of corruption.
Tasks
Published 2019-09-04
URL https://arxiv.org/abs/1909.02109v1
PDF https://arxiv.org/pdf/1909.02109v1.pdf
PWC https://paperswithcode.com/paper/stochastic-linear-optimization-with
Repo
Framework
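
One concrete ingredient mentioned in the abstract above is splitting the time horizon into epochs of exponentially increasing length. A minimal sketch of such a schedule, with an assumed doubling base and truncation of the final epoch:

```python
def epoch_lengths(horizon_T, base=2):
    """Split a horizon of T rounds into epochs of exponentially increasing length.

    This mirrors the epoch scheduling idea described in the abstract; the doubling
    base and the truncation of the last epoch are our assumptions.
    """
    lengths, t, k = [], 0, 0
    while t < horizon_T:
        length = min(base ** k, horizon_T - t)
        lengths.append(length)
        t += length
        k += 1
    return lengths

print(epoch_lengths(100))  # [1, 2, 4, 8, 16, 32, 37]
```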

Unsupervised Singing Voice Conversion

Title Unsupervised Singing Voice Conversion
Authors Eliya Nachmani, Lior Wolf
Abstract We present a deep learning method for singing voice conversion. The proposed network is not conditioned on the text or on the notes, and it directly converts the audio of one singer to the voice of another. Training is performed without any form of supervision: no lyrics or any kind of phonetic features, no notes, and no matching samples between singers. The proposed network employs a single CNN encoder for all singers, a single WaveNet decoder, and a classifier that enforces the latent representation to be singer-agnostic. Each singer is represented by one embedding vector, which the decoder is conditioned on. In order to deal with relatively small datasets, we propose a new data augmentation scheme, as well as new training losses and protocols that are based on backtranslation. Our evaluation presents evidence that the conversion produces natural singing voices that are highly recognizable as the target singer.
Tasks Data Augmentation, Voice Conversion
Published 2019-04-13
URL https://arxiv.org/abs/1904.06590v3
PDF https://arxiv.org/pdf/1904.06590v3.pdf
PWC https://paperswithcode.com/paper/unsupervised-singing-voice-conversion
Repo
Framework
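
The abstract above describes a shared encoder, a decoder conditioned on a per-singer embedding, and a classifier that pushes the latent code to be singer-agnostic. The sketch below illustrates that training signal with tiny MLP stand-ins; the module sizes and loss weighting are assumptions, and the paper's actual encoder and decoder are a CNN and a WaveNet.

```python
import torch
import torch.nn as nn

n_singers, latent_dim = 4, 64

# Toy stand-ins for the shared encoder, singer embedding, decoder and classifier.
encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, latent_dim))
singer_embedding = nn.Embedding(n_singers, 32)
decoder = nn.Sequential(nn.Linear(latent_dim + 32, 256), nn.ReLU(), nn.Linear(256, 1024))
singer_classifier = nn.Linear(latent_dim, n_singers)

x = torch.randn(8, 1024)                       # stand-in audio frames
singer_id = torch.randint(0, n_singers, (8,))  # which singer each sample comes from

z = encoder(x)
x_hat = decoder(torch.cat([z, singer_embedding(singer_id)], dim=-1))

recon_loss = nn.functional.mse_loss(x_hat, x)
# Classifier loss: predict the singer from z (used to update the classifier).
clf_loss = nn.functional.cross_entropy(singer_classifier(z), singer_id)
# Encoder/decoder are trained with the *negative* classifier loss in alternation
# with the classifier, so that z becomes singer-agnostic.
encoder_loss = recon_loss - clf_loss
```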

Solving Inverse Problems by Joint Posterior Maximization with a VAE Prior

Title Solving Inverse Problems by Joint Posterior Maximization with a VAE Prior
Authors Mario González, Andrés Almansa, Mauricio Delbracio, Pablo Musé, Pauline Tan
Abstract In this paper we address the problem of solving ill-posed inverse problems in imaging where the prior is a neural generative model. Specifically we consider the decoupled case where the prior is trained once and can be reused for many different log-concave degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP that naturally leads to alternating optimization algorithms and to the use of a stochastic encoder to accelerate computations. The resulting technique is called JPMAP because it performs Joint Posterior Maximization using an Autoencoding Prior. We show theoretical and experimental evidence that the proposed objective function is quite close to bi-convex. Indeed, it satisfies a weak bi-convexity property which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches which more often get stuck in spurious local optima.
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.06379v1
PDF https://arxiv.org/pdf/1911.06379v1.pdf
PWC https://paperswithcode.com/paper/solving-inverse-problems-by-joint-posterior
Repo
Framework
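
The abstract above describes computing a joint (space-latent) MAP by alternating optimization. The sketch below illustrates that alternation on a toy problem where a random linear map stands in for the generative decoder, so both steps reduce to linear solves; the objective weights, problem sizes and the linear decoder are assumptions, and the paper's actual method uses a trained VAE decoder (and encoder) instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 30, 50, 8                 # measurements, image size, latent size
A = rng.normal(size=(n, d))         # degradation operator (e.g. blur/subsampling)
G = rng.normal(size=(d, k))         # linear stand-in for the generative decoder D(z)
x_true = G @ rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)

sigma2, gamma2 = 1e-4, 1e-2         # noise variance and decoder-coupling weight (assumed)
x, z = np.zeros(d), np.zeros(k)
for _ in range(50):
    # x-step: quadratic in x given D(z), solved exactly.
    H = A.T @ A / sigma2 + np.eye(d) / gamma2
    x = np.linalg.solve(H, A.T @ y / sigma2 + (G @ z) / gamma2)
    # z-step: with a linear decoder this is a ridge problem; with a neural decoder
    # it would be a non-convex step (gradient descent, or encoder-based initialization).
    M = G.T @ G / gamma2 + np.eye(k)
    z = np.linalg.solve(M, G.T @ x / gamma2)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```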

Intensity-Based Feature Selection for Near Real-Time Damage Diagnosis of Building Structures

Title Intensity-Based Feature Selection for Near Real-Time Damage Diagnosis of Building Structures
Authors Seyed Omid Sajedi, Xiao Liang
Abstract Near real-time damage diagnosis of building structures after extreme events (e.g., earthquakes) is of great importance in structural health monitoring. Unlike conventional methods that are usually time-consuming and require human expertise, pattern recognition algorithms have the potential to interpret sensor recordings as soon as this information is available. This paper proposes a robust framework to build a damage prediction model for building structures. Support vector machines are used to predict the existence as well as the probable location of the damage. The model is designed to consider probabilistic approaches in determining hazard intensity given the existing attenuation models in performance-based earthquake engineering. Performance of the model regarding accurate and safe predictions is enhanced using Bayesian optimization. The proposed framework is evaluated on a reinforced concrete moment frame. Targeting a selected large earthquake scenario, 6,240 nonlinear time history analyses are performed using OpenSees. Simulation results are engineered to extract low-dimensional intensity-based features that can be used as damage indicators. For the given case study, the proposed model achieves a promising accuracy of 83.1% to identify damage location, demonstrating the great potential of model capabilities.
Tasks Feature Selection
Published 2019-10-23
URL https://arxiv.org/abs/1910.11240v1
PDF https://arxiv.org/pdf/1910.11240v1.pdf
PWC https://paperswithcode.com/paper/intensity-based-feature-selection-for-near
Repo
Framework
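
The prediction model in the abstract above is a support vector machine trained on low-dimensional intensity-based features to classify damage existence and location. A minimal scikit-learn sketch of that setup is shown below; the synthetic features, the number of damage classes and the RBF-kernel hyperparameters are assumptions (the paper additionally tunes its model with Bayesian optimization).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: intensity-based features per simulated ground motion
# and a damage-location label (0 = no damage, 1..3 = assumed damage regions).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = rng.integers(0, 4, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```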

Measuring the Data Efficiency of Deep Learning Methods

Title Measuring the Data Efficiency of Deep Learning Methods
Authors Hlynur Davíð Hlynsson, Alberto N. Escalante-B., Laurenz Wiskott
Abstract In this paper, we propose a new experimental protocol and use it to benchmark the data efficiency — performance as a function of training set size — of two deep learning algorithms, convolutional neural networks (CNNs) and hierarchical information-preserving graph-based slow feature analysis (HiGSFA), for tasks in classification and transfer learning scenarios. The algorithms are trained on different-sized subsets of the MNIST and Omniglot data sets. HiGSFA outperforms standard CNN networks when the models are trained on 50 and 200 samples per class for MNIST classification. In other cases, the CNNs perform better. The results suggest that there are cases where greedy, locally optimal bottom-up learning is equally or more powerful than global gradient-based learning.
Tasks Omniglot, Transfer Learning
Published 2019-07-03
URL https://arxiv.org/abs/1907.02549v1
PDF https://arxiv.org/pdf/1907.02549v1.pdf
PWC https://paperswithcode.com/paper/measuring-the-data-efficiency-of-deep
Repo
Framework
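
The experimental protocol in the abstract above measures performance as a function of training-set size. The sketch below shows the shape of such a protocol; the synthetic data and the logistic-regression stand-in (rather than CNNs or HiGSFA on MNIST/Omniglot) are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train the same model on nested subsets of increasing size and report test accuracy.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for n in [50, 200, 1000, len(X_tr)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    print(f"n_train={n:5d}  test accuracy={model.score(X_te, y_te):.3f}")
```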

Relative Net Utility and the Saint Petersburg Paradox

Title Relative Net Utility and the Saint Petersburg Paradox
Authors Daniel Muller, Tshilidzi Marwala
Abstract The famous St Petersburg Paradox shows that the theory of expected value does not capture the real-world economics of decision-making problems. Over the years, many economic theories were developed to resolve the paradox and explain the subjective utility of the expected outcomes and risk aversion. In this paper, we use the concept of net utility to resolve the St Petersburg paradox. The principle of absolute rather than net utility does not work because it is a first-order approximation of some unknown utility function. Because the net utility concept is able to explain both behavioral economics and the St Petersburg paradox, it is deemed a universal approach to handling utility. Finally, this paper explores how an artificial intelligence (AI) agent will make choices and observes that if the agent uses the nominal utility approach it will see an infinite reward, while if it uses the net utility approach it will see the limited reward that human beings see.
Tasks Decision Making
Published 2019-10-21
URL https://arxiv.org/abs/1910.09544v2
PDF https://arxiv.org/pdf/1910.09544v2.pdf
PWC https://paperswithcode.com/paper/relative-net-utility-and-the-saint-petersburg
Repo
Framework
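
For context on the paradox itself: the St Petersburg game pays 2^k with probability 2^-k, so every term of the expectation contributes 1 and the expected value diverges. The snippet below only illustrates that divergence for a capped game; it does not implement the paper's net-utility formulation.

```python
def truncated_expected_value(max_rounds):
    """Expected payout if the game is capped at max_rounds coin flips."""
    return sum((2 ** -k) * (2 ** k) for k in range(1, max_rounds + 1))

for rounds in (10, 20, 40):
    print(rounds, truncated_expected_value(rounds))  # 10, 20, 40: grows without bound
```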

QKD: Quantization-aware Knowledge Distillation

Title QKD: Quantization-aware Knowledge Distillation
Authors Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, Nojun Kwak
Abstract Quantization and Knowledge Distillation (KD) methods are widely used to reduce the memory and power consumption of deep neural networks (DNNs), especially for resource-constrained edge devices. Although their combination is quite promising to meet these requirements, it may not work as desired. This is mainly because the regularization effect of KD further diminishes the already reduced representation power of a quantized model. To address this shortcoming, we propose Quantization-aware Knowledge Distillation (QKD), wherein quantization and KD are carefully coordinated in three phases. First, the Self-studying (SS) phase fine-tunes a quantized low-precision student network without KD to obtain a good initialization. Second, the Co-studying (CS) phase trains a teacher to make it more quantization-friendly and powerful than a fixed teacher. Finally, the Tutoring (TU) phase transfers knowledge from the trained teacher to the student. We extensively evaluate our method on the ImageNet and CIFAR-10/100 datasets and show an ablation study on networks with both standard and depthwise-separable convolutions. The proposed QKD outperforms existing state-of-the-art methods (e.g., 1.3% improvement on ResNet-18 with W4A4, 2.6% on MobileNetV2 with W4A4). Additionally, QKD can recover the full-precision accuracy at as low as W3A3 quantization on ResNet and W6A6 quantization on MobileNetV2.
Tasks Quantization
Published 2019-11-28
URL https://arxiv.org/abs/1911.12491v1
PDF https://arxiv.org/pdf/1911.12491v1.pdf
PWC https://paperswithcode.com/paper/qkd-quantization-aware-knowledge-distillation
Repo
Framework
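
The abstract above lays out three coordinated phases: self-studying, co-studying and tutoring. The sketch below is a toy, runnable rendering of that schedule; the tiny MLPs, the naive weight-rounding "quantization", the loss weights and the per-phase iteration counts are all assumptions rather than the paper's procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(model, n_bits=4):
    """Crude stand-in for low-precision weights: round to a uniform grid."""
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / (2 ** (n_bits - 1) - 1) + 1e-8
            p.copy_(torch.round(p / scale) * scale)

def kd_loss(student_logits, teacher_logits, T=4.0):
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
opt_t = torch.optim.Adam(teacher.parameters())
opt_s = torch.optim.Adam(student.parameters())
x, y = torch.randn(256, 16), torch.randint(0, 10, (256,))   # toy data

for phase in ("self-studying", "co-studying", "tutoring"):
    for _ in range(20):
        s_logits = student(x)
        loss_s = F.cross_entropy(s_logits, y)
        if phase != "self-studying":
            t_logits = teacher(x)
            loss_s = loss_s + kd_loss(s_logits, t_logits.detach())
            if phase == "co-studying":   # teacher is also updated in this phase
                loss_t = F.cross_entropy(t_logits, y) + kd_loss(t_logits, s_logits.detach())
                opt_t.zero_grad(); loss_t.backward(); opt_t.step()
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()
        fake_quantize(student)           # keep the student at low precision
    print(phase, "student loss:", float(loss_s))
```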

The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation

Title The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation
Authors Magdalena Biesialska, Lluis Guardia, Marta R. Costa-jussà
Abstract Although the problem of similar language translation has been an area of research interest for many years, it is still far from being solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech-to-Polish and 2nd place for Spanish-to-Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.
Tasks Machine Translation
Published 2019-08-03
URL https://arxiv.org/abs/1908.01192v1
PDF https://arxiv.org/pdf/1908.01192v1.pdf
PWC https://paperswithcode.com/paper/the-talp-upc-system-for-the-wmt-similar
Repo
Framework

Fast computation of loudness using a deep neural network

Title Fast computation of loudness using a deep neural network
Authors Josef Schlittenlacher, Richard E. Turner, Brian C. J. Moore
Abstract The present paper introduces a deep neural network (DNN) for predicting the instantaneous loudness of a sound from its time waveform. The DNN was trained using the output of a more complex model, called the Cambridge loudness model. While a modern PC can perform a few hundred loudness computations per second using the Cambridge loudness model, it can perform more than 100,000 per second using the DNN, allowing real-time calculation of loudness. The root-mean-square deviation between the predictions of instantaneous loudness level using the two models was less than 0.5 phon for unseen types of sound. We think that the general approach of simulating a complex perceptual model by a much faster DNN can be applied to other perceptual models to make them run in real time.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.10399v1
PDF https://arxiv.org/pdf/1905.10399v1.pdf
PWC https://paperswithcode.com/paper/fast-computation-of-loudness-using-a-deep
Repo
Framework
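
The approach in the abstract above trains a fast network on the outputs of a slower reference model. The sketch below shows that teacher-generated-target setup; the synthetic stand-in for the Cambridge loudness model, the input features and the small MLP are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def slow_reference_model(features):
    """Stand-in for the expensive loudness model (outputs phon-like values)."""
    return 40 + 20 * np.log10(1e-3 + np.abs(features).mean(axis=1))

X = rng.normal(size=(5000, 32))   # stand-in per-frame input features
y = slow_reference_model(X)       # teacher outputs used as training targets

student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
student.fit(X[:4000], y[:4000])
err = np.abs(student.predict(X[4000:]) - y[4000:])
print("mean absolute error on held-out frames:", err.mean())
```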

Assembly of randomly placed parts realized by using only one robot arm with a general parallel-jaw gripper

Title Assembly of randomly placed parts realized by using only one robot arm with a general parallel-jaw gripper
Authors Jie Zhao, Xin Jiang, Xiaoman Wang, Shengfan Wang, Yunhui Liu
Abstract In industry assembly lines, parts feeding machines are widely employed as the prologue to the whole procedure. They sort parts randomly placed in bins into a state with a specified pose. With the help of parts feeding machines, the subsequent assembly processes performed by a robot arm can always start from the same condition. It is therefore desirable to integrate the function of the parts feeding machine and the robotic assembly into one robot arm. This scheme can provide great flexibility and can also contribute to reducing cost. The difficulty of this scheme lies in the fact that in the parts feeding phase, the pose of the part after grasping may not be proper for the subsequent assembly; sometimes a stable grasp cannot even be guaranteed. In this paper, we propose a method to integrate parts feeding and assembly within one robot arm. This proposal utilizes a specially designed gripper tip mounted on the jaws of a two-fingered gripper. With the modified gripper, in-hand manipulation of the grasped object is realized, which ensures control of the orientation and offset position of the grasped object. The proposal is verified by a simulated assembly in which a robot arm completes the assembly process, including picking parts from a bin and a subsequent peg-in-hole assembly.
Tasks
Published 2019-09-19
URL https://arxiv.org/abs/1909.08862v1
PDF https://arxiv.org/pdf/1909.08862v1.pdf
PWC https://paperswithcode.com/paper/assembly-of-randomly-placed-parts-realized-by
Repo
Framework

Automatic Text Summarization of Legal Cases: A Hybrid Approach

Title Automatic Text Summarization of Legal Cases: A Hybrid Approach
Authors Varun Pandya
Abstract Manual summarization of large bodies of text involves a lot of human effort and time, especially in the legal domain. Lawyers spend a lot of time preparing legal briefs of their clients' case files. Automatic text summarization is a constantly evolving field of Natural Language Processing (NLP), which is a subdiscipline of artificial intelligence. In this paper, a hybrid method for automatic text summarization of legal cases using the k-means clustering technique and a tf-idf (term frequency-inverse document frequency) word vectorizer is proposed. The summary generated by the proposed method is compared, using ROUGE evaluation metrics, with the case summary prepared by the lawyer for appeal in court. Further, suggestions for improving the proposed method are also presented.
Tasks Text Summarization
Published 2019-08-24
URL https://arxiv.org/abs/1908.09119v1
PDF https://arxiv.org/pdf/1908.09119v1.pdf
PWC https://paperswithcode.com/paper/automatic-text-summarization-of-legal-cases-a
Repo
Framework
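
The abstract above combines a tf-idf sentence representation with k-means clustering to pick summary sentences. A minimal scikit-learn sketch of that extractive step is shown below; the sentence splitting, the choice of k and the ordering of selected sentences are assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def summarize(sentences, k=3):
    """Extractive summary: tf-idf sentence vectors, k-means, nearest sentence per centroid."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(tfidf)
    # Index of the sentence closest to each cluster centre.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, tfidf)
    return [sentences[i] for i in sorted(set(closest))]

doc = [
    "The appellant filed a petition challenging the lower court order.",
    "The respondent argued that the petition was barred by limitation.",
    "Evidence on record shows the notice was served within the prescribed period.",
    "The court held that the limitation objection could not be sustained.",
    "Costs were awarded to the appellant.",
]
print("\n".join(summarize(doc, k=3)))
```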

Computational Psychology to Embed Emotions into News or Advertisements to Increase Reader Affinity

Title Computational Psychology to Embed Emotions into News or Advertisements to Increase Reader Affinity
Authors Hrishikesh Kulkarni, P Joshi, P Chande
Abstract Readers decide whether to read a complete news item based on many factors. The emotional impact of the news title on the reader is one of the most important of these factors. Cognitive ergonomics tries to strike a balance between work, product and environment on one hand and human needs and capabilities on the other. The need to integrate emotions into news as well as advertisements cannot be denied. The idea is that news or advertisements should be able to engage the reader on an emotional and behavioral level. To achieve this objective, there is a need to learn about reader behavior and use computational psychology when presenting as well as writing news or advertisements. This paper, based on machine learning, tries to map the behavior of the reader to news/advertisements and also provides inputs on affective value for building personalized news or advertisement presentations. The affective value of the news is determined and news artifacts are mapped to the reader. The algorithm suggests the most suitable news for readers while understanding the emotional traits required for personalization. This work can be used to improve reader satisfaction by embedding emotions in the reading material and prioritizing news presentations. It can be used to map an individual's range of reading material, personalize and rank programs, and target advertisements with reference to individuals.
Tasks
Published 2019-10-14
URL https://arxiv.org/abs/1910.06859v1
PDF https://arxiv.org/pdf/1910.06859v1.pdf
PWC https://paperswithcode.com/paper/computational-psychology-to-embed-emotions
Repo
Framework