July 27, 2019

2919 words 14 mins read

Paper Group ANR 698

Towards Real-Time Search Planning in Subsea Environments. Tunnel Effects in Cognition: A new Mechanism for Scientific Discovery and Education. Spatial-Temporal Recurrent Neural Network for Emotion Recognition. Dex: Incremental Learning for Complex Environments in Deep Reinforcement Learning. Temporal Pattern Discovery for Accurate Sepsis Diagnosis …

Towards Real-Time Search Planning in Subsea Environments

Title Towards Real-Time Search Planning in Subsea Environments
Authors James McMahon, Harun Yetkin, Artur Wolek, Zachary Waters, Dan Stilwell
Abstract We address the challenge of computing search paths in real-time for subsea applications where the goal is to locate an unknown number of targets on the seafloor. Our approach maximizes a formal definition of search effectiveness given finite search effort. We account for false positive measurements and variation in the performance of the search sensor due to geographic variation of the seafloor. We compare near-optimal search paths that can be computed in real-time with optimal search paths for which real-time computation is infeasible. We show how sonar data acquired for locating targets at a specific location can also be used to characterize the performance of the search sonar at that location. Our approach is illustrated with numerical experiments where search paths are planned using sonar data previously acquired from Boston Harbor.
Tasks
Published 2017-07-24
URL http://arxiv.org/abs/1707.07662v1
PDF http://arxiv.org/pdf/1707.07662v1.pdf
PWC https://paperswithcode.com/paper/towards-real-time-search-planning-in-subsea
Repo
Framework
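A minimal sketch of the search-effort allocation idea in the entry above: spread a finite budget of sensor "looks" over seafloor cells so that expected detections are maximized, with per-look detection probability varying by location. The greedy rule, the detection model `1 - (1 - g)^n`, and all names are illustrative assumptions, not the paper's actual planner.

```python
# Hypothetical sketch: greedy allocation of a finite search budget across
# seafloor cells, where each extra "look" at a cell detects a target there
# with a probability that depends on local sensor performance. The detection
# model and names are illustrative, not the paper's formulation.
import heapq

def greedy_search_plan(prior, glimpse_prob, budget):
    """prior[i]: prob. a target is in cell i; glimpse_prob[i]: per-look
    detection prob. in cell i (varies with seafloor type); budget: total looks."""
    looks = [0] * len(prior)
    # Marginal gain of the first look in each cell.
    heap = [(-prior[i] * glimpse_prob[i], i) for i in range(len(prior))]
    heapq.heapify(heap)
    for _ in range(budget):
        gain, i = heapq.heappop(heap)
        looks[i] += 1
        # Detection prob. after n looks is 1 - (1 - g)^n, so the marginal
        # gain of the next look shrinks geometrically.
        next_gain = prior[i] * glimpse_prob[i] * (1 - glimpse_prob[i]) ** looks[i]
        heapq.heappush(heap, (-next_gain, i))
    return looks

print(greedy_search_plan([0.5, 0.3, 0.2], [0.9, 0.5, 0.7], budget=6))
```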

Tunnel Effects in Cognition: A new Mechanism for Scientific Discovery and Education

Title Tunnel Effects in Cognition: A new Mechanism for Scientific Discovery and Education
Authors Antoine Cornuéjols, Andrée Tiberghien, Gérard Collet
Abstract It is quite exceptional, if it ever happens, that a new conceptual domain be built from scratch. Usually, it is developed and mastered in interaction, both positive and negative, with other more operational existing domains. Few reasoning mechanisms have been proposed to account for the interplay of different conceptual domains and the transfer of information from one to another. Analogical reasoning is one, blending is another. This paper presents a new mechanism, called ‘tunnel effect’, that may explain, in part, how scientists and students reason while constructing a new conceptual domain. One experimental study with high school students and analyses from the history of science, particularly about the birth of classical thermodynamics, provide evidence and illustrate this mechanism. The knowledge organization, processes and conditions for its appearance are detailed and put into the perspective of a computational model. Specifically, we put forward the hypothesis that two levels of knowledge, notional and conceptual, cooperate in the scientific discovery process when a new conceptual domain is being built. The type of conceptual learning that can be associated with tunnel effect is discussed and a thorough comparison is made with analogical reasoning in order to underline the main features of the new proposed mechanism.
Tasks
Published 2017-07-16
URL http://arxiv.org/abs/1707.04903v1
PDF http://arxiv.org/pdf/1707.04903v1.pdf
PWC https://paperswithcode.com/paper/tunnel-effects-in-cognition-a-new-mechanism
Repo
Framework

Spatial-Temporal Recurrent Neural Network for Emotion Recognition

Title Spatial-Temporal Recurrent Neural Network for Emotion Recognition
Authors Tong Zhang, Wenming Zheng, Zhen Cui, Yuan Zong, Yang Li
Abstract Emotion analysis is a crucial problem for endowing machines with real intelligence in many potential large-scale applications. As external manifestations of human emotions, electroencephalogram (EEG) signals and facial video signals are widely used to track and analyze human affective states. Based on their common characteristic of forming spatial-temporal volumes, in this paper we propose a novel deep learning framework named spatial-temporal recurrent neural network (STRNN) to unify the learning of the two different signal sources into a single spatial-temporal dependency model. In STRNN, to capture spatially co-occurring variations of human emotions, a multi-directional recurrent neural network (RNN) layer captures long-range contextual cues by traversing the spatial region of each time slice from multiple angles. A bi-directional temporal RNN layer then learns discriminative temporal dependencies from the sequences formed by concatenating the spatial features produced by the spatial RNN layer for each time slice. To further select salient regions for emotion representation, we impose a sparse projection on the hidden states of the spatial and temporal domains, which also increases the model’s discriminative ability through this global consideration. Consequently, this two-layer RNN model captures both the spatial and the temporal dependencies of the input signals. Experimental results on public EEG and facial expression emotion datasets demonstrate that the proposed STRNN method outperforms state-of-the-art methods.
Tasks EEG, Emotion Recognition
Published 2017-05-12
URL http://arxiv.org/abs/1705.04515v1
PDF http://arxiv.org/pdf/1705.04515v1.pdf
PWC https://paperswithcode.com/paper/spatial-temporal-recurrent-neural-network-for
Repo
Framework
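A rough PyTorch sketch of the two-layer idea described above: a spatial RNN scans the locations of each time slice, then a bidirectional temporal RNN runs over the resulting per-slice features. The layer sizes, the single scan direction, and the plain linear classifier are simplifications I am assuming for illustration; the paper's model is multi-directional and sparsity-regularized.

```python
# Simplified STRNN-style model: spatial GRU per time slice, then a
# bidirectional temporal GRU over the sequence of slice features.
import torch
import torch.nn as nn

class SimpleSTRNN(nn.Module):
    def __init__(self, in_dim, spatial_hidden=32, temporal_hidden=64, n_classes=3):
        super().__init__()
        self.spatial_rnn = nn.GRU(in_dim, spatial_hidden, batch_first=True)
        self.temporal_rnn = nn.GRU(spatial_hidden, temporal_hidden,
                                   batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * temporal_hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, locations, features), e.g. EEG electrodes per slice.
        b, t, s, f = x.shape
        # Scan the spatial locations of every time slice with the spatial RNN.
        _, h_spatial = self.spatial_rnn(x.reshape(b * t, s, f))
        slice_feats = h_spatial[-1].reshape(b, t, -1)   # (batch, time, spatial_hidden)
        # Bidirectional temporal RNN over the sequence of slice features.
        _, h_temporal = self.temporal_rnn(slice_feats)
        feats = torch.cat([h_temporal[-2], h_temporal[-1]], dim=-1)
        return self.classifier(feats)

logits = SimpleSTRNN(in_dim=5)(torch.randn(2, 10, 62, 5))  # 62 electrodes, 10 slices
print(logits.shape)  # torch.Size([2, 3])
```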

Dex: Incremental Learning for Complex Environments in Deep Reinforcement Learning

Title Dex: Incremental Learning for Complex Environments in Deep Reinforcement Learning
Authors Nick Erickson, Qi Zhao
Abstract This paper introduces Dex, a reinforcement learning environment toolkit specialized for the training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present incremental learning, a novel continual learning method in which a challenging environment is solved using optimal weight initialization learned from first solving a similar, easier environment. We show that incremental learning can produce results vastly superior to standard methods, and we provide a strong baseline across ten Dex environments. Finally, we develop a saliency method for the qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.
Tasks Continual Learning
Published 2017-06-19
URL http://arxiv.org/abs/1706.05749v1
PDF http://arxiv.org/pdf/1706.05749v1.pdf
PWC https://paperswithcode.com/paper/dex-incremental-learning-for-complex
Repo
Framework
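A minimal sketch of the incremental-learning idea from the entry above: solve an easier environment first, then reuse the learned weights as the initialization for the harder one. The environment names, the tiny network, and the `train` stub are placeholders; only the weight-transfer step is the point.

```python
# Incremental learning sketch: warm-start the hard environment from the
# weights learned on the easy one, instead of from a random initialization.
import copy
import torch.nn as nn

def make_policy(n_obs, n_actions):
    return nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))

def train(policy, env_name, steps):
    # Stand-in for an actual RL training loop on a Dex environment.
    print(f"training on {env_name} for {steps} steps")
    return policy

easy_policy = train(make_policy(8, 4), "easy_env", steps=10_000)

hard_policy = make_policy(8, 4)
hard_policy.load_state_dict(copy.deepcopy(easy_policy.state_dict()))
hard_policy = train(hard_policy, "hard_env", steps=10_000)
```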

Temporal Pattern Discovery for Accurate Sepsis Diagnosis in ICU Patients

Title Temporal Pattern Discovery for Accurate Sepsis Diagnosis in ICU Patients
Authors Eitam Sheetrit, Nir Nissim, Denis Klimov, Lior Fuchs, Yuval Elovici, Yuval Shahar
Abstract Sepsis is a condition caused by the body’s overwhelming and life-threatening response to infection, which can lead to tissue damage, organ failure, and ultimately death. Common signs and symptoms include fever, increased heart rate, increased breathing rate, and confusion. Sepsis is difficult to predict, diagnose, and treat. Patients who develop sepsis have an increased risk of complications and death and face higher health care costs and longer hospitalization. Today, sepsis is one of the leading causes of mortality in intensive care units (ICUs). In this paper, we look at the problem of early detection of sepsis using temporal data mining. We focus on the use of knowledge-based temporal abstraction to create meaningful interval-based abstractions, and on time-interval mining to discover frequent interval-based patterns. We used 2,560 cases derived from the MIMIC-III database. We found that the distribution of the temporal patterns whose frequency is above 10%, discovered in the records of septic patients during the last 6 and 12 hours before the onset of sepsis, differs significantly from the distribution found in an equivalent time window during the hospitalization of non-septic patients. This finding is encouraging for the purpose of early diagnosis of sepsis using the discovered patterns as constructed features.
Tasks
Published 2017-09-06
URL http://arxiv.org/abs/1709.01720v1
PDF http://arxiv.org/pdf/1709.01720v1.pdf
PWC https://paperswithcode.com/paper/temporal-pattern-discovery-for-accurate
Repo
Framework
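A toy illustration of the knowledge-based temporal abstraction step mentioned above: time-stamped raw measurements are mapped to symbolic states, and consecutive equal states are merged into intervals. The heart-rate cut-offs below are invented for the example and are not the clinical knowledge base used in the paper.

```python
# Knowledge-based temporal abstraction, toy version: raw samples -> symbolic
# states -> merged intervals.
def abstract_heart_rate(samples):
    """samples: list of (hour, beats_per_minute) sorted by time."""
    def state(bpm):
        return "LOW" if bpm < 60 else "HIGH" if bpm > 100 else "NORMAL"

    intervals = []
    for hour, bpm in samples:
        s = state(bpm)
        if intervals and intervals[-1][2] == s:
            intervals[-1] = (intervals[-1][0], hour, s)   # extend current interval
        else:
            intervals.append((hour, hour, s))             # open a new interval
    return intervals

print(abstract_heart_rate([(0, 72), (1, 75), (2, 110), (3, 118), (4, 90)]))
# [(0, 1, 'NORMAL'), (2, 3, 'HIGH'), (4, 4, 'NORMAL')]
```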

Temporal Information Extraction for Question Answering Using Syntactic Dependencies in an LSTM-based Architecture

Title Temporal Information Extraction for Question Answering Using Syntactic Dependencies in an LSTM-based Architecture
Authors Yuanliang Meng, Anna Rumshisky, Alexey Romanov
Abstract In this paper, we propose to use a set of simple LSTM-based models with a uniform architecture to recover different kinds of temporal relations from text. Using the shortest dependency path between entities as input, the same architecture is used to extract intra-sentence, cross-sentence, and document-creation-time relations. A “double-checking” technique reverses entity pairs during classification, boosting the recall of positive cases and reducing misclassifications between opposite classes. An efficient pruning algorithm resolves conflicts globally. Evaluated on QA-TempEval (SemEval-2015 Task 5), our proposed technique outperforms state-of-the-art methods by a large margin.
Tasks Question Answering, Temporal Information Extraction
Published 2017-03-17
URL http://arxiv.org/abs/1703.05851v2
PDF http://arxiv.org/pdf/1703.05851v2.pdf
PWC https://paperswithcode.com/paper/temporal-information-extraction-for-question
Repo
Framework
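A sketch of the “double-checking” idea described above: classify the entity pair in both orders and, when the forward pass finds no relation but the reversed pass does, accept the inverse of the reversed label. The label set and the stub classifier are assumptions standing in for the paper's dependency-path LSTM.

```python
# Double-checking: combine forward and reversed classifications to recover
# positives missed in the forward direction.
INVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE",
           "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES",
           "SIMULTANEOUS": "SIMULTANEOUS", "NO-RELATION": "NO-RELATION"}

def double_check(classify, e1, e2):
    forward = classify(e1, e2)
    backward = classify(e2, e1)
    if forward != "NO-RELATION":
        return forward
    # If only the reversed pair is classified as related, map its label back.
    return INVERSE[backward]

# Stub classifier standing in for the trained model.
def stub_classify(a, b):
    return "BEFORE" if (a, b) == ("surgery", "discharge") else "NO-RELATION"

print(double_check(stub_classify, "discharge", "surgery"))  # AFTER
```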

Applying ACO To Large Scale TSP Instances

Title Applying ACO To Large Scale TSP Instances
Authors Darren M. Chitty
Abstract Ant Colony Optimisation (ACO) is a well-known metaheuristic that has proven successful at solving Travelling Salesman Problems (TSP). However, ACO suffers from two issues: first, the technique has significant memory requirements for storing pheromone levels on the edges between cities; second, the iterative probabilistic choice of which city to visit next at every step is computationally expensive. This restricts ACO from solving larger TSP instances. This paper presents a methodology for deploying ACO on larger TSP instances by removing the high memory requirements, exploiting parallel CPU hardware, and introducing a significant efficiency-saving measure. The approach results in greater accuracy and speed. This enables the proposed ACO approach to tackle TSP instances of up to 200K cities within reasonable timescales using a single CPU. Speedups of as much as 1,200-fold are achieved by the technique.
Tasks
Published 2017-09-10
URL http://arxiv.org/abs/1709.03187v1
PDF http://arxiv.org/pdf/1709.03187v1.pdf
PWC https://paperswithcode.com/paper/applying-aco-to-large-scale-tsp-instances
Repo
Framework
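A sketch of one ACO construction step restricted to a nearest-neighbour candidate list, which is a common way to cut both memory and per-step cost on large TSP instances. The parameter values and candidate-list size are illustrative assumptions; this is not the paper's full parallel implementation.

```python
# One probabilistic next-city choice, restricted to a small candidate list.
import numpy as np

def choose_next_city(current, unvisited, dist, pheromone, candidates,
                     alpha=1.0, beta=2.0, rng=None):
    rng = rng or np.random.default_rng()
    # Only consider unvisited cities from the candidate list of the current city.
    options = [c for c in candidates[current] if c in unvisited]
    if not options:                              # fall back to all unvisited cities
        options = sorted(unvisited)
    weights = np.array([pheromone[current, c] ** alpha * (1.0 / dist[current, c]) ** beta
                        for c in options])
    return int(rng.choice(options, p=weights / weights.sum()))

# Tiny example: 5 random cities, candidate list = the 2 nearest neighbours of each.
rng = np.random.default_rng(1)
coords = rng.random((5, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)                   # never pick the current city itself
candidates = {i: list(np.argsort(dist[i])[:2]) for i in range(5)}
print(choose_next_city(0, {1, 2, 3, 4}, dist, np.ones((5, 5)), candidates, rng=rng))
```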

Machine Translation Evaluation with Neural Networks

Title Machine Translation Evaluation with Neural Networks
Authors Francisco Guzmán, Shafiq R. Joty, Lluís Màrquez, Preslav Nakov
Abstract We present a framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a pair of hypotheses, given the reference translation. In this framework, lexical, syntactic and semantic information from the reference and the two hypotheses is embedded into compact distributed vector representations, and fed into a multi-layer neural network that models nonlinear interactions between each of the hypotheses and the reference, as well as between the two hypotheses. We experiment with the benchmark datasets from the WMT Metrics shared task, on which we obtain the best results published so far, with the basic network configuration. We also perform a series of experiments to analyze and understand the contribution of the different components of the network. We evaluate variants and extensions, including fine-tuning of the semantic embeddings, and sentence-based representations modeled with convolutional and recurrent neural networks. In summary, the proposed framework is flexible and generalizable, allows for efficient learning and scoring, and provides an MT evaluation metric that correlates with human judgments, and is on par with the state of the art.
Tasks Machine Translation
Published 2017-10-05
URL http://arxiv.org/abs/1710.02095v1
PDF http://arxiv.org/pdf/1710.02095v1.pdf
PWC https://paperswithcode.com/paper/machine-translation-evaluation-with-neural
Repo
Framework
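A rough sketch of the pairwise evaluation setup described above: the reference and the two hypotheses are embedded into fixed-size vectors, fed to a small MLP, and the network scores which hypothesis is the better translation. The averaged-embedding "encoder" is a stand-in assumption for the lexical, syntactic, and semantic features used in the paper.

```python
# Pairwise MT judging: score > 0 means hypothesis 1 is preferred.
import torch
import torch.nn as nn

class PairwiseMTJudge(nn.Module):
    def __init__(self, emb_dim=50, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, ref_vec, hyp1_vec, hyp2_vec):
        return self.mlp(torch.cat([ref_vec, hyp1_vec, hyp2_vec], dim=-1))

def embed(tokens, table):
    # Placeholder sentence encoder: mean of word embeddings.
    return torch.stack([table[t] for t in tokens]).mean(dim=0)

table = {w: torch.randn(50) for w in "the cat sat on a mat feline".split()}
ref = embed("the cat sat on the mat".split(), table)
hyp1 = embed("the feline sat on a mat".split(), table)
hyp2 = embed("cat the mat".split(), table)
print(PairwiseMTJudge()(ref, hyp1, hyp2))  # untrained, so the sign is arbitrary
```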

Stability and Fluctuations in a Simple Model of Phonetic Category Change

Title Stability and Fluctuations in a Simple Model of Phonetic Category Change
Authors Benjamin Goodman, Paul Tupper
Abstract In spoken languages, speakers divide up the space of phonetic possibilities into different regions, corresponding to different phonemes. We consider a simple exemplar model of how this division of phonetic space varies over time among a population of language users. In the particular model we consider, we show that, once the system is initialized with a given set of phonemes, no phoneme becomes extinct: all phonemes are maintained in the system for all time. This is in contrast to what is observed in more complex models. Furthermore, we show that the boundaries between phonemes fluctuate, and we quantitatively study these fluctuations in a simple instance of our model. These results prepare the ground for more sophisticated models in which some phonemes go extinct or new phonemes emerge through other processes.
Tasks
Published 2017-04-20
URL http://arxiv.org/abs/1704.06358v3
PDF http://arxiv.org/pdf/1704.06358v3.pdf
PWC https://paperswithcode.com/paper/stability-and-fluctuations-in-a-simple-model
Repo
Framework
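A toy exemplar simulation in the spirit of the entry above (not the paper's exact model): two phoneme categories are clouds of stored exemplars on a one-dimensional phonetic axis; each step a noisy production near a random exemplar is categorized to the nearer category mean and stored there, and the oldest exemplar of that category is forgotten. The boundary between the categories drifts and fluctuates over time. All parameter values are arbitrary.

```python
# Toy exemplar dynamics: track how the category boundary fluctuates.
import numpy as np

rng = np.random.default_rng(0)
cats = [list(rng.normal(-1.0, 0.2, 50)), list(rng.normal(1.0, 0.2, 50))]

boundaries = []
for step in range(5000):
    k = rng.integers(2)                                     # speaker picks a category
    token = rng.choice(cats[k]) + rng.normal(0, 0.3)        # noisy production
    means = [np.mean(c) for c in cats]
    j = int(abs(token - means[1]) < abs(token - means[0]))  # listener's category
    cats[j].append(token)
    cats[j].pop(0)                                          # forget the oldest exemplar
    boundaries.append(sum(means) / 2)

print(f"boundary mean {np.mean(boundaries):+.3f}, std {np.std(boundaries):.3f}")
```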

Block DCT filtering using vector processing

Title Block DCT filtering using vector processing
Authors Mostafa Amin-Naji, Ali Aghagolzadeh
Abstract Filtering is an important issue in signal and image processing. Many images and videos are compressed using the discrete cosine transform (DCT). To reduce computational complexity, we are interested in filtering blocks and images directly in the DCT domain. This article proposes an efficient yet very simple filtering method that operates directly in the DCT domain for any symmetric, asymmetric, separable, inseparable, one- or two-dimensional filter. The proposed method is derived from mathematical relations using vector processing and is equivalent to zero-padding filtering in the spatial domain. In addition, to avoid zero-padding artifacts around the edges of a block, we prepare preliminary matrices in the DCT domain from the elements of the selected mask so that border replication for the block in the spatial domain is satisfied. To evaluate the performance of the proposed algorithm, we compared spatial-domain filtering results with the results of the proposed method in the DCT domain. The experiments show that the results of our method in the DCT domain are exactly the same as those of spatial-domain filtering.
Tasks
Published 2017-10-19
URL http://arxiv.org/abs/1710.07193v1
PDF http://arxiv.org/pdf/1710.07193v1.pdf
PWC https://paperswithcode.com/paper/block-dct-filtering-using-vector-processing
Repo
Framework
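A small numpy check of the core equivalence claimed above: because the 2-D DCT of an N×N block is X_dct = C X Cᵀ with an orthonormal DCT matrix C, a separable zero-padded spatial filter Y = A X Bᵀ can be applied entirely in the DCT domain using the precomputed matrices C A Cᵀ and C B Cᵀ. This illustrates the equivalence only; it is not the paper's exact construction (e.g. its border-replication variant).

```python
# DCT-domain filtering of a block is identical to zero-padded spatial filtering.
import numpy as np

def dct_matrix(n):
    k, m = np.arange(n)[:, None], np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)                           # orthonormal DCT-II
    return C

def filter_matrix(h, n):
    """Matrix form of 1-D correlation with zero padding, filter centred."""
    A, c = np.zeros((n, n)), len(h) // 2
    for i in range(n):
        for m, hm in enumerate(h):
            if 0 <= i + m - c < n:
                A[i, i + m - c] = hm
    return A

n, rng = 8, np.random.default_rng(0)
C = dct_matrix(n)
A = filter_matrix([1, 2, 1], n)                    # vertical part of a blur kernel
B = filter_matrix([1, 0, -1], n)                   # horizontal derivative
X = rng.random((n, n))

Y_spatial = A @ X @ B.T                            # zero-padded filtering in space
Y_dct = (C @ A @ C.T) @ (C @ X @ C.T) @ (C @ B @ C.T).T   # same filter in DCT domain
print(np.allclose(C.T @ Y_dct @ C, Y_spatial))     # True
```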

Fisher consistency for prior probability shift

Title Fisher consistency for prior probability shift
Authors Dirk Tasche
Abstract We introduce Fisher consistency in the sense of unbiasedness as a desirable property for estimators of class prior probabilities. Lack of Fisher consistency could be used as a criterion to dismiss estimators that are unlikely to deliver precise estimates in test datasets under prior probability and more general dataset shift. The usefulness of this unbiasedness concept is demonstrated with three examples of classifiers used for quantification: Adjusted Classify & Count, EM-algorithm and CDE-Iterate. We find that Adjusted Classify & Count and EM-algorithm are Fisher consistent. A counter-example shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be trusted to deliver reliable estimates of class probabilities.
Tasks
Published 2017-01-19
URL http://arxiv.org/abs/1701.05512v2
PDF http://arxiv.org/pdf/1701.05512v2.pdf
PWC https://paperswithcode.com/paper/fisher-consistency-for-prior-probability
Repo
Framework
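The Adjusted Classify & Count estimator mentioned in the abstract above amounts to one line of arithmetic: correct the raw predicted-positive rate using the classifier's true- and false-positive rates, p̂ = (q − FPR) / (TPR − FPR), clipped to [0, 1]. The numbers in the example are made up.

```python
# Adjusted Classify & Count for estimating the positive-class prevalence.
def adjusted_classify_and_count(predicted_positive_rate, tpr, fpr):
    p_hat = (predicted_positive_rate - fpr) / (tpr - fpr)
    return min(max(p_hat, 0.0), 1.0)

# A classifier with TPR=0.8, FPR=0.1 flags 38% of the test set as positive:
print(adjusted_classify_and_count(0.38, tpr=0.8, fpr=0.1))  # 0.4
```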

Depth Structure Preserving Scene Image Generation

Title Depth Structure Preserving Scene Image Generation
Authors Wendong Zhang, Bingbing Ni, Yichao Yan, Jingwei Xu, Xiaokang Yang
Abstract The key to automatically generating natural scene images is to properly arrange the various spatial elements, especially in the depth direction. To this end, we introduce a novel depth-structure-preserving scene image generation network (DSP-GAN), which favors a hierarchical and heterogeneous architecture, for the purpose of depth-structure-preserving scene generation. The main trunk of the proposed infrastructure is built on a Hawkes point process that models the spatial dependency between different depth layers. Within each layer, generative adversarial sub-networks are trained collaboratively to generate realistic scene components, conditioned on the layer information produced by the point process. We evaluate our model on a subset of the SUN dataset with annotated scene images and demonstrate that it is capable of generating depth-realistic natural scene images.
Tasks Image Generation, Scene Generation
Published 2017-06-01
URL http://arxiv.org/abs/1706.00212v2
PDF http://arxiv.org/pdf/1706.00212v2.pdf
PWC https://paperswithcode.com/paper/depth-structure-preserving-scene-image
Repo
Framework
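A minimal simulation of a Hawkes point process (Ogata thinning with an exponential kernel), included only to illustrate the self-exciting point-process component that the entry above uses to model dependencies between depth layers. This is a generic 1-D Hawkes sampler with made-up parameters, not the paper's spatial model.

```python
# Hawkes process sampling via Ogata thinning.
import numpy as np

def sample_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=20.0, seed=0):
    """Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    def intensity(t, events):
        return mu + alpha * np.sum(np.exp(-beta * (t - np.array(events))))

    rng, t, events = np.random.default_rng(seed), 0.0, []
    while t < horizon:
        lam_bar = intensity(t, events)          # upper bound until the next event
        t += rng.exponential(1.0 / lam_bar)     # candidate next event time
        if t >= horizon:
            break
        if rng.random() <= intensity(t, events) / lam_bar:   # thinning step
            events.append(t)
    return events

print(len(sample_hawkes()), "events sampled")
```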

Coded Fourier Transform

Title Coded Fourier Transform
Authors Qian Yu, Mohammad Ali Maddah-Ali, A. Salman Avestimehr
Abstract We consider the problem of computing the Fourier transform of high-dimensional vectors in a distributed fashion over a cluster of machines consisting of a master node and multiple worker nodes, where the worker nodes can only store and process a fraction of the inputs. We show that by exploiting the algebraic structure of the Fourier transform operation and leveraging concepts from coding theory, one can efficiently deal with straggler effects. In particular, we propose a computation strategy, named coded FFT, which achieves the optimal recovery threshold, defined as the minimum number of workers that the master node needs to wait for in order to compute the output. This is the first code that achieves optimum robustness in terms of tolerating stragglers or failures for computing Fourier transforms. Furthermore, the reconstruction process for coded FFT can be mapped to MDS decoding, which can be solved efficiently. Moreover, we extend coded FFT to settings including the computation of general $n$-dimensional Fourier transforms, and provide the optimal computing strategy for those settings.
Tasks
Published 2017-10-17
URL http://arxiv.org/abs/1710.06471v1
PDF http://arxiv.org/pdf/1710.06471v1.pdf
PWC https://paperswithcode.com/paper/coded-fourier-transform
Repo
Framework
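A sketch of the coding idea behind straggler tolerance for Fourier computation: because the DFT is linear, k input vectors can be encoded into n > k coded vectors with an MDS-style (here Vandermonde) generator, each worker transforms one coded vector, and the master recovers all k DFTs from any k worker results. This illustrates only the recovery-threshold concept; the paper's coded FFT additionally exploits the Cooley-Tukey structure of a single large transform.

```python
# Straggler-tolerant distributed DFT of k blocks via MDS-style coding.
import numpy as np

k, n, length = 3, 5, 8                      # 3 data blocks, 5 workers, length-8 DFTs
rng = np.random.default_rng(0)
data = rng.random((k, length))

G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k generator
coded = G @ data                            # each coded row goes to one worker
worker_out = np.fft.fft(coded, axis=1)      # workers transform their coded blocks

survivors = [0, 2, 4]                       # any k workers that finished in time
decoded = np.linalg.solve(G[survivors], worker_out[survivors])

print(np.allclose(decoded, np.fft.fft(data, axis=1)))   # True
```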

Car sharing through the data analysis lens

Title Car sharing through the data analysis lens
Authors Chiara Boldrini, Raffaele Bruno, Haitam Laarabi
Abstract Car sharing is one of the pillars of a smart transportation infrastructure, as it is expected to reduce traffic congestion, parking demand, and pollution in our cities. From the point of view of demand modelling, car sharing is a weak signal in the city landscape: only a small percentage of the population uses it, and thus it is difficult to study reliably with traditional techniques such as household travel diaries. In this work, we depart from these traditional approaches and rely on web-based, digital records about vehicle availability in 10 European cities for one of the major active car sharing operators. We discuss how vehicles are used, what the main characteristics of car sharing trips are, whether events happening in certain areas are predictable or not, and how the spatio-temporal information about vehicle availability can be used to infer how different zones in a city are used by customers. We conclude the paper by presenting a direct application of the analysis of the dataset, aimed at identifying where to locate maintenance facilities within the car sharing operational area.
Tasks
Published 2017-07-25
URL http://arxiv.org/abs/1708.00497v1
PDF http://arxiv.org/pdf/1708.00497v1.pdf
PWC https://paperswithcode.com/paper/car-sharing-through-the-data-analysis-lens
Repo
Framework

SLIM: Semi-Lazy Inference Mechanism for Plan Recognition

Title SLIM: Semi-Lazy Inference Mechanism for Plan Recognition
Authors Reuth Mirsky, Ya’akov (Kobi) Gal
Abstract Plan recognition algorithms are required to recognize a complete hierarchy explaining the agent’s actions and goals. While the output of such algorithms is informative to the recognizer, the cost of its calculation is high in run-time, space, and completeness. Moreover, performing plan recognition online requires the observing agent to reason about future actions that have not yet been seen and to maintain a set of hypotheses to support all possible options. This paper presents a new and efficient algorithm for online plan recognition called SLIM (Semi-Lazy Inference Mechanism). It combines bottom-up and top-down parsing processes, which allows it to commit only to the minimum necessary actions in real time while still providing complete hypotheses post factum. We show both theoretically and empirically that, although the computational cost of this process is still exponential, there is a significant improvement in run-time compared to a state-of-the-art plan recognition algorithm.
Tasks
Published 2017-03-02
URL http://arxiv.org/abs/1703.00838v1
PDF http://arxiv.org/pdf/1703.00838v1.pdf
PWC https://paperswithcode.com/paper/slim-semi-lazy-inference-mechanism-for-plan
Repo
Framework