July 29, 2019

3412 words 17 mins read

Paper Group ANR 158

A Full Non-Monotonic Transition System for Unrestricted Non-Projective Parsing

Title A Full Non-Monotonic Transition System for Unrestricted Non-Projective Parsing
Authors Daniel Fernández-González, Carlos Gómez-Rodríguez
Abstract Restricted non-monotonicity has been shown beneficial for the projective arc-eager dependency parser in previous research, since later decisions can repair mistakes made in earlier states due to the lack of information. In this paper, we propose a novel, fully non-monotonic transition system based on the non-projective Covington algorithm. As a non-monotonic system requires exploration of erroneous actions during the training process, we develop several non-monotonic variants of the recently defined dynamic oracle for the Covington parser, based on tight approximations of the loss. Experiments on datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic dynamic oracle outperforms the monotonic version in the majority of languages.
Tasks
Published 2017-06-11
URL http://arxiv.org/abs/1706.03367v1
PDF http://arxiv.org/pdf/1706.03367v1.pdf
PWC https://paperswithcode.com/paper/a-full-non-monotonic-transition-system-for
Repo
Framework
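
The transition system at the core of this work is compact enough to sketch. Below is a minimal, illustrative reimplementation of the monotonic Covington system's four transitions, on which the paper's non-monotonic variant and its dynamic oracles build; the single-head and acyclicity preconditions are omitted, and the function names are ours.

```python
# A configuration is (lambda1, lambda2, buffer, arcs), where arcs is a set
# of (head, dependent) pairs. Precondition checks are omitted for brevity.

def shift(l1, l2, beta, arcs):
    # Advance to the next focus word; lambda2 and the buffer front rejoin lambda1.
    return l1 + l2 + [beta[0]], [], beta[1:], arcs

def no_arc(l1, l2, beta, arcs):
    # Pass over the rightmost word of lambda1 without building an arc.
    return l1[:-1], [l1[-1]] + l2, beta, arcs

def left_arc(l1, l2, beta, arcs):
    # Buffer front j becomes the head of lambda1's rightmost word i.
    i, j = l1[-1], beta[0]
    return l1[:-1], [i] + l2, beta, arcs | {(j, i)}

def right_arc(l1, l2, beta, arcs):
    # lambda1's rightmost word i becomes the head of buffer front j.
    i, j = l1[-1], beta[0]
    return l1[:-1], [i] + l2, beta, arcs | {(i, j)}
```

Because every word to the left of the focus word can be paired with the buffer front, crossing (non-projective) arcs are reachable, unlike in arc-eager parsing.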

Deep Optimization for Spectrum Repacking

Title Deep Optimization for Spectrum Repacking
Authors Neil Newman, Alexandre Fréchette, Kevin Leyton-Brown
Abstract Over 13 months in 2016-17 the FCC conducted an “incentive auction” to repurpose radio spectrum from broadcast television to wireless internet. In the end, the auction yielded $19.8 billion, $10.05 billion of which was paid to 175 broadcasters for voluntarily relinquishing their licenses across 14 UHF channels. Stations that continued broadcasting were assigned potentially new channels to fit as densely as possible into the channels that remained. The government netted more than $7 billion (used to pay down the national debt) after covering costs. A crucial element of the auction design was the construction of a solver, dubbed SATFC, that determined whether sets of stations could be “repacked” in this way; it needed to run every time a station was given a price quote. This paper describes the process by which we built SATFC. We adopted an approach we dub “deep optimization”, taking a data-driven, highly parametric, and computationally intensive approach to solver design. More specifically, to build SATFC we designed software that could pair both complete and local-search SAT-encoded feasibility checking with a wide range of domain-specific techniques. We then used automatic algorithm configuration techniques to construct a portfolio of eight complementary algorithms to be run in parallel, aiming to achieve good performance on instances that arose in proprietary auction simulations. To evaluate the impact of our solver in this paper, we built an open-source reverse auction simulator. We found that within the short time budget required in practice, SATFC solved more than 95% of the problems it encountered. Furthermore, the incentive auction paired with SATFC produced nearly optimal allocations in a restricted setting and substantially outperformed other alternatives at national scale.
Tasks
Published 2017-06-11
URL http://arxiv.org/abs/1706.03304v1
PDF http://arxiv.org/pdf/1706.03304v1.pdf
PWC https://paperswithcode.com/paper/deep-optimization-for-spectrum-repacking
Repo
Framework
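
The portfolio-of-solvers idea generalizes beyond SATFC; here is a hedged sketch of the run-in-parallel, first-answer-wins pattern (not the SATFC codebase, whose solvers and interfaces differ):

```python
import multiprocessing as mp
import time

def run_portfolio(instance, solvers, budget_s=60.0):
    """Run complementary feasibility checkers in parallel; return the first
    definitive answer ('SAT'/'UNSAT') within the wall-clock budget, else None.
    solvers: picklable functions mapping an instance to 'SAT', 'UNSAT', or None."""
    with mp.Pool(processes=len(solvers)) as pool:
        pending = [pool.apply_async(s, (instance,)) for s in solvers]
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            for job in pending:
                if job.ready() and job.get() is not None:
                    pool.terminate()        # winner found: stop the rest
                    return job.get()
            time.sleep(0.01)
        return None                         # timed out within the budget
```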

On Hash-Based Work Distribution Methods for Parallel Best-First Search

Title On Hash-Based Work Distribution Methods for Parallel Best-First Search
Authors Yuu Jinnai, Alex Fukunaga
Abstract Parallel best-first search algorithms such as Hash Distributed A* (HDA*) distribute work among the processes using a global hash function. We analyze the search and communication overheads of state-of-the-art hash-based parallel best-first search algorithms, and show that although Zobrist hashing, the standard hash function used by HDA*, achieves good load balance for many domains, it incurs significant communication overhead since almost all generated nodes are transferred to a different processor than their parents. We propose Abstract Zobrist hashing, a new work distribution method for parallel search which, instead of computing a hash value based on the raw features of a state, uses a feature projection function to generate a set of abstract features with higher locality, reducing communication overhead. We show that Abstract Zobrist hashing outperforms previous methods on search domains using hand-coded, domain-specific feature projection functions. We then propose GRAZHDA*, a graph-partitioning based approach to automatically generating feature projection functions. GRAZHDA* seeks to approximate the partitioning of the actual search space graph by partitioning the domain transition graph, an abstraction of the state space graph. We show that GRAZHDA* outperforms previous methods on domain-independent planning.
Tasks Graph Partitioning
Published 2017-06-10
URL http://arxiv.org/abs/1706.03254v2
PDF http://arxiv.org/pdf/1706.03254v2.pdf
PWC https://paperswithcode.com/paper/on-hash-based-work-distribution-methods-for
Repo
Framework
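
Both hash functions are simple to state in code. A sketch, assuming states are sets of (variable, value) features; `project` stands in for the paper's hand-coded or GRAZHDA*-generated abstractions:

```python
import random

random.seed(0)
TABLE = {}  # lazily filled random 64-bit strings, one per feature

def _rand64(key):
    if key not in TABLE:
        TABLE[key] = random.getrandbits(64)
    return TABLE[key]

def zobrist_hash(state):
    """Plain Zobrist: XOR one random table entry per (variable, value) feature."""
    h = 0
    for feat in state:
        h ^= _rand64(feat)
    return h

def abstract_zobrist_hash(state, project):
    """Abstract Zobrist: hash the *projected* features, so states that differ
    only within an abstract class collide (and stay on the same process)."""
    h = 0
    for feat in state:
        h ^= _rand64(project(feat))
    return h

# The owner process of a node is hash(state) % num_processes:
owner = zobrist_hash([(3, 5), (7, 2)]) % 8
```

With the abstract variant, a move that only changes features within one abstract class leaves the hash, and hence the owning process, unchanged, which is exactly the locality the abstract claims.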

#anorexia, #anarexia, #anarexyia: Characterizing Online Community Practices with Orthographic Variation

Title #anorexia, #anarexia, #anarexyia: Characterizing Online Community Practices with Orthographic Variation
Authors Ian Stewart, Stevie Chancellor, Munmun De Choudhury, Jacob Eisenstein
Abstract Distinctive linguistic practices help communities build solidarity and differentiate themselves from outsiders. In an online community, one such practice is variation in orthography, which includes spelling, punctuation, and capitalization. Using a dataset of over two million Instagram posts, we investigate orthographic variation in a community that shares pro-eating disorder (pro-ED) content. We find that not only does orthographic variation grow more frequent over time, it also becomes more profound or deep, with variants becoming increasingly distant from the original: as, for example, #anarexyia is more distant than #anarexia from the original spelling #anorexia. These changes are driven by newcomers, who adopt the most extreme linguistic practices as they enter the community. Moreover, this behavior correlates with engagement: the newcomers who adopt deeper orthographic variants tend to remain active for longer in the community, and the posts that contain deeper variation receive more positive feedback in the form of “likes.” Previous work has linked community membership change with language change, and our work casts this connection in a new light, with newcomers driving an evolving practice, rather than adapting to it. We also demonstrate the utility of orthographic variation as a new lens to study sociolinguistic change in online communities, particularly when the change results from an exogenous force such as a content ban.
Tasks
Published 2017-12-04
URL http://arxiv.org/abs/1712.01411v1
PDF http://arxiv.org/pdf/1712.01411v1.pdf
PWC https://paperswithcode.com/paper/anorexia-anarexia-anarexyia-characterizing
Repo
Framework
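
The "depth" of a variant can be proxied by its edit distance from the original spelling (the paper's exact distance measure may differ); a minimal Levenshtein sketch:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

for tag in ["#anorexia", "#anarexia", "#anarexyia"]:
    print(tag, levenshtein("#anorexia", tag))  # depths 0, 1, 2
```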

A dynamic graph-cuts method with integrated multiple feature maps for segmenting kidneys in ultrasound images

Title A dynamic graph-cuts method with integrated multiple feature maps for segmenting kidneys in ultrasound images
Authors Qiang Zheng, Steven Warner, Gregory Tasian, Yong Fan
Abstract Purpose: To improve kidney segmentation in clinical ultrasound (US) images, we develop a new graph cuts (GC) based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. Methods: To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to the kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the GC-based segmentation iteratively progresses until convergence. The proposed method has been evaluated and compared with state-of-the-art image segmentation methods based on clinical kidney US images of 85 subjects. We randomly selected US images of 20 subjects as training data for tuning the parameters, and validated the methods based on US images of the remaining 65 subjects. The segmentation results have been quantitatively analyzed using three metrics: Dice Index, Jaccard Index, and Mean Distance. Results: Experiment results demonstrated that the proposed method obtained segmentation results for bilateral kidneys of 65 subjects with average Dice index of 0.9581, Jaccard index of 0.9204, and Mean Distance of 1.7166, better than other methods under comparison (p < 10^-19, paired Wilcoxon rank sum tests). Conclusions: The proposed method achieved promising performance for segmenting kidneys in US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease.
Tasks Semantic Segmentation
Published 2017-06-11
URL http://arxiv.org/abs/1706.03372v1
PDF http://arxiv.org/pdf/1706.03372v1.pdf
PWC https://paperswithcode.com/paper/a-dynamic-graph-cuts-method-with-integrated
Repo
Framework
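
The multi-channel input to the graph cuts, raw intensity plus Gabor texture maps, is easy to sketch with scikit-image; the filter-bank parameters below are placeholders, not the paper's:

```python
import numpy as np
from skimage.filters import gabor

def gabor_feature_maps(img, frequencies=(0.1, 0.2), n_orientations=4):
    """Stack the raw intensity image with Gabor texture-energy maps."""
    maps = [img.astype(float)]                 # channel 0: original intensity
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(img, frequency=f, theta=theta)
            maps.append(np.hypot(real, imag))  # magnitude = local texture energy
    return np.stack(maps, axis=-1)             # H x W x 9 feature volume
```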

A Framework for Accurate Drought Forecasting System Using Semantics-Based Data Integration Middleware

Title A Framework for Accurate Drought Forecasting System Using Semantics-Based Data Integration Middleware
Authors A. K. Akanbi, M. Masinde
Abstract Technological advancement in Wireless Sensor Networks (WSN) has made them an invaluable component of reliable environmental monitoring systems; they form the ‘digital skin’ through which to ‘sense’ and collect the context of the surroundings and provide information on the processes leading to complex events such as drought. However, these environmental properties are measured by various heterogeneous sensors of different modalities in distributed locations making up the WSN, in most cases using different abstruse terms and vocabulary to denote the same observed property, causing data heterogeneity. Adding semantics, understanding the relationships that exist between the observed properties, and augmenting them with local indigenous knowledge are necessary for an accurate drought forecasting system. In this paper, we propose a framework for the semantic representation of sensor data and its integration with indigenous knowledge on drought, using a middleware, for an efficient drought forecasting system.
Tasks
Published 2017-06-20
URL http://arxiv.org/abs/1706.07294v1
PDF http://arxiv.org/pdf/1706.07294v1.pdf
PWC https://paperswithcode.com/paper/a-framework-for-accurate-drought-forecasting
Repo
Framework

Real-time 3D Reconstruction on Construction Site using Visual SLAM and UAV

Title Real-time 3D Reconstruction on Construction Site using Visual SLAM and UAV
Authors Zhexiong Shang, Zhigang Shen
Abstract 3D reconstruction can be used as a platform to monitor the performance of activities on a construction site, such as construction progress monitoring, structure inspection and post-disaster rescue. Compared to other sensors, RGB images are low-cost, texture-rich and easy to acquire, and have been used as the primary data source for 3D reconstruction in the construction industry. However, image-based 3D reconstruction always requires extended time to acquire and/or to process the image data, which limits its application on time-critical projects. Recent progress in Visual Simultaneous Localization and Mapping (SLAM) makes it possible to reconstruct a 3D map of a construction site in real time. Integrated with an Unmanned Aerial Vehicle (UAV), obstacle areas that are inaccessible to ground equipment can also be sensed. Despite these advantages of visual SLAM and UAVs, until now, such techniques have not been fully investigated on construction sites. Therefore, the objective of this research is to present a pilot study of using visual SLAM and a UAV for real-time construction site reconstruction. The system architecture and the experimental setup are introduced, and the preliminary results and potential applications of visual SLAM and UAVs on construction sites are discussed.
Tasks 3D Reconstruction, Simultaneous Localization and Mapping
Published 2017-12-19
URL http://arxiv.org/abs/1712.07122v1
PDF http://arxiv.org/pdf/1712.07122v1.pdf
PWC https://paperswithcode.com/paper/real-time-3d-reconstruction-on-construction
Repo
Framework

Characterization of Deterministic and Probabilistic Sampling Patterns for Finite Completability of Low Tensor-Train Rank Tensor

Title Characterization of Deterministic and Probabilistic Sampling Patterns for Finite Completability of Low Tensor-Train Rank Tensor
Authors Morteza Ashraphijuo, Xiaodong Wang
Abstract In this paper, we analyze the fundamental conditions for low-rank tensor completion given the separation or tensor-train (TT) rank, i.e., ranks of unfoldings. We exploit the algebraic structure of the TT decomposition to obtain the deterministic necessary and sufficient conditions on the locations of the samples to ensure finite completability. Specifically, we propose an algebraic geometric analysis on the TT manifold that can incorporate the whole rank vector simultaneously in contrast to the existing approach based on the Grassmannian manifold that can only incorporate one rank component. Our proposed technique characterizes the algebraic independence of a set of polynomials defined based on the sampling pattern and the TT decomposition, which is instrumental to obtaining the deterministic condition on the sampling pattern for finite completability. In addition, based on the proposed analysis, assuming that the entries of the tensor are sampled independently with probability $p$, we derive a lower bound on the sampling probability $p$, or equivalently, the number of sampled entries that ensures finite completability with high probability. Moreover, we also provide the deterministic and probabilistic conditions for unique completability.
Tasks
Published 2017-03-22
URL http://arxiv.org/abs/1703.07698v1
PDF http://arxiv.org/pdf/1703.07698v1.pdf
PWC https://paperswithcode.com/paper/characterization-of-deterministic-and
Repo
Framework
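
The separation/TT rank named in the abstract is simply the vector of matrix ranks of the tensor's unfoldings, which is worth seeing concretely (a numpy sketch, ours):

```python
import numpy as np

def tt_rank_vector(T):
    """Ranks of the unfoldings of a d-way tensor: the i-th unfolding flattens
    the first i modes into rows and the remaining modes into columns."""
    dims = T.shape
    ranks = []
    for i in range(1, len(dims)):
        M = T.reshape(int(np.prod(dims[:i])), int(np.prod(dims[i:])))
        ranks.append(np.linalg.matrix_rank(M))
    return ranks

T = np.random.rand(4, 3, 5, 2)
print(tt_rank_vector(T))  # a generic tensor gives the maximal [4, 10, 2]
```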

Chaos-guided Input Structuring for Improved Learning in Recurrent Neural Networks

Title Chaos-guided Input Structuring for Improved Learning in Recurrent Neural Networks
Authors Priyadarshini Panda, Kaushik Roy
Abstract Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input structuring along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence improving the computational capability of a recurrent network. Using mean field analysis, we derive the impact of input structuring on the overall stability of the attractors formed. Our results indicate that input alignment determines the extent of intrinsic noise suppression and hence alters the attractor state stability, thereby controlling the network’s inference ability.
Tasks
Published 2017-12-26
URL http://arxiv.org/abs/1712.09206v3
PDF http://arxiv.org/pdf/1712.09206v3.pdf
PWC https://paperswithcode.com/paper/chaos-guided-input-structuring-for-improved
Repo
Framework
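
One way to make the idea concrete (our reading of "input structuring along the dominant chaotic projections", not the authors' code): simulate a chaotic rate network, take the dominant directions of its spontaneous activity via PCA, and project inputs onto them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 200, 1.5, 0.05, 2000          # gain g > 1: chaotic regime
W = g * rng.standard_normal((N, N)) / np.sqrt(N)

# 1) Record spontaneous (input-free) activity of the rate network.
x = 0.1 * rng.standard_normal(N)
traj = np.empty((T, N))
for t in range(T):
    x += dt * (-x + W @ np.tanh(x))
    traj[t] = x

# 2) Dominant chaotic projections = leading principal components.
traj -= traj.mean(axis=0)
_, _, Vt = np.linalg.svd(traj, full_matrices=False)
top_pcs = Vt[:10]                            # 10 leading directions

# 3) Structure an arbitrary input by projecting it onto those directions.
u = rng.standard_normal(N)
u_aligned = top_pcs.T @ (top_pcs @ u)        # component along the chaotic modes
```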

A Framework for Generalizing Graph-based Representation Learning Methods

Title A Framework for Generalizing Graph-based Representation Learning Methods
Authors Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan Kong, Theodore L. Willke, Hoda Eldardiry
Abstract Random walks are at the heart of many existing deep learning algorithms for graph data. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to node identity. In this work, we introduce the notion of attributed random walks which serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.1% while requiring on average 853 times less space than existing methods on a variety of graphs from several domains.
Tasks Representation Learning
Published 2017-09-14
URL http://arxiv.org/abs/1709.04596v1
PDF http://arxiv.org/pdf/1709.04596v1.pdf
PWC https://paperswithcode.com/paper/a-framework-for-generalizing-graph-based
Repo
Framework
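
The core trick, walking over attributes rather than node identities, fits in a few lines (an illustrative reimplementation; the names are ours, not the paper's API):

```python
import random

def attributed_walks(adj, node_attr, num_walks=10, walk_len=8, seed=0):
    """adj: {node: [neighbors]} (no isolated nodes); node_attr: {node: token}.
    Emits walks over attribute tokens rather than node ids, so the resulting
    skip-gram embeddings are not tied to node identity and can transfer
    to unseen nodes and graphs."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            cur, walk = start, [node_attr[start]]
            for _ in range(walk_len - 1):
                cur = rng.choice(adj[cur])
                walk.append(node_attr[cur])  # record the attribute, not the id
            walks.append(walk)
    return walks  # feed to any skip-gram model, as DeepWalk/node2vec do
```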

A Data-Driven Approach to Pre-Operative Evaluation of Lung Cancer Patients

Title A Data-Driven Approach to Pre-Operative Evaluation of Lung Cancer Patients
Authors Oleksiy Budilovsky, Golnaz Alipour, Andre Knoesen, Lisa Brown, Soheil Ghiasi
Abstract Lung cancer is the number one cause of cancer deaths. Many early-stage lung cancer patients have resectable tumors; however, their cardiopulmonary function needs to be properly evaluated before they are deemed operative candidates. Consequently, a subset of such patients is asked to undergo standard pulmonary function tests, such as cardiopulmonary exercise tests (CPET) or stair climbs, to have their pulmonary function evaluated. The standard tests are expensive, labor intensive, and sometimes ineffective due to co-morbidities, such as limited mobility. Recovering patients would benefit greatly from a device that can be worn at home, is simple to use, and is relatively inexpensive. Using advances in information technology, the goal is to design a continuous, inexpensive, mobile and patient-centric mechanism for evaluation of a patient’s pulmonary function. A light mobile mask is designed, fitted with CO2, O2, flow volume, and accelerometer sensors and tested on 18 subjects performing 15-minute exercises. The data collected from the device is stored in a cloud service and machine learning algorithms are used to train and predict a user’s activity. Several classification techniques are compared: k-Nearest Neighbor, Random Forest, Support Vector Machine, Artificial Neural Network, and Naive Bayes. One useful area of interest involves comparing a patient’s predicted activity levels, especially using only breath data, to those of a normal person, using the classification models.
Tasks
Published 2017-07-21
URL http://arxiv.org/abs/1707.08169v1
PDF http://arxiv.org/pdf/1707.08169v1.pdf
PWC https://paperswithcode.com/paper/a-data-driven-approach-to-pre-operative
Repo
Framework
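
The model-comparison step maps directly onto scikit-learn; a sketch with synthetic stand-in data in place of the mask's windowed sensor features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for windowed sensor features X and activity labels y.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "k-Nearest Neighbor": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```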

Probabilistic Models for Computerized Adaptive Testing

Title Probabilistic Models for Computerized Adaptive Testing
Authors Martin Plajner
Abstract In this paper we follow our previous research in the area of Computerized Adaptive Testing (CAT). We present three different methods for CAT. One of them, the item response theory, is a well established method, while the other two, Bayesian and neural networks, are new in the area of educational testing. In the first part of this paper, we present the concept of CAT and its advantages and disadvantages. We collected data from paper tests performed with grammar school students. We provide the summary of data used for our experiments in the second part. Next, we present three different model types for CAT. They are based on the item response theory, Bayesian networks, and neural networks. The general theory associated with each type is briefly explained and the utilization of these models for CAT is analyzed. Future research is outlined in the concluding part of the paper. It shows many interesting research paths that are important not only for CAT but also for other areas of artificial intelligence.
Tasks
Published 2017-03-26
URL http://arxiv.org/abs/1703.09794v1
PDF http://arxiv.org/pdf/1703.09794v1.pdf
PWC https://paperswithcode.com/paper/probabilistic-models-for-computerized
Repo
Framework
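
The item response theory side of CAT is compact enough to sketch: a standard 2PL response model with maximum-information item selection (illustrative; the paper's Bayesian-network and neural models are not shown):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL model: P(correct | ability theta, discrimination a, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information one item contributes at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

def next_item(theta_hat, items, asked):
    """Adaptive step: pick the unasked item most informative at the
    current ability estimate."""
    return max((i for i in range(len(items)) if i not in asked),
               key=lambda i: item_information(theta_hat, *items[i]))

items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]  # (a, b) pairs
print(next_item(theta_hat=0.3, items=items, asked={0}))
```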

Human and Machine Speaker Recognition Based on Short Trivial Events

Title Human and Machine Speaker Recognition Based on Short Trivial Events
Authors Miao Zhang, Xiaofei Kang, Yanqing Wang, Lantian Li, Zhiyuan Tang, Haisheng Dai, Dong Wang
Abstract Trivial events are ubiquitous in human-to-human conversations, e.g., cough, laugh and sniff. Compared to regular speech, these trivial events are usually short and unclear, thus generally regarded as not speaker discriminative, and so are largely ignored by present speaker recognition research. However, these trivial events are highly valuable in some particular circumstances such as forensic examination, as they are less subject to intentional change and so can be used to discover the genuine speaker from disguised speech. In this paper, we collect a trivial-event speech database that involves 75 speakers and 6 types of events, and report preliminary speaker recognition results on this database, by both human listeners and machines. Particularly, the deep feature learning technique recently proposed by our group is utilized to analyze and recognize the trivial events, which leads to acceptable equal error rates (EERs) despite the extremely short durations (0.2-0.5 seconds) of these events. Comparing different types of events, ‘hmm’ seems the most speaker discriminative.
Tasks Speaker Recognition
Published 2017-11-15
URL http://arxiv.org/abs/1711.05443v3
PDF http://arxiv.org/pdf/1711.05443v3.pdf
PWC https://paperswithcode.com/paper/human-and-machine-speaker-recognition-based
Repo
Framework
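
Equal error rate, the metric reported above, is the operating point where false-acceptance and false-rejection rates coincide; a standard computation from trial scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 = same speaker, 0 = different; scores: higher = more similar."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # point where FAR == FRR
    return (fpr[idx] + fnr[idx]) / 2

labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.6, 0.55, 0.2, 0.7, 0.4])
print(equal_error_rate(labels, scores))
```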

Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo Reconstruction

Title Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo Reconstruction
Authors Benjamin Hepp, Matthias Nießner, Otmar Hilliges
Abstract We introduce a new method that efficiently computes a set of viewpoints and trajectories for high-quality 3D reconstructions in outdoor environments. Our goal is to automatically explore an unknown area, and obtain a complete 3D scan of a region of interest (e.g., a large building). Images from a commodity RGB camera, mounted on an autonomously navigated quadcopter, are fed into a multi-view stereo reconstruction pipeline that produces high-quality results but is computationally expensive. In this setting, the scanning result is constrained by the restricted flight time of quadcopters. To this end, we introduce a novel optimization strategy that respects these constraints by maximizing the information gain from sparsely-sampled view points while limiting the total travel distance of the quadcopter. At the core of our method lies a hierarchical volumetric representation that allows the algorithm to distinguish between unknown, free, and occupied space. Furthermore, our information gain based formulation leverages this representation to handle occlusions in an efficient manner. In addition to the surface geometry, we utilize the free-space information to avoid obstacles and determine collision-free flight paths. Our tool can be used to specify the region of interest and to plan trajectories. We demonstrate our method by obtaining a number of compelling 3D reconstructions, and provide a thorough quantitative evaluation showing improvement over previous state-of-the-art and regular patterns.
Tasks
Published 2017-05-25
URL http://arxiv.org/abs/1705.09314v2
PDF http://arxiv.org/pdf/1705.09314v2.pdf
PWC https://paperswithcode.com/paper/plan3d-viewpoint-and-trajectory-optimization
Repo
Framework
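
A stripped-down sketch of the planning objective: greedily choose viewpoints that maximize the entropy of newly observed voxels while respecting the travel budget. Visibility, occlusion handling, and the hierarchical grid are simplified away; `visible` is a placeholder.

```python
import numpy as np

def entropy(p):
    """Binary entropy of occupancy probabilities (nats)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def plan_viewpoints(candidates, occ_prob, visible, budget):
    """candidates: list of (x, y, z) positions; occ_prob: per-voxel occupancy
    probabilities; visible(v): indices of voxels observable from viewpoint v."""
    chosen, pos, spent = [], None, 0.0
    seen = np.zeros(len(occ_prob), dtype=bool)
    remaining = list(candidates)
    while remaining:
        gains = []
        for v in remaining:
            idx = [i for i in visible(v) if not seen[i]]
            gains.append((entropy(occ_prob[idx]).sum() if idx else 0.0, v))
        g, best = max(gains)  # highest information gain
        cost = 0.0 if pos is None else float(np.linalg.norm(np.subtract(best, pos)))
        if g <= 0.0 or spent + cost > budget:
            break             # nothing left to learn, or out of flight budget
        chosen.append(best)
        seen[visible(best)] = True
        remaining.remove(best)
        pos, spent = best, spent + cost
    return chosen
```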

Run, skeleton, run: skeletal model in a physics-based simulation

Title Run, skeleton, run: skeletal model in a physics-based simulation
Authors Mikhail Pavlov, Sergey Kolesnikov, Sergey M. Plis
Abstract In this paper, we present our approach to solving a physics-based reinforcement learning challenge, “Learning to Run”, whose objective is to train a physiologically-based human model to navigate a complex obstacle course as quickly as possible. The environment is computationally expensive, has a high-dimensional continuous action space, and is stochastic. We benchmark state-of-the-art policy-gradient methods and test several improvements, such as layer normalization, parameter noise, and action and state reflecting, to stabilize training and improve its sample efficiency. We found that the Deep Deterministic Policy Gradient method is the most efficient method for this environment and that the improvements we introduced help to stabilize training. Learned models are able to generalize to new physical scenarios, e.g. different obstacle courses.
Tasks Policy Gradient Methods
Published 2017-11-18
URL http://arxiv.org/abs/1711.06922v2
PDF http://arxiv.org/pdf/1711.06922v2.pdf
PWC https://paperswithcode.com/paper/run-skeleton-run-skeletal-model-in-a-physics
Repo
Framework
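
Of the listed improvements, state and action reflecting is the easiest to show: every transition of the bilaterally symmetric runner also holds with left and right swapped, roughly doubling the data seen by the learner. A hedged sketch with hypothetical index layouts (the real Learning to Run observation/action specs differ and must be looked up):

```python
def mirror(vec, pairs):
    """Swap each (left, right) feature pair; other entries are unchanged."""
    out = list(vec)
    for l, r in pairs:
        out[l], out[r] = vec[r], vec[l]
    return out

# Hypothetical index layouts for illustration only.
STATE_PAIRS = [(4, 7), (5, 8), (6, 9)]      # left/right joint features
ACTION_PAIRS = [(0, 9), (1, 10), (2, 11)]   # left/right muscle activations

def store_with_reflection(buffer, s, a, r, s2):
    """Append the observed transition and its mirrored twin to the replay buffer."""
    buffer.append((s, a, r, s2))
    buffer.append((mirror(s, STATE_PAIRS), mirror(a, ACTION_PAIRS),
                   r, mirror(s2, STATE_PAIRS)))
```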