Paper Group ANR 265
Human Pose Estimation from Depth Images via Inference Embedded Multi-task Learning
Title | Human Pose Estimation from Depth Images via Inference Embedded Multi-task Learning |
Authors | Keze Wang, Shengfu Zhai, Hui Cheng, Xiaodan Liang, Liang Lin |
Abstract | Human pose estimation (i.e., locating the body parts / joints of a person) is a fundamental problem in human-computer interaction and multimedia applications. Significant progress has been made based on the development of depth sensors, i.e., accessible human pose prediction from still depth images [32]. However, most of the existing approaches to this problem involve several components/models that are independently designed and optimized, leading to suboptimal performances. In this paper, we propose a novel inference-embedded multi-task learning framework for predicting human pose from still depth images, which is implemented with a deep architecture of neural networks. Specifically, we handle two cascaded tasks: i) generating the heat (confidence) maps of body parts via a fully convolutional network (FCN); ii) seeking the optimal configuration of body parts based on the detected body part proposals via an inference built-in MatchNet [10], which measures the appearance and geometric kinematic compatibility of body parts and embodies the dynamic programming inference as an extra network layer. These two tasks are jointly optimized. Our extensive experiments show that the proposed deep model significantly improves the accuracy of human pose estimation over several other state-of-the-art methods or SDKs. We also release a large-scale dataset for comparison, which includes 100K depth images under challenging scenarios. |
Tasks | Multi-Task Learning, Pose Estimation, Pose Prediction |
Published | 2016-08-13 |
URL | http://arxiv.org/abs/1608.03932v1 |
PDF | http://arxiv.org/pdf/1608.03932v1.pdf |
PWC | https://paperswithcode.com/paper/human-pose-estimation-from-depth-images-via |
Repo | |
Framework | |
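The "dynamic programming inference as an extra network layer" mentioned in the abstract can be illustrated with a Viterbi-style pass over a kinematic chain of body-part proposals. The sketch below is a rough illustration only: the chain structure, unary (appearance) costs, and pairwise (kinematic) costs are invented and are not the paper's actual model.

```python
def chain_dp(unary, pairwise):
    """Pick one proposal per body part along a chain, minimizing total cost.

    unary[i][k]: cost of proposal k for part i.
    pairwise(i, j, k, l): kinematic compatibility cost between proposal k
    of part i and proposal l of part j (= i + 1 on a chain).
    Returns (best_total_cost, best_assignment)."""
    n = len(unary)
    # cost[i][l] = best cost of the prefix ending with proposal l at part i
    cost = [list(unary[0])]
    back = []
    for i in range(1, n):
        row, ptr = [], []
        for l in range(len(unary[i])):
            cands = [cost[i - 1][k] + pairwise(i - 1, i, k, l)
                     for k in range(len(unary[i - 1]))]
            k_best = min(range(len(cands)), key=cands.__getitem__)
            row.append(cands[k_best] + unary[i][l])
            ptr.append(k_best)
        cost.append(row)
        back.append(ptr)
    # backtrack from the cheapest final proposal
    l = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    best = cost[-1][l]
    assignment = [l]
    for ptr in reversed(back):
        l = ptr[l]
        assignment.append(l)
    assignment.reverse()
    return best, assignment
```

On a toy three-part chain whose pairwise cost heavily penalises mismatched proposal indices, the recursion selects the cheapest mutually consistent configuration rather than the cheapest proposal per part in isolation.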
Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions
Title | Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions |
Authors | Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, Lillian Lee |
Abstract | Changing someone’s opinion is arguably one of the most important challenges of social interaction. The underlying process proves difficult to study: it is hard to know how someone’s opinions are formed and whether and how someone’s views shift. Fortunately, ChangeMyView, an active community on Reddit, provides a platform where users present their own opinions and reasoning, invite others to contest them, and acknowledge when the ensuing discussions change their original views. In this work, we study these interactions to understand the mechanisms behind persuasion. We find that persuasive arguments are characterized by interesting patterns of interaction dynamics, such as participant entry-order and degree of back-and-forth exchange. Furthermore, by comparing similar counterarguments to the same opinion, we show that language factors play an essential role. In particular, the interplay between the language of the opinion holder and that of the counterargument provides highly predictive cues of persuasiveness. Finally, since even in this favorable setting people may not be persuaded, we investigate the problem of determining whether someone’s opinion is susceptible to being changed at all. For this more difficult task, we show that stylistic choices in how the opinion is expressed carry predictive power. |
Tasks | |
Published | 2016-02-02 |
URL | http://arxiv.org/abs/1602.01103v2 |
PDF | http://arxiv.org/pdf/1602.01103v2.pdf |
PWC | https://paperswithcode.com/paper/winning-arguments-interaction-dynamics-and |
Repo | |
Framework | |
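One kind of "interplay between the language of the opinion holder and that of the counterargument" the abstract points to is simple lexical overlap. The feature below is a hypothetical sketch of that idea, not the paper's actual feature set; the stopword list is likewise invented for illustration.

```python
# Illustrative stopword list; a real system would use a curated one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "are",
             "in", "that", "it"}

def content_words(text):
    """Lowercased tokens with edge punctuation stripped, minus stopwords."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS - {""}

def interplay_jaccard(opinion, reply):
    """Jaccard overlap of content words between an opinion and a reply."""
    o, r = content_words(opinion), content_words(reply)
    return len(o & r) / len(o | r) if o | r else 0.0
```

A high or low overlap score can then be fed to a classifier alongside interaction-dynamics features such as entry order and number of back-and-forth exchanges.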
A Computational Approach to Automatic Prediction of Drunk Texting
Title | A Computational Approach to Automatic Prediction of Drunk Texting |
Authors | Aditya Joshi, Abhijit Mishra, Balamurali AR, Pushpak Bhattacharyya, Mark Carman |
Abstract | Alcohol abuse may lead to unsociable behavior such as crime, drunk driving, or privacy leaks. We introduce automatic drunk-texting prediction as the task of identifying whether a text was written when under the influence of alcohol. We experiment with tweets labeled using hashtags as distant supervision. Our classifiers use a set of N-gram and stylistic features to detect drunk tweets. Our observations present the first quantitative evidence that text contains signals that can be exploited to detect drunk-texting. |
Tasks | |
Published | 2016-10-04 |
URL | http://arxiv.org/abs/1610.00879v1 |
PDF | http://arxiv.org/pdf/1610.00879v1.pdf |
PWC | https://paperswithcode.com/paper/a-computational-approach-to-automatic |
Repo | |
Framework | |
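The abstract mentions "N-gram and stylistic features." As a hypothetical sketch of what such extractors might look like (the specific features below are invented for illustration, not the paper's exact set):

```python
import re

def stylistic_features(tweet):
    """Toy stylistic cues: capitalisation ratio, exclamation count,
    and elongated words like 'heyyy' (3+ repeated characters)."""
    letters = [c for c in tweet if c.isalpha()]
    return {
        "capital_ratio": (sum(c.isupper() for c in letters) / len(letters)
                          if letters else 0.0),
        "n_exclaim": tweet.count("!"),
        "n_elongated": len(re.findall(r"(\w)\1{2,}", tweet)),
    }

def char_ngrams(tweet, n=3):
    """Character n-grams, a common feature for noisy social-media text."""
    s = tweet.lower()
    return [s[i:i + n] for i in range(len(s) - n + 1)]
```

Hashtag-based distant supervision would then provide the labels, with these feature vectors feeding a standard classifier.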
Nonparametric Regression with Adaptive Truncation via a Convex Hierarchical Penalty
Title | Nonparametric Regression with Adaptive Truncation via a Convex Hierarchical Penalty |
Authors | Asad Haris, Ali Shojaie, Noah Simon |
Abstract | We consider the problem of non-parametric regression with a potentially large number of covariates. We propose a convex, penalized estimation framework that is particularly well-suited for high-dimensional sparse additive models. The proposed approach combines appealing features of finite basis representation and smoothing penalties for non-parametric estimation. In particular, in the case of additive models, a finite basis representation provides a parsimonious representation for fitted functions but is not adaptive when component functions possess different levels of complexity. On the other hand, a smoothing spline type penalty on the component functions is adaptive but does not offer a parsimonious representation of the estimated function. The proposed approach simultaneously achieves parsimony and adaptivity in a computationally efficient framework. We demonstrate these properties through empirical studies on both real and simulated datasets. We show that our estimator converges at the minimax rate for functions within a hierarchical class. We further establish minimax rates for a large class of sparse additive models. The proposed method is implemented using an efficient algorithm that scales similarly to the Lasso with the number of covariates and the sample size. |
Tasks | |
Published | 2016-11-30 |
URL | https://arxiv.org/abs/1611.09972v4 |
PDF | https://arxiv.org/pdf/1611.09972v4.pdf |
PWC | https://paperswithcode.com/paper/nonparametric-regression-with-adaptive |
Repo | |
Framework | |
Computer Assisted Composition with Recurrent Neural Networks
Title | Computer Assisted Composition with Recurrent Neural Networks |
Authors | Christian Walder, Dongwoo Kim |
Abstract | Sequence modeling with neural networks has led to powerful models of symbolic music data. We address the problem of exploiting these models to reach creative musical goals, by combining them with human input. To this end we generalise previous work, which sampled Markovian sequence models under the constraint that the sequence belong to the language of a given finite state machine provided by the human. We consider more expressive non-Markov models, thereby requiring approximate sampling which we provide in the form of an efficient sequential Monte Carlo method. In addition we provide and compare with a beam search strategy for conditional probability maximisation. Our algorithms are capable of convincingly re-harmonising famous musical works. To demonstrate this we provide visualisations, quantitative experiments, a human listening test and audio examples. We find both the sampling and optimisation procedures to be effective, yet complementary in character. For the case of highly permissive constraint sets, we find that sampling is to be preferred due to the overly regular nature of the optimisation-based results. The generality of our algorithms permits countless other creative applications. |
Tasks | |
Published | 2016-12-01 |
URL | http://arxiv.org/abs/1612.00092v2 |
PDF | http://arxiv.org/pdf/1612.00092v2.pdf |
PWC | https://paperswithcode.com/paper/computer-assisted-composition-with-recurrent |
Repo | |
Framework | |
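The sequential Monte Carlo idea in the abstract — sampling sequences from a model while honouring a human-supplied constraint — can be sketched in a heavily simplified form. Everything below is invented for illustration (a toy uniform "model" over {0, 1}, a toy constraint, made-up particle counts); the paper's actual sampler targets non-Markov music models and finite-state-machine constraints.

```python
import random

def smc_sample(n_particles, length, next_probs, prefix_ok, seed=0):
    """Draw constrained sequences with a simple particle method.

    next_probs(prefix) -> {symbol: probability} from the sequence model.
    prefix_ok(prefix, length) -> whether this prefix can still satisfy
    the constraint. Proposals are restricted to allowed extensions, with
    an importance-weight correction, followed by multinomial resampling."""
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(length):
        new, weights = [], []
        for p in particles:
            allowed = {x: pr for x, pr in next_probs(p).items()
                       if prefix_ok(p + [x], length)}
            if not allowed:          # dead end: particle gets zero weight
                new.append(p)
                weights.append(0.0)
                continue
            x = rng.choices(list(allowed), weights=list(allowed.values()))[0]
            new.append(p + [x])
            # weight correction: mass the model put on allowed extensions
            weights.append(sum(allowed.values()))
        particles = rng.choices(new, weights=weights, k=n_particles)
    return particles
```

Restricting proposals to constraint-satisfying extensions (with the weight correction) is the "locally optimal" proposal choice; a cruder variant would propose freely and zero out violating particles.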
Can we reach Pareto optimal outcomes using bottom-up approaches?
Title | Can we reach Pareto optimal outcomes using bottom-up approaches? |
Authors | Victor Sanchez-Anguix, Reyhan Aydogan, Tim Baarslag, Catholijn M. Jonker |
Abstract | Traditionally, researchers in decision making have focused on attempting to reach Pareto Optimality using horizontal approaches, where optimality is calculated taking into account every participant at the same time. Sometimes, this may prove to be a difficult task (e.g., conflict, mistrust, no information sharing, etc.). In this paper, we explore the possibility of achieving Pareto Optimal outcomes in a group by using a bottom-up approach: discovering Pareto optimal outcomes by interacting in subgroups. We analytically show that Pareto optimal outcomes in a subgroup are also Pareto optimal in a supergroup of those agents in the case of strict, transitive, and complete preferences. Then, we empirically analyze the prospective usability and practicality of bottom-up approaches in a variety of decision making domains. |
Tasks | Decision Making |
Published | 2016-07-03 |
URL | http://arxiv.org/abs/1607.00695v1 |
PDF | http://arxiv.org/pdf/1607.00695v1.pdf |
PWC | https://paperswithcode.com/paper/can-we-reach-pareto-optimal-outcomes-using |
Repo | |
Framework | |
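The core check underlying the abstract's claim is Pareto dominance. A minimal sketch (with invented outcomes and utilities) of testing whether an outcome is Pareto optimal for a given group:

```python
def pareto_optimal(outcome, outcomes, utilities):
    """True if no other outcome is weakly better for every agent and
    strictly better for at least one. utilities: one {outcome: value}
    dict per agent in the group."""
    u = [ui[outcome] for ui in utilities]
    for o in outcomes:
        if o == outcome:
            continue
        v = [ui[o] for ui in utilities]
        if (all(vi >= uj for vi, uj in zip(v, u))
                and any(vi > uj for vi, uj in zip(v, u))):
            return False  # o Pareto-dominates outcome
    return True
```

With strict preferences, an outcome found Pareto optimal by a subgroup remains Pareto optimal when more agents are added — the bottom-up property the paper establishes analytically — since any supergroup dominator would already dominate within the subgroup.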
The Power of Side-information in Subgraph Detection
Title | The Power of Side-information in Subgraph Detection |
Authors | Arun Kadavankandy, Konstantin Avrachenkov, Laura Cottatellucci, Rajesh Sundaresan |
Abstract | In this work, we tackle the problem of hidden community detection. We consider Belief Propagation (BP) applied to the problem of detecting a hidden Erdős-Rényi (ER) graph embedded in a larger and sparser ER graph, in the presence of side-information. We derive two related algorithms based on BP to perform subgraph detection in the presence of two kinds of side-information. The first variant of side-information consists of a set of nodes, called cues, known to be from the subgraph. The second variant of side-information consists of a set of nodes that are cues with a given probability. It was shown in past works that BP without side-information fails to detect the subgraph correctly when an effective signal-to-noise ratio (SNR) parameter falls below a threshold. In contrast, in the presence of non-trivial side-information, we show that the BP algorithm achieves asymptotically zero error for any value of the SNR parameter. We validate our results through simulations on synthetic datasets as well as on a few real world networks. |
Tasks | Community Detection |
Published | 2016-11-10 |
URL | http://arxiv.org/abs/1611.04847v3 |
PDF | http://arxiv.org/pdf/1611.04847v3.pdf |
PWC | https://paperswithcode.com/paper/the-power-of-side-information-in-subgraph |
Repo | |
Framework | |
Latent Variable Discovery Using Dependency Patterns
Title | Latent Variable Discovery Using Dependency Patterns |
Authors | Xuhui Zhang, Kevin B. Korb, Ann E. Nicholson, Steven Mascaro |
Abstract | The causal discovery of Bayesian networks is an active and important research area, and it is based upon searching the space of causal models for those which can best explain a pattern of probabilistic dependencies shown in the data. However, some of those dependencies are generated by causal structures involving variables which have not been measured, i.e., latent variables. Some such patterns of dependency “reveal” themselves, in that no model based solely upon the observed variables can explain them as well as a model using a latent variable. That is what latent variable discovery is based upon. Here we search for such patterns systematically, so that they may be applied in latent variable discovery in a more rigorous fashion. |
Tasks | Causal Discovery |
Published | 2016-07-22 |
URL | http://arxiv.org/abs/1607.06617v1 |
PDF | http://arxiv.org/pdf/1607.06617v1.pdf |
PWC | https://paperswithcode.com/paper/latent-variable-discovery-using-dependency |
Repo | |
Framework | |
Verifiability of Argumentation Semantics
Title | Verifiability of Argumentation Semantics |
Authors | Ringo Baumann, Thomas Linsbichler, Stefan Woltran |
Abstract | Dung’s abstract argumentation theory is a widely used formalism to model conflicting information and to draw conclusions in such situations. Here, knowledge is represented by so-called argumentation frameworks (AFs) and the reasoning is done via semantics extracting acceptable sets. All reasonable semantics are based on the notion of conflict-freeness, which means that arguments are only jointly acceptable when they are not linked within the AF. In this paper, we study the question of which information on top of conflict-free sets is needed to compute extensions of a semantics at hand. We introduce a hierarchy of so-called verification classes specifying the required amount of information. We show that well-known standard semantics are exactly verifiable through a certain such class. Our framework also gives a means to study semantics lying in between known semantics, thus contributing to a more abstract understanding of the different features argumentation semantics offer. |
Tasks | Abstract Argumentation |
Published | 2016-03-31 |
URL | http://arxiv.org/abs/1603.09502v1 |
PDF | http://arxiv.org/pdf/1603.09502v1.pdf |
PWC | https://paperswithcode.com/paper/verifiability-of-argumentation-semantics |
Repo | |
Framework | |
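Conflict-freeness, the base notion the abstract builds its verification classes on, is simple to state operationally: a set of arguments is conflict-free if no argument in it attacks another (or itself). A brute-force enumeration sketch for small AFs (exponential in the number of arguments, so for illustration only):

```python
from itertools import chain, combinations

def conflict_free_sets(args, attacks):
    """All conflict-free sets of an AF.

    args: iterable of argument names.
    attacks: set of (attacker, target) pairs.
    A set is conflict-free iff it contains no internal attack."""
    subsets = chain.from_iterable(combinations(args, r)
                                  for r in range(len(args) + 1))
    return [set(s) for s in subsets
            if not any((a, b) in attacks for a in s for b in s)]
```

For the AF with arguments {a, b} and the single attack (a, b), the conflict-free sets are the empty set, {a}, and {b}; the paper's question is what further information on top of this collection a semantics needs in order to compute its extensions.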
Using Enthymemes to Fill the Gap between Logical Argumentation and Revision of Abstract Argumentation Frameworks
Title | Using Enthymemes to Fill the Gap between Logical Argumentation and Revision of Abstract Argumentation Frameworks |
Authors | Jean-Guy Mailly |
Abstract | In this paper, we present a preliminary work on an approach to fill the gap between logic-based argumentation and the numerous approaches to tackle the dynamics of abstract argumentation frameworks. Our idea is that, even when arguments and attacks are defined by means of a logical belief base, there may be some uncertainty about how accurate the content of an argument is, and hence about the presence (or absence) of attacks concerning it. We use enthymemes to illustrate this notion of uncertainty of arguments and attacks. Indeed, as argued in the literature, real arguments are often enthymemes instead of completely specified deductive arguments. This means that some parts of the pair (support, claim) may be missing because they are supposed to belong to some “common knowledge”, and then should be deduced by the agent that receives the enthymeme. But the perception that agents have of the common knowledge may be wrong, and then a first agent may state an enthymeme that her opponent is not able to decode in an accurate way. It is likely that the decoding of the enthymeme by the agent leads to mistaken attacks between this new argument and the existing ones. In this case, the agent can receive some information about attacks or arguments acceptance statuses which disagree with her argumentation framework. We exemplify a way to incorporate this new piece of information by means of existing works on the dynamics of abstract argumentation frameworks. |
Tasks | Abstract Argumentation |
Published | 2016-03-29 |
URL | http://arxiv.org/abs/1603.08789v1 |
PDF | http://arxiv.org/pdf/1603.08789v1.pdf |
PWC | https://paperswithcode.com/paper/using-enthymemes-to-fill-the-gap-between |
Repo | |
Framework | |
A Counterexample to the Forward Recursion in Fuzzy Critical Path Analysis Under Discrete Fuzzy Sets
Title | A Counterexample to the Forward Recursion in Fuzzy Critical Path Analysis Under Discrete Fuzzy Sets |
Authors | Matthew J. Liberatore |
Abstract | Fuzzy logic is an alternate approach for quantifying uncertainty relating to activity duration. The fuzzy version of the backward recursion has been shown to produce results that incorrectly amplify the level of uncertainty. However, the fuzzy version of the forward recursion has been widely proposed as an approach for determining the fuzzy set of critical path lengths. In this paper, the direct application of the extension principle leads to a proposition that must be satisfied in fuzzy critical path analysis. Using a counterexample it is demonstrated that the fuzzy forward recursion, when discrete fuzzy sets are used to represent activity durations, produces results that are not consistent with the theory presented. The problem is shown to be the application of the fuzzy maximum. Several methods presented in the literature are described and shown to provide results that are consistent with the extension principle. |
Tasks | |
Published | 2016-05-09 |
URL | http://arxiv.org/abs/1607.04583v1 |
PDF | http://arxiv.org/pdf/1607.04583v1.pdf |
PWC | https://paperswithcode.com/paper/a-counterexample-to-the-forward-recursion-in |
Repo | |
Framework | |
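The fuzzy maximum at the heart of the paper's counterexample comes from the extension principle: for discrete fuzzy sets A and B, the membership of z in max(A, B) is the supremum of min(mu_A(x), mu_B(y)) over all pairs with max(x, y) = z. A sketch, with invented toy membership functions (note this form assumes the two operands are non-interactive, which is precisely what fails in the forward recursion when network paths share activities):

```python
def fuzzy_max(A, B):
    """Extension-principle maximum of two discrete fuzzy sets.

    A, B: dicts mapping support values to membership grades in [0, 1].
    mu_C(z) = sup over {(x, y) : max(x, y) = z} of min(mu_A(x), mu_B(y))."""
    C = {}
    for x, ma in A.items():
        for y, mb in B.items():
            z = max(x, y)
            C[z] = max(C.get(z, 0.0), min(ma, mb))
    return C
```

For A = {1: 1.0, 2: 0.5} and B = {1: 0.6, 3: 1.0}, the pairwise enumeration yields {1: 0.6, 2: 0.5, 3: 1.0}.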
Adaptive imputation of missing values for incomplete pattern classification
Title | Adaptive imputation of missing values for incomplete pattern classification |
Authors | Zhun-Ga Liu, Quan Pan, Jean Dezert, Arnaud Martin |
Abstract | In the classification of incomplete patterns, the missing values can either play a crucial role in the class determination, or have only little influence (or even none) on the classification results according to the context. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. At first, we try to classify the object (incomplete pattern) based only on the available attribute values. As an underlying principle, we assume that the missing information is not crucial for the classification if a specific class for the object can be found using only the available information. In this case, the object is committed to this particular class. However, if the object cannot be classified without ambiguity, it means that the missing values play a key role in achieving an accurate classification. In this case, the missing values will be imputed based on the K-nearest neighbor (K-NN) and self-organizing map (SOM) techniques, and the edited pattern with the imputation is then classified. The (original or edited) pattern is respectively classified according to each training class, and the classification results represented by basic belief assignments are fused with proper combination rules for making the credal classification. The object is allowed to belong with different masses of belief to the specific classes and meta-classes (which are particular disjunctions of several single classes). The credal classification captures well the uncertainty and imprecision of classification, and effectively reduces the rate of misclassification thanks to the introduction of meta-classes. The effectiveness of the proposed method with respect to other classical methods is demonstrated based on several experiments using artificial and real data sets. |
Tasks | Imputation |
Published | 2016-02-08 |
URL | http://arxiv.org/abs/1602.02617v1 |
PDF | http://arxiv.org/pdf/1602.02617v1.pdf |
PWC | https://paperswithcode.com/paper/adaptive-imputation-of-missing-values-for |
Repo | |
Framework | |
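The two-stage logic in the abstract — classify on available attributes first, impute only when the result is ambiguous — can be sketched as follows. This is a stripped-down, hypothetical illustration: the 1-NN classifier, the ambiguity margin, and the mean-of-neighbors imputation below stand in for the paper's belief-function machinery, SOM step, and credal combination rules.

```python
def dist(x, y, mask):
    """Euclidean distance over available (mask-true) attributes only."""
    return sum((a - b) ** 2 for a, b, m in zip(x, y, mask) if m) ** 0.5

def classify(x, train, labels, mask, margin=1.0):
    """1-NN on available attributes; returns (label, ambiguous?).
    Ambiguous when the two nearest neighbors disagree and are close."""
    d = sorted((dist(x, t, mask), l) for t, l in zip(train, labels))
    ambiguous = (len(d) > 1 and d[1][0] - d[0][0] < margin
                 and d[0][1] != d[1][1])
    return d[0][1], ambiguous

def knn_impute(x, mask, train, k=2):
    """Fill missing attributes with the mean of the k nearest neighbors
    (nearness measured on the available attributes)."""
    nn = sorted(train, key=lambda t: dist(x, t, mask))[:k]
    return [xi if m else sum(t[i] for t in nn) / k
            for i, (xi, m) in enumerate(zip(x, mask))]
```

In use: run `classify` on the incomplete pattern; if it is not ambiguous, commit to the class; otherwise call `knn_impute` and classify the edited pattern with the full mask.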
Robust SAR STAP via Kronecker Decomposition
Title | Robust SAR STAP via Kronecker Decomposition |
Authors | Kristjan Greenewald, Edmund Zelnio, Alfred Hero |
Abstract | This paper proposes a spatio-temporal decomposition for the detection of moving targets in multiantenna SAR. As a high resolution radar imaging modality, SAR detects and localizes non-moving targets accurately, giving it an advantage over lower resolution GMTI radars. Moving target detection is more challenging due to target smearing and masking by clutter. Space-time adaptive processing (STAP) is often used to remove the stationary clutter and enhance the moving targets. In this work, it is shown that the performance of STAP can be improved by modeling the clutter covariance as a space vs. time Kronecker product with low rank factors. Based on this model, a low-rank Kronecker product covariance estimation algorithm is proposed, and a novel separable clutter cancelation filter based on the Kronecker covariance estimate is introduced. The proposed method provides orders of magnitude reduction in the required number of training samples, as well as improved robustness to corruption of the training data. Simulation results and experiments using the Gotcha SAR GMTI challenge dataset are presented that confirm the advantages of our approach relative to existing techniques. |
Tasks | |
Published | 2016-05-05 |
URL | http://arxiv.org/abs/1605.01790v1 |
PDF | http://arxiv.org/pdf/1605.01790v1.pdf |
PWC | https://paperswithcode.com/paper/robust-sar-stap-via-kronecker-decomposition |
Repo | |
Framework | |
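The modeling gain in the abstract comes from structure: a clutter covariance over n_s spatial channels and n_t pulses has (n_s * n_t)^2 entries, but a space-vs-time Kronecker model A ⊗ B only has n_s^2 + n_t^2 free entries, which is why far fewer training samples suffice. A minimal Kronecker-product sketch (plain lists, illustration only — the paper additionally constrains the factors to be low rank):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists:
    kron(A, B)[i*q + k][j*r + l] = A[i][j] * B[k][l],
    where B is q x r."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]
```

For example, a 2x2 spatial factor and a 2x2 temporal factor determine a full 4x4 covariance from 8 numbers instead of 16.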
Syntax-Semantics Interaction Parsing Strategies. Inside SYNTAGMA
Title | Syntax-Semantics Interaction Parsing Strategies. Inside SYNTAGMA |
Authors | Daniel Christen |
Abstract | This paper discusses SYNTAGMA, a rule based NLP system addressing the tricky issues of syntactic ambiguity reduction and word sense disambiguation as well as providing innovative and original solutions for constituent generation and constraints management. To provide an insight into how it operates, the system’s general architecture and components, as well as its lexical, syntactic and semantic resources are described. After that, the paper addresses the mechanism that performs selective parsing through an interaction between syntactic and semantic information, leading the parser to a coherent and accurate interpretation of the input text. |
Tasks | Word Sense Disambiguation |
Published | 2016-01-21 |
URL | http://arxiv.org/abs/1601.05768v1 |
PDF | http://arxiv.org/pdf/1601.05768v1.pdf |
PWC | https://paperswithcode.com/paper/syntax-semantics-interaction-parsing |
Repo | |
Framework | |
What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering)
Title | What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering) |
Authors | Amritanshu Agrawal, Wei Fu, Tim Menzies |
Abstract | Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet allocation. When run on different datasets, LDA suffers from “order effects” i.e. different topics are generated if the order of training data is shuffled. Such order effects introduce a systematic error for any study. This error can lead to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method in which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA’s parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark); across different platforms (Linux, Macintosh) and for different kinds of LDAs (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE’s tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performances for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its “off-the-shelf” settings should now be deprecated. Also, in future, we should require SE papers that use LDA to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability. |
Tasks | |
Published | 2016-08-29 |
URL | http://arxiv.org/abs/1608.08176v4 |
PDF | http://arxiv.org/pdf/1608.08176v4.pdf |
PWC | https://paperswithcode.com/paper/what-is-wrong-with-topic-modeling-and-how-to |
Repo | |
Framework | |
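Differential Evolution, the optimizer LDADE wraps around LDA, is a simple population-based search: mutate by adding a scaled difference of two members to a third, cross over with the current member, and keep the trial if it scores no worse. A hypothetical miniature of that loop, minimizing a toy function in place of a topic-instability score (population size, F, CR, and the objective are illustrative choices, not the tool's actual settings):

```python
import random

def de_minimize(f, bounds, pop_size=10, F=0.5, CR=0.9, iters=50, seed=1):
    """Simplified Differential Evolution: rand/1 mutation with binomial
    crossover and greedy selection, clipped to the given bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutation + crossover: mutated coordinate with probability CR
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=f)

# Toy objective standing in for "LDA topic instability at these settings".
best = de_minimize(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                   [(-5, 5), (-5, 5)])
```

In LDADE the candidate vectors would instead encode LDA hyperparameters (e.g. number of topics, alpha, beta) and `f` would rerun LDA and score topic stability across shuffled data orders.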