Paper Group ANR 108
Domain Adaptation through Synthesis for Unsupervised Person Re-identification. A Survey of Hierarchy Identification in Social Networks. Planification par fusions incrémentales de graphes. Extending Neural Generative Conversational Model using External Knowledge Sources. Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classifi …
Domain Adaptation through Synthesis for Unsupervised Person Re-identification
Title | Domain Adaptation through Synthesis for Unsupervised Person Re-identification |
Authors | Slawomir Bak, Peter Carr, Jean-Francois Lalonde |
Abstract | Drastic variations in illumination across surveillance cameras make the person re-identification problem extremely challenging. Current large scale re-identification datasets have a significant number of training subjects, but lack diversity in lighting conditions. As a result, a trained model requires fine-tuning to become effective under an unseen illumination condition. To alleviate this problem, we introduce a new synthetic dataset that contains hundreds of illumination conditions. Specifically, we use 100 virtual humans illuminated with multiple HDR environment maps which accurately model realistic indoor and outdoor lighting. To achieve better accuracy in unseen illumination conditions we propose a novel domain adaptation technique that takes advantage of our synthetic data and performs fine-tuning in a completely unsupervised way. Our approach yields significantly higher accuracy than semi-supervised and unsupervised state-of-the-art methods, and is very competitive with supervised techniques. |
Tasks | Domain Adaptation, Person Re-Identification, Unsupervised Person Re-Identification |
Published | 2018-04-26 |
URL | http://arxiv.org/abs/1804.10094v1 |
http://arxiv.org/pdf/1804.10094v1.pdf | |
PWC | https://paperswithcode.com/paper/domain-adaptation-through-synthesis-for |
Repo | |
Framework | |
A Survey of Hierarchy Identification in Social Networks
Title | A Survey of Hierarchy Identification in Social Networks |
Authors | Denys Katerenchuk |
Abstract | Humans are social by nature. Throughout history, people have formed communities and built relationships. Most relationships with coworkers, friends, and family are developed during face-to-face interactions. These relationships are established through explicit means of communication, such as words, and implicit ones, such as intonation and body language. By analyzing human interactions we can derive information about the relationships and influence among conversation participants. However, with the development of the Internet, people started to communicate through text in online social networks. Interestingly, they brought their communication habits to the Internet. Many social network users form relationships with each other and establish communities with leaders and followers. Recognizing these hierarchical relationships is an important task because it will help to understand social networks and predict future trends, improve recommendations, better target advertisements, and improve national security by identifying leaders of anonymous terror groups. In this work, I provide an overview of current research in this area and present the state-of-the-art approaches to the problem of identifying hierarchical relationships in social networks. |
Tasks | |
Published | 2018-12-20 |
URL | http://arxiv.org/abs/1812.08425v1 |
http://arxiv.org/pdf/1812.08425v1.pdf | |
PWC | https://paperswithcode.com/paper/a-survey-of-hierarchy-identification-in |
Repo | |
Framework | |
Planification par fusions incrémentales de graphes
Title | Planification par fusions incrémentales de graphes |
Authors | Damien Pellier, lias. Belaidi |
Abstract | In this paper, we introduce a generic new model for distributed planning called “Distributed Planning Through Graph Merging” (DPGM). This model unifies the different steps of the distributed planning process into a single step. Our approach is based on a planning graph structure for the agent reasoning and a CSP mechanism for the individual plan extraction and the coordination. We assume that no agent can reach the global goal alone. Therefore the agents must cooperate, i.e., take into account potential positive interactions between their activities to reach their common shared goal. The originality of our model consists in considering as soon as possible, i.e., in the individual planning process, the positive and the negative interactions between agents’ activities in order to reduce the search cost of a global coordinated solution plan. |
Tasks | |
Published | 2018-10-19 |
URL | http://arxiv.org/abs/1810.08460v1 |
http://arxiv.org/pdf/1810.08460v1.pdf | |
PWC | https://paperswithcode.com/paper/planification-par-fusions-incrementales-de |
Repo | |
Framework | |
Extending Neural Generative Conversational Model using External Knowledge Sources
Title | Extending Neural Generative Conversational Model using External Knowledge Sources |
Authors | Prasanna Parthasarathi, Joelle Pineau |
Abstract | The use of connectionist approaches in conversational agents has been progressing rapidly due to the availability of large corpora. However, current generative dialogue models often lack coherence and are content-poor. This work proposes an architecture to incorporate unstructured knowledge sources to enhance next-utterance prediction in chit-chat generative dialogue models. We focus on Sequence-to-Sequence (Seq2Seq) conversational agents trained with the Reddit News dataset, and consider incorporating external knowledge from Wikipedia summaries as well as from the NELL knowledge base. Our experiments show faster training time and improved perplexity when leveraging external knowledge. |
Tasks | |
Published | 2018-09-14 |
URL | http://arxiv.org/abs/1809.05524v1 |
http://arxiv.org/pdf/1809.05524v1.pdf | |
PWC | https://paperswithcode.com/paper/extending-neural-generative-conversational |
Repo | |
Framework | |
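
The abstract above describes fusing encodings of external text (Wikipedia summaries, NELL facts) into a Seq2Seq next-utterance model. Below is a minimal PyTorch sketch of one plausible fusion strategy, concatenating a fixed knowledge vector to each decoder input; the module sizes, class name, and single-vector fusion are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class KnowledgeAugmentedSeq2Seq(nn.Module):
    """Toy Seq2Seq that conditions the decoder on an external knowledge vector.

    The knowledge vector would come from encoding retrieved text (e.g. a Wikipedia
    summary or NELL facts); treating it as a single precomputed input is an
    assumption for illustration, not the paper's exact design.
    """
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, know_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder sees the token embedding concatenated with the knowledge vector.
        self.decoder = nn.GRU(emb_dim + know_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_tokens, tgt_tokens, knowledge_vec):
        _, h = self.encoder(self.embed(src_tokens))      # context from dialogue history
        tgt_emb = self.embed(tgt_tokens)                  # (B, T, emb_dim)
        k = knowledge_vec.unsqueeze(1).expand(-1, tgt_emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([tgt_emb, k], dim=-1), h)
        return self.out(dec_out)                          # next-utterance logits

# Smoke test with random data.
model = KnowledgeAugmentedSeq2Seq(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 10)),
               torch.randint(0, 1000, (2, 12)),
               torch.randn(2, 128))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

In practice the knowledge vector would itself be produced by an encoder over the retrieved sentences rather than passed in directly.
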
Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classification: A Deep Learning Approach
Title | Automatic Large-Scale Data Acquisition via Crowdsourcing for Crosswalk Classification: A Deep Learning Approach |
Authors | Rodrigo F. Berriel, Franco Schmidt Rossi, Alberto F. de Souza, Thiago Oliveira-Santos |
Abstract | Correctly identifying crosswalks is an essential task for driving and mobility autonomy. Many crosswalk classification, detection and localization systems have been proposed in the literature over the years. These systems use different perspectives to tackle the crosswalk classification problem: satellite imagery, cockpit view (from the top of a car or behind the windshield), and pedestrian perspective. Most of the works in the literature are designed and evaluated using small and local datasets, i.e. datasets that present low diversity. Scaling to large datasets imposes a challenge for the annotation procedure. Moreover, there is still a need for cross-database experiments in the literature because it is usually hard to collect data in the same place and under the same conditions as the final application. In this paper, we present a crosswalk classification system based on deep learning. For that, crowdsourcing platforms, such as OpenStreetMap and Google Street View, are exploited to enable automatic training via automatic acquisition and annotation of a large-scale database. Additionally, this work proposes a comparison study of models trained using fully-automatic data acquisition and annotation against models that were partially annotated. Cross-database experiments were also included to show that the proposed methods are suitable for real-world applications. Our results show that the model trained on the fully-automatic database achieved high overall accuracy (94.12%), and that a statistically significant improvement (to 96.30%) can be achieved by manually annotating a specific part of the database. Finally, the results of the cross-database experiments show that both models are robust to many variations in images and scenarios, presenting consistent behavior. |
Tasks | |
Published | 2018-05-30 |
URL | http://arxiv.org/abs/1805.11970v1 |
http://arxiv.org/pdf/1805.11970v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-large-scale-data-acquisition-via |
Repo | |
Framework | |
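
The abstract above describes harvesting training data by pairing OpenStreetMap annotations with Google Street View imagery. The sketch below illustrates the general idea under simple assumptions: crossing nodes are fetched from the public Overpass API and turned into Street View Static API image URLs (which require an API key). The bounding box, tag choice, and labelling logic are illustrative, not the authors' exact pipeline.

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def fetch_crossing_nodes(south, west, north, east):
    """Query OpenStreetMap for pedestrian-crossing nodes inside a bounding box."""
    query = f"""
    [out:json][timeout:60];
    node["highway"="crossing"]({south},{west},{north},{east});
    out;
    """
    resp = requests.post(OVERPASS_URL, data={"data": query})
    resp.raise_for_status()
    return [(el["lat"], el["lon"]) for el in resp.json()["elements"]]

def street_view_url(lat, lon, api_key, size="640x640"):
    """Build a Google Street View Static API URL for a candidate positive sample."""
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat},{lon}&key={api_key}")

if __name__ == "__main__":
    # Illustrative bounding box; real coverage would be much larger.
    nodes = fetch_crossing_nodes(-20.33, -40.35, -20.28, -40.28)
    urls = [street_view_url(lat, lon, api_key="YOUR_KEY") for lat, lon in nodes[:10]]
    print(f"{len(nodes)} crossing nodes found; first image URLs:")
    print("\n".join(urls))
```
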
Spatially-weighted Anomaly Detection
Title | Spatially-weighted Anomaly Detection |
Authors | Minori Narita, Daiki Kimura, Ryuki Tachibana |
Abstract | Many types of anomaly detection methods have been proposed recently and have been applied to a wide variety of fields, including medical screening and production quality checking. Some methods have utilized images, and, in some cases, some anomalous images are known beforehand. However, this kind of information is dismissed by previous methods, because those methods can only utilize the normal pattern. Moreover, the previous methods suffer a decrease in accuracy due to negative effects from surrounding noise. In this study, we propose a spatially-weighted anomaly detection method (SPADE) that utilizes all of the known patterns and lessens the vulnerability to ambient noise by applying Grad-CAM, a visualization method for CNNs. We evaluated our method quantitatively using two datasets, the MNIST dataset with noise and a dataset based on a brief screening test for dementia. |
Tasks | Anomaly Detection |
Published | 2018-10-05 |
URL | http://arxiv.org/abs/1810.02607v1 |
http://arxiv.org/pdf/1810.02607v1.pdf | |
PWC | https://paperswithcode.com/paper/spatially-weighted-anomaly-detection |
Repo | |
Framework | |
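
The method above combines an anomaly map with a Grad-CAM attention map so that evidence in noisy, irrelevant regions is down-weighted. A minimal NumPy sketch of that weighting step follows; the per-pixel error map and the normalisation scheme are assumptions for illustration, and the Grad-CAM map itself would come from a separately trained CNN.

```python
import numpy as np

def spatially_weighted_score(anomaly_map, grad_cam_map, eps=1e-8):
    """Aggregate a per-pixel anomaly map using a Grad-CAM map as spatial weights.

    anomaly_map : (H, W) per-pixel anomaly evidence, e.g. reconstruction error.
    grad_cam_map: (H, W) class-activation map from a CNN; higher = more relevant.
    Returns a scalar score in which background noise contributes little.
    """
    weights = grad_cam_map - grad_cam_map.min()
    weights = weights / (weights.sum() + eps)       # normalise to a spatial distribution
    return float((anomaly_map * weights).sum())     # weighted average of the evidence

# Toy example: the true anomaly lies inside the region the CNN attends to.
err = np.zeros((8, 8)); err[2, 2] = 5.0; err[7, 7] = 5.0   # real defect + border noise
cam = np.zeros((8, 8)); cam[1:4, 1:4] = 1.0                 # attention around the defect
print(spatially_weighted_score(err, cam))   # noise at (7, 7) is suppressed
```
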
Faithful Semantical Embedding of a Dyadic Deontic Logic in HOL
Title | Faithful Semantical Embedding of a Dyadic Deontic Logic in HOL |
Authors | Christoph Benzmüller, Ali Farjami, Xavier Parent |
Abstract | A shallow semantical embedding of a dyadic deontic logic by Carmo and Jones in classical higher-order logic is presented. This embedding is proven sound and complete, that is, faithful. The work presented here provides the theoretical foundation for the implementation and automation of dyadic deontic logic within off-the-shelf higher-order theorem provers and proof assistants. |
Tasks | |
Published | 2018-02-23 |
URL | http://arxiv.org/abs/1802.08454v2 |
http://arxiv.org/pdf/1802.08454v2.pdf | |
PWC | https://paperswithcode.com/paper/faithful-semantical-embedding-of-a-dyadic |
Repo | |
Framework | |
Spurious samples in deep generative models: bug or feature?
Title | Spurious samples in deep generative models: bug or feature? |
Authors | Balázs Kégl, Mehdi Cherti, Akın Kazakçı |
Abstract | Traditional wisdom in the generative modeling literature is that spurious samples that a model can generate are errors and should be avoided. Recent research, however, has shown interest in studying or even exploiting such samples instead of eliminating them. In this paper, we ask whether such samples can be eliminated altogether without sacrificing coverage of the generating distribution. For the class of models we consider, we experimentally demonstrate that this is not possible without losing the ability to model some of the test samples. While our results need to be confirmed on a broader set of model families, these initial findings provide partial evidence that spurious samples share structural properties with the learned dataset, which, in turn, suggests they are not simply errors but a feature of deep generative nets. |
Tasks | |
Published | 2018-10-03 |
URL | http://arxiv.org/abs/1810.01876v1 |
http://arxiv.org/pdf/1810.01876v1.pdf | |
PWC | https://paperswithcode.com/paper/spurious-samples-in-deep-generative-models |
Repo | |
Framework | |
Unseen Word Representation by Aligning Heterogeneous Lexical Semantic Spaces
Title | Unseen Word Representation by Aligning Heterogeneous Lexical Semantic Spaces |
Authors | Victor Prokhorov, Mohammad Taher Pilehvar, Dimitri Kartsaklis, Pietro Lio, Nigel Collier |
Abstract | Word embedding techniques heavily rely on the abundance of training data for individual words. Given the Zipfian distribution of words in natural language texts, a large number of words do not usually appear frequently or at all in the training data. In this paper we put forward a technique that exploits the knowledge encoded in lexical resources, such as WordNet, to induce embeddings for unseen words. Our approach adapts graph embedding and cross-lingual vector space transformation techniques in order to merge lexical knowledge encoded in ontologies with that derived from corpus statistics. We show that the approach can provide consistent performance improvements across multiple evaluation benchmarks: in-vitro, on multiple rare word similarity datasets, and in-vivo, in two downstream text classification tasks. |
Tasks | Graph Embedding, Text Classification |
Published | 2018-11-12 |
URL | http://arxiv.org/abs/1811.04983v1 |
http://arxiv.org/pdf/1811.04983v1.pdf | |
PWC | https://paperswithcode.com/paper/unseen-word-representation-by-aligning |
Repo | |
Framework | |
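
The abstract above induces vectors for unseen words by embedding a lexical-resource graph (e.g. WordNet) and mapping that space into the corpus-based embedding space with a cross-lingual-style transformation. A common instance of such a transformation is orthogonal Procrustes, sketched below in NumPy; the use of Procrustes and the synthetic data are assumptions for illustration rather than the paper's exact method.

```python
import numpy as np

def learn_orthogonal_map(graph_vecs, corpus_vecs):
    """Orthogonal Procrustes: find orthogonal W minimising ||graph_vecs @ W - corpus_vecs||_F,
    using the shared (seen) vocabulary as anchor pairs."""
    u, _, vt = np.linalg.svd(graph_vecs.T @ corpus_vecs)
    return u @ vt

rng = np.random.default_rng(0)
d = 50
true_rotation = np.linalg.qr(rng.normal(size=(d, d)))[0]

# Seen words exist in both spaces and act as the training dictionary.
graph_seen = rng.normal(size=(1000, d))
corpus_seen = graph_seen @ true_rotation + 0.01 * rng.normal(size=(1000, d))

W = learn_orthogonal_map(graph_seen, corpus_seen)

# An unseen word only has a graph embedding; project it into the corpus space.
graph_unseen = rng.normal(size=(1, d))
induced = graph_unseen @ W
print(np.allclose(induced, graph_unseen @ true_rotation, atol=0.1))  # True
```

The orthogonality constraint preserves distances in the graph-embedding space, which is why this family of mappings is popular for aligning heterogeneous vector spaces.
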
Understanding Neural Pathways in Zebrafish through Deep Learning and High Resolution Electron Microscope Data
Title | Understanding Neural Pathways in Zebrafish through Deep Learning and High Resolution Electron Microscope Data |
Authors | Ishtar Nyawira, Kristi Bushman, Iris Qian, Annie Zhang |
Abstract | The tracing of neural pathways through large volumes of image data is an incredibly tedious and time-consuming process that significantly encumbers progress in neuroscience. We are exploring deep learning’s potential to automate segmentation of high-resolution scanning electron microscope (SEM) image data to remove that barrier. We have started with neural pathway tracing through 5.1GB of whole-brain serial-section slices from larval zebrafish collected by the Center for Brain Science at Harvard University. This kind of manual image segmentation requires years of careful work to properly trace the neural pathways in an organism as small as a zebrafish larva (approximately 5mm in total body length). In automating this process, we would vastly improve productivity, leading to faster data analysis and breakthroughs in understanding the complexity of the brain. We will build upon prior attempts to employ deep learning for automatic image segmentation extending methods for unconventional deep learning data. |
Tasks | Semantic Segmentation |
Published | 2018-08-31 |
URL | http://arxiv.org/abs/1809.00084v1 |
http://arxiv.org/pdf/1809.00084v1.pdf | |
PWC | https://paperswithcode.com/paper/understanding-neural-pathways-in-zebrafish |
Repo | |
Framework | |
Preferential Attachment Graphs with Planted Communities
Title | Preferential Attachment Graphs with Planted Communities |
Authors | Bruce Hajek, Suryanarayana Sankagiri |
Abstract | A variation of the preferential attachment random graph model of Barabási and Albert is defined that incorporates planted communities. The graph is built progressively, with new vertices attaching to the existing ones one-by-one. At every step, the incoming vertex is randomly assigned a label, which represents a community it belongs to. This vertex then chooses certain vertices as its neighbors, with the choice of each vertex being proportional to the degree of the vertex multiplied by an affinity depending on the labels of the new vertex and a potential neighbor. It is shown that the fraction of half-edges attached to vertices with a given label converges almost surely for some classes of affinity matrices. In addition, the empirical degree distribution for the set of vertices with a given label converges to a heavy tailed distribution, such that the tail decay parameter can be different for different communities. Our proof method may be of independent interest, both for the classical Barabási–Albert model and for other possible extensions. |
Tasks | |
Published | 2018-01-21 |
URL | http://arxiv.org/abs/1801.06816v2 |
http://arxiv.org/pdf/1801.06816v2.pdf | |
PWC | https://paperswithcode.com/paper/preferential-attachment-graphs-with-planted |
Repo | |
Framework | |
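
The abstract above specifies the generative process precisely: each arriving vertex draws a community label and attaches to existing vertices with probability proportional to degree times a label affinity. A compact sketch of that growth process is shown below; the two-community affinity matrix, the seed graph, and the parameter values are illustrative assumptions.

```python
import random

def planted_pa_graph(n, label_dist, affinity, m=2, seed=0):
    """Grow a preferential-attachment graph with planted communities.

    Each arriving vertex draws a label from `label_dist`; it then picks up to m
    neighbours, each existing vertex u being chosen with probability proportional
    to degree(u) * affinity[new_label][label(u)].
    """
    rng = random.Random(seed)
    k = len(label_dist)
    labels = [rng.choices(range(k), weights=label_dist)[0] for _ in range(2)]
    degree = [1, 1]
    edges = [(0, 1)]                                    # tiny seed graph (assumption)
    for v in range(2, n):
        lab = rng.choices(range(k), weights=label_dist)[0]
        w = [degree[u] * affinity[lab][labels[u]] for u in range(v)]
        chosen = set(rng.choices(range(v), weights=w, k=m))
        for u in chosen:
            edges.append((v, u))
            degree[u] += 1
        labels.append(lab)
        degree.append(len(chosen))
    return edges, labels

edges, labels = planted_pa_graph(
    n=5000, label_dist=[0.5, 0.5],
    affinity=[[1.0, 0.2], [0.2, 1.0]])                  # assortative affinities
print(f"{len(edges)} edges, community sizes: {labels.count(0)} / {labels.count(1)}")
```

With the identity affinity matrix the process reduces to the classical Barabási–Albert model restricted within communities; the paper's results concern the limiting half-edge fractions and per-community degree tails of such processes.
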
Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification
Title | Multi-task Mid-level Feature Alignment Network for Unsupervised Cross-Dataset Person Re-Identification |
Authors | Shan Lin, Haoliang Li, Chang-Tsun Li, Alex Chichung Kot |
Abstract | Most existing person re-identification (Re-ID) approaches follow a supervised learning framework, in which a large number of labelled matching pairs are required for training. Such a setting severely limits their scalability in real-world applications where no labelled samples are available during the training phase. To overcome this limitation, we develop a novel unsupervised Multi-task Mid-level Feature Alignment (MMFA) network for the unsupervised cross-dataset person re-identification task. Under the assumption that the source and target datasets share the same set of mid-level semantic attributes, our proposed model can be jointly optimised under the person’s identity classification and the attribute learning task with a cross-dataset mid-level feature alignment regularisation term. In this way, the learned feature representation can be better generalised from one dataset to another, which further improves person re-identification accuracy. Experimental results on four benchmark datasets demonstrate that our proposed method outperforms the state-of-the-art baselines. |
Tasks | Person Re-Identification |
Published | 2018-07-04 |
URL | http://arxiv.org/abs/1807.01440v2 |
http://arxiv.org/pdf/1807.01440v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-task-mid-level-feature-alignment |
Repo | |
Framework | |
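
MMFA's training objective, as described above, combines identity classification and attribute prediction on the labelled source set with a regulariser that aligns mid-level feature distributions between source and target. The PyTorch sketch below shows one way such a combined loss could look, using a linear-kernel MMD term as the alignment penalty; the toy backbone, attribute head, and loss weight are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_mmd(x, y):
    """Squared distance between batch mean feature vectors (a linear-kernel MMD
    estimate), used here as the cross-dataset alignment regulariser."""
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()

class ToyMMFA(nn.Module):
    def __init__(self, feat_dim=256, n_ids=751, n_attrs=27):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.id_head = nn.Linear(feat_dim, n_ids)       # identity classification
        self.attr_head = nn.Linear(feat_dim, n_attrs)   # binary attribute prediction

    def loss(self, src_x, src_ids, src_attrs, tgt_x, lam=1.0):
        f_src, f_tgt = self.backbone(src_x), self.backbone(tgt_x)
        l_id = F.cross_entropy(self.id_head(f_src), src_ids)
        l_attr = F.binary_cross_entropy_with_logits(self.attr_head(f_src), src_attrs)
        l_align = linear_mmd(f_src, f_tgt)              # unlabelled target enters here
        return l_id + l_attr + lam * l_align

model = ToyMMFA()
loss = model.loss(torch.randn(8, 2048), torch.randint(0, 751, (8,)),
                  torch.rand(8, 27), torch.randn(8, 2048))
loss.backward()
print(float(loss))
```
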
Weakly-supervised Neural Semantic Parsing with a Generative Ranker
Title | Weakly-supervised Neural Semantic Parsing with a Generative Ranker |
Authors | Jianpeng Cheng, Mirella Lapata |
Abstract | Weakly-supervised semantic parsers are trained on utterance-denotation pairs, treating logical forms as latent. The task is challenging due to the large search space and spuriousness of logical forms. In this paper we introduce a neural parser-ranker system for weakly-supervised semantic parsing. The parser generates candidate tree-structured logical forms from utterances using clues of denotations. These candidates are then ranked based on two criteria: their likelihood of executing to the correct denotation, and their agreement with the utterance semantics. We present a scheduled training procedure to balance the contribution of the two objectives. Furthermore, we propose to use a neurally encoded lexicon to inject prior domain knowledge into the model. Experiments on three Freebase datasets demonstrate the effectiveness of our semantic parser, achieving results within the state-of-the-art range. |
Tasks | Semantic Parsing |
Published | 2018-08-23 |
URL | http://arxiv.org/abs/1808.07625v1 |
http://arxiv.org/pdf/1808.07625v1.pdf | |
PWC | https://paperswithcode.com/paper/weakly-supervised-neural-semantic-parsing |
Repo | |
Framework | |
Hierarchical Clustering better than Average-Linkage
Title | Hierarchical Clustering better than Average-Linkage |
Authors | Moses Charikar, Vaggos Chatziafratis, Rad Niazadeh |
Abstract | Hierarchical Clustering (HC) is a widely studied problem in exploratory data analysis, usually tackled by simple agglomerative procedures like average-linkage, single-linkage or complete-linkage. In this paper we focus on two objectives, introduced recently to give insight into the performance of average-linkage clustering: a similarity based HC objective proposed by [Moseley and Wang, 2017] and a dissimilarity based HC objective proposed by [Cohen-Addad et al., 2018]. In both cases, we present tight counterexamples showing that average-linkage cannot obtain better than 1/3 and 2/3 approximations respectively (in the worst-case), settling an open question raised in [Moseley and Wang, 2017]. This matches the approximation ratio of a random solution, raising a natural question: can we beat average-linkage for these objectives? We answer this in the affirmative, giving two new algorithms based on semidefinite programming with provably better guarantees. |
Tasks | |
Published | 2018-08-07 |
URL | http://arxiv.org/abs/1808.02227v1 |
http://arxiv.org/pdf/1808.02227v1.pdf | |
PWC | https://paperswithcode.com/paper/hierarchical-clustering-better-than-average |
Repo | |
Framework | |
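
For context on the baseline being analysed, the snippet below sketches average-linkage agglomerative clustering on a similarity matrix and accumulates the Moseley-Wang objective, which rewards each pair i, j with w_ij times the number of leaves outside the subtree rooted at their least common ancestor. The small random instance and the direct O(n^3) bookkeeping are assumptions for illustration; the paper's SDP-based algorithms are not reproduced here.

```python
import numpy as np

def average_linkage_moseley_wang(sim):
    """Run average-linkage on a similarity matrix and return the Moseley-Wang
    objective  sum_{i<j} w_ij * (n - |leaves(T[i v j])|)  of the resulting tree."""
    n = sim.shape[0]
    clusters = [[i] for i in range(n)]
    objective = 0.0
    while len(clusters) > 1:
        # Merge the pair of clusters with the largest average inter-cluster similarity.
        best, best_pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                avg = sim[np.ix_(clusters[a], clusters[b])].mean()
                if avg > best:
                    best, best_pair = avg, (a, b)
        a, b = best_pair
        merged = clusters[a] + clusters[b]
        # Every pair split across this merge has its LCA at the newly created node.
        objective += sim[np.ix_(clusters[a], clusters[b])].sum() * (n - len(merged))
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]
    return objective

rng = np.random.default_rng(0)
w = rng.random((20, 20)); w = (w + w.T) / 2; np.fill_diagonal(w, 0.0)
print(average_linkage_moseley_wang(w))
```
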
Can Deep Learning Relax Endomicroscopy Hardware Miniaturization Requirements?
Title | Can Deep Learning Relax Endomicroscopy Hardware Miniaturization Requirements? |
Authors | Saeed Izadi, Kathleen P. Moriarty, Ghassan Hamarneh |
Abstract | Confocal laser endomicroscopy (CLE) is a novel imaging modality that provides in vivo histological cross-sections of examined tissue. Recently, attempts have been made to develop miniaturized in vivo imaging devices, specifically confocal laser microscopes, for both clinical and research applications. However, current implementations of miniature CLE components, such as confocal lenses, compromise image resolution, signal-to-noise ratio, or both, which negatively impacts the utility of in vivo imaging. In this work, we demonstrate that software-based techniques can be used to recover lost information due to endomicroscopy hardware miniaturization and reconstruct images of higher resolution. Particularly, a densely connected convolutional neural network is used to reconstruct a high-resolution CLE image from a low-resolution input. In the proposed network, each layer is directly connected to all subsequent layers, which results in an effective combination of low-level and high-level features and efficient information flow throughout the network. To train and evaluate our network, we use a dataset of 181 high-resolution CLE images. Both quantitative and qualitative results indicate superiority of the proposed network compared to traditional interpolation techniques and competing learning-based methods. This work demonstrates that software-based super-resolution is a viable approach to compensate for loss of resolution due to endoscopic hardware miniaturization. |
Tasks | Super-Resolution |
Published | 2018-06-21 |
URL | http://arxiv.org/abs/1806.08338v1 |
http://arxiv.org/pdf/1806.08338v1.pdf | |
PWC | https://paperswithcode.com/paper/can-deep-learning-relax-endomicroscopy |
Repo | |
Framework | |
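
The network described above is densely connected: each layer receives the feature maps of all preceding layers. A minimal PyTorch dense block in that spirit, mapping a low-resolution CLE patch to a residual correction, is sketched below; the layer count, growth rate, and residual output are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer takes the concatenation of all previous feature maps."""
    def __init__(self, in_ch=1, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth
        # Fuse all accumulated features back into a single-channel residual image.
        self.fuse = nn.Conv2d(ch, in_ch, kernel_size=3, padding=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))   # residual reconstruction

# A 64x64 grayscale CLE patch, assumed already upsampled to the target size.
net = DenseBlock()
print(net(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```

The dense connectivity gives later layers direct access to low-level detail, which is the property the abstract credits for combining low-level and high-level features efficiently.
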