- Why Swear? Analyzing and Inferring the Intentions of Vulgar Expressions. EMNLP 2018.
- The Remarkable Benefit of User-Level Aggregation for Lexical-based Population-Level Predictions. EMNLP 2018.
- A Cross-Platform and Cross-Interaction Study of User Personality based on Images on Twitter and Flickr. PLoS ONE 2018. [Abstract]
Assessing the predictive value of different social media platforms is important for understanding the variation in how users reveal themselves across multiple platforms. Most social media platforms allow users to interact in multiple ways: by posting content to the platform, liking others’ posts, or building a user profile. While prior studies offer insights into how language use differs across platforms, differences in image usage are less well understood. In this study, we analyzed how image content varies with user personality across three interaction types (posts, likes and profile images) and two platforms, using a unique data set of users who are active on both Twitter and Flickr. Usage patterns on these two social media platforms revealed different aspects of users’ personality. Cross-platform data fusion is thus shown to improve personality prediction performance.
- User-Level Race and Ethnicity Predictors from Twitter Text. COLING 2018. [Abstract]
User demographic inference from social media text has the potential to improve a range of downstream applications, including real-time passive polling or quantifying demographic bias. This study focuses on developing models for user-level race and ethnicity prediction. We introduce a data set of users who self-report their race/ethnicity through a survey, in contrast to previous approaches that use distantly supervised data or perceived labels. We develop predictive models from text which accurately predict a user’s membership in the four largest racial and ethnic groups with up to .884 AUC, and make these models available to the research community.
- Expressively vulgar: The socio-dynamics of vulgarity and its effects on sentiment analysis in social media. COLING 2018. [Abstract]
Vulgarity is a common linguistic expression used to perform several linguistic functions. Understanding its usage can shed light on both linguistic and psychological phenomena and benefit downstream natural language processing applications such as sentiment analysis. This study performs a large-scale, data-driven empirical analysis of vulgar words using social media data. We analyze the socio-cultural and pragmatic aspects of vulgarity using tweets from users with known demographics. Further, we collect sentiment ratings for vulgar tweets to study the relationship between the use of vulgar words and perceived sentiment, and show that explicitly modeling vulgar words can boost sentiment analysis performance.
- Controlling Human Perception of Basic User Traits. EMNLP 2017. [Abstract]
Much of our online communication is text-mediated and, lately, increasingly with automated agents. Unlike humans, these agents currently do not tailor their language to the type of person they are communicating with. In this pilot study, we measure the extent to which human perception of basic user trait information – gender and age – is controllable through text. Using automatic models of gender and age prediction, we estimate which tweets posted by a user are more likely to mis-characterize their traits. We perform multiple controlled crowdsourcing experiments in which we show that we can reduce the human prediction accuracy of gender to almost random – a drop in accuracy of over 20%. Our experiments show that it is practically feasible for multiple applications, such as text generation, text summarization or machine translation, to be tailored to specific traits and perceived as such.
- Beyond Binary Labels: Political Ideology Prediction of Twitter Users. ACL 2017. [Abstract]
Automatic political orientation prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users’ political ideology using a seven-point scale which enables us to identify politically moderate and neutral users – groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the groups of politically engaged users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
- Personality Driven Differences in Paraphrase Preference. In Workshop on Natural Language Processing and Computational Social Science (NLP+CSS). ACL, 2017. [Abstract]
Personality plays a decisive role in how people behave in different scenarios, including online social media. Researchers have used such data to study how personality can be predicted from language use. In this paper, we study phrase choice as a particular stylistic linguistic difference, as opposed to the mostly topical differences identified previously. Building on previous work on demographic preferences, we quantify differences in paraphrase choice from a massive Facebook data set with posts from over 115,000 users. We quantify the predictive power of phrase choice in user profiling and use phrase choice to study psycholinguistic hypotheses. This work is relevant to future applications that aim to personalize text generation to specific personality types.
- Personality Profiles of Users Sharing Animal-related Content on Social Media. Anthrozoos, 2017. [Abstract]
Animal preferences are thought to be linked to more salient psychological traits of people, and most research examining owner personality as a differentiating factor has obtained mixed results. The rise of social networks offers users a new medium in which to broadcast their preferences and activities, including those involving animals. In two studies, the first on Facebook status updates and the second on images shared on Twitter, we revisited the link between user Big Five personality traits and animal preference, specifically focusing on cats and dogs. We used automatic content analysis of text and images to unobtrusively measure preference for animals online using large data sets. Results from Study 1 indicated that those who mentioned ownership of a cat (by using the phrase ‘my cat’) in their status updates were more open to experience, introverted, neurotic and less conscientious when compared to the general population, while users mentioning ownership of a dog (by using ‘my dog’) were only less conscientious compared to the rest of the population. Study 2 found that users who featured either cat or dog images in their tweets were more neurotic, less conscientious and less agreeable than those who did not. In addition, posting images containing cats was specific to users higher in openness, while posting images featuring dogs was associated with users higher in extraversion. Taken together, these findings align with some previous findings on the relationship between owner personality and animal preference, additionally highlighting some social media specific behaviors.
- Predicting Emotional Word Ratings using Distributional Representations and Signed Clustering. EACL 2017. [Abstract]
Inferring the emotional content of words is important for text-based sentiment analysis, dialogue systems and psycholinguistics, but word ratings are expensive to collect at scale and across languages or domains. We develop a method that automatically extends word-level ratings to unrated words using signed clustering of vector space word representations along with affect ratings. We use our method to determine a word’s valence and arousal, which determine its position on the circumplex model of affect, the most popular dimensional model of emotion. Our method achieves superior out-of-sample word rating prediction on both affective dimensions across three different languages when compared to state-of-the-art word similarity based methods. Our method can assist building word ratings for new languages and improve downstream tasks such as sentiment analysis and emotion detection.
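The propagation idea can be illustrated with a much-simplified sketch. This toy version is not the paper's signed clustering: it estimates an unrated word's valence as a similarity-weighted average over rated seed words, and the 2-d "embeddings" and ratings are invented for illustration.

```python
import numpy as np

def propagate_ratings(seed_vecs, seed_ratings, query_vec):
    """Estimate a rating for an unrated word as the cosine-similarity-weighted
    average of rated seed words (negative similarities are clipped to zero).
    A toy stand-in for the paper's signed-clustering method."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = np.array([max(cos(v, query_vec), 0.0) for v in seed_vecs])
    weights = sims / sims.sum()
    return float(weights @ np.array(seed_ratings))

# Toy 2-d "embeddings": two positive-valence seeds cluster together,
# one negative-valence seed points the other way.
seeds = [np.array([1.0, 0.1]), np.array([0.9, 0.2]), np.array([-1.0, 0.1])]
ratings = [8.0, 7.5, 2.0]          # valence on a 1-9 scale
query = np.array([0.95, 0.15])     # unrated word near the positive cluster
print(round(propagate_ratings(seeds, ratings, query), 2))
```

The query word, lying between the two positive seeds, receives a valence close to theirs rather than to the negative seed's.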
- Studying Personality through the Content of Posted and Liked Images on Twitter. Web Science 2017. [Abstract]
Interacting with images through social media has become widespread due to ubiquitous Internet access and multimedia-enabled devices. Through images, users generally present their daily activities, preferences or interests. This study aims to identify the way and extent to which personality differences, measured using the Big Five model, are related to online image posting and liking. In two experiments, the larger consisting of ~1.5 million Twitter images both posted and liked by ~4,000 users, we extract interpretable semantic concepts using large-scale image content analysis and analyze differences specific to each personality trait. Predictive results show that image content can predict personality traits, and that there can be a significant performance gain from fusing the signal from both posted and liked images.
- Studying the Dark Triad of Personality using Twitter Behavior. CIKM 2016. [Abstract]
Interest in researching the darker traits of human nature is growing, especially in the context of increased social media usage, which allows users to express themselves to a wider online audience. We study the extent to which the standard model of dark personality – the dark triad of narcissism, psychopathy and Machiavellianism – is related to observable Twitter behavior such as platform usage, posted text and profile image choice. Our results show that we can map various behaviors to psychological theory and study new aspects related to social media usage. Finally, we build a machine learning algorithm that predicts the dark triad of personality in out-of-sample users with reliable accuracy.
- Analysing Biases in Human Perception of User Age and Gender from Text. ACL 2016. [Abstract]
User traits disclosed through written text, such as age and gender, can be used to personalize applications such as recommender systems or conversational agents. However, human perception of these traits is not perfectly aligned with reality. In this paper, we conduct a large-scale crowdsourcing experiment on guessing age and gender from tweets. We systematically analyze the quality and possible biases of these predictions. We identify the textual cues which lead to mis-assessments of traits or make workers more or less confident in their choice. Our study demonstrates that differences between real and perceived traits are noteworthy and elucidates inaccurately used stereotypes in human perception.
- Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective. PeerJ Computer Science (2016). [Abstract]
Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e. N-grams, and topics. Our models can predict the court’s decisions with a strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis.
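The text representation and decision rule described above can be sketched as follows. The tokenization is deliberately simplistic, and the example sentence and feature weights are invented for illustration, not taken from the trained models.

```python
from collections import Counter

def ngram_features(text, n_max=3):
    """Bag of contiguous word n-grams (n = 1..n_max), the text representation
    the paper uses; tokenization here is a naive lowercase whitespace split."""
    tokens = text.lower().split()
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

def linear_score(feats, weights):
    """Score of a linear classifier: positive -> predict 'violation'.
    The weights here are hypothetical, not learned coefficients."""
    return sum(count * weights.get(f, 0.0) for f, count in feats.items())

case = "the applicant was detained without judicial review"
weights = {"detained without": 1.2, "judicial review": -0.4}
feats = ngram_features(case)
print("violation" if linear_score(feats, weights) > 0 else "no violation")
```

In the actual study the weights would be learned from labelled judgments; the sketch only shows how n-gram counts feed a linear decision.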
- Sub-Story Detection in Twitter with Hierarchical Dirichlet Processes. Information Processing and Management (2016). [Abstract]
Social media has now become the de facto information source on real world events. The challenge, however, due to the high volume and velocity of social media streams, is how to follow all posts pertaining to a given event over time – a task referred to as story detection. Moreover, there are often several different stories pertaining to a given event, which we refer to as sub-stories, and the corresponding task of their automatic detection as sub-story detection. This paper proposes hierarchical Dirichlet processes (HDP), a probabilistic topic model, as an effective method for automatic sub-story detection. HDP can learn the sub-topics associated with sub-stories, which enables it to handle subtle variations among sub-stories. It is compared with state-of-the-art story detection approaches based on locality sensitive hashing and spectral clustering. We demonstrate the superior performance of HDP for sub-story detection on real world Twitter data sets using various evaluation measures. The ability of HDP to learn sub-topics helps it to recall sub-stories with high precision. This results in an improvement of up to 60% in the F-score of the HDP-based sub-story detection approach compared to standard story detection approaches. A similar performance improvement is also seen using an information theoretic evaluation measure proposed for the sub-story detection task. Another contribution of this paper is in demonstrating that considering the conversational structures within the Twitter stream can bring up to a 200% improvement in sub-story detection performance.
- Real Men don’t say ‘cute’: Using Automatic Language Analysis to Isolate Inaccurate Aspects of Stereotypes. Social Psychological and Personality Science (2016). [Abstract]
People associate certain behaviors with certain social groups. These stereotypical beliefs consist of both accurate and inaccurate associations. Using large-scale, data-driven methods with social media as a context, we isolate stereotypes by using verbal expression. Across four social categories – gender, age, education level, and political orientation – we identify words and phrases that lead people to incorrectly guess the social category of the writer. Although raters often correctly categorize authors, they overestimate the importance of some stereotype-congruent signals. The findings suggest that data-driven approaches might be a valuable and ecologically valid tool for identifying even subtle aspects of stereotypes and highlighting the facets that are exaggerated or misapplied.
- Exploring Stylistic Variation with Age and Income on Twitter. ACL 2016. [Abstract]
Writing style allows NLP tools to adjust to the traits of an author. In this paper, we explore the relation between stylistic and syntactic features and authors’ age and income. We confirm our hypothesis that for numerous feature types writing style is predictive of income even beyond age. We analyze the predictive power of writing style features in a regression task on two data sets of around 5,000 Twitter users each. Additionally, we use our validated features to study daily variations in writing style of users from distinct income groups. Temporal stylistic patterns not only provide novel psychological insight into user behavior, but are useful for future research and applications in social media.
- #Microposts2016 – 6th Workshop on ‘Making Sense of Microposts’. WWW 2016. [Abstract]
#Microposts2016, the 6th workshop on Making Sense of Microposts, is summarised by the sub-theme: big things come in small packages. The workshop serves as a forum to discuss and promote research on the generation, analysis and reuse of Microposts – small chunks of information published on social media and messaging platforms. Low effort and cost to publish Microposts gives a voice to all, across differences in expertise, socio-cultural, generational and economic spheres, covering a wide swathe of topics, posted in the moment and on the go, during events, crises and personal experiences. While the usual suspects, including Twitter, Facebook, Instagram and Pinterest continue to dominate, especially as services are merged or shared across platforms,
newer players such as WhatsApp, Vine, Meerkat and Yik Yak are growing in popularity, with increased access to fast, high-capacity networks and advanced small, personal devices. #Microposts2016 solicited participation from Computer Science and other relevant fields, with a focus on interdisciplinary work. Since 2015, the workshop has included a track dedicated to encouraging research employing methods for the analysis of Microposts in the Social Sciences.
- Analyzing Personality through Social Media Profile Picture Choice. ICWSM 2016. [Abstract]
The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.
- Discovering User Attribute Stylistic Differences via Paraphrasing. AAAI 2016. [Abstract]
User attribute prediction from social media text has proven successful and useful for downstream tasks. In previous studies, user trait differences have been limited primarily to the presence or absence of words that indicate topical preferences. In this study, we aim to find linguistic style distinctions across three different user attributes: gender, age and occupational class. By combining paraphrases with a simple yet effective method, we capture a wide set of stylistic differences that are exempt from topic bias. We show their predictive power in user profiling, conformity with human perception and psycholinguistic hypotheses, and potential use in generating natural language tailored to specific user traits.
- An Empirical Exploration of Moral Foundations Theory in Partisan News Sources. LREC 2016. [Abstract]
News sources frame issues in different ways in order to appeal to their readers or to control their perception. We present a large-scale study of news articles from partisan sources in the US across a variety of issues. We first establish that differences between the two sides exist by predicting the political leaning of unseen articles. Framing can be driven by the different types of morality that each group values. We highlight differences in the framing of news by building on Moral Foundations Theory, quantified using hand-crafted lexicons. Our results show that partisan sources frame political issues differently, both in terms of word usage and through the moral foundations they relate to.
- Modelling Valence and Arousal in Facebook posts. In Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA). NAACL, 2016. [Abstract]
Access to expressions of subjective personal posts has increased with the popularity of social media. However, most work in sentiment analysis focuses on predicting only valence from text, usually targeted at a product rather than at affective states. In this paper, we introduce a new data set of 2,895 social media posts rated by two psychologically trained annotators on two separate ordinal nine-point scales. These scales represent valence (or sentiment) and arousal (or intensity), which together define each post’s position on the circumplex model of affect, a well-established system for describing emotional states (Russell, 1980; Posner et al., 2005). The data set is used to train prediction models for each of the two dimensions from text, which achieve high predictive accuracy – correlating at r = .65 with valence and r = .85 with arousal annotations. Our data set offers a building block for a deeper study of personal affect as expressed in social media, with applications such as mental illness detection or automated large-scale psychological studies.
- Studying the temporal dynamics of word co-occurrences: An application to event detection. LREC 2016. [Abstract]
Streaming media provides a number of unique challenges for computational linguistics. This paper studies the temporal variation in word co-occurrence statistics, with application to event detection. We develop a spectral clustering approach to find groups of mutually informative terms occurring in discrete time frames. Experiments on large datasets of tweets show that these groups identify key real world events as they occur in time, despite no explicit supervision. The performance of our method rivals state-of-the-art methods for event detection on F-score, obtaining higher recall at the expense of precision.
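A minimal version of the clustering step might look like the following sketch: build a term-term affinity from co-occurrence within one time frame, then bipartition the terms with the sign of the Fiedler vector of the graph Laplacian. This is a basic unnormalized spectral variant, not the paper's exact method, and the vocabulary and "tweets" are toy data.

```python
import numpy as np

def cooccurrence_affinity(docs, vocab):
    """Symmetric term-term affinity: number of documents (tweets in one
    time frame) in which both terms appear."""
    A = np.zeros((len(vocab), len(vocab)))
    for doc in docs:
        present = [i for i, w in enumerate(vocab) if w in doc]
        for i in present:
            for j in present:
                if i != j:
                    A[i, j] += 1
    return A

def spectral_bipartition(A):
    """Split terms into two groups by the sign of the second-smallest
    eigenvector (Fiedler vector) of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1] >= 0

# Two event "stories": an earthquake and a football match, with one
# noisy tweet weakly linking them so the co-occurrence graph is connected.
vocab = ["quake", "tremor", "match", "goal"]
docs = [{"quake", "tremor"}, {"quake", "tremor"},
        {"match", "goal"}, {"match", "goal"}, {"quake", "goal"}]
groups = spectral_bipartition(cooccurrence_affinity(docs, vocab))
```

The Fiedler vector separates the two mutually informative term groups despite the noisy cross-link, which is the intuition behind using spectral methods here.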
- Studying User Income through Language, Behaviour and Affect in Social Media. PLoS ONE 10 (2015). [Abstract]
Automatically inferring user demographics from social media posts is useful both for social science research and for a range of downstream applications in marketing and politics. We present the first extensive study in which user behaviour on Twitter is used to build a predictive model of income. We apply non-linear methods for regression, i.e. Gaussian Processes, achieving strong correlation between predicted and actual user income. This allows us to shed light on the factors that characterise income on Twitter and to analyse their interplay with user emotions and sentiment, perceived psycho-demographics and language use expressed through the topics of their posts. Our analysis uncovers correlations between different feature categories and income: some reflect common beliefs, e.g. that higher perceived education and intelligence indicate higher earnings; some reflect known gender and age differences; while others are novel findings, e.g. that higher-income users express more fear and anger, whereas lower-income users more often express emotions and opinions.
- Analysing domain suitability of a sentiment lexicon by identifying distributionally bipolar words. In Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA). EMNLP, 2015. [Abstract]
Contemporary sentiment analysis approaches rely heavily on lexicon-based methods, mainly due to their simplicity, although the best empirical results can be achieved by more complex techniques. We introduce a method to assess the suitability of generic sentiment lexicons for a given domain, namely by identifying frequent bigrams in which a polar word switches polarity. Our bigrams are scored using Lexicographer’s Mutual Information, leveraging large automatically obtained corpora. Our score matches human perception of polarity, and our enhanced context-aware method demonstrates improvements in classification results. The method enhances the assessment of lexicon-based sentiment detection algorithms and can further be used to quantify ambiguous words.
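The bigram scoring can be sketched directly from its definition, LMI(a, b) = count(a, b) · log₂(p(a, b) / (p(a) · p(b))). The counts below are toy numbers, not statistics from the corpora used in the paper.

```python
import math
from collections import Counter

def lmi(bigram_counts, unigram_counts, n_tokens):
    """Lexicographer's Mutual Information for each bigram:
    LMI(a, b) = count(a, b) * log2( p(a, b) / (p(a) * p(b)) ).
    Scales plain PMI by the raw count, favouring frequent bigrams."""
    scores = {}
    for (a, b), c in bigram_counts.items():
        p_ab = c / n_tokens
        p_a = unigram_counts[a] / n_tokens
        p_b = unigram_counts[b] / n_tokens
        scores[(a, b)] = c * math.log2(p_ab / (p_a * p_b))
    return scores

# A polar word ("good") that switches polarity in one frequent bigram
# ("good grief") but keeps it in another ("good morning").
unigrams = Counter({"good": 50, "grief": 10, "morning": 40})
bigrams = Counter({("good", "grief"): 8, ("good", "morning"): 30})
scores = lmi(bigrams, unigrams, n_tokens=1000)
```

High-LMI bigrams containing a lexicon's polar words are exactly the candidates to inspect for domain-specific polarity switches.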
- An analysis of the user occupational class through Twitter content. ACL 2015. [Abstract]
Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user’s occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications.
- Analyzing crowdsourced assessment of user traits through Twitter posts. HCOMP 2015 (Work-in-Progress). [Abstract]
Social media allows any user to express themselves to the public through posting content. Using a crowdsourcing experiment, we aim to quantify and analyze which human attributes lead to better perceptions of the true identity of others. Using tweet content from a set of users with known age and gender information, we ask workers to rate their perception of these traits and we analyze those results in relation to the crowdsourcing workers’ age and gender. Results show that female workers are both more confident and more accurate at reporting gender, and workers in their thirties were most accurate but least confident for rating age. Our study is a first step in identifying the worker traits which contribute to a better understanding of others through their posted text content. Our findings help to identify the types of workers best suited for certain tasks.
- Mental Illness Detection at the World Well-Being Project for the CLPsych 2015 Shared Task. In Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality (CLPsych). NAACL, 2015. [Abstract]
This article is a system description and report on the submission of the World Well-Being Project from the University of Pennsylvania to the CLPsych 2015 shared task. The goal of the shared task was to automatically determine Twitter users who self-reported having one of two mental illnesses: post-traumatic stress disorder (PTSD) and depression. Our system employs user metadata and textual features derived from Twitter posts. To reduce the feature space and avoid data sparsity, we consider several word clustering approaches. We explore the use of linear classifiers based on different feature sets, as well as their combination using a linear ensemble. This method is agnostic of illness-specific features, such as lists of medicines, making it readily applicable in other scenarios. Our approach ranked second in all tasks on average precision and showed the best results at a .1 false positive rate.
- The Role of Personality, Age and Gender in Tweeting about Mental Illnesses. In Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality (CLPsych). NAACL, 2015. [Abstract]
Mental illnesses, such as depression and post-traumatic stress disorder (PTSD), are highly underdiagnosed globally. Populations sharing similar demographics and personality traits are known to be more at risk than others. In this study, we characterise the language use of users disclosing their mental illness on Twitter. Language-derived personality and demographic estimates show surprisingly strong performance in distinguishing users that tweet a diagnosis of depression or PTSD from random controls, reaching an area under the receiver operating characteristic curve – AUC – of around .8 in all our binary classification tasks. In fact, when distinguishing users disclosing depression from those disclosing PTSD, the single feature of estimated age shows nearly as strong performance (AUC = .806) as using thousands of topics (AUC = .819) or tens of thousands of n-grams (AUC = .812). We also find that differential language analyses, controlled for demographics, recover many symptoms associated with the mental illnesses in the clinical literature.
- Extracting socioeconomic patterns from the news: Modelling text and outlet importance jointly. In Workshop on Language Technologies and Computational Social Science (LACSS). ACL, 2014. [Abstract]
Information from news articles can be used to study correlations between textual discourse and socioeconomic patterns. This work focuses on the task of understanding how the words contained in news articles, as well as the news outlets themselves, may relate to a set of indicators such as economic sentiment or unemployment rates. The bilinear nature of the applied regression model facilitates jointly learning word and outlet importance, supervised by these indicators. By evaluating the predictive ability of the extracted features, we can also assess their relevance to the target socioeconomic phenomena. Our approach can therefore serve as an NLP tool particularly suitable for the computational social science community, as it can be used to interpret connections between vast amounts of textual content and measurable society-driven factors.
- Predicting and characterising user impact on Twitter. EACL 2014. [Abstract]
The open structure of online social networks and their uncurated nature give rise to problems of user credibility and influence. In this paper, we address the task of predicting the impact of Twitter users based only on features under their direct control, such as usage statistics and the text posted in their tweets. We approach the problem as regression and apply linear as well as nonlinear learning methods to predict a user impact score, estimated by combining the numbers of the user’s followers, followees and listings. The experimental results show that strong prediction performance is achieved, especially for models based on the Gaussian Processes framework. Hence, we can interpret various modelling components, transforming them into indirect ‘suggestions’ for impact boosting.
- Tools for mining non-stationary data – v2. Clustering models for discovery of regional and demographic variation – v2. Public Deliverable for Trendminer Project, 2014. [Abstract]
This document presents advanced research and software development work for Task 3.2, on tools for mining non-stationary data, and Task 3.3, on clustering models integrating regional and demographic information, with the aim of understanding streaming data. First, for modelling non-stationary data, a research experiment is presented on categorising and forecasting word frequency patterns using Gaussian Processes, with an emphasis on word periodicities. A new soft clustering method based on topic models is introduced, which learns topics and their temporal profiles jointly. For using regional and demographic user information, the predictive model presented in previous work (Samangooei et al., 2013) is extended and used to identify differences in voting intention between different regions of the United Kingdom and between genders. For discovering specific regional clusters, the soft clustering technique is extended to learn topics and their regional and temporal profiles jointly. Finally, the predictive and clustering models developed on social media data are applied to a news summary data set where richer linguistic features are also used.
- Temporal models of streaming social media data. PhD thesis, University of Sheffield, 2014. [Abstract]
There are significant temporal dependencies between online behaviour and real world activities. In text modelling particularly, these are usually ignored or at best dealt with in overly simplistic ways, such as assuming smooth variation with time. Social media is a new data source which presents collective behaviour much more richly than traditional sources such as newswire, with a finer time granularity, timely reflection of activities, multiple modalities and large volume. Analysing temporal patterns in this data is important in order to discover newly emerging topics, periodic occurrences, and correlation or causality with real world indicators or human behaviour patterns. With these opportunities come many challenges, both engineering (i.e. data volume and processing) and algorithmic, namely the inconsistency and short length of the messages and the presence of large amounts of messages irrelevant to our goal. Equipped with a better understanding of the dynamics of these complex temporal dependencies, tasks such as classification can be augmented to provide temporally aware responses.
In this thesis we model the temporal dynamics of social media data. We first show that temporality is an important characteristic of this type of data. Further comparisons and correlations with real-world indicators show that this data gives a timely reflection of real-world events. Our goal is to use these variations to discover emerging or recurring user behaviours. We consider both word use and user behaviour in social media. With these goals in mind, we adapt existing machine learning techniques and build novel ones. These span a wide range of models: from Markov models to regularised regression models, and from evolutionary spectral clustering, which models smooth temporal variation, to Gaussian Process regression, which can identify more complex temporal patterns.
We introduce approaches which discover and predict words, topics or behaviours that change over time or occur with some regularity. These are modelled for the first time in the NLP literature using Gaussian Processes. We demonstrate that we can effectively pick out patterns, including periodicities, and achieve state-of-the-art forecasting results. We show that this performance gain transfers, improving tasks which do not take temporal information into account. We further analyse how temporal variation in text can be used to discover and track new content, developing a model that exploits the variation in word co-occurrences for clustering over time. Several collection and processing tools, as well as datasets of social media data, have been developed and published as open-source software.
The thesis posits that temporal analysis of data, from social media in particular, provides us with insights into real-world dynamics. Incorporating this temporal information into other applications can benefit standard tasks in natural language processing and beyond.
- A temporal model of text periodicities using Gaussian Processes. EMNLP 2013. [Abstract]
Temporal variations of text are usually ignored in NLP applications. However, text use changes with time, which can affect many applications. In this paper we model periodic distributions of words over time. Focusing on hashtag frequency in Twitter, we first automatically identify the periodic patterns. We use this for regression in order to forecast the volume of a hashtag based on past data. We use Gaussian Processes, a state-of-the-art Bayesian non-parametric model, with a novel periodic kernel. We demonstrate this in a text classification setting, assigning the tweet hashtag based on the rest of its text. This method shows significant improvements over competitive baselines.
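As a rough illustration of this kind of model, the sketch below fits a Gaussian Process with an exp-sine-squared (periodic) covariance to a toy weekly-periodic series and forecasts the following week. The kernel form, hyperparameters and data are illustrative assumptions, not the paper's novel kernel or its Twitter setup.

```python
import numpy as np

def periodic_kernel(x1, x2, period=7.0, lengthscale=1.0):
    """Exp-sine-squared (periodic) covariance between two sets of time points."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2, **kern):
    """Standard GP regression posterior mean with a fixed periodic kernel."""
    K = periodic_kernel(x_train, x_train, **kern) + noise * np.eye(len(x_train))
    K_star = periodic_kernel(x_test, x_train, **kern)
    return K_star @ np.linalg.solve(K, y_train)

# Toy weekly-periodic "hashtag volume" series: observe 4 weeks, forecast week 5.
t_train = np.arange(28.0)
y_train = np.sin(2 * np.pi * t_train / 7.0)
t_test = np.arange(28.0, 35.0)
y_pred = gp_posterior_mean(t_train, y_train, t_test, period=7.0)
```

Because the toy signal is exactly periodic with the kernel's period, the posterior mean extrapolates it almost perfectly; a real hashtag series would need the period and lengthscale learned from data.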
- Mining user behaviours: A study of check-in patterns in Location Based Social Networks. Web Science 2013. [Abstract]
Understanding the patterns underlying human mobility is of essential importance to applications like recommender systems. In this paper we investigate the behaviour of around 10,000 frequent users of Location Based Social Networks (LBSNs), making use of their full movement patterns. We analyse the metadata associated with the whereabouts of the users, with emphasis on the types of places and their evolution over time. We uncover patterns in venue category usage across different temporal scales. Then, focusing on individual users, we apply this knowledge to two tasks: 1) clustering users based on their behaviour and 2) predicting users’ future movements. We thereby demonstrate, both qualitatively and quantitatively, that incorporating temporal regularities helps make better sense of user behaviour.
- Where’s @wally: A classification approach to geolocating users based on their social ties. HT 2013. [Abstract]
This paper presents an approach to geolocating users of online social networks based solely on their ‘friendship’ connections. We observe that users interact more regularly with those closer to themselves and hypothesise that, in many cases, a person’s social network is sufficient to reveal their location. The geolocation problem is formulated as a classification task, where the most likely city for a user without an explicit location is chosen amongst the known locations of their social ties. Our method uses an SVM classifier and a number of features that reflect different aspects and characteristics of Twitter user networks. The SVM classifier is trained and evaluated on a dataset of Twitter users with known locations, and outperforms a state-of-the-art method for geolocating users based on their social ties.
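The underlying intuition — pick a city among the known locations of a user's ties, trusting ties to more selective accounts — can be caricatured as follows. The logarithmic degree weighting and all names are hypothetical illustrations standing in for the paper's SVM features, not a replication of them.

```python
import math
from collections import defaultdict

def predict_city(friend_cities, friend_degrees):
    """Score each candidate city by summing contributions from the friends
    located there; friends with fewer connections count more, on the (assumed)
    intuition that a tie to a low-degree account is stronger geographic evidence."""
    scores = defaultdict(float)
    for friend, city in friend_cities.items():
        scores[city] += 1.0 / math.log(2.0 + friend_degrees[friend])
    return max(scores, key=scores.get)

# Hypothetical user: three moderate-degree friends in Sheffield versus
# two friends in London who follow thousands of accounts.
cities = {"a": "Sheffield", "b": "Sheffield", "c": "Sheffield",
          "d": "London", "e": "London"}
degrees = {"a": 40, "b": 150, "c": 60, "d": 5000, "e": 9000}
print(predict_city(cities, degrees))  # → Sheffield
```

In the paper's classification framing, scores like these would become per-city feature vectors fed to the SVM rather than being used directly as a decision rule.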
- Exploring venue-based city-to-city similarity measures. In Workshop on Urban Computing (UrbComp). SIGKDD 2013. [Abstract]
In this work we explore the use of incidentally generated social network data for the folksonomic characterization of cities by the types of amenities located within them. Using data collected about venue categories in various cities, we examine the effect of different granularities of spatial aggregation and data normalization when representing a city as a collection of its venues. We introduce three vector-based representations of a city, where aggregations of the venue categories are done within a grid structure, within the city’s municipal neighborhoods, and across the city as a whole. We apply our methods to a novel dataset consisting of Foursquare venue data from 17 cities across the United States, totaling over 1 million venues. Our preliminary investigation demonstrates that different assumptions about urban perception can lead to qualitative, yet distinctive, variations in the induced city description and categorization.
- Clustering models for discovery of regional and demographic variation. Public Deliverable for Trendminer Project, 2013.
- A user-centric model of voting intention from Social Media. ACL 2013. [Abstract]
Social Media contain a multitude of user opinions which can be used to predict real-world phenomena in many domains, including politics, finance and health. Most existing methods treat these problems as linear regression, learning to relate word frequencies and other simple features to a known response variable (e.g., voting intention polls or financial indicators). These techniques require very careful filtering of the input texts, as most Social Media posts are irrelevant to the task. In this paper, we present a novel approach which performs high-quality filtering automatically, by modelling not just words but also users, framed as a bilinear model with a sparse regulariser. We also consider the problem of modelling groups of related output variables, using a structured multi-task regularisation method. Our experiments on voting intention prediction demonstrate strong performance over large-scale input from Twitter on two distinct case studies, outperforming competitive baselines.
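A minimal sketch of the bilinear idea, fit by alternating least squares: with one weight vector fixed, the model is linear in the other, so each half-step is a closed-form solve. This toy uses a small ridge penalty in place of the paper's sparse regulariser and structured multi-task terms, and the synthetic data is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_words, n_days = 5, 8, 60

# Synthetic data: a daily user-by-word count matrix X_t and a response
# y_t = a^T X_t b (e.g. a poll value), with small observation noise.
a_true = rng.normal(size=n_users)
b_true = rng.normal(size=n_words)
X = rng.normal(size=(n_days, n_users, n_words))
y = np.einsum("u,tuw,w->t", a_true, X, b_true) + 0.01 * rng.normal(size=n_days)

# Alternating ridge regression: fixing the user weights makes the model
# linear in the word weights, and vice versa.
a = np.ones(n_users)
b = np.ones(n_words)
lam = 1e-3
for _ in range(50):
    Zb = np.einsum("u,tuw->tw", a, X)          # design matrix for word weights
    b = np.linalg.solve(Zb.T @ Zb + lam * np.eye(n_words), Zb.T @ y)
    Za = np.einsum("tuw,w->tu", X, b)          # design matrix for user weights
    a = np.linalg.solve(Za.T @ Za + lam * np.eye(n_users), Za.T @ y)

pred = np.einsum("u,tuw,w->t", a, X, b)
```

Each half-step cannot increase the squared error, so the alternation settles into a (local) optimum; the paper's sparse regulariser would additionally drive the weights of irrelevant users and words to zero, which is what performs the automatic filtering.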
- Trendminer: an architecture for real time analysis of social media text. In Workshop on Real-Time Analysis and Mining of Social Streams (RAMSS). ICWSM 2012. [Abstract]
The emergence of online social networks (OSNs) and the accompanying availability of large amounts of data pose a number of new natural language processing (NLP) and computational challenges. Data from OSNs is different to data from traditional sources (e.g. newswire): the texts are short, noisy and conversational. Another important issue is that the data arrives in real-time streams, needing immediate analysis that is grounded in time and context. In this paper we describe a new open-source framework for efficient text processing of streaming OSN data (available at www.trendminer-project.eu). Whilst researchers have made progress in adapting or creating text analysis tools for OSN data, a system to unify these tasks has yet to be built. Our system is focused on a real-world scenario where fast processing and accuracy are paramount. We use the MapReduce framework for distributed computing and present running times for our system in order to show that scaling to online scenarios is feasible. We describe the components of the system and evaluate their accuracy. Our system supports easy integration of future modules in order to extend its functionality.
- Unsupervised document zone identification using probabilistic graphical models. LREC 2012. [Abstract]
Document zone identification aims to automatically classify sequences of text-spans (e.g. sentences) within a document into predefined zone categories. Current approaches mostly rely on supervised machine learning methods, which require a large amount of annotated data that is often difficult and expensive to obtain. In order to overcome this bottleneck, we propose graphical models based on the popular Latent Dirichlet Allocation (LDA) model. The first model, which we call zoneLDA, aims to cluster the sentences into zone classes using only unlabelled data. We also study an extension of zoneLDA called zoneLDAb, which distinguishes between common and non-common words within the different zone types. We present results on two different domains: the scientific domain and the technical domain. For the latter we propose a new document zone classification schema, which has been annotated over a collection of 689 documents, achieving a Kappa score of 85%. Overall our experiments show promising results for both domains, outperforming the baseline model. Furthermore, on the technical domain the performance of the models is comparable to the supervised approach using the same feature sets. We thus believe that graphical models are a promising avenue of research for automatic document zoning.
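Stripped of its Dirichlet priors, the heart of a zoneLDA-style clusterer is a mixture of multinomials over sentence word counts, which can be fit with EM. The toy sketch below illustrates only that core, on two artificial "zones" with disjoint vocabularies; it is not the paper's model.

```python
import numpy as np

# Toy corpus: sentence bag-of-words counts over a 4-word vocabulary.
# The first five "sentences" use words 0-1, the last five words 2-3,
# mimicking two document zones with distinct vocabularies.
counts = np.array([[3, 2, 0, 0]] * 5 + [[0, 0, 2, 4]] * 5, dtype=float)
K = 2

# Initialise each zone's word distribution from one seed sentence, then run
# EM for a mixture of multinomials: the E-step assigns soft zone
# responsibilities, the M-step re-estimates the zone word distributions.
theta = counts[[0, -1]] + 0.01                       # smoothed seed counts
theta /= theta.sum(axis=1, keepdims=True)
pi = np.full(K, 1.0 / K)
for _ in range(20):
    log_r = np.log(pi) + counts @ np.log(theta).T    # E-step (unnormalised)
    log_r -= log_r.max(axis=1, keepdims=True)
    resp = np.exp(log_r)
    resp /= resp.sum(axis=1, keepdims=True)
    theta = resp.T @ counts + 0.01                   # M-step
    theta /= theta.sum(axis=1, keepdims=True)
    pi = resp.mean(axis=0)

zones = resp.argmax(axis=1)
```

Seeding the zone distributions from individual sentences makes this toy deterministic; zoneLDA instead places Dirichlet priors over these distributions and performs full posterior inference rather than point-estimate EM.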
- Regression models of trends in streaming data. Public Deliverable for Trendminer Project, 2012.
- Gamma Knife surgery for pediatric arteriovenous malformations: a 25-year retrospective study. Journal of Neurosurgery: Pediatrics 10 (2012): 445-450. [Abstract]
The authors present their 25-year experience in treating pediatric arteriovenous malformations (AVMs) to allow comparisons with other historic studies and data in adults.
Data were collected from a prospectively maintained departmental database selected for age and supplemented by case note review and telephone interviews as appropriate.
Three hundred sixty-three patients, ages 1–16 years (mean ± SD, 12 ± 3.2 years), underwent 410 treatments; 4 had planned 2-stage treatments and 43 were retreated subsequent to an initial partial response. Fifty-eight percent received general anesthesia for the procedure. Sixteen percent had previously undergone embolization. The most common presenting symptoms were as follows: hemorrhage (80.2%), epilepsy (8.3%; overall seizure prevalence 19.9%), and migrainous headaches (6.3%). Only 0.28% of the AVMs were incidental findings. The mean lesion volume was 3.75 ± 5.3 cm3 (range 0.01–32.8 cm3), with a median Spetzler-Martin grade of III (range I–V). The mean peripheral (therapeutic) dose was 22.7 ± 2.3 Gy (range 15–25 Gy), corresponding to a mean maximum dose of 43.6 ± 6 Gy (range 25–51.4 Gy).
The obliteration rate was 71.3% in patients who received one treatment and 62.5% for retreated patients, with a mean obliteration time of 32.4 and 79.6 months, respectively. The overall obliteration rate was 82.7%. No follow-up data are as yet available for the 4 patients who underwent the staged treatments. Only 4 patients received peripheral doses below 20 Gy, and the AVM was obliterated in 3 of these patients. The other patients received 20, 22.5, or 25 Gy and had obliteration rates of 82.6%, 77.7%, and 86.3%, respectively. The bleeding rate postradiosurgery was 2.2%, and the cumulative complication rate was 3.6%, with radionecrosis being the most common complication (1.1%).
Surprisingly, there was no correlation (p = 0.43) between outcome and radiosurgical dose when that dose was between 20 and 25 Gy, thus suggesting that the lower of these 2 doses may be effective. Radiosurgery for pediatric AVM is safe and effective.
- Survival and complications following Gamma Knife radiosurgery or enucleation for ocular melanoma: a 20-year experience. Acta Neurochirurgica 154 (2012): 605-610. [Abstract]
We present our experience in treating ocular melanoma at the National Centre for Stereotactic Radiosurgery in Sheffield, UK over the last 20 years.
We analysed 170 patients treated with Gamma Knife radiosurgery, recorded the evolution of visual acuity and complication rates, and compared their survival with that of 620 patients treated with eye enucleation. Different peripheral doses (using the 50% therapeutic isodose) were employed: 50–70 Gy for 24 patients, 45 Gy for 71 patients, and 35 Gy for 62 patients.
There was no significant difference in survival between the 35-Gy, 45-Gy and 50- to 70-Gy groups, either when compared with one another (p = 0.168) or with the enucleation group (p = 0.454). The 5-year survival rates were 64% for 35 Gy, 62.71% for 45 Gy, 63.6% for 50–70 Gy and 65.2% for enucleated patients. Clinical variables influencing survival for radiosurgery patients were tumour volume (p = 0.014) and location (median 66.4 vs 37.36 months for juxtapapillary vs peripheral tumours, respectively; p = 0.001), while age and gender did not prove significant. Regarding complications, using 35 Gy more than halved the incidence of cataract, glaucoma and retinal detachment compared with the 45-Gy dose. Retinopathy, optic neuropathy and vitreous haemorrhage were not significantly influenced. Blindness decreased dramatically, from 83.7% for 45 Gy to 31.4% for 35 Gy (p = 0.006), as did post-radiosurgery enucleation: 23.9% for 45 Gy vs 6.45% for 35 Gy (p = 0.018). Visual acuity, recorded up to 5 years post-radiosurgery, was significantly better preserved with 35 Gy than with 45 Gy (p = 0.0003).
Using 35 Gy led to a dramatic decrease in complications, vision loss and salvage enucleation, while not compromising patient survival.
- Unsupervised word sense disambiguation with N-gram features. Artificial Intelligence Review 41 (2012): 241-260. [Abstract]
The present paper concentrates on the issue of feature selection for unsupervised word sense disambiguation (WSD) performed with an underlying Naïve Bayes model. It introduces web N-gram features which, to our knowledge, are used for the first time in unsupervised WSD. While creating features from unlabeled data, we are “helping” a simple, basic knowledge-lean disambiguation algorithm to significantly increase its accuracy as a result of receiving easily obtainable knowledge. The performance of this method is compared to that of others that rely on completely different feature sets. Test results concerning nouns, adjectives and verbs show that web N-gram feature selection is a reliable alternative to previously existing approaches, provided that a “quality list” of features, adapted to the part of speech, is used.