A design and policy proposal for improving the democratic quality of social media Marc Smith…
Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment (Tumasjan et al.)
Successful use of social media in the last US presidential campaign has established Twitter as an integral part of the political campaign toolbox
Goal: analyze on Twitter: 1. Deliberation, 2. Sentiment, 3. Prediction
Deliberation: Honeycutt and Herring – Twitter is not only used for one-way communication; 31% of all tweets direct a specific addressee. Koop and Jansen – political internet discussion boards are dominated by a small # of heavy users
Sentiment: How accurately can Twitter inform us about the electorate’s political sentiment?
Prediction: can Twitter serve as a predictor of the election result?
Data: examined more than 100k tweets and extracted their sentiment using LIWC
Target: German federal election 2009
1. While Twitter is used as a forum for political deliberation on substantive issues, this forum is dominated by heavy users
Two widely accepted indicators of blog-based deliberation:
-The exchange of substantive issues (31% of all messages contain “@”),
-Equality of participation: While the distribution of users across groups is almost identical to the one found on internet message boards, we find even less equality of participation in the political debate on Twitter. Additional analyses show users exhibit a party bias in the volume and sentiment of messages.
2. The online sentiment in tweets reflects nuanced offline differences between the politicians in our sample.
-Leading candidates: Very similar profiles for all leading candidates; only polarizing political characters, such as the liberal leader and the socialist, deviate, in line with their roles as opposition leaders. Messages mentioning Steinmeier (coalition leader) are the most tentative
3. Similarity of profiles is a plausible reflection of the political proximity between the parties
Key findings: high convergence of leading candidates; more divergence among politicians of the governing grand coalition than among those of a potential right-wing coalition
4. Activity on Twitter prior to election seems to validly reflect the election outcome (MAE 1.65%), and joint party mentions accurately reflect the political ties between parties.
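The headline metric can be sketched in a few lines: treat each party's share of tweet mentions as its predicted vote share and score it against the actual result with mean absolute error (MAE). The party names and counts below are hypothetical placeholders, not the paper's data.

```python
# Sketch of the prediction metric: share of tweet mentions per party
# vs. actual vote share, scored by mean absolute error (MAE).
# All numbers below are hypothetical, not the paper's data.

def mention_shares(mention_counts):
    """Convert raw mention counts per party into percentage shares."""
    total = sum(mention_counts.values())
    return {party: 100.0 * n / total for party, n in mention_counts.items()}

def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and actual shares."""
    return sum(abs(predicted[p] - actual[p]) for p in actual) / len(actual)

# Hypothetical example: tweet mentions vs. (made-up) vote shares in %.
mentions = {"A": 30_000, "B": 25_000, "C": 15_000, "D": 10_000}
votes    = {"A": 35.0,   "B": 30.0,   "C": 20.0,   "D": 15.0}

pred = mention_shares(mentions)
mae = mean_absolute_error(pred, votes)
```

With these toy numbers the predicted shares sum to 100% and the MAE comes out to 1.875 percentage points, on the same scale as the 1.65% the paper reports.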
From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series (O'Connor et al.)
Measuring public opinion through social media
Old method – ask people directly (phone dialing, survey questions, etc.)
New method – people write their thoughts to social media, query social media to create aggregate text sentiment measure.
Can compare results from new method to old method
-High correlations between very simple sentiment analysis and telephone polls
-Time series smoothing helps
Text Data: Twitter
-Large, public, all in one place
-Sources: Archiving Twitter Streaming API (“Gardenhose”/”Sample” ~15% public tweets); Scrape earlier messages via API
-Volume ~ .7B tweets
-Poll data: consumer confidence (2008-2009) – index of consumer sentiment (Reuters/Michigan), Gallup daily. 2008 presidential elections (aggregation, pollster.com). 2009 presidential job approval (Gallup daily)
-Message selection via topic keywords
-topic frequencies change rapidly
-Sentiment analysis: word counting.
–Subjectivity Clues lexicon from OpinionFinder / U Pitt (Very simple system!)
Key: don’t need to classify individual messages correctly, just need a sentiment ratio over messages.
-Sentiment Ratio Moving Average: High day-to-day volatility. Average last k days.
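The aggregation steps above (lexicon word counting, positive/negative ratio per day, trailing moving average for smoothing) can be sketched as follows. The tiny lexicon and the messages are toy stand-ins, not the OpinionFinder Subjectivity Clues lexicon.

```python
# Sketch of the aggregate sentiment measure: count positive and negative
# lexicon words per day, take the pos/neg ratio, then smooth with a
# trailing k-day moving average. Toy lexicon and messages, not the
# OpinionFinder Subjectivity Clues lexicon.

POSITIVE = {"good", "great", "hope"}
NEGATIVE = {"bad", "lose", "fear"}

def daily_ratio(messages):
    """Positive-to-negative word-count ratio over one day's messages."""
    pos = sum(1 for m in messages for w in m.split() if w in POSITIVE)
    neg = sum(1 for m in messages for w in m.split() if w in NEGATIVE)
    return pos / neg if neg else float("inf")

def moving_average(series, k):
    """Trailing k-day average; early days average all available history."""
    out = []
    for t in range(len(series)):
        window = series[max(0, t - k + 1): t + 1]
        out.append(sum(window) / len(window))
    return out

days = [["good jobs news", "bad day"],          # ratio 1.0
        ["great hope ahead", "bad bad lose"],   # ratio 2/3
        ["good good great", "fear"]]            # ratio 3.0
ratios = [daily_ratio(d) for d in days]
smoothed = moving_average(ratios, k=2)
```

Note the key point from the talk: no individual message needs to be classified correctly, only the ratio over many messages has to track opinion.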
-Which leads, poll or text?
–Cross-correlation analysis: between sentiment score for day t, poll for day t+L.
—Results: “jobs” text leading indicator for poll, can be turned into forecasting model
—Reminiscent of Leskovec et al. Blogpulse paper, very nice!
-Keyword message selection:
–15-day windows, no lag. “jobs” r=80%, “job” r=7%. Is stemming always good?
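The lead/lag question can be made concrete with a small cross-correlation sketch: correlate the text series at day t with the poll at day t+L for several lags L, and the lag with the highest correlation tells which series leads. The series below are toy data constructed so the poll trails the text signal by exactly two days.

```python
# Cross-correlation sketch: Pearson correlation between text sentiment
# at day t and the poll at day t+L (L >= 0 means text leads the poll).
# Toy series only, constructed so the poll lags the text by 2 days.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cross_correlation(text, poll, lag):
    """Correlate text[t] with poll[t + lag]."""
    if lag:
        text, poll = text[:-lag], poll[lag:]
    return pearson(text, poll)

text = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
poll = [0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 4.0]
best_lag = max(range(4), key=lambda L: cross_correlation(text, poll, L))
```

Here `best_lag` recovers the built-in 2-day lead, the situation in which a text series can be turned into a forecasting model for the poll.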
Presidential elections and job approval: sentiment doesn't correlate, but pure message volume does (79% for "obama", 74% for "mccain")
-Preliminary results that sentiment analysis on Twitter data can give information similar to traditional opinion polls. But, still not well-understood. Twitter bias? News vs. opinion?
-Issues: Relevant message selection, Time series smoothing
-Replacement for polls? Promising but not quite yet
Information Contagion: an Empirical Study of the Spread of News on Digg and Twitter Social Networks (Lerman et al.)
Information flow on networks
Dynamics of Social Information
-How does information spread on online social networks?
–How far and how fast does information flow on networks?
–What factors influence its spread?
–How does network structure affect dynamics of information flow?
–What does this tell us about quality of information?
-Study question through comparative empirical analysis of 2 social news networks – using URLs as markers
Social News: Digg, Twitter + Tweetmeme
-Tweetmeme aggregates all tweets and features most retweeted URLs on its front page
-3.5K Digg stories with time submitted, promoted, votes for each story (time of vote, name of voter). 140k active users who voted for at least one story, 71k of them following at least one user. 258k links = fan network
-398 most retweeted stories 6/11/09 – 7/3/09, extracted from tweetmeme. Retweets of each story, up to 1k most recent retweets. Follower network of users who retweeted the stories
-Usability of social networks – do people use Digg and Twitter the same way? What effect do differences in user interface have?
-Dynamics of social networks – how far does info spread, how fast does it spread, and what are the effects of network structure?
-Submitter = user who submitted link to story, or user who tweeted link to a story
-Vote = vote on Digg or retweet on Twitter
-Fan = fan on Digg or follower on Twitter
User activity: distribution of fans (Power law on Digg with up to 1e5, power law with bump ~ 10 on Twitter with up to 1e7 users)
User activity: distribution of voting: Power law on Digg and Twitter (with different slopes)
Dynamics of stories: both digg and twitter show exponential growth, but for Digg it is preceded by slow period before story is on front page, both show vote saturation
Popularity distribution of stories shows lognormal fit
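A minimal version of the lognormal check: if popularity is lognormal, the log of the vote counts is normal, and the maximum-likelihood parameters are just the mean and standard deviation of the log-counts. The counts below are synthetic draws, not Digg data.

```python
# Lognormal fit sketch: under a lognormal model, log(popularity) is
# normal, so the MLE parameters are the mean and std of the log-counts.
# Vote counts are synthetic, drawn from an actual lognormal.

import math
import random

random.seed(0)
votes = [round(math.exp(random.gauss(mu=5.0, sigma=1.0))) for _ in range(5000)]

logs = [math.log(v) for v in votes if v > 0]
mu_hat = sum(logs) / len(logs)
sigma_hat = (sum((x - mu_hat) ** 2 for x in logs) / len(logs)) ** 0.5
```

Since the synthetic data really is lognormal, the recovered parameters land close to the generating mu=5.0, sigma=1.0; on real counts one would follow this with a goodness-of-fit test against power-law alternatives.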
Information flow on networks: information spreads on a network as fans (followers) vote for (retweet) stories their friends submit or vote for.
Dynamics of information spread on networks looks very similar to overall dynamics of information spread (evolution of fan votes qualitatively similar to evolution of all votes)
BUT distribution of popularity is different, now shows normal fit. “Inequality of popularity” no longer observed (social influence accounted for?). News spreads farther on Twitter than on Digg.
How far does information spread among submitter’s fans?
-On digg many stories get voted by submitter’s fans, opposite case on Twitter
How fast does info spread on networks?
-Two distinct phases on digg: stories spread faster through network before promotion than afterwards.
-On Twitter, info spreads at constant rate.
Network structure differences: Digg network is denser, more inter-connected than Twitter’s
Summary of results:
-Network structure and info flow
–Digg’s network is denser than Twitter’s: News spreads faster initially through Digg’s network, but it does not spread as far as on Twitter
–Twitter’s network is sparse: Fans unconnected to submitter help spread story
-User interface and information flow:
–Before promotion, Digg stories spread mainly through network (and do so faster)
–No equivalent of promotion on Twitter
Tweeting from the Town Square: measuring Geographic Local Networks (Yardi and boyd)
Two geographically bounded events: Wichita shooting and Atlanta parking garage collapse
Methods: two crawls and a poll
RQ1: Do geographically local topics have more dense Twitter networks than non-local topics?
Why is this important? People living in close geographic proximity may share characteristics. Connecting similar people can help them form ties and foster community
Spread of News
Spread of News Online – ongoing discussion vs. spikes of short-term high-density discussions around real-world events
Methods: searched key terms about each event, stored user info, crawled first-degree network of users. Polled users who had tweeted twice or more about the church shooting in the first 24 hours after it was announced. Administered poll 3-5 days after the event. Sent out 800 requests, received 164 responses.
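RQ1's measurement reduces to comparing the density of the follower graph among users tweeting about each topic. A sketch with hypothetical edge lists, using directed density = edges / n(n-1):

```python
# Density sketch for RQ1: among users who tweeted about a topic, how
# many of the possible directed "follows" edges actually exist?
# User names and edge lists are hypothetical.

def density(users, follows):
    """Directed density: |edges| / (n * (n - 1))."""
    n = len(users)
    return len(follows) / (n * (n - 1)) if n > 1 else 0.0

# Hypothetical: users tweeting about a local event vs. a global topic.
local_users = {"a", "b", "c", "d"}
local_follows = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "d"), ("d", "a")}

global_users = {"p", "q", "r", "s", "t"}
global_follows = {("p", "q"), ("r", "s")}

local_density = density(local_users, local_follows)
global_density = density(global_users, global_follows)
```

A "yes" to RQ1 corresponds to `local_density > global_density` holding across many topic crawls, not just one pair.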
RQ2: Are people who are central in twitter network more geographically central in physical world?
Sarita Yardi gives shout-out to NodeXL, asks for more scale!
RQ3: What sources do people go to for local news events?
Twitter maps show high level of locality to event, slow spread outward
News Sources – go to locals
News Seekers – also go to locals, then to MSM
-Utilize local short paths for disseminating information. Schools have long used an “emergency phone tree” with specified # of branches and leaves
-Timely notification of unexpected events
Invited Panel: US Government and Social Media
Macon Phillips, Director of New Media for the Obama White House
Moving from Elections to Governance
Wants academics to build tools that show effect of using social media on user behavior
WH new media director Macon Phillips asks for tools that allow thousands of people to communicate with the President (thanks @sadatshami !)
Don Burke, CIA Directorate of Science and Technology, Intellipedia Project
Haym Hirsh, Director, Division of Information and Intelligent Systems
Social Media and the Federal Government
US Gov’t early crowdsourcing project – National Weather Service Cooperative Observer Program (1890)
–DARPA Balloon Challenge
–Over 100 gov’t blogs
-Policy implications and clarifications
–70% of Airmen use YouTube
-Legal and Policy
–Terms of Service: Indemnification, etc.
–Advertising (e.g. alongside gov’t content)
–Procurement: Free = Gift? No competition? Charges imposed after lock-in
-Open Government Dialogue
The Open Dialogue Top 5:
1. Concerns about Obama’s Birth Certificate
2. Government spending
5. Birth Certificate
Additional Opportunities: “No matter who you are, most of the smartest people work for someone else.”
-Foster experimentation and innovation w/in federal government
-Provide data for innovation outside the federal government
-Align legal and policy with aspirations
Question about contribution quality: do people feel their contributions are worthwhile? How do we make the value and implications of contribution clear?
What do “votes” for questions mean? Who is the right person to say that legalization of marijuana is not a big question? What questions are “big enough to matter”? The “pothole problem” – should questions about fixing potholes be crowdsourced?
Aside from a few poorly worded questions about marijuana, people speak eloquently and argue for the issue, so it's not just spam
Don Burke – not EVERY system has to be based on social media
Questions: How do you get recognized by gov’t? Answers: open access, publishing where you’ll be noticed
Question about Intellipedia and procedures for aggregating information. Answer: without the wiki, there was no way to share tacit knowledge. But want to go beyond wiki and to the larger web
Jure Leskovec about developing APIs for gov’t data. Answer: no APIs yet, but government is collecting data in one place that’s publicly visible. Want to see scientific community analyzing datasets and finding results, government may not necessarily know what’s a “good” dataset.
***Analysis of Social Network Usage***
Governance in Social Media: A Case Study of the Wikipedia Promotion Process (Leskovec et al.)
Wikipedia promotion process
3 important features:
-deliberative process yielding a single decision
-is publicly recorded
-consequential for the community
Similarity to offline world: people evaluate other people
We study perspective of voters:
-Burke & Kraut examine candidate’s perspective
-How do voters evaluate a candidate?
-How do evaluations change over time?
Main findings: Relative assessment
-Voter’s evaluation of the candidate reflects different types of relative assessment
–Let voter V vote on candidate C
–we find that vote of V heavily depends on relationship and relative merit of V and C:
—Number of edits
—Number of “barnstars”
–Response function of vote V:
—Prob. V votes given that x other people have voted
Dataset: Wikipedia voting
-Votes are time stamped and signed by users
–2.8k elections sept ’04 – Jan ’08. 44.6% success rate: Successful: 94.7% support. Failed: 31% support votes
–114K votes (78% support). Each vote can get commented: Support votes: 7% get discussed. Oppose votes: 82% get discussed
-8.3K users voted
–2.5k candidates (some go for promotion multiple times)
–How do properties of voter V and candidate C affect V’s vote?
–Two natural (but competing) hypotheses:
H1. Prob. that C receives positive vote depends primarily on characteristics of C, there are objective criteria for user to become admin
H2. Prob. that C receives positive vote depends on relationship between characteristics of C and V
Merit (level of contribution):
-Two ways to quantify merit: total #edits, total #barnstars
-Relative merit: How does prob of V voting positively depend on diff in merit of C and V?
Relative merit hypothesis: if V has higher merit than C, then V is less likely to vote positively
Observations: V is especially unlikely to vote for candidates of the same merit (total edits or barnstars)
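The relative-merit analysis can be sketched by binning votes on the merit difference between candidate and voter and taking the fraction of support votes per bin. The vote tuples below are hypothetical, chosen to show the dip at near-equal merit.

```python
# Relative-merit sketch: bin votes by (candidate merit - voter merit),
# e.g. difference in edit counts, and compute the fraction of support
# votes per bin. Vote tuples below are hypothetical.

from collections import defaultdict

def response_by_merit_gap(votes, bin_width=1000):
    """Map merit-gap bin -> fraction of positive votes in that bin."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for c_merit, v_merit, support in votes:
        b = (c_merit - v_merit) // bin_width
        tot[b] += 1
        pos[b] += support
    return {b: pos[b] / tot[b] for b in tot}

# (candidate_edits, voter_edits, 1 = support / 0 = oppose)
votes = [(5000, 1000, 1), (4800, 900, 1), (1200, 1100, 0),
         (1500, 1400, 0), (900, 4000, 1), (800, 5200, 1)]
curve = response_by_merit_gap(votes)
```

In this toy data the bin around zero merit gap has the lowest support fraction, mirroring the observation that voters are especially unlikely to support candidates of the same merit.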
Direct V-C interaction: Prob of positive vote as function of prior interactions of V and C.
Observation = prior interaction increases probability of a positive vote (with diminishing returns)
Thresholds and diversity of voters:
-Aggregate response function:
–How does prob. of voting positively depend on frac. of positive votes so far?
-Aggregate response function: baseline: if voter were to flip a coin then f(x)=x
-Observation: voters more inclined to express opinion when it goes against prevailing opinion
-Personal response functions: How does prob. of voter V voting positively depend on frac. of positive votes so far?
-Enough data that we can build models of individuals
-Average is close to baseline but individual variation in shape of response function is large
-Over time voters become more conservative, response functions shift downward and to the left
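The aggregate response function described above can be computed by bucketing each vote on the fraction of positive votes cast so far and estimating the probability that the next vote is positive; coin-flipping voters would give the baseline f(x) = x. The election vote sequences below are hypothetical.

```python
# Aggregate response function sketch: for each vote after the first,
# bucket it by the fraction of positive votes cast so far, then estimate
# P(next vote positive) per bucket. Coin-flip voters give f(x) = x.
# Election sequences are hypothetical (1 = support, 0 = oppose).

def response_function(elections, n_bins=5):
    """List of (bin_center, P(next vote positive)) over all elections."""
    pos = [0] * n_bins
    tot = [0] * n_bins
    for votes in elections:              # votes in chronological order
        for t in range(1, len(votes)):
            frac_so_far = sum(votes[:t]) / t
            b = min(int(frac_so_far * n_bins), n_bins - 1)
            tot[b] += 1
            pos[b] += votes[t]
    return [((b + 0.5) / n_bins, pos[b] / tot[b])
            for b in range(n_bins) if tot[b]]

elections = [[1, 1, 1, 0, 1], [0, 0, 1, 0, 0], [1, 0, 1, 1, 1]]
curve = response_function(elections)
```

Deviations of the empirical curve below the f(x) = x diagonal at high x (and above it at low x) are what supports the "voters more inclined to speak against the prevailing opinion" reading.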
Elections over time:
–Elections unfold over time: Sequence of pairs (s(t),o(t))
—Very negative elections end early
—Failed elections are “top-heavy” = start very positive and slowly get negative.
—Successful elections get more positive over time
—Order of early votes doesn’t matter
–False hypotheses: Candidate’s friends vote early, Herding behavior (excessive influence of first votes)
Activity Lifespan: an Analysis of User Survival Patterns in Online Knowledge Sharing Communities (Yang et al.)
-User survival analysis to show that participation patterns and performance factors can account for a considerable amount of variance in predicting user lifespan
-Compare 3 major Q&A sites: Yahoo! Answers, Baidu Knows, and Naver Knowledge-iN
-Discuss how systems might sustain users
-Characteristics of Q&A sites we studied: in Yahoo Answers, earn points at flat rate per answer / best answer, Pay flat rate in points. In Baidu and Naver, earn points at flat rate per answer + points per best answer, and asker can offer additional points
-In Yahoo, significantly more questions / answer
Method: survival analysis
Defining “death” in online communities: period of inactivity exceeding 100 days. Found model prediction not sensitive to different cutoffs (50-150 days)
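The death definition can be sketched directly: a user's lifespan ends at the last activity before a silence longer than the cutoff, and users whose trailing silence is still within the cutoff are censored (treated as alive). Activity logs below are hypothetical day indices, with the paper's 100-day cutoff as the default.

```python
# Lifespan sketch: "death" = inactivity gap exceeding the cutoff
# (100 days by default); lifespan = days from first activity to death.
# Users with a short trailing silence are censored (still alive).
# Activity logs are hypothetical day indices.

def lifespan(activity_days, observed_until, cutoff=100):
    """Days from first activity until death, or None if censored."""
    days = sorted(activity_days)
    for prev, nxt in zip(days, days[1:]):
        if nxt - prev > cutoff:
            return prev - days[0]          # died after day `prev`
    if observed_until - days[-1] > cutoff:  # long silence at the end
        return days[-1] - days[0]
    return None                            # still alive (censored)

a = lifespan([0, 10, 20, 200], observed_until=300)  # 180-day gap
b = lifespan([0, 50, 90], observed_until=400)       # silent since day 90
c = lifespan([0, 50], observed_until=100)           # censored
```

Re-running this with cutoffs between 50 and 150 days is the robustness check mentioned above: the lifespans shift, but predictions built on them barely change.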
General comparison: 30-70% of users leave after the first day; afterwards, curves for all 3 sites flatten. YA users more likely to remain than users of the other two sites.
Answering life on average longer than asking life across all sites.
Preference between answering and asking (A/R ratio) can account for considerable amount of variance in predicting user lifespan
Obtaining more answers to your first question, writing longer question correlated with longer lifespan on Yahoo and Baidu
Winning best answer also correlated with longer lifespan
First 30 days:
More activity, asking more questions, obtaining more answers per question positively correlated with lifespan on all 3 sites
A/R ratio negatively correlated with lifespan on Yahoo but positively correlated with lifespan on Baidu and Naver
Winning (best answer) also positively correlated with lifespan on all three sites
Analysis: community evolution
All three sites presented a decline in survival rate from year 1 to year 2, especially for Yahoo Answers
Naver suffered more difficulty sustaining users in the 2nd year, as almost no users stayed after 250 days
Conversational vs. Informational: There is a significant and consistent difference in survival patterns between conversational and informational categories: more conversational categories survive longer
*with the exception* of “computer/internet” on Baidu only (cultural difference?)
Analysis: why do YA users stay longer