Social Media Data Mining with Raspberry Pi: 9 Videos for the Complete Beginner

Since the start of this year, I’ve been working on a project to turn a $30 Raspberry Pi 2 computer into a social media data mining machine using the programming language Python. The words “programming language” may be off-putting, but my goal is to work through the process step by step so that even a complete beginner can follow along and accomplish the feat.

The inexpensive, adaptable $30 Raspberry Pi 2

I’m motivated by two impulses. My first impulse is to help people gain control over, and ownership of, the information about social interaction that surrounds us. My second impulse is to demonstrate that mastery of social media information is not limited to corporations, governments, and the otherwise well-funded. This is not a video series for those who are already technologically wealthy and adept. It’s for anyone who has $30 to spare, a willingness to tinker, and the feeling that they’ve been left out of the social media data race. I hope to make the point that anyone can use social media data mining to find out who’s talking to whom. The powers that be are already watching down at us: my hope is that we little folks can start to watch up.

I’ve started the project by shooting videos. The video series has room to grow, but it has proceeded far enough along to represent a fairly good arc of skill development. Eventually I’d like to transcribe the videos and create a written and illustrated how-to pamphlet; these videos are just the start.

Throughout the videos, I’ve tried not to cover up the temporary mistakes, detours and puzzling bugs that are typical of programming. No one I know of hooks up a perfect computer system or writes a perfect program on the first try. Running into error messages and sleuthing through them is part of the process, and you’ll see that occasionally in these videos.

Please feel free to share the videos if you find them useful. I’d also appreciate any feedback you might have to offer.

Video 1: Hardware Setup for the Raspberry Pi

Video 2: Setting up the Raspberry Pi’s Raspbian Operating System

Video 3: Using the Raspberry Pi’s Text and Graphical Operating Systems

Video 4: Installing R

Video 5: Twitter, Tweepy and Python

Video 6: Debugging

Video 7: Saving Twitter Posts in a CSV File

Video 8: Extracting and Saving Data on Twitter URLs, Hashtags, and Mentions

Video 9: Custom Input
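
To give a sense of where the series ends up, here is a minimal sketch of the kind of Python script that Videos 5 through 8 build toward: connecting to Twitter with Tweepy and saving posts to a CSV file. It assumes Tweepy 3.x, and the credentials, search term and output filename are placeholders, so treat it as an illustration rather than the exact code from the videos.

import csv
import tweepy

# Placeholder credentials: replace these with keys from your own Twitter account
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Search for recent tweets and save a few fields to a CSV file
with open("tweets.csv", "w", newline="", encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["screen_name", "created_at", "text", "hashtags"])
    for tweet in api.search(q="raspberry pi", count=100):
        # Each tweet carries an entities dictionary listing its hashtags
        hashtags = " ".join(h["text"] for h in tweet.entities["hashtags"])
        writer.writerow([tweet.user.screen_name, tweet.created_at,
                         tweet.text, hashtags])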

Installing R and the package igraph on a Mac: As Always, Not Quite the Same

The incredibly useful research program called R is available on many platforms — Linux, Windows and Apple computers — and can run the same scripts across all three of its different versions.  That said, the experience of getting R to run those scripts is not quite the same on an Apple Mac.  This seems to be some kind of unwritten rule for Macs — whatever your program, on a Mac the menus, procedures and names of commands will somehow end up being different.

So what?  Well, if you’re just getting started with R, you’ll occasionally need some tips and tricks for making the program work.  Most of the how-to blog posts and videos you can find out there draw their examples from a Linux or Windows system — and those examples just won’t work on a Mac.  I found this out the hard way when teaching students to use the igraph package for R to perform social network analysis.  A few of my students have Macs at home, and it didn’t take long for them to cry out for help, because the R program they were dealing with looked very little like the R program I’d been showing them.

If you find yourself in the same boat and are running into trouble using R and igraph, I hope the following video will be of some help. Using a screen capture of a Mac running OS X, I briefly demonstrate the experience of installing R and running a script with the igraph package from an Apple vantage point.  One difference is that there are a few menu options you’ll need to select when installing igraph to actually make it run.  Another simple but crucial difference for Macs: you’ll need to select all the text in your script before running it.  THEN, and only then, use the “Execute” command.  That’s not necessary on a Windows computer, but it’s a make-or-break move on a Mac.

Why? Don’t ask me why. It’s the same old story that we’ve had for thirty years: it’s just different on a Mac.

The walkthrough video:

Please leave a comment if you have a question or need clarification; I’d be glad to help if I can.

Presentation Materials for Twitter Adoption in U.S. Legislatures at #SMSociety 2016 Conference

The following are links to supporting materials for the presentation “Twitter Adoption in U.S. Legislatures: A Fifty-State Study” made to the 2016 International Conference on Social Media & Society on Wednesday, July 13 at Goldsmiths, University of London.

1. Free full-text access:

Twitter Adoption in U.S. Legislatures: A Fifty-State Study (via the ACM DL Author-ize service)

James M. Cook
SMSociety ’16: Proceedings of the 7th 2016 International Conference on Social Media & Society, 2016

2. Download the PowerPoint slides from the presentation

3. Abstract: This study draws theoretical inspiration from the literature on Twitter adoption and Twitter activity in United States legislatures, applying predictions from those limited studies to all 7,378 politicians serving across 50 American state legislatures in the fall of 2015. Tests of bivariate association carried out for individual states lead to widely varying results, indicating an underlying diversity of legislative environments. However, a pooled multivariate analysis for all 50 states indicates that the number of constituents per legislator, district youth, district level of educational attainment, legislative professionalism, being a woman, sitting in the upper chamber, holding a leadership position, and legislative inexperience are all significantly and positively associated with Twitter adoption and Twitter activity. Controlling for these factors, legislator party, majority status, partisan instability, district income, and the percent of households in a state with an Internet connection are not significantly related to either Twitter adoption or recent Twitter use. A significant share of variation in social media adoption by legislators remains unexplained, leaving considerable room for further theoretical development and the development of contingent historical accounts.

Please feel free to review these materials before or after my presentation. I look forward to your comments.

Interdisciplinary Faculty Panel: What is Research? (11/3/15 at UMA)

Interdisciplinary Faculty Panel: What is Research?

Lisa Botshon, Professor of English
Rosie Curtis, Lecturer in Architecture
Sarah Hentges, Associate Professor of American Studies
Peter Milligan, Professor of Biology
Carey Clark, Assistant Professor of Nursing, Moderator

Tuesday, November 3, 12 Noon
University of Maine at Augusta Katz Library

Members of this faculty panel will discuss their answers to the question “What is Research?” from the vantage points of their own disciplines, then present examples of their own current research projects. Moderator Carey Clark will encourage movement from multidisciplinary presentation to interdisciplinary discussion.

All members of the public and the UMA community are welcome to attend this faculty panel. Please encourage students considering or engaged in research projects to attend. Light refreshments will be served.

FMI: James Cook, james.m.cook@maine.edu, 207-621-3190

Learning Unbounded: EdX Introduction to R

It’s an open secret: to be a university professor is to be a perpetual student.  Learning doesn’t stop with the PhD; there’s always something new to read, always something new to discover, always something new to write, always something new to analyze, always a new technique to understand. This is why academics love the summer: finally, after teaching what we’ve already learned, we can learn some more!

One of my projects this summer is to bone up on the basics of a computer program for data analysis and visualization called R.  When I was a graduate student in the 1990s, statistical software was produced exclusively by companies at a fairly steep price.  Even now SAS 9.4, a software package used for data analysis in the academic and business communities, costs many thousands of dollars for an individual license (it’s so expensive that SAS won’t publish its price publicly).  If you were lucky, you had access to a university lab with software already installed.  If you didn’t have access and you wanted to run an analysis beyond the simplest level, you were simply out of luck.

All that changed with the introduction of R, a free and open-source program that runs on Windows computers, Mac computers, Unix computers and even web servers.  Methodologists from all kinds of disciplines are increasingly devoted to the development and extension of R, meaning that the latest analytical techniques are regularly added to R through easily added plug-ins called “packages.” R is easy to download, quick to install, and …

… well, I’d like to say it’s easy to run, but the truth is that for a generation that has grown up pointing and clicking, it may be a bit intimidating to see a program that requires you to work almost entirely by typing text commands at a prompt or by building scripts of saved commands:

Screenshot of R running in the Windows environment

Still, with a bit of practice, it’s not much harder to type in text commands than it is to choose options in a drop-down menu.  The difference is that with drop-down menus, all options are presented to you in an organized fashion.  When you use R, you have to start out knowing what the commands are, and if you don’t know, you have to go find out.  It’s not R’s responsibility to show you what to do; it’s your responsibility to learn what R can do.  This is learning unbounded.

I became familiar with R by necessity earlier this year, when I needed to generate robust variance estimates in order to account for clustering in a sample.  That option isn’t available in most free menu-driven statistical programs, and I had a budget of $0 for my research project, so I installed R and the package rms by Frank E. Harrell, Jr.  R got the job done.

Since then, I’ve become aware that R can do much more than run a statistical analysis.  It can be used to gather data automatically.  It can be used to write automated web pages.  It can be used to create simulations.  It can visualize patterns in data with amazing graphics and videos (browse through the Google+ community for Statistics and R to get a taste of the possibilities).  But this level of high-end performance requires a more fundamental understanding of R than I’ve got right now.  To get back to basics and build myself a good foundation of understanding, I’ve started EdX’s Introduction to R Programming course.  This is another example of learning unbounded.  It’s an entirely online educational experience, I haven’t paid a cent to enroll, and I’m finding myself interacting with people from all over the globe in the course’s discussion sections.  Students in this course are asked to introduce themselves and say a little bit about where they’re from.  On a whim this morning, I tallied up the countries represented among the students in the R course.

The United States doesn’t even hold the top spot for R students; that position is taken by India, and 48 nations are sending at least one student to the course. Just as the way we produce knowledge is changing, so is the way we learn how to produce knowledge.

P.S. Faced with a generation of academic and business analysts flocking to R, SAS has lost significant market share. Earlier this year, SAS responded by making a partial version of its software available for free. This software is called SAS University Edition and can be downloaded here. I’ve found installation to be more complicated and time-consuming than for R (the whopping 1.8 GB installation file and the need to first install Oracle VM VirtualBox management software account for most of this difficulty), but I’m hopeful that I’ll have this second package of analytical software up and running soon so that I can compare the ease and power of the two programs.

Stages of Teaching and Learning Social Media Analytics (Presentation Notes)

This afternoon, I’ll be making a short presentation on teaching social media analytics at the 2015 conference of the International Communication Association, as part of its BlueSky Workshop on Tools for Teaching and Learning of Social Media Analytics. While the workshop is focused on the experience of teaching with a series of particular tools, I’m interested in rejecting the question “Which tools are best for teaching?” and replacing it with a progressive strategy for building students’ capabilities. At different stages in students’ development as social media researchers, different analytic platforms may be more or less appropriate as teaching tools.

Below is a copy of the notes for my presentation; the notes can also be downloaded as a PDF here.


Objective: To introduce inexperienced undergraduate students to the process of analyzing social media with sufficient breadth that they may continue to learn independently.

Teaching Challenges Provoking Implementation:

  • As the mandate for higher education continues to widen, undergraduate students tend more and more to be non-traditional, to lack preparation, to lack confidence, and to be fascinated by, but also intimidated by, math, research and technology.
  • Social media platforms are in a state of constant change.
  • Social media analytics packages and methods are rapidly evolving now and are likely to experience significant change in the next decade.

Learning Outcomes: Students who complete a course in social media analytics will be able to:

  1. Find and navigate social media platforms
  2. Recognize the common elements of social media:
    1. Individuals
    2. Actions
    3. Memberships
    4. Relationships
  3. Extract observations of these elements into datasets (see the sketch after this list):
    1. Individual-level
    2. 1-mode network
    3. 2-mode network
  4. Analyze data and report data visualizations, qualitative categorizations and quantitative statistics
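
As a concrete illustration of outcome 3, here is a minimal sketch of the three dataset shapes in Python, using invented toy data (all names and fields are hypothetical):

# Individual-level dataset: one row per social media user
individuals = [
    {"user": "alice", "posts": 12, "followers": 340},
    {"user": "bob", "posts": 3, "followers": 25},
]

# 1-mode network: ties among nodes of one kind (user mentions user)
one_mode_edges = [("alice", "bob"), ("bob", "carol")]

# 2-mode network: ties between nodes of two kinds (user belongs to group)
two_mode_edges = [("alice", "gardeners"), ("bob", "gardeners")]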

Strategy: A gentle, stepwise series of stages taking students from where they are to where they need to be, introducing students to a variety of analytic platforms, and focusing on the social research skills that will remain constant despite changes in social media and social media analytic platforms.

Stages of learning social media analytics, from Consumer to Manager to Secondhand Gatherer to Primary Gatherer to Analyst

Teaching Challenges in Implementation:

  • Universal access for students who no longer share a common campus, common hardware and common software
  • Reasonable yet challenging entry for students who come to class with a variety of previous experience and capabilities
  • A variety of reasonable endpoints for students who vary in their level of progression and accomplishment

Opening Maine Campaign Contribution Data Gets Tricky

Over the past year, I’ve been developing an Open Maine Politics website to mix, share and make social a variety of kinds of information about the Maine State Legislature.  Campaign finance profiles for legislators are part of the developing picture, but this weekend I’m hitting a speed bump as inconsistencies in the Maine Ethics Commission’s official dataset force me to look more closely at each case and fix errors one by one.  Cleaning the data feels like spring cleaning.  At least the season’s right.

Call for Applications: Maine Policy Scholar Program

Are you a University of Maine at Augusta student taking classes in the 2015-2016 academic year? Are you interested in politics and/or policy?  Are you looking for a way to take your work to the next level?

The University of Maine at Augusta, continuing its association with the Maine Community Foundation, has the opportunity to nominate a Maine Policy Scholar for the 2015-2016 academic year.  The successful applicant to the Maine Policy Scholar program receives a $1,500 scholarship with a budget of $1,000 for research expenses, and is expected to delve into applied research on a real Maine policy issue.

As the UMA advisor for the program, I’ll be working throughout next year with the Maine Policy Scholar to help her or him develop and carry out an applied research program.  The selected student will also participate in three to four statewide meetings with faculty and scholars from across the University of Maine System for rigorous review of progress.  The year culminates in the presentation of a research memo to a board of state political leaders convened by the Maine Community Foundation. This memo has historically also been directed to a Maine political leader relevant to the subject of the student’s research, such as the Governor or the head of a state executive agency.  This is a good chance to gain valuable experience while you make a difference in Maine policy.

Applicants must be matriculated UMA students with a GPA of at least 3.00, and must have completed 60 or more credits of coursework by September 2015.  Previous work in applied research or previous study of research methods is ideal.

Are you interested?  Applications must be received by March 7.   Applications should consist of a current resume describing academic and professional experience and a letter of intent including a description of a proposed research topic. Send applications as an e-mail attachment to james.m.cook@maine.edu or by mail to James Cook, Assistant Professor of Social Science, University of Maine at Augusta, 46 University Drive, Augusta, ME 04330.

For more information on the application process or the Maine Policy Scholars program, please feel free to contact me at 621-3190 or james.m.cook@maine.edu.  Additional information is also available at http://www.mainecf.org/policyscholars.aspx on the web.

Recent Maine Policy Scholars, with links to their final policy memos, are:

Finding and Extracting Variables from Web Pages with PHP: A How-to for Social Scientists in the Rough

“Data Mining”: Just Another Way for Social Scientists to Ask Questions

If social science is the study of the structure of interactions, groups and classes, and if interactions, groups and classes are increasingly tied to the online environment, then it is increasingly important for social scientists to learn how to collect data online. Fortunately, the approach to “data mining” online interaction is fundamentally the same as the approach to studying offline social interaction:

  1. We approach the subject,
  2. We query the subject, and
  3. We obtain variables based on the responses we’re given.

Because the online environment and our online subjects are different, the way we make online queries must be different from the way we make offline queries. In data mining we don’t question human beings who can flexibly interpret a question; instead, we question computers responsible for the architecture of the online social system, and they will only respond if questioned in precisely the right way.

 

Learning to Mine the Web for Social Data — Without a Computer Science Degree

I’ve been trying to learn how to mine social information from websites on my own, without the benefit of any formal education in computer science.  This is kind of fun even when it’s frustrating, as long as I remember that getting information from the online environment is like solving a puzzle.  On most websites, social information (relations, communications, and group memberships) is stored in a database or structured format (like SQL, XML or JSON); some content management software (like WordPress, Joomla or Drupal) takes the information stored in a database and posts it on web pages, surrounded by code that makes the information comprehensible to humans like you and me.  If websites are researcher-friendly, they allow databases to be queried directly through an Application Programming Interface (API).

Many websites don’t let a person query their databases, even when all the information published on those websites is public.  What’s a social scientist to do?  Well, we could literally read every single web page, find the information about relations, communications and group memberships we’re interested in, write down that information, and enter it into our own database for analysis.  We could do this, hypothetically, but at the practical scale of the Internet it’s often impossible.  Manually collecting interactions on a website with 10,000 participants could take years — and by the time we were done, there would be a whole new set of interactions to observe!

Fortunately, because web pages on social websites are written by computers, there are inevitably patterns in the way they’re written.  Visit a typical page on a social media website and use your browser’s “View source” command to look at the raw HTML language creating that page.  You’ll find sections that look like this:


<div class="post" postid="32"><div class="comments"><a name="comments"></a><h3>3 Comments on "Lucille's First Blog Post"</h3><div class="commentblock">
<div class="comment" id="444"><a href="/member.php?memberid="201" usertitle="Tim – click here to go to my blog"> Tim</a>: Greetings! How are you, Lucille?</div>
<div class="comment" id="445"><a href="/member.php?memberid="1181" usertitle="Lucille – click here to go to my blog"> Lucille</a>: Hey, Tom. I'm new here. How do I respond to your comment?</div>
<div class="comment" id="446"><a href="/member.php?memberid="201" usertitle="Tim – click here to go to my blog"> Tim</a>: Congratulations, Lucille, you just did!  Welcome to the community.</div>
</div></div></div>


That may look like a cluttered mess, but if you look carefully you can find important information.  Some of that information is the content that users write.   Other pieces of information track posts, comments and users by number or name. These names and numbers (the id numbers and user names in the snippet above) can be thought of as social science variables, and encouragingly they’re placed in predictable locations in a web page:

variable    | preceded by           | followed by
post id     | <div postid="         | "><div
comment id  | <div id="             | "><a href="/member.php?
member id   | member.php?memberid=" | " usertitle="
member name | usertitle="           | – click here to go to my blog

There should be a set of rules for finding these predictable locations, and my goal in data mining is to express those rules in a computer program that automatically reads many pages on a website, much faster than I can read them.  In English, the rules would look like this:

“Find text that is preceded by [preceding text] and is followed by [following text].  This text is an instance of [variable name].”

Unfortunately, computers don’t understand English.  I am familiar with a language called PHP that can read the lines of a web page, but I didn’t know of a command in PHP that would let me carry out the rule described above.  What to do?  Ask a friend.  I asked a friend of mine with a PhD in Computer Science if he could identify such a command in PHP. His answer: “Well, you don’t want to use PHP. The first thing to do is teach yourself Perl.” The Perl programming language, he went on to explain, has a much more efficient and flexible approach to handling strings as variables, and if I was going to be serious about mining data efficiently, I should use Perl.

I can’t tell you how many times some computer science expert has told me I shouldn’t follow a path because it was “inelegant” or “inefficient.”  Well, that may be wonderful advice for professional computer programmers who have to design and maintain huge information edifices, or to those who have a few extra semesters to spare in their learning quest, but in my case I say a hearty “Baloney!” to that.  Research does not need to and often cannot wait for the most efficient or elegant or masterful technique to be mastered.  Sometimes the most important thing to do is to get the darned research done.

In my case, this means that I’m going to use PHP, even though it may not be elegant or efficient or flexible or have objects to orient or [insert computer science tech phrase here].  I’m going to use PHP because I know it and it will — clumsily or not — get the darned job done.  Good enough may not be perfect but it is, by definition, good enough.  As long as the result is accurate, I can live with that.

 

A Rough but Ready Method for Extracting Variables from Web Pages with PHP — Explode!

It took a bit of reading through PHP’s online manual, but eventually I found a method that works for me — the “explode” command.  In what follows, I’m going to assume that you are familiar with PHP.  If you aren’t, that’s OK — you’ll just have to find another way to extract information out of a web page.

The PHP command “explode” takes a string — a line of text in a web page — and splits it into parts.  “Explode” splits your line of text wherever a certain delimiter is found.  A delimiter is nothing more than a piece of text you want to use as a splitting point.  Let’s use an example, the web page snippet listed above:


<div class="post" postid="32"><div class="comments"><a name="comments"></a><h3>3 Comments on "Lucille's First Blog Post"</h3><div class="commentblock">

<div class="comment" id="444"><a href="/member.php?memberid="201" usertitle="Tim – click here to go to my blog"> Tim</a>: Greetings! How are you, Lucille?</div>

<div class="comment" id="445"><a href="/member.php?memberid="1181" usertitle="Lucille – click here to go to my blog"> Lucille</a>: Hey, Tom. I'm new here. How do I respond to your comment?</div>

<div class="comment" id="446"><a href="/member.php?memberid="201" usertitle="Tim – click here to go to my blog"> Tim</a>: Congratulations, Lucille, you just did! Welcome to the community.</div>

</div></div></div>


Let’s say I’d like to look through 5,000 web pages like this, representing 5,000 individual blog posts.  In each of these 5,000 web pages, the particular post id and comment ids and member ids may change, but the places where they can be found and the code surrounding them remain the same.  We’ll use the code surrounding our desired information as delimiters.

To get really specific, let’s say I’d like to extract a member id number from the above web page every place it occurs.

The first step is to find a line of the web page on which a member id number exists.  To do this, I’ll use the stristr command, which searches for text. The command if (stristr($line, '?memberid=')) {…} takes a look at a line of a website ($line) and asks if it contains a certain piece of text (in this case, ?memberid=).  If the piece of text is found, then whatever commands are inside the brackets { } are executed.  If the piece of text is not found, then your computer won’t do anything.

So far, we have:

if (stristr($line, '?memberid='))
{

}

What goes inside the brackets?  Some exploding!  Our first line of code inside the brackets tells the computer to split a line of website code using the text memberid=" (including the double quotation mark that sits just before the id number) as the delimiter.

$cutstart = explode('memberid="', $line);

This leaves the line of website code in two pieces, with the delimiter memberid=" removed.  Those two pieces are set by the explode command to be $cutstart[0] and $cutstart[1]:

Original line of text: <div id="444"><a href="/member.php?memberid="201" usertitle="Tim – click here to go to my blog"> Tim</a>: Greetings! How are you, Lucille?</div>

$cutstart[0]: <div id="444"><a href="/member.php?

$cutstart[1]: 201" usertitle="Tim – click here to go to my blog"> Tim</a>: Greetings! How are you, Lucille?</div>

Where’s the member id number we want?  It’s the number right at the start of $cutstart[1], sitting just before the next double quotation mark.  To get at it, let’s add another line of code that explodes $cutstart[1], telling the computer to split $cutstart[1] into pieces at the spots where there are double quotation marks.  The command in the second line of code inside the brackets is:

$cutend = explode('"', $cutstart[1]);

and takes $cutstart[1] apart, turning it into the pieces $cutend[0], $cutend[1], $cutend[2] and $cutend[3], like so:

original $cutstart[1]: 201" usertitle="Tim – click here to go to my blog"> Tim</a>: Greetings! How are you, Lucille?</div>

$cutend[0]: 201

$cutend[1]: usertitle= 

$cutend[2]: Tim – click here to go to my blog

$cutend[3]: > Tim</a>: Greetings! How are you, Lucille?</div>

Which part am I interested in?  Only the member id number, and finally that’s what I’ve got in $cutend[0].  If I want, I can rename it to help me remember what I’ve got:

$memberid = $cutend[0];

Taken all together, the code looks like this:


// Does this line contain a member id number?
if (stristr($line, '?memberid='))
{
// Split the line just after the text memberid="
$cutstart = explode('memberid="', $line);
// Split what remains at each double quotation mark
$cutend = explode('"', $cutstart[1]);
// The member id number is the first piece
$memberid = $cutend[0];
}


This may not be the most elegant or efficient solution, but it’s pretty simple — and most importantly, gosh darn it, it works.  A novice data miner like me will never get hired away by Google for basic programming like this, and if you’re a social scientist with mad programming skills you may scoff at the elementary nature of this step.  That’s OK; this isn’t written for the Google corporation or wicked-fast coders.  I wrote all this out because the code was a big step for me in becoming a better, more complete social scientist.  If you’re looking to take the same step, I hope this post helps you along.

Credit goes to Tizag for helping me to understand the “explode” command a bit better. In turn, if you can think of a way for me to explain this more clearly or fully, please let me know by sharing a comment.
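
By the way, if Python happens to be the language you already know, the same preceded-by/followed-by rule can be written there in much the same spirit. Here is a minimal sketch (the function name and the sample line of HTML are just for illustration):

# Find text that is preceded by [preceding text] and followed by [following text]
def extract_between(line, preceding, following):
    # Return the text between the two delimiters, or None if the line lacks them
    if preceding not in line:
        return None
    remainder = line.split(preceding, 1)[1]
    return remainder.split(following, 1)[0]

line = '<div id="444"><a href="/member.php?memberid="201" usertitle="Tim"> Tim</a>'
print(extract_between(line, 'memberid="', '"'))  # prints: 201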

Building Offline Community to study Online Community: the Social Media & Society Conference

Attending academic conferences can feel a bit like living in a retelling of Goldilocks and the Three Bears. A conference that’s too small can leave you feeling underfed. On the other hand, a conference that’s too large can be overwhelming, intimidating and even alienating. A conference on a highly particular subject may be quite useful if you select just the right one, but may be completely useless if you’re even slightly off the mark. The presentations at an overly general conference may lack those crucial connections that stimulate career-changing “aha!” insights. If you’ve been to enough conferences, you probably know what I mean.

How rare, and therefore how precious, is the conference that hits the Goldilocks sweet spot in between these distasteful extremes. The 2013 Social Media & Society International Conference was that conference for me. Gathering and connecting presentations on the causes, kinds and consequences of online social connection, #SMSociety13 managed to be more than simply the sum of its individual presentations. Researchers across diverse fields of social science, humanities, business and computer science shared distinctive approaches and concerns regarding the same substantive subject, which meant that we all had some basis for understanding but also had something to learn:

Topics of discussion at #SMSociety13, the 2013 Social Media and Society Conference

Attendance numbered in the sweetly moderate middle between a hundred and two hundred, providing a critical but collegial mass of thinkers who began conversations during one set of presentations and continued them across others. How do we bridge (or barricade) the quantitative-qualitative divide? How do we know who is “really” speaking in an online environment, and how do participants manage the online presentation of self? What are the ways in which online interaction leads to offline action? As we ran into one another again and again in various combinations, these questions carried over into the late night at a pub and over danishes in the morning, with an aggregate from far-flung places becoming a quirky community.

Photos from the 2013 Social Media and Society Conference at Dalhousie University in Halifax, Nova Scotia

The Social Media & Society International Conference meets again at Ryerson University in Toronto on September 27-28, 2014. Got a paper or panel in mind? Submit through this link; I’d love to see you there. Abstracts are due April 18. Poster proposals are due May 23.
