Twitter is a great subject for social media research because 1) it is used by many active and influential people and 2) its data is presumed public, easing privacy concerns. Yet the sheer volume of Twitter data poses problems for researchers, especially those without the thousands of extra dollars needed to harness serious computing power. Part of the solution for modest researchers at small institutions like mine is to study relatively small-scale subjects. Another part is to tie together multiple low-cost tools rather than looking for one magic software package to address every need.
I’m working on a project right now in which I’ve been following, over time, all tweets by and mentioning members of the Maine State Legislature. I could write a program in PHP that uses the Twitter API to accomplish this… if I had a bit more time and know-how. I’ll try to pick those up later, but for now I’m running multiple copies of the program Tweet Archivist Desktop, each of which captures and saves tweets by or mentioning one Twitter account as they’re posted. Tweet Archivist Desktop costs just $9.99 for a perpetual license, which I consider well worth the price.
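For the record, the core of such a capture script would not be long. Here is a rough sketch of the idea, written in Python rather than PHP for brevity; it assumes Twitter’s v1.1 search endpoint with application-only (bearer token) authentication, and the account name, token, and output file below are all placeholders, not anything from my actual project.

```python
import csv
import os
import time
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder: application-only auth token
QUERY = "@MaineLegislator OR from:MaineLegislator"  # placeholder account
OUTFILE = "maine_legislator.csv"  # placeholder output file

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"


def fetch_tweets(since_id=None):
    """Ask the search endpoint for recent tweets matching QUERY."""
    params = {"q": QUERY, "count": 100, "result_type": "recent"}
    if since_id:
        params["since_id"] = since_id  # only tweets newer than the last one saved
    resp = requests.get(
        SEARCH_URL,
        params=params,
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()["statuses"]


def main():
    # Write the header row only if the file is new or empty.
    write_header = not os.path.exists(OUTFILE) or os.path.getsize(OUTFILE) == 0
    since_id = None
    with open(OUTFILE, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["id", "created_at", "screen_name", "text"])
        while True:
            for tweet in fetch_tweets(since_id):
                writer.writerow([
                    tweet["id_str"],
                    tweet["created_at"],
                    tweet["user"]["screen_name"],
                    tweet["text"],
                ])
                # Track the highest id seen so the next poll skips what we have.
                if since_id is None or int(tweet["id_str"]) > int(since_id):
                    since_id = tweet["id_str"]
            f.flush()
            time.sleep(60)  # poll once a minute to stay under rate limits


if __name__ == "__main__":
    main()
```

One copy of a script like this would be needed per legislator being tracked (or the query could be broadened), which is essentially what running multiple copies of Tweet Archivist Desktop accomplishes for me now.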
Tweet Archivist Desktop creates a separate .csv dataset for each of the searches I’m saving. To gather them all together, I’m following advice helpfully shared by solveyourtech: on my Windows laptop, I open the Command Prompt and combine all of the .csv files in a folder into a single .csv file using a variant of the “copy” command.
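The pattern looks like this (the folder path is a placeholder; substitute wherever your archives live):

```
cd C:\path\to\tweet-archives
copy *.csv combined.csv
```

One caveat: because each source file carries its own header row, the combined file will contain repeated header lines, which need to be filtered out before analysis.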