Today is the international Day of DH 2015 – here’s how I’m reflecting on (and doing!) digital humanities practice and possibility through the Smithsonian Transcription Center.
First, I’ve created a blog for Day of DH 2015. I’ve already posted about the opportunities “we” have to learn together with the Smithsonian Transcription Center. The “we” is me, volunteers, Smithsonian staff, the wider public, researchers, and more – in any and many configurations. Today alone, we’ve learned about 19th-century ink innovations, shared President Obama’s commitment to #PollinatorHealth and how the public can help, and helped new volunteers learn how to transcribe astronomical and botanical logbooks. It’s another in a series of ways #welearntogether.
It was a fascinating discussion with science writer Courtney Quirin – I shared my approaches and the ways I’ve chronicled how people move through keywords to reach my profile; she also suggested a few great ways to improve my interpretation of those numbers.* If you are considering sharing your research and want a strong platform for connecting with scholars in intersecting ways, academia.edu could be the right place for you.
Do you already use profile analytics from academia.edu? Let me know if you have any best practice tips to share!
*Allow me to clarify that I have not been asked to give any endorsement of the social networking site, nor am I receiving anything other than good tips from Courtney to increase my ability to analyze the numbers!
In the attached link, Neal Ungerleider (@nealunger on Twitter) writes for Fast Company about Redditors’ efforts to sleuth out the Boston Marathon bombing suspects through crowdsourced information. Notable is Ungerleider’s cautious balance in critiquing both the motivations of participants and the utility of crowdsourcing information during an unfolding event.
In exploring the behavior on this subreddit, a pertinent takeaway emerges: crowdsourcing in investigative situations is best suited to gathering data (which analysts can use to offset more specific intelligence), but it becomes unreliable – even dangerous as misinformation – when participants are given space to assert conclusions.
Conversely, crowdsourced data, coupled with conclusions, can be helpful for cultural heritage projects like the ones on which I am working; in these situations, we may have serendipitous moments of discovery in relation to collections and the hidden stories of our archives. In both cases, the cautions of the Reddit case are useful considerations in understanding and relating to users/audiences.
This presentation shares the thinking I’ve been using to frame some of my recent work: crowdsourcing and engaging users in the transcription of digitized archival material, and communities of practice on Wikipedia. With this Prezi, I delivered an information briefing to decision-makers at the Smithsonian Institution (SI) in mid-March. This is an ongoing work in progress that seems to offer great opportunities to improve upon and expand crowdsourcing in exciting ways at SI.
I’d like to take up the call to “show my work” and commit more clearly to open data and open cultural data, which would include sharing the steps that lead to my conclusions. Please get in touch if you have thoughts or feedback on these guiding principles – or the tools I’ve discussed.