Category Archives: Spring 2015


TANDEM Project Update 4.26.15

WEEK 12 TANDEM PROJECT UPDATE:

This has been a week of accelerated achievement on all fronts for TANDEM. Thanks to Steve, we have a working MVP hosted on www.dhtandem.com/tandem. Further, we have made huge strides on the front end with Kelly’s robust initial set of HTML/CSS pages for the site. While the two ends are not tied together just yet, they are within sight as of this weekend. Jojo continues to impress the group with her intuitive mix of outreach savvy and energy, having sent out personalized invitations to key members of our contact list and to people who have shown interest in the past few months. Keep reading for more detailed information about these and other developments.

DEVELOPMENT / DESIGN:

MVP functionality added this week includes:

  • Ability to upload multiple files
  • Ability to persist data via a SQLite database containing project data and pointers to file locations
  • Backend analytic code connected to the front end
  • Ability to zip and download results
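The persistence and download features above can be sketched roughly as follows. This is a minimal illustration, not TANDEM’s actual code: the table name, columns, and function names are all hypothetical.

```python
import sqlite3
import zipfile
from pathlib import Path

def save_upload(db_path, project, upload_path):
    """Record an uploaded file: store project metadata plus a pointer
    to where the file lives on disk (the file itself stays on disk)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS uploads "
        "(id INTEGER PRIMARY KEY, project TEXT, file_path TEXT)"
    )
    conn.execute(
        "INSERT INTO uploads (project, file_path) VALUES (?, ?)",
        (project, str(upload_path)),
    )
    conn.commit()
    conn.close()

def zip_results(db_path, project, zip_path):
    """Bundle a project's files into a single downloadable zip,
    following the stored pointers rather than scanning the disk."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT file_path FROM uploads WHERE project = ?", (project,)
    ).fetchall()
    conn.close()
    with zipfile.ZipFile(zip_path, "w") as zf:
        for (path,) in rows:
            zf.write(path, arcname=Path(path).name)
```

The key design point is that the database stores only pointers (paths), so large image and text files never bloat the SQLite file itself.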

Remaining tasks are:

  • Implement polished UI
  • Implement error handling
  • Handle session management so that simultaneous users keep their data separate
  • Look for opportunities to gain efficiency
  • Correct a small bug in the OpenCV output
  • Review security and backup/file-storage approaches, reworking as needed to follow best practices
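The session-management task above boils down to keying every piece of state on a per-visitor token instead of a shared global. Here is a stdlib-only sketch of that idea; in practice a Django app would lean on its built-in session machinery rather than a class like this.

```python
import uuid

class SessionStore:
    """Keep each visitor's uploads separate by keying all state
    on an unguessable per-session token."""
    def __init__(self):
        self._data = {}

    def new_session(self):
        token = uuid.uuid4().hex          # random per-user id
        self._data[token] = {"files": []}
        return token

    def add_file(self, token, path):
        self._data[token]["files"].append(path)

    def files(self, token):
        return self._data[token]["files"]

# Two simultaneous users never see each other's uploads.
store = SessionStore()
alice, bob = store.new_session(), store.new_session()
store.add_file(alice, "a.png")
store.add_file(bob, "b.png")
```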

OUTREACH:

Continuing to garner community support, Jojo attended a GC Digital Initiatives event Tuesday as well as the English department’s Friday Forum. Additionally, initial invites for the launch went out to the digital fellows and DH Praxis friends and family via Paperless Post. Digital Fellow Ex Officio Micki Kaufman has already replied that she wouldn’t miss it. I’m now working to organize outreach with the other teams.

The press release is coming along on the class wiki, too!!

Corpus:

With functionality ironed out, we continue to work with the dataset we have generated via TANDEM for the Mother Goose corpus. As part of our release, we will include work that we have done in both analysis and data visualization for the initial test corpus. If you have questions or points of interest in Mother Goose, feel free to comment below! We are interested in hearing the kinds of questions one might ask of a text/image corpus.

HUAC

As we close in on the final weeks, we’ve come to realize that we may not be able to write a script that will do all that we want, search-wise. Fortunately, working with DocumentCloud as our database has allowed us to utilize their robust functionality, and we have used their tools to provide basic search and browse functions on our site. With these in place, we’re focused on polishing our front end, pitch, and documentation. We are also considering adding one more layer of fun…

We would like to position this project partly as a useful tool for historians, partly as a template for a replicable front end to DocumentCloud, and partly as participatory digital scholarship.

On the participatory aspect, we’re considering creating and implementing a crowdsourcing platform to help assign the needed metadata to the individual testimonies.

One of the early and lasting story lines behind the project has been making publicly accessible a collection of materials with a shadowy past and a curious relationship to public/private spaces, agendas, politics, and notions of guilt. At this stage, scholars would appreciate having the transcripts collected and rendered (simply) searchable; the scattered nature of the testimonies themselves is a major roadblock to HUAC studies that we’re trying to level out. But beyond that, incorporating crowdsourcing would resonate with the true spirit of the Digital HUAC project, which in a sense is the anti-HUAC project, by relying on contributions from the public. To include a wide array of contributors in documenting and publicizing material whose origins lie in silencing or coercing folks seems powerful.

We’d love to hear input from you, our classmates, on this potential new addition to the project.

CUNYcast weekly update

CUNYcast has hit the ground running with great work all around.

Outreach & Management:

This week, outreach set up an amazing tabling event! We received over 30 signatures from people who are interested in casting! This Tuesday evening we will be hosting a workshop for interested casters!

Developer:

To get our front-end calendar and show-info widgets working, we had to do two things:

Define sourceDomain: The installation guide for the widgets does a poor job of explaining this, but the sourceDomain is the site from which you are pulling information. The tutorials we were using stated that the sourceDomain should be your public site address, but it actually isn’t. Our public site address is cunycast.net (the site we are sending information to), and the site we’re getting information from to fill the widgets is airtime.cunycast.net (the proper sourceDomain).

Remove iframe: Airtime recommends putting its widgets in an iframe, which stands for inline frame. An inline frame allows you to embed another HTML document in a page, so iframes let you define and manipulate rules within a specific section of your page. That said, they are finicky and hard to configure: not only do you have to figure out the proper dimensions for the frame on the page, but you also need to work between two .html files to get things running properly. So we took the information from the iframe’s .html file and embedded it directly in the <head> and <body> of our page.

Designer:

The website has been fleshed out; we now have five pages:

today

today1

today2

The FAQ list is rounding out a lot of uses for the site, and the Process page is going to be a great space for us to manage our longer-form tutorials and our project development. This week we will formally document our user testing and make the necessary changes to our site. We will perform five user-testing sessions: we will open the website for a subject and ask them to respond, then ask them to use the tutorial page specifically and respond again. This should be a great way to finalize the language about our project.

YOU CANNOT SEE THE CHANGES LIVE YET but… you will be able to see them Monday!

yay!

Stay tuned and I will update this post when the pages go live!

Fashion Index Weekly Update

During this semester, we have been working on a collection of Python and Instagram scripts and on interacting with the database.

We added the “game” function, built on Bootstrap, which will be an ethos of our website. The game refers to the indexing page on the website as well; we believe the index page will give users a curatorial power. The game invites participation, and in this respect our group is aiming for more interaction. User engagement plays a critical role in this project.

We are planning to tag and archive more images in order to build deeper historical content. To each image, we attach longitude, latitude, time information, and URLs.
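The per-image metadata described above can be pictured as one CSV row per image, which matches the CSV export shown below. This is only a sketch: the column names and the sample record are illustrative, not our actual schema.

```python
import csv
import io

# Hypothetical column names for one image record.
FIELDS = ["image_url", "longitude", "latitude", "timestamp", "tags"]

def write_records(records, out):
    """Write image metadata records (dicts) as CSV rows."""
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)

# A made-up example record, written to an in-memory buffer.
buf = io.StringIO()
write_records(
    [{"image_url": "https://instagram.com/p/abc123",
      "longitude": -73.993, "latitude": 40.750,
      "timestamp": "2015-04-11T14:32:00", "tags": "#sprezzatura"}],
    buf,
)
```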


manhattan_nyfw_csv

We are thinking about how we can contribute to both fashion studies students and DHers. The way we collect images, pulling them from Instagram via crowdsourcing, is a fully community-facilitated process. We watch for latent communities and interests, and community building is integral because it will generate new types of communication among different users. Eventually, we will expand to other parts of the fashion world, e.g., Paris and London. We will also moderate and curate the data.

We have been discussing the possibilities and prospects of our theme in terms of layers of interaction in fashion: what is the extent of this field’s concerns? We are also questioning the power of the fashion world.


Team HUAC

This week, team Digital HUAC worked on refining our project narrative. This work dovetails with both outreach and site content: we’ll use narrative material to pitch potential users and partners and to beef up our site itself. Juliana developed a thorough “pitch kit” with relevant topics and questions, and in response we filled out sections such as “Challenges with the Current State of HUAC Records” and “Our Solution.” We feel that such an approach effectively communicates vital information to all parties. It also helps us think through issues concretely. Nothing forces you to articulate your project’s means and aims better than thinking about how strangers will interact with it all.

We also demoed a new MVP as a fallback plan. Given that we are gravitating towards fully leveraging Document Cloud’s search interface, we experimented with embedding the DC viewer and search mechanism in our site itself. This is less than ideal: for one thing, this only rendered string-search results that didn’t make use of the robust, standardized metadata that we took time to tag each transcript with. But it was helpful to think about recasting our MVP just in case, and we welcomed the chance to get under the hood of Document Cloud in more detail.

Wanna “Cast” ?

Q. What did one graduate student say to the other graduate student?

A. Wanna “cast”???

Go ahead…. we dare ya. Shout out loud! (and if you’re still not sure, stop by the front lobby, we’ll be demo-ing from 1:30-7 on Monday and Tuesday April 20 & 21st in front of the library)

Cast_1

Cast_2

Cast_3

Cast_4

Fashion Index Weekly Update

During the class discussion, we ran into the issue of a static version vs. a dynamic version of our website. Since we are running out of time, we may not be able to cover the database, so we ended up continuing with the static one. We need to navigate data between mappings, as well as navigate tags based on time and space via CSV scripts.

At this point, the feedback loop is critical: it means increasing user engagement, such as tagging more maps and bringing in more images related to #sprezzatura. We will stick with #sprezzatura instead of #nyfw (New York Fashion Week).

Our website has a theory section, which we may fill with 500-1000-word essays written by fashion studies scholars. We expect those essays will be a strong part of the user stories.

The developer, Tessa, plans to explore new tags, implement retroactive geographical filters, and populate tags from NY (a few images, which will be trend-based). Cleaning out of the datasets images that are not actually related to #sprezzatura is very important: the images should be credible, and we can easily find several irrelevant images attached to the hashtags we are looking for, so we will set our own parameters to filter out random images. Tessa is also going to work on geocoding: the main function searches addresses based on zip codes, longitude, and latitude, with the data coming from a Python script. Reverse geocoding goes the other direction, converting longitude and latitude back into zip codes.
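The two geocoding directions can be illustrated with a toy sketch. The lookup table below is hypothetical; a real implementation would call a geocoding service or use a full zip-code dataset.

```python
# Hypothetical zip-code centroids as (latitude, longitude).
ZIP_CENTROIDS = {
    "10001": (40.7506, -73.9972),
    "10018": (40.7551, -73.9884),
}

def geocode(zip_code):
    """Forward geocoding: zip code -> (latitude, longitude)."""
    return ZIP_CENTROIDS[zip_code]

def reverse_geocode(lat, lon):
    """Reverse geocoding: (latitude, longitude) -> nearest zip code,
    by squared distance to each centroid."""
    return min(
        ZIP_CENTROIDS,
        key=lambda z: (ZIP_CENTROIDS[z][0] - lat) ** 2
                      + (ZIP_CENTROIDS[z][1] - lon) ** 2,
    )
```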

We made slight changes to our website. On our introduction page, we display a dataset of images, shuffled around in black and white. We also added a dataset section.

Renzo made questionnaires in order to get feedback; they are composed of multiple-choice and short-answer questions.

According to Dave Rioden from NYPL, we should focus on interaction and engagement with users. We are planning to set up a server that will archive game data. DH and fashion studies students will test out the game.


playfashion


Lastly, we got 24 followers on our Instagram account. Undergraduate fashion school students and a fashion blogger followed us. We should facilitate more chances for interaction and communication.


Back 2 Werk

Hope everyone had a great week away from classes.

We have approval for tabling in front of the library on April 20 & 21st to get the word out about CUNYcast (Thank you Matt G. for getting permissions). We hope to sign up casters at GC and share information about our initiative with the rest of the student body.

Our first big event will be attempting to broadcast the Annual Academic M.O.M. Conference which is being held at The Graduate Center this year (as well as Manhattan College).

WGS tells me that they have organized electronic signage and that the event is posted on their website. (Read more about the launch on my blog from last week, or see our blog post here on the Commons from April 5 for our upcoming work schedule.)

See y’all in class. Onward.

Digital HUAC Update

A short update today, as we continue to push forward on getting our search functional. We’re stalled out on a few specific questions that are, hopefully, the final barriers to putting it all together. We’ve reached out to the digital fellows and a few other people we hope can help us with these questions:

-What is the best way to connect to a REST API? Our code is currently configured using curl. Is that the best approach?
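One alternative to shelling out to curl is making the GET request from Python’s standard library. The sketch below targets DocumentCloud’s public search URL; treat the exact endpoint and parameters as assumptions to verify against DocumentCloud’s API documentation.

```python
import json
import urllib.parse
import urllib.request

def build_search_url(query, page=1):
    """Build a DocumentCloud-style search URL; the query string is
    percent-encoded so spaces and quotes survive the trip."""
    params = urllib.parse.urlencode({"q": query, "page": page})
    return "https://www.documentcloud.org/api/search.json?" + params

def search(query):
    # A GET request with urllib, roughly equivalent to:
    #   curl "https://www.documentcloud.org/api/search.json?q=..."
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp)
```

curl is fine for prototyping; the advantage of doing it in Python is that the JSON response lands directly in native lists and dicts for further processing.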

-What is the best way to structure our search in JSON: using a list (with search results indexed by position) or using an associative array of key-value pairs? We have created key-value metatags for our documents in DocumentCloud, but the resulting JSON search results only display the built-in metadata tags (e.g., title: “”, id: “”) and not our created metadata tags. Is that an issue on the DocumentCloud side or on the coding side?
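The list-vs-associative-array question can be made concrete with a made-up response. The field names below are hypothetical, not DocumentCloud’s actual schema: the point is that the result set arrives as a JSON list (positional), while each document’s custom metadata is a dict of key-value pairs, and you can re-index the list by any custom key.

```python
import json

# A made-up search response: "documents" is a JSON list, while each
# document's custom metadata ("data") is an associative array.
raw = """
{
  "documents": [
    {"id": "1-smith", "title": "Smith testimony",
     "data": {"witness": "Smith", "year": "1952"}},
    {"id": "2-jones", "title": "Jones testimony",
     "data": {"witness": "Jones", "year": "1953"}}
  ]
}
"""

results = json.loads(raw)

def by_witness(results):
    """Re-index the positional result list by a custom key-value tag,
    so lookups no longer depend on position in the list."""
    return {doc["data"]["witness"]: doc["id"]
            for doc in results["documents"]}
```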

We’ve added a bunch more testimonies to our DocumentCloud group and have started entering the metadata for them. The writing and outreach processes continue to move forward, along with some of the smaller aspects of UX and development.

TANDEM Project Update 4.11.15

TANDEM Week 9 Presentation

TANDEM: A Brief Agenda

I. Review our project goals

  • Discuss new interested users (advertising, biodiversity cataloging)
  • Discuss output applications in “Mother Goose Counts”

II. Describe our development drive

  • Branches of Dev underway
    • UI/UX dynamic pages
    • Django framework
    • TANDEM tool python script

III. Explain our development steps

  • Two parallel paths were followed in building the Python “backend” code to run the analytics on users’ input files
  • The paths were merged and tested on a laptop
  • The Python environment was then built on the server
  • A command-line version of TANDEM will now run on the server using local server-based files.
  • @sreal19 will Demo TANDEM! (Fasten your seatbelts, folks!)

IV. Discuss next steps

  • What still needs doing: hooking up the front and back ends.
  • Getting polished examples of our output up along with clear links to available datavis resources.
  • Getting Kelly’s best practices documentation live.
  • Outreach (not just to beta testers, but to users who might not have considered these tools before; we’re looking for education and journalism applications)
  • Now is also the time to start considering the life beyond Praxis:
  • Grants for continuing work?
  • How much labor/manpower/development would be needed to move beyond MVP?
  • What does 1.0 look like?

Thanks for following and stay tuned for updates!

@dhTANDEM #picturebookshare
