Wanna “Cast” ?

Q. What did one graduate student say to the other graduate student?

A. Wanna “cast”???

Go ahead… we dare ya. Shout it out loud! (And if you're still not sure, stop by the front lobby: we'll be demoing from 1:30 to 7 on Monday and Tuesday, April 20 and 21, in front of the library.)

[Images: Cast_1–Cast_4]

Fashion Index Weekly Update

During the class discussion, we weighed a static version of our website against a dynamic one. Since we are running out of time, we may not be able to build a database, so we decided to continue with the static version. We still need to connect our data to the map, and to navigate tags by time and space using our CSV scripts.
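
For the static version, the time and space filtering can live in a small CSV script. Below is a minimal sketch, assuming hypothetical column names (tag, created_time, latitude, longitude) and ISO-formatted timestamps; our real files may differ.

```python
# Minimal sketch: pull rows for one tag, inside a rough NYC bounding box,
# within a date range. Column names and file name are placeholders.
import csv
from datetime import datetime

NYC_BBOX = (40.49, 40.92, -74.26, -73.70)  # lat_min, lat_max, lon_min, lon_max

def filter_rows(path, tag, start, end):
    lat_min, lat_max, lon_min, lon_max = NYC_BBOX
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["tag"] != tag:
                continue
            taken = datetime.fromisoformat(row["created_time"])
            if not (start <= taken <= end):
                continue
            lat, lon = float(row["latitude"]), float(row["longitude"])
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                yield row

if __name__ == "__main__":
    for row in filter_rows("images.csv", "sprezzatura",
                           datetime(2015, 2, 1), datetime(2015, 4, 1)):
        print(row["latitude"], row["longitude"])
```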

At this point, the feedback loop is critical: we want to increase user engagement, such as tagging more maps and bringing in more images related to #sprezzatura. For now, we will stick with #sprezzatura instead of #nyfw (New York Fashion Week).

Our website has a theory section, which we may fill with 500-1000 essays written by fashion studies scholars. We expect those essays to be a strong part of the user stories.

Our developer, Tessa, plans to explore new tags, implement retroactive geographic filters, and populate tags from NY (a small set of trend-based images). Cleaning the dataset of images that are not really related to #sprezzatura is very important: the images should be credible, and we can easily find several irrelevant images attached to the hashtags we are looking for, so we will set our own parameters to filter out random images. Tessa is also going to work on geocoding; the main function searches addresses based on zip codes, longitude, and latitude, and the data come from a Python script. For geo-specific data, "reverse geocoding" converts zip codes to longitude and latitude.
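
We have not settled on a geocoding library yet, but as a minimal sketch, the zip-code step of the Python script could use geopy's Nominatim geocoder along these lines (the library choice and the sample zip code are assumptions, not our final implementation):

```python
# Rough sketch of zip-code geocoding with geopy's Nominatim (one possible library).
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="fashion_index_demo")

def zip_to_latlong(zip_code):
    """Return (latitude, longitude) for a US zip code, or None if not found."""
    location = geolocator.geocode({"postalcode": zip_code, "country": "US"})
    if location is None:
        return None
    return (location.latitude, location.longitude)

print(zip_to_latlong("10016"))  # roughly (40.74, -73.98) for midtown Manhattan
```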

We made slight changes to our website. On the introduction page, we display a set of images that shuffle around in black and white. We also added a dataset section.

Renzo made questionnaires to gather feedback; they are composed of multiple-choice and short-answer questions.

According to Dave Rioden from NYPL, we should focus on interaction and engagement with users. We are planning to set up a server that will archive game data, and DH and Fashion Studies students will test out the game.

[Image: playfashion]

Lastly, we now have 24 followers on our Instagram account, including undergraduate fashion school students and a fashion blogger. We should create more opportunities for interaction and communication.

Back 2 Werk

Hope everyone had a great week away from classes.

We have approval for tabling in front of the library on April 20 and 21 to get the word out about CUNYcast (Thank you Matt G. for getting permissions). We hope to sign up casters at GC and share information about our initiative with the rest of the student body.

Our first big event will be attempting to broadcast the Annual Academic M.O.M. Conference, which is being held at The Graduate Center (as well as Manhattan College) this year.

WGS tells me that they have organized electronic signage and that the event is posted on their website. (Read more about the launch on my blog from last week, or see our April 5 post here on the Commons for what our upcoming work schedule looks like.)

See y'all in class. Onward.

Digital HUAC Update

A short update today, as we continue to push forward on getting our search functional. We're stalled out on a few specific questions that are, hopefully, the final barriers to putting it all together. We've reached out to the digital fellows and a few other people we hope can help us with these questions:

-What is the best way to connect to a REST API? Our code is currently configured using curl. Is that the best approach?

-What is the best way to structure our search results in JSON: as a list (with results indexed by position) or as an associative array of key-value pairs? We have created key-value metadata tags for our documents in DocumentCloud, but the resulting JSON search results only display the built-in metadata fields (e.g., title: "", id: "") and not our custom tags. Is that an issue on the DocumentCloud side or on the coding side?
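
While we wait to hear back, here is a toy comparison (written in Python just to make the shapes concrete; our actual code is PHP) of the two structures we are weighing. The sample response below only imitates the shape of a DocumentCloud search result; the values are placeholders.

```python
# Toy comparison of the two JSON structures we are debating.
# The sample only mimics a DocumentCloud search response; values are placeholders.
sample_response = {
    "total": 2,
    "documents": [
        {"id": "doc-aaa", "title": "Testimony A", "data": {"witness": "Name A"}},
        {"id": "doc-bbb", "title": "Testimony B", "data": {"witness": "Name B"}},
    ],
}

# Option 1: a list -- preserves result order, items addressed by index (position).
as_list = [(d["title"], d.get("data", {})) for d in sample_response["documents"]]
print(as_list[0])                     # the first-ranked result

# Option 2: an associative array (dict) keyed by document id -- direct lookup by key.
as_dict = {d["id"]: {"title": d["title"], "data": d.get("data", {})}
           for d in sample_response["documents"]}
print(as_dict["doc-aaa"])             # lookup without knowing the position
```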

We've added a bunch more testimonies to our DocumentCloud group and have started entering the metadata for them. The writing and outreach processes continue to move forward, along with some of the smaller aspects of UX and development.

TANDEM Project Update 4.11.15

TANDEM Week 9 Presentation

TANDEM: A Brief Agenda

I. Review our project goals

  • Discuss new interested users (advertising, biodiversity cataloging)
  • Discuss output applications in “Mother Goose Counts”

II. Describe our development drive

  • Branches of Dev underway
    • UI/UX dynamic pages
    • Django framework
    • TANDEM tool python script

III. Explain our development steps

  • Two parallel paths were followed building Python “backend” code to run the analytics on the users’ input files
  • The paths were merged and tested on a laptop
  • The Python environment was then built on the server
  • A command-line version of TANDEM will now run on the server using local server-based files.
  • @sreal19 will demo TANDEM! (Fasten your seatbelts, folks!)

IV. Discuss next steps

  • What still needs doing: hooking up the front and back ends.
  • Getting polished examples of our output up along with clear links to available datavis resources.
  • Getting Kelly’s best practices documentation live.
  • Outreach (not just to beta testers, but to users who might not have considered these tools before; we're looking for education and journalism applications).
  • Now is also the time to start considering the life beyond Praxis:
  • Grants for continuing work?
  • How much labor/manpower/development would be needed to move beyond MVP?
  • What does 1.0 look like?

Thanks for following and stay tuned for updates!

@dhTANDEM #picturebookshare

[Image: Tufte retweet]

Fashion Index weekly update

All of our team members have been working hard during the break. We have also faced certain restrictions.

#Sprezzatura tag search was not working well with the NYC latitude and longitude data that we were looking for. We decided to choose a more fashion-themed tag that already pulls images from NYC, so we made an alternative plan to focus on the tag #NYFW (New York Fashion Week). This will be our MVP (Minimum Viable Product).
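
As a rough sketch, the #NYFW pull could look like the Python below, assuming we keep using the Instagram API's tag endpoint and simply filter posts to a New York bounding box; the access token, count, and bounding-box values are placeholders.

```python
# Sketch: pull recent #nyfw media and keep only posts geotagged inside NYC.
# ACCESS_TOKEN and the bounding box are placeholders.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
NYC_BBOX = (40.49, 40.92, -74.26, -73.70)  # lat_min, lat_max, lon_min, lon_max

resp = requests.get(
    "https://api.instagram.com/v1/tags/nyfw/media/recent",
    params={"access_token": ACCESS_TOKEN, "count": 33},
)
lat_min, lat_max, lon_min, lon_max = NYC_BBOX

nyc_posts = []
for post in resp.json().get("data", []):
    loc = post.get("location") or {}
    lat, lon = loc.get("latitude"), loc.get("longitude")
    if lat is None or lon is None:
        continue  # skip posts without a geotag
    if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
        nyc_posts.append({
            "created_time": post["created_time"],
            "latitude": lat,
            "longitude": lon,
            "image_url": post["images"]["standard_resolution"]["url"],
        })
```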

[Image: Excel files]

Tessa highlighted the fields that will be relevant for CartoDB, including latitude/longitude and created time (adjusted for the New York time zone). She also included the image URL.
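
A pandas sketch of that spreadsheet cleanup is below; it assumes created_time arrives as a Unix timestamp and that the file and column names match ours, which may not be exactly the case.

```python
# Sketch of the CartoDB prep: keep only the needed fields and convert the
# timestamp to New York local time. File and column names are placeholders.
import pandas as pd

df = pd.read_csv("nyfw_images.csv")

# Keep only the fields CartoDB needs.
carto = df[["latitude", "longitude", "created_time", "image_url"]].copy()

# Convert the Unix timestamp from UTC to New York local time.
carto["created_time"] = (
    pd.to_datetime(carto["created_time"], unit="s", utc=True)
      .dt.tz_convert("America/New_York")
)

carto.to_csv("nyfw_images_carto.csv", index=False)
```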

Minn updated CartoDB, posting the images against NYC open data. First, he custom-designed a five-borough base map in Mapbox and exported it into CartoDB. He then filled the five boroughs of New York with polygons, and finally placed pins on specific areas.

[Images: map, latlong, instagram]

Digital HUAC update

This week we are working on some large items:

Our number one goal this week has been to get our search functionality up and running. Daria has been a coding machine, working on this non-stop, and we're nearly there. Some of the things Daria has been grappling with are connecting to the DocumentCloud API using a REST API call function and figuring out the best structure to be read by both PHP and JSON. The existing tutorials and scripts either explain how to use PHP to connect to a MySQL database or use Python to connect to the DocumentCloud API; however, Google Developers has a tutorial on using PHP to connect to the Google Books and Google News APIs, which has proven useful in working out the PHP-to-DocumentCloud situation. For a peek behind the scenes, check out some of Daria's code here.
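
For anyone following along who is more comfortable in Python, a rough equivalent of the call Daria is building might look like this sketch. The query and per_page values are placeholders, and the data=true parameter reflects our current understanding of how DocumentCloud exposes custom key/value pairs; we are still confirming that.

```python
# Rough Python parallel of the PHP/curl call we are building (placeholders throughout).
# data=true is our understanding of how DocumentCloud returns custom key/value pairs.
import requests

resp = requests.get(
    "https://www.documentcloud.org/api/search.json",
    params={"q": '"Walt Disney"', "data": "true", "per_page": 25},
)
resp.raise_for_status()

for doc in resp.json().get("documents", []):
    print(doc["id"], doc["title"], doc.get("data", {}))
```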

Juliana and Chris have been hitting Twitter hard, and our followers have doubled in the last week. Juliana created an NYC DH account and is exploring it as a place to find groups and people who might be interested in our project. We continue to amass a list of historians and institutions that will be interested in Digital HUAC. All of this outreach is working toward our short-term goal of getting our project name out there, and also our long-term goal of finding an institution to partner with (which is one more step on the road to Digital HUAC world domination).

Juliana and Chris have also begun to write up our overarching narrative (the theme: NO APOLOGIES!) as a way to create a story to pitch, but also with an eye toward the future beyond class. What direction do we want the project to go in, and how is the narrative helpful in this regard? Along these same lines, we're simultaneously writing content for our site, since many of our current pages are just placeholders. We're slowly but steadily working toward a functional, robust site.

We started with 5 testimonies, because that seemed like a manageable number when we had a lot of technological unknowns. Now that we've gotten over some of our biggest technology hurdles, we're able to increase the size of our corpus with relative ease. I am adding new testimonies to our DocumentCloud group daily, and the associated metadata will be added in the coming week as well. We don't have an updated target number of testimonies, but would like to get as many in as possible. This process of adding testimonies will continue throughout the rest of the semester. The added testimonies will make search testing significantly more interesting, as well as showcase more of this project's full potential.
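
If we end up scripting the batch uploads instead of adding testimonies by hand, the python-documentcloud wrapper would look roughly like the sketch below; the file name and key/value fields are placeholders, not our real schema.

```python
# Hypothetical batch-upload sketch using the python-documentcloud wrapper;
# we currently add testimonies by hand, and all names/fields here are placeholders.
from documentcloud import DocumentCloud

client = DocumentCloud("USERNAME", "PASSWORD")

# Upload a testimony PDF...
doc = client.documents.upload(
    "testimonies/disney_1947.pdf",
    title="Walt Disney testimony, October 24, 1947",
)

# ...then attach our custom key/value metadata and save the change.
doc.data = {"witness": "Walt Disney", "date": "1947-10-24", "stance": "friendly"}
doc.put()
```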

We’ve also been at work on some smaller items:

–Getting our contact form to send an email to us.
–Getting the browse functionality going, at least in a very beta way. For now, this will just be an alphabetical list of names; each name, when clicked, will lead to a results page of all the documents that person is named in (a rough sketch of the underlying index follows this list).
–We agreed upon a Creative Commons license and have added that to our site in place of the ©.
–We have a new week-by-week action plan that details what needs to get done to get us to a fully-functional MVP by May 19.
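
As referenced above, the browse feature boils down to an index from each name to the documents that mention that person. A toy sketch of that index, with placeholder documents and field names:

```python
# Toy browse index: map each name to the documents that mention that person,
# then list names alphabetically. Documents and field names are placeholders.
from collections import defaultdict

documents = [
    {"id": "doc-1", "names": ["Name A", "Name B"]},
    {"id": "doc-2", "names": ["Name B"]},
]

index = defaultdict(list)
for doc in documents:
    for name in doc["names"]:
        index[name].append(doc["id"])

for name in sorted(index):             # the alphabetical browse list
    print(name, "->", index[name])     # the results page for that name
```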

TANDEM project update

The code merge was completed and tested on two local machines and uploaded to the server at Reclaimhosting.com. According to Tim Owens at Reclaim, the necessary Python packages were loaded on the server, but the code cannot find three of them, so, as of this date, the code has not been run. (Note: running this code on the server is an interim step to verify that the core logic of the text analysis and image analysis works properly.) However, the server was built out so that the demonstration Django application launches successfully. Unfortunately, once it launches, some of the pages cause errors, as does any attempt to write to the database. Our subject matter expert has been contacted to help debug these errors.
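
To narrow down which packages the server-side interpreter actually sees, a quick check script like the one below can be run on the server; the package list is illustrative, not TANDEM's real requirements file.

```python
# Quick server-side check of which requirements resolve for this interpreter.
# The REQUIRED list is illustrative, not TANDEM's actual dependency list.
import importlib.util
import sys

REQUIRED = ["nltk", "PIL", "cv2", "numpy", "django"]   # placeholder list

print("interpreter:", sys.executable)   # which Python the server is using
for name in REQUIRED:
    spec = importlib.util.find_spec(name)
    print(f"{name:10s} {'OK' if spec else 'MISSING'}")
```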

On a separate development path, multiple members of the team are working on building the Django components we need to turn the analytics engine into an interactive web application. Steve is working on linking the core program to a template or view. Chris, Kelly and Jojo are working on designing and building the templates in a Django framework. Current UI/UX concerns involve potential upload sizes combined with processing time, button prompts that launch the analysis, and ways to convey best-practice documentation so that it is clear, concise, and facilitates proactive troubleshooting. The next part of this process will be to address the presentation of the final page, where the user is prompted to download their file. This page has great potential to be underwhelming, but there are some simple features we can apply to jazz it up, such as data visualization examples and external links to next-step options.
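
As a very rough sketch of the view that will eventually link the upload form to the core program, something like the following could work; run_tandem() and the template name are placeholders standing in for our real code.

```python
# Rough sketch of a Django upload view that hands a file to the analysis code
# and returns the CSV output. run_tandem() and "upload.html" are placeholders.
from django.core.files.storage import FileSystemStorage
from django.http import HttpResponse
from django.shortcuts import render


def upload(request):
    if request.method == "POST" and request.FILES.get("corpus"):
        storage = FileSystemStorage()
        saved = storage.save(request.FILES["corpus"].name, request.FILES["corpus"])
        csv_text = run_tandem(storage.path(saved))   # hypothetical core entry point
        response = HttpResponse(csv_text, content_type="text/csv")
        response["Content-Disposition"] = 'attachment; filename="tandem_output.csv"'
        return response
    return render(request, "upload.html")


def run_tandem(path):
    """Placeholder for the merged text/image analysis code; returns CSV text."""
    return "filename,word_count\nexample.jpg,0\n"
```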

On the outreach front, Jojo went to a Django hacknight on Wednesday to meet people building Django apps. She made contact with several new advocates, in addition to garnering further support from Django Girls participants and web developers Nicole Dominguez and Jeri Rosenblum, as well as hacknight organizer Geoff Sechter. The new contacts include Michel Biezunski, who has used Django to upload and redistribute files for his app InstantPhotoAlbum and who could help when we work out options for storing data and giving it back to users.

Last but not least, Chris attended a meetup at the DaniPad NYC Tech Coworking space in Queens, NY this past week. There, he met a handful of Python developers who had insight into working with Django-based web apps. Commercial uses for TANDEM-like tools were brainstormed, and people responded with interest in testing a prototype. Along with academic beta testers, some of these contacts will be included in the contact list when TANDEM is deployed.