Cindy and I, who are not tech-savvy at all, have been working with what feels like a million data sets.
Our process is outlined below:
– What were we looking for?
– How were we going to explain it?
– How did it relate to our research?
– How were we going to visualize it?
The first question was the easiest. We knew what we wanted to share; we just did not know how to share it visually. Vilem Flusser says, “Changing image to text is magical,” but I tend to think text to image can also be very magical.
Of course, Step 1: Attending a DH Fellows Presentation (if you have no clue about the software, this is the way to go)
Step 2: Working with the software to understand how it will tell your story
Step 3: Going to the data sets. The one for us was https://nycopendata.socrata.com/ (see the fetching sketch after these steps)
Step 4: Finding the right program for our story. For us it ended up being CartoDB. I mentioned Neatline in another post, but as you will see in the next step, that did not work.
Step 5: Exporting the data in a way that would map without a lot of commands. We tried exporting a number of data sets that did not work for us: they would import as polygons or with null values, and who knew how to georeference that? Then they would not map. Only the shapefiles seemed to map easily (see the cleanup sketch after these steps).
Step 6: Finding more data that would tell the story
Step 7: Once we found the data, finally mapping it
Step 8: Making the data make sense for the viewer
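For anyone curious what pulling one of these data sets actually looks like (Step 3), here is a minimal sketch in Python using the SODA API that NYC Open Data sits on. The dataset ID "abcd-1234" is a hypothetical placeholder, not one we actually used; swap in the ID shown on the data set's own page.

```python
import requests

# NYC Open Data exposes each data set through the Socrata SODA API.
# "abcd-1234" is a hypothetical placeholder ID; replace it with the
# real ID from the data set's page at https://nycopendata.socrata.com/
DATASET_ID = "abcd-1234"
URL = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json"

# Ask for the first 1,000 rows as JSON.
response = requests.get(URL, params={"$limit": 1000})
response.raise_for_status()
rows = response.json()

print(f"Fetched {len(rows)} rows")
print(rows[0] if rows else "No rows returned")
```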
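And here is a rough sketch of the kind of cleanup that might have saved us some grief in Step 5: checking an exported CSV for null coordinates before uploading it to CartoDB. The file name and the `latitude`/`longitude` column names are assumptions; your export may label them differently.

```python
import pandas as pd

# Hypothetical file and column names; NYC Open Data exports vary,
# so check your own CSV's headers first.
df = pd.read_csv("nyc_export.csv")

# Count rows with missing coordinates -- these are the "null" rows
# that CartoDB could not georeference for us.
missing = df["latitude"].isna() | df["longitude"].isna()
print(f"{missing.sum()} of {len(df)} rows have no coordinates")

# Keep only the rows that can actually be placed on a map,
# then write out a clean file to upload to CartoDB.
df[~missing].to_csv("nyc_export_clean.csv", index=False)
```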
Today in class we will show our final geomapping project: How does ICT impact student learning? Do income and location impact whether students can achieve?
Our project will be displayed in CartoDB.
Stay Tuned