Tag Along With Adler: How This Adler Zooniverse Research Project Got Started
Header Image: A person looking through the Doane Observatory telescope at the Adler Planetarium.
Have you ever attempted to search the internet for a specific thing, but found that no matter what you typed in the search bar you couldn’t find what you were looking for? The same problem can happen when searching digital collections! This kind of disconnect between search terms and searchable data happens most frequently when the language used by the searcher differs from the language used by the institution. Even as museums begin to incorporate AI and machine learning into the process of creating descriptive language and keywords for collections materials, many of these vocabulary and word-choice differences are being trained right into the programs, continuing to hinder discoverability! Tag Along with Adler is a new crowdsourced research project from the Adler Planetarium that invites anyone to join us in the curatorial process of describing images of our objects.
Traditionally, museums have focused on cataloging collections in order to record what an object is. This means, for the most part, that collections records focus on information such as who made an object, when it was made, where it was made, and what it is made of. This is all important information to record for future generations, but it misses what the object is about. Many users of online collections search for objects based on visual characteristics: what is depicted on an object, or what the object means or does. If this information is missing from the records, it can be difficult or even impossible to find what you’re looking for.
Most museums make their collections searchable online by adding metadata descriptions of their objects. However, the effectiveness of this process (in terms of whether searches will be successful) depends on how museums describe their collections. For example, if you saw the image below at the Adler Planetarium and wanted to share it with a friend later, you might pull up the Adler’s digital collection search page and start typing in various terms. What terms would you use? Currently, the terms that would pull this image up in the search results are: woman, meteorite, Chicago Park District, 1945, 1950, Adler Planetarium, Chicago. Perhaps you used one of these, but if you used any other terms like “black and white,” “girl,” or “posing,” you wouldn’t find this image!
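To make the limitation concrete, here is a minimal sketch of how a strict keyword search over catalogue terms behaves. The record ID and the term list are illustrative (taken from the example above), not the Adler’s actual data model:

```python
# A toy catalogue: one record, indexed only by the terms cataloguers chose.
catalogue = {
    "P-123": {  # hypothetical accession number
        "terms": {"woman", "meteorite", "chicago park district",
                  "1945", "1950", "adler planetarium", "chicago"},
    },
}

def search(query: str) -> list:
    """Return IDs of records whose term set contains the query exactly."""
    q = query.strip().lower()
    return [obj_id for obj_id, rec in catalogue.items() if q in rec["terms"]]

print(search("meteorite"))        # ['P-123'] -- the cataloguer anticipated this term
print(search("black and white"))  # []        -- no match, so the image stays hidden
```

Any term the cataloguer did not anticipate simply returns nothing, which is exactly the gap that volunteer-contributed tags are meant to close.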
This is a problem for museum collections across the world, and it was causing issues internally for staff at the Adler Planetarium too! As digital programming and social media use have increased during the COVID-19 pandemic, we often want to include collections materials in our digital engagement strategy. In our attempts to search within our own collections based on visual characteristics, we realized it is difficult to even determine what our options are from the search results alone.
Imagine: there is a huge snow storm rolling through Chicago, and you want to share a picture of a time the Adler was “snowed in.” Without the assistance of image tags—descriptive words like “snow” or “blizzard” added to the catalogue—how do you find the one image of the Adler in the snow among the 300+ images that include the Adler building? Without a robust search feature, we have had to rely on staff members looking through the collections manually. If our own staff are having issues with this process, we had to assume these same issues were affecting our guests and online users, too!
How Tag Along With Adler Got Started
I, Jessica BrodeFrank, am the Digital Collections Access Manager at the Adler Planetarium, and for almost 5 years I have helped to manage all the digital collections within our historic collections department, including working with the Collections team to ensure objects are searchable on our online collections search. At the same time, I am a research student working on my PhD at the University of London, School of Advanced Study, specializing in how to increase ease of search and make databases more accessible and inclusive through the use of crowdsourcing.
Along with our Collections and Zooniverse teams, I ran a project called Mapping Historic Skies from November 2019 to February 2021. The project was designed and run as part of a collaboration with the Adler Zooniverse team, including Zooniverse Humanities Lead Dr. Samantha Blickhan (co-author of this blog!), and our former Zooniverse Designer Becky Rother. Through this collaboration, our Collections team was able to witness firsthand the exciting results of using a crowdsourcing platform like Zooniverse to engage with thousands of volunteer citizen scientists. As Mapping Historic Skies was ending, we (Jessica and Samantha) began to consider how crowdsourcing could help the Collections team to further enrich the Adler’s collections database.
Tag Along with Adler is part of my doctoral research, and a result of the ongoing collaborative work between our Adler Zooniverse and Collections teams. Building on previous crowdsourcing projects such as steve.museum, which ran in the 2000s, Tag Along with Adler examines how inviting volunteers into the traditionally professionally curated process of describing collections can not only create engaging experiences for guests, but also produce a more representative and diverse set of search terms for collections.
Get Involved With This Research Project
In Tag Along with Adler we are asking you to look at the visual art within our collections of works on paper, rare book illustrations, and historic photographs and add the terms you would use to search for these images. We acknowledge that it is impossible for any one person to anticipate the language of everyone, so we want your help in getting more access points to our data! By becoming a volunteer with this project, you help not only the Adler Collections and Zooniverse teams, but also the very real research project being conducted as part of my doctoral research. Remember: consensus is not the goal! We want your language and participation, helping us revolutionize the way the museum field looks at interactive experiences and cataloging practices.
Additionally, this research project introduces volunteers to the work that goes into automating this type of process. A frequent question posed to crowdsourcing projects is, “Why can’t a computer do this?” I ran the project images through two different AI tagging models, one trained by the Metropolitan Museum of Art using 155,531 samples from their collections, and the other trained by the Google Cloud Vision API (the basis for any Google Image search). In the “Verify AI Tags” workflow, volunteers can see the tags that these two models created for the Adler collections and help to verify the suggestions. This lets volunteers see both the positive outcomes of AI tagging and its drawbacks and limitations, demonstrating that, while automated processes can be helpful, we need to interact with them critically if we want to avoid incorrect or biased results.
Tag Along With Adler’s Impact
As of June 14th, 2021, Tag Along with Adler is 50% complete! 2,071 registered volunteers, and countless unregistered participants, have helped to contribute 58,472 classifications. The project originally launched with 1,091 Adler collections items, which were sent through both the “Verify AI Tags” and “Tag Images” workflows of the project.
So far, in the “Tag Images” workflow, Adler Zooniverse volunteers have created 100,389 individual tags for 500 images. We compared these individual tags to the current terms available in the Adler Planetarium Collections Search Catalogue, as well as against the terms created by the two AI models used as part of this project: the Google Cloud Vision API and the Metropolitan Museum of Art Tagger. These comparisons were important for understanding how the language our project participants used differs from the language of museum cataloguers and of human-trained computer tagging models. We found that only a very small percentage of the tags added by users were already in the Adler’s catalogue, and even fewer had been created by the AI models! The median percentage of added tags that were already in the Adler catalogue was 12.2%, and the median percentage of added tags that had also been created by the AI models was 7.25%. For our team this was an exciting early assurance of the importance of using crowdsourcing for metadata creation: early proof that the language of cataloguers, and their choices about what to describe, differ from those of the public, and are in many ways much more limited. It also showed that though AI holds some promise for metadata and tag creation, its success is still extremely dependent on the dataset used to train the model.
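As a rough illustration of the kind of comparison described above, the per-image overlap between volunteer tags and a reference set (the catalogue terms, or the AI-generated tags) can be expressed as a percentage and then summarized with a median across images. The tag sets below are invented examples, not the project’s actual data:

```python
from statistics import median

def overlap_pct(volunteer_tags: set, reference_tags: set) -> float:
    """Percentage of volunteer tags already present in a reference set."""
    if not volunteer_tags:
        return 0.0
    return 100 * len(volunteer_tags & reference_tags) / len(volunteer_tags)

# Invented volunteer tags and catalogue terms for three images.
volunteer = [{"woman", "meteorite", "museum", "posing"},
             {"telescope", "dome", "snow"},
             {"map", "stars", "constellation", "paper"}]
catalogue = [{"woman", "meteorite", "chicago"},
             {"telescope", "adler planetarium"},
             {"celestial map"}]

per_image = [overlap_pct(v, c) for v, c in zip(volunteer, catalogue)]
print(per_image)          # [50.0, 33.33..., 0.0]
print(median(per_image))  # median overlap across images
```

A low median, as in the project’s results, means most volunteer tags are genuinely new access points rather than duplicates of existing catalogue terms.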
Similarly, for the “Verify AI Tags” workflow, project participants created 79,023 individual tags for 500 images! Once again, a very small proportion of the tags added by volunteers were already in the Adler’s catalogue, and even fewer had been created by the AI taggers. The median percentage of added tags that were already in the Adler catalogue was 13.4%, and the median percentage that had also been created by the AI models was 4.5%.
A surprise for the research team was that tags added in the “Verify AI Tags” workflow matched AI-generated tags at a much lower rate than tags added in the “Tag Images” workflow! We had anticipated that seeing the AI tags alongside the image in the “Verify AI Tags” workflow would affect the language and tags added by users, and it appears it may have: 20% fewer tags were added by participants in the “Verify AI Tags” workflow than by participants in the “Tag Images” workflow. The median number of tags added per classification was 3.1 in the “Verify AI Tags” workflow, versus 4.02 in the “Tag Images” workflow. Overall, these early statistics hint that being prompted with tags may change the way participants tag an image, initially suggesting it may limit the number of tags they add!
Looking back at this image, the Tag Along with Adler project volunteers added the following terms to describe the image: Asteroid, Blonde, Display, exhibit, Exhibition, Girl, meteor, Meteorite, Moon rock, Museum, Planets, Photograph, portrait, pose, Rock, sitting, sitting on, Space rock, vintage, Woman, Young Woman. Already we are seeing that there are now many more options for finding this image!
Our team is enthusiastic about these early results, with over 100,000 tags already created that will help improve and diversify the cataloging of our collections. Even more incredible is that these came from over 2,000 individual participants, helping the Adler expand whose voices are included in our collections and changing the way we describe our objects to better serve the public. In this project, consensus is not the goal. Join in and help us enrich access to our collections!
Learn More About Adler Zooniverse
Zooniverse is the world’s largest and most popular platform for people-powered research. This research is made possible by volunteers—more than 2.2 million people around the world who come together to assist professional researchers. In 2020, Zooniverse gained over 263,000 new volunteer citizen scientists. You don’t need any specialized background, training, or expertise to participate in Zooniverse projects. It is easy for anyone to contribute to real academic research, on their own computer, at their own convenience.