
Twitter A11y

Goal: Make visual content on Twitter more accessible.

Time frame: 5 weeks, Spring 2022

Team: Arthi Puri and Sigrid Lohre

Skills: Mobile design, Accessibility, UX/UI, Qualitative research

This project was part of NYU's Looking Forward class, taught by Regine Gilbert and Gus Chalkias. Students collaborated with the Twitter Accessibility team to come up with creative solutions for making visual content more accessible in the Twitter app.

Three iPhone mockups of final mobile design

Background and Problem Space

"Media on Twitter is predominately visual—such as data visualizations, emoji, picture stickers, and ASCII art—presenting challenges to users who have cognitive, learning, or sensory disabilities. We (Twitter) want to explore how other sensory modalities, like sound and haptics, can make these media types more accessible."

Deliverable: Create a prototype that explores a new creative concept for how visual content on Twitter can be made more accessible for the low-vision and blind community.

Accessibility

One in seven people worldwide has a disability that affects how they interact with the world and their devices. Because media on Twitter is predominantly visual, it excludes a large part of the user base from interacting with and consuming this type of content and information. If content were developed in a more accessible, multi-sensory way, it could improve not only the accessibility of the application but also the quality of life of people with disabilities. As our professor Gus Chalkias has emphasized, "accessibility is about giving the users choices and offering options."

01 Research Process

We started our qualitative research by reviewing articles and academic research on making visual content more accessible. From this research, we gathered that alt-text libraries were found to be beneficial for the discoverability of content. Despite these findings, we identified a gap in the literature, particularly when it comes to translating visual humor and understanding context: even when a meme had alt text, it usually did not convey much contextual information. Reddit and YouTube are where we found the most feedback on the inaccessibility of memes, and we repeatedly found that users prefer having memes explained to them in a conversational manner. In addition, we discovered a podcast called Say My Meme, which describes memes for blind and visually impaired people.

02 Personas and User Journey

We then created a user journey and two personas to capture the different needs and goals of Twitter users.

User journey showing the emotions of a user on a graph while interacting with the Twitter app

User Journey

Image of Lily Johnson (blind Twitter user persona); her background info, goals and ambitions, frustrations, and the tools she uses are listed.

Persona 1: Lily Johnson

Image of an adult male named Amir Noor. Persona details include background info, goals and ambitions, frustrations, and the tools he uses.

Persona 2: Amir Noor

03 Design Process

To brainstorm ideas, we created four different sketches that we eventually merged into the final designs.

The first two sketches were inspired by one of our class guest speakers, Nefertiti Matos, a voice-over artist who narrates movie trailers. In her talk, she spoke about how some users prefer shorter descriptions while others prefer extended ones.

Wireframes of solution

Solution 1

Wireframes of solution

Solution 2

Wireframes of solution

Solution 3

Wireframes of solution

Solution 4

04 Final Design

Our solution builds on the existing alt-text feature in the Twitter app by adding a tab bar where users can navigate between the regular alt text, a description from www.knowyourmeme.com, and an audio description. Through the audio description, Twitter users can explain the meme with a personal touch, intonation, and humor, similar to a voice note.

Two iPhones lying on top of each other. Mockups of the final designs.

Click image for Figma prototype
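
To give a rough sense of how the concept could be structured in code, here is a minimal SwiftUI sketch of the three-tab description view. Everything below (the DescriptionTab enum, the MediaDescriptionView name, and the placeholder content) is our own illustration of the idea, not code from the actual prototype:

import SwiftUI

// Hypothetical model of the three description sources in our concept.
enum DescriptionTab: String, CaseIterable, Identifiable {
    case altText = "Alt text"
    case knowYourMeme = "KnowYourMeme"
    case audio = "Audio"
    var id: String { rawValue }
}

struct MediaDescriptionView: View {
    // Placeholder inputs; in the concept these would come from the tweet's
    // alt text, a knowyourmeme.com lookup, and a user-recorded voice note.
    let altText: String
    let memeContext: String

    @State private var selectedTab: DescriptionTab = .altText

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            // The tab bar for switching between description types.
            Picker("Description type", selection: $selectedTab) {
                ForEach(DescriptionTab.allCases) { tab in
                    Text(tab.rawValue).tag(tab)
                }
            }
            .pickerStyle(.segmented)

            switch selectedTab {
            case .altText:
                Text(altText)
                    // Read by VoiceOver as a single, labeled element.
                    .accessibilityLabel("Alt text: \(altText)")
            case .knowYourMeme:
                Text(memeContext)
                    .accessibilityLabel("Meme context: \(memeContext)")
            case .audio:
                // Playback control for the user-recorded audio description.
                Button("Play audio description") {
                    // A real build would start audio playback here.
                }
            }
        }
        .padding()
    }
}

A segmented picker is used in this sketch because it keeps all three options discoverable at once, echoing the principle that accessibility is about offering users choices.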

Final thoughts

This project was a great opportunity to think outside the box when it comes to translating visual humor in an effort to make content on Twitter more accessible. It was a learning experience, and with the guidance of the Twitter Accessibility team we were able to craft a creative solution that is both technically feasible and built on features already available in the application.

Looking forward

For future work on this proposed feature, we think it would be important to research how the solution could also benefit deafblind users. It would be interesting to see whether the knowyourmeme.com description could also be made available on a braille display.

Given our time constraints, we were not able to conduct formal in-person research with target users; doing so, along with proper concept and usability testing, would be a high priority if the project were to move forward.

Lastly, we suggest considering how this solution could scale, particularly to other languages and to other popular visual media such as GIFs, animations, and emoji.
