For the world's 2.2bn people who have a vision impairment or blindness, the experience of browsing the internet and social media is vastly different from the rest of the population's.
Text, for instance, has to be read by screen readers - software applications that translate what's on the screen into Braille or audio.
But what about images? The most common practice for making images accessible online is "alternative text", or alt-text: a written description attached to an image (typically via the image's alt attribute in HTML, added by the person who posted it) that explains what the image shows. Screen readers can then read these descriptions aloud.
For example, a black-and-white photo of a man hugging his fiancé who's holding a bouquet in front of the Eiffel Tower would be described as "a black-and-white photo of a man hugging his fiancé who's holding a bouquet in front of the Eiffel Tower" in alt-text for screen readers to read.
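Because the description lives in the markup rather than the image file, it can be checked programmatically. Here is a minimal sketch in Python, using the standard library's `html.parser`, that flags images missing alt-text; the sample markup and filenames are hypothetical:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no usable alt-text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt", "").strip():
                self.missing.append(attr_map.get("src", "<no src>"))

# Hypothetical page: one image with alt-text, one without.
page = (
    '<img src="paris.jpg" alt="A black-and-white photo of a man hugging '
    'his fiancé, who holds a bouquet, in front of the Eiffel Tower">'
    '<img src="selfie.jpg">'
)

audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # → ['selfie.jpg']
```

Accessibility checkers used by browsers and auditing tools perform essentially this kind of scan, among many other checks.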
Every Silver Lining Has A Cloud: There are three big problems with alt-text, though. The first is scale: writing alt-text by hand for each of the countless images uploaded every second is tedious, and when the process is automated instead, the resulting descriptions are not always accurate.
Second is that it's a rare practice. Most content you see on social media is user-generated, and most users won't add alt-text before sharing their vacation selfies on Instagram.
The third problem with alt-text is that it's not always practical. How do you write alt-text for a meme? How do you explain a joke without losing the humour in translation?
Some Workarounds: People and companies are trying to fix this. Besides improving AI algorithms to boost alt-text accuracy and urging more platforms and websites to increase accessibility, attempts are also being made to make alt-text itself more...conversational. Some ideas include keeping text blocks short and supplementing them with audio files.