Welcome to Turtles (all the way down!), a newsletter which seeks to explore the relationship between cultural theory and digital design.
Turtles is back! I’ve been on a bit of a hiatus as I’ve been building my own digital platform, Clusta (now in open beta!). From now on I’ll be attaching a link to my collection in Clusta to each article, so you can explore my research into digital media without the (doom)scrolling!
Find my research for this week’s article here (not currently optimised for mobile, sorry!)
The algorithms which dictate the content of social media feeds have always had a fairly fractious relationship with, well, all of us - the people who use these platforms.
Most recently, we have seen backlash to Instagram’s update replacing the chronological ‘timeline’ of posts from followed accounts with the algorithmic feed style now in vogue across major platforms, which mixes ‘suggested posts’ in with posts from followed accounts and orders everything by ‘relevance’, a metric that is never publicised.
Our fractious relationship with these algorithms exists for good reason. Social media feeds hold the power to shape our shared narrative as a society, now even as a planet, something I have explored in a previous article.
Not only is this power great, it is also one we have no control over.
We, the users, are the most vital party to the success of these platforms, yet the least important consideration when the feed algorithms are designed.
The most important party? The advertisers, otherwise known as the real customers of major social networks.
The trick is that social networks no longer need us to like the feeds - they just need us to be addicted to them. The casino-style psychology of the doomscroll (article coming soon) has proven just as effective, if not more so, at keeping us all plugged in, and consequently at rendering our opinions irrelevant.
A fairly dreary start to this article, I know. Maybe you are thinking ‘help! I need to escape!’ That would be understandable. So perhaps we should consider a way to do just that.
It wasn’t always like this
There is lots of talk about web 1.0, 2.0, 3.0, x.0 and so on - but I want to focus specifically on the difference between 1.0 and 2.0. Broadly, we know that web 2.0 is meant to represent the web remade in the image of tech giants such as Google, Amazon and Facebook, but there is another way to frame this difference:
Web 1.0 was the internet before the algorithms.
There was no Google search. No feeds to scroll. You entered a URL, and where you went from there would depend solely on the hyperlinks contained in whatever site you were routed to.
What I find personally so interesting about this method of ‘surfing the web’ (why did we stop using this phrase?) is how it is driven by intention. It isn’t the passive, drool-inducing doomscroll or advertised, optimised content - it is you, right here and now, going ‘that sounds kind of interesting’ and acting on that.
These websites are ultimately a curation. The content and hyperlinks they provide are not selected algorithmically, but by hand.
One of the ultimate examples of ‘curating’ a website is the behemoth that is Wikipedia. An excellent example of the intention-based exploration the site makes possible is the niche practice of Wikipedia speed-running, where the speed-runner attempts to arrive at a specific Wikipedia page from a seemingly unrelated starting page as fast as possible, clicking only links through to other Wikipedia pages.
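For the technically curious, here is a minimal sketch of how such a link-to-link run could be automated: a breadth-first search over article links using Wikipedia’s public MediaWiki API. The start and target pages, the depth limit and the helper names are purely illustrative assumptions on my part, not how speed-runners actually play (they do it by hand, which is rather the point).

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def get_links(title):
    """Fetch the article titles linked from a Wikipedia page (main namespace only)."""
    links, cont = [], {}
    while True:
        params = {"action": "query", "format": "json", "titles": title,
                  "prop": "links", "plnamespace": 0, "pllimit": "max", **cont}
        data = requests.get(API, params=params).json()
        for page in data["query"]["pages"].values():
            links += [link["title"] for link in page.get("links", [])]
        if "continue" not in data:
            return links
        cont = data["continue"]  # follow API pagination until all links are returned

def speedrun(start, target, max_depth=3):
    """Breadth-first search: the 'only click Wikipedia links' rule, automated."""
    frontier, seen = [(start, [start])], {start}
    for _ in range(max_depth):
        next_frontier = []
        for title, path in frontier:
            for link in get_links(title):
                if link == target:
                    return path + [link]
                if link not in seen:
                    seen.add(link)
                    next_frontier.append((link, path + [link]))
        frontier = next_frontier
    return None  # no path found within the depth limit

# Illustrative run between two assumed pages - expect it to be slow, the link graph is huge.
print(speedrun("Space Jam", "Metanarrative"))
```

The point of the sketch is simply that the whole game is a walk through a hand-curated link graph: every edge the search follows was placed there by a person who decided two pages belonged together.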
Another phenomenon noticeable in Web 1.0 is the much wider range of design and content. The image above (from 1996) is a testament to this. How many websites would you find today that look like this?
There are many reasons why design and content have become more and more similar over time, but there can be little doubt that a significant factor has been what algorithms favour. From website architecture and copy tailored for search engine optimisation (SEO), to inflammatory posts on Facebook, to reams of twenty-somethings gasping in YouTube thumbnails, content has slowly become more homogeneous in the interest of ‘pleasing’ the algorithm.
We, as humans, ultimately hold between us a far more diverse set of criteria for what counts as ‘pleasing’ than any single algorithm does. Web content didn’t have to please one giant algorithm to ever reach an audience - it just had to please one person who had a website, and anyone can have a website.
The return of curation
We have most likely seen the back of an internet made up predominantly of pure-HTML, Times New Roman sites with pretty intense colour schemes (and perhaps that is for the best - at least we have the memories), but there are new modes of digital curation becoming increasingly prominent.
The first of these is right in front of you - the return of email newsletters through platforms such as Substack has generated a whole new culture of digital discovery, built on the people you choose to follow and engage with.
There is a kind of beautiful chaos to reading an article on the issues of social media algorithms and discovering the Wikipedia speed-run world record, the 1996 Space Jam website and another article on social media’s relationship to meta-narratives.
There is, of course, nothing stopping an algorithm serving these different materials together in a feed. But curation lends itself to this kind of diversity, whereas the trend towards content produced to maximise algorithmic reach suggests we may see less and less of the bizarre work for which the internet has always been notorious.
Another channel keeping the weirder side of the internet publicised is the culture of ‘reaction’ content.
A number of YouTube creators and Twitch streamers build the majority of their content around reacting to videos and media predominantly found by their viewers.
Not only are these videos themselves a personal curation of content that the creators find most interesting, funny or relevant, but there is another value this kind of curation provides.
One of the most intriguing aspects of social media algorithms is how intensely personalised they are. This personalisation has been identified as playing a major part in the rise of disinformation online, and its effectiveness was demonstrated as far back as the Cambridge Analytica scandal, to say nothing of the Trump-induced storming of the Capitol in January 2021.
It may seem strange to reference these in the context of YouTube videos and Twitch streams, but what this reaction content is doing is creating a distinctively shared experience between the creator and their viewers, and between the viewers themselves.
And perhaps this is what is most powerful about curated content - it enables all who interact with it to see the same internet for once, and truly form a community around that.
Thanks for reading! Remember to check out my collection on Clusta to see my research for this article, as well as my wider research into digital media.