Bluesky purports to offer algorithmic choice to its users. Some of the design outlined by Graber (2023) includes treating algorithms as general aggregator services, allowing the user to swap between aggregators (or indeed, to create an aggregation of aggregators!) at will. Graber also correctly notes that even a simple “just the posts of people I follow, in chronological order” is itself an algorithm.
This is far from the only algorithm available, however. Slachmuijlder (2024) notes that Bluesky also offers algorithms such as “Popular with Friends,” “Science,” “Blacksky,” and “Quiet Posters.” These algorithms are all quite different.
“Popular with Friends” showcases popular content from the people you follow and the people they follow. This relies on signals such as likes–the more liked a post is, the stronger the signal of “popularity.” This is limited to two levels of follows–your direct follows, and their follows–so that you aren’t simply browsing the most popular content on the entire service.
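The two-hop restriction described above can be sketched in code. This is a minimal illustration under assumed data structures (simple dicts standing in for the social graph and like counts), not Bluesky's actual implementation:

```python
def popular_with_friends(viewer, follows, likes, posts, limit=50):
    """Rank posts by like count, restricted to the viewer's two-hop
    follow network (direct follows plus their follows).

    follows: dict mapping user -> set of users they follow
    likes:   dict mapping post_id -> like count
    posts:   dict mapping post_id -> author
    """
    direct = follows.get(viewer, set())
    # Second hop: everyone the viewer's follows themselves follow.
    two_hop = set(direct)
    for friend in direct:
        two_hop |= follows.get(friend, set())
    two_hop.discard(viewer)  # don't surface the viewer's own posts

    candidates = [pid for pid, author in posts.items() if author in two_hop]
    # Most-liked first: the like count is the "popularity" signal.
    return sorted(candidates, key=lambda pid: likes.get(pid, 0),
                  reverse=True)[:limit]
```

The two-hop cutoff is what keeps this from degenerating into a service-wide trending list: an author with no follow path within two hops is simply never a candidate, no matter how many likes their post has.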
“Science” is “a curated feed from Bluesky professional scientists, science communicators, and science/nature photographer/artists.” This is therefore less a machine algorithm than a human one, relying on the judgement of the curator(s) to determine which posts “fit” the category.
“Blacksky” is a feed for showcasing Black voices. This is poster-determined: using a specific hashtag can either include a single post in the feed or add the poster to the feed permanently. (Manual removal of posts or posters is available if necessary to clean up the feed.)
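The opt-in mechanism can be sketched as a single pass over posts. The tag names and tuple layout here are illustrative assumptions, not Blacksky's actual tags or data model; the manual removal lists mirror the human-moderation design described above:

```python
def build_tag_feed(posts, join_tag="#blacksky", add_tag="#addblacksky",
                   removed_posts=frozenset(), removed_authors=frozenset()):
    """Hashtag-driven feed membership: one tag includes a single post,
    another opts the author in permanently. Removal sets are maintained
    by hand.

    posts: list of (post_id, author, text) tuples, oldest first.
    """
    members = set()  # authors who have opted in permanently
    feed = []
    for post_id, author, text in posts:
        # Manual moderation: skip removed posts and removed authors.
        if author in removed_authors or post_id in removed_posts:
            continue
        if add_tag in text.lower():
            members.add(author)  # permanent opt-in from this post onward
        if author in members or join_tag in text.lower():
            feed.append(post_id)
    return feed
```

Note how cheap inclusion is (one hashtag) versus how expensive exclusion is (an entry in a hand-maintained removal set); that asymmetry is exactly the design trade-off discussed later in this post.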
“Quiet Posters” includes posts from people you follow who don’t post often, ensuring that infrequent posts aren’t drowned out in a large or busy feed.
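This one reduces to a frequency filter over a recent window. The post-count threshold and data shapes below are assumptions for illustration:

```python
from collections import Counter

def quiet_posters(viewer_follows, recent_posts, max_posts=3):
    """Surface posts only from followed accounts that posted at most
    `max_posts` times in the recent window, so infrequent voices aren't
    drowned out by high-volume accounts.

    recent_posts: list of (post_id, author) tuples, newest first.
    """
    counts = Counter(author for _, author in recent_posts)
    return [pid for pid, author in recent_posts
            if author in viewer_follows and counts[author] <= max_posts]
```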
The ability for users to select their own algorithm, or indeed several (if science is an interest, why not include Science alongside something that looks more directly at your own follows and followers?), is a powerful feature. The feedback loops in operation differ for each of these algorithms, but the user is not locked into any of them.
A feed such as Blacksky, which posts can join simply by carrying the appropriate hashtag, makes it easy to surface content to a wider audience. That same ease of use makes it relatively easy to abuse, particularly as removal of a post or poster from the feed is a manual affair. Perhaps to some extent (and I am theorizing heavily here!) this is a risk judged acceptable by some marginalized communities: the risk of a non-marginalized poster voluntarily marking themself as marginalized is comparatively low, versus the risk that an automated method of removing posters from the feed could be abused by non-marginalized viewers. The algorithm’s design in this case could be understood to offer maximal ease of inclusion and minimal risk of being unfairly removed through abuse of automation.
“Popular with Friends” seems more vulnerable to persistent abuse, however. Because it is based on how “popular” a post is, any automated influence operation that artificially inflates a post’s like count can drive up that post’s visibility in the feed. This is somewhat mitigated by the limitation to any single user’s immediate follows and their follows, but for a user with an expansive social network it could still be a concern.
The largest risk I can see with Blacksky’s design is the one I highlighted earlier: hijacking of the feed by external actors who add the required tags to include themselves automatically and then broadcast to everybody using that algorithm. The requirement for manual intervention to reverse this is the corresponding weakness (even though good reasons may exist for that design decision). If I were to modify the design, I would incorporate signals from the feed’s own users: if a significant proportion of the feed’s subscribers were to block or otherwise downvote a given post, the feed might respond by deprioritizing or hiding that post. Of course, this creates its own weakness, in that external actors could subscribe to the feed in sufficient numbers to block any post they choose. For marginalized communities, this may be perceived as a much bigger risk than the occasional broadcast from an inappropriate source (until manual intervention arrives). A refinement might therefore gate the accounts whose blocking signals are counted, so that users who have only recently subscribed cannot influence automated moderation, with various cutoffs (such as a week or a month) tested and adjusted to make the feed more or less resilient to exploitation as conditions change.
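The tenure-gated block signal proposed above can be sketched as follows. The threshold and minimum-tenure values are illustrative assumptions, exactly the kind of cutoffs the text suggests testing and tuning:

```python
from datetime import datetime, timedelta

def community_block_signal(post_blockers, subscribers, subscribed_at,
                           now, min_tenure=timedelta(days=7),
                           threshold=0.10):
    """Return True if the post should be hidden: a significant share of
    *established* subscribers have blocked it. Only subscribers whose
    subscription is older than `min_tenure` count, so accounts that
    joined recently to brigade the feed can't trigger removal.

    subscribed_at: dict mapping user -> subscription datetime.
    """
    eligible = {u for u in subscribers
                if now - subscribed_at[u] >= min_tenure}
    if not eligible:
        return False  # no established subscribers, no automated action
    blocking = len(eligible & post_blockers)
    return blocking / len(eligible) >= threshold
```

Note that blocks from recently subscribed accounts are not merely down-weighted but ignored entirely, which is what makes the brigading attack expensive: an attacker must subscribe and then wait out the tenure window before their signals count at all.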
In a similar fashion, Popular with Friends could be adjusted to count like signals only from an account’s direct follows. Limiting visibility to direct follows and their follows is probably acceptable, as it allows exploration of content that is nearly, but not directly, connected to the user; even so, counting only likes from direct follows would make external influence operations nearly useless. A further adjustment could weight like signals more heavily when the poster is not a direct follow. A direct follow is more likely to be part of the viewer’s immediate circle, and therefore to accumulate likes within that circle, than a poster who is only indirectly connected. Amplifying these weaker “at-a-distance” like signals may therefore be appropriate to compensate for the lower volume of likes an indirectly connected poster can expect to receive from the viewer’s direct follows.
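Both adjustments combine into a single scoring rule. This is a sketch of my proposal, not Bluesky's feed logic, and the boost factor is an assumed parameter:

```python
def weighted_popularity(post_author, likers, viewer_direct_follows,
                        indirect_boost=2.0):
    """Score a post using only likes from the viewer's direct follows
    (blunting external influence operations), boosting the score when
    the author is *not* a direct follow to compensate for the smaller
    overlap between the viewer's circle and an at-a-distance poster.
    """
    # Only likes from accounts the viewer actually follows are trusted.
    trusted_likes = sum(1 for u in likers if u in viewer_direct_follows)
    # Indirectly connected authors get their weaker signal amplified.
    weight = 1.0 if post_author in viewer_direct_follows else indirect_boost
    return trusted_likes * weight
```

An influence operation's bought likes contribute nothing here unless the operation can get the viewer's own follows to like the post, which is a much harder target than inflating a global counter.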
References:
Graber, J. (2023, March 30). Algorithmic choice. Bluesky. https://bsky.social/about/blog/3-30-2023-algorithmic-choice
Slachmuijlder, L. (2024, November 30). Bluesky lets you choose your algorithm. Tech and Social Cohesion [Substack]. https://techandsocialcohesion.substack.com/p/bluesky-lets-you-choose-your-algorithm