Category: Tasks

  • Week 9: Concept Map

    So, we needed to put together a concept map for what we’ve been exploring in this course, at least as I have experienced it! I have no doubt that I’ve drawn in items that others don’t feel figured heavily, and conversely I’ve probably omitted things people may have thought were especially key.

    Still, for all the inherent limitations, here’s what I’ve produced.

  • Week 9: DIY Unit

    Federated, Decentralized, and E2EE Networks: Similarities, Differences, and Trade-offs

    (TL;DW)

    If you think about the history of computing, there’s a general trend that appears:

    When a new advancement is made, it tends to be very large, very expensive, and usually only available to those who can afford the extravagance (or have, or convince themselves they have, great need). But as long as you have the money, space, or whatever resource is required, there’s no real barrier to getting one–there are so few instances of the new tech out there that you can’t really even hope to connect it to get any sort of network benefit.

    As time goes on and the new technology becomes more affordable, it typically enters the realm of “too expensive for the home, but businesses might have one.” In this phase, there’s a powerful incentive for businesses to create network effects, often by advancing their own patented or unique revision of the technology, in an effort to first convince people to use it, then make it difficult or costly for them to leave it, then use that increasingly captive audience to convince others to join their particular walled garden.

    But as the technology continues to advance and becomes faster, smaller, cheaper… there’s more capability for people to begin making use of it at home. And naturally, they want to use it in the way that suits them–which is not always the way the businesses who created the walled gardens want them to use it. Over time, the growing ubiquity of the new technology can result in people overcoming network effects just because of the sheer number of people working against those effects in all their myriad ways.


    Now, let’s think about social computing in this fashion. The earliest stages of social computing were very much about protocols, but they were somewhat unwieldy to employ and required a lot of resources to use, tied as they were to large computing systems and communication infrastructure that was rather resource-intensive.

    As time passed, these large systems became easier to set up and use, but… communications infrastructure was still lacking, and running large websites at home was frequently infeasible. This led to a centralization effect, where a site managed by a business could benefit from having those network effects–more users could mean more revenue, which allowed them to expand the site. It had the potential to be a positive feedback loop, if the effects of greed and profit-seeking were kept at bay.

    But over time, the technology available to us has continued to increase. We have more powerful computers, and always-on broadband connections are much more commonplace. We could very well be approaching the time where people can pull their “social networks” (in the sense of a service or a site) back out to the edges of the network, rather than keeping them held in a centralized location.

    And this leads us to decentralized networks!


    There’s certainly more than one style of decentralized network, taken as an umbrella category. If you want to replicate the look and feel of something such as Twitter, perhaps you might start by creating your own Twitter-like website that you can run on your own equipment. And hey, maybe somebody else runs their own copy of the software, too. A protocol can connect the two of them so you can see each other’s activity–this is a federated system.

    I would categorize a federated system as a midpoint along the continuum of decentralization. It’s still a somewhat-centralized service, after all. The admin may federate with other servers… or they might not. You still have to find and choose what server you want to sign up under, and if you go create a new account on another server, well… that’s a different account on a different server, and you might need to rebuild your post history and your social graph from there.

    There are other styles of decentralized network–P2P and blockchain are two others mentioned by Jeong et al. (2025). But if you continue pulling the social network out to the edge, the logical endpoint is a system where everybody has their own “site.”

    The example I would use of something like this would be Bluesky and the AT Protocol, which you may have read about in Unit 8! Under the AT Protocol, every user has a globally unique decentralized identifier. The human-readable handle can be changed, but the DID remains unique, no matter where you go. And you can go wherever you want–the DID can be updated to point to a personal data store, or PDS, that can be placed wherever you choose. It can be on Bluesky’s servers, or you can run it yourself. That PDS contains your posts, your social graph, and all the data that is “yours.” If you want to go somewhere else, you simply move your PDS to a new location, with no need to rebuild from scratch.
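    The handle/DID/PDS relationship described above can be sketched in a few lines of Python. This is a toy model, not the real AT Protocol resolution machinery (which involves DNS and a DID directory service); the dictionary names and values here are entirely hypothetical.

```python
# Toy sketch of AT-Protocol-style identity resolution: handles map to DIDs,
# and the DID document records where the personal data store (PDS) lives.

# A handle is human-readable and mutable; it resolves to a stable DID.
HANDLE_TO_DID = {"alice.example.com": "did:plc:abc123"}

# The DID document points at the user's current PDS.
DID_DIRECTORY = {"did:plc:abc123": {"pds": "https://pds.one.example"}}

def resolve(handle: str) -> str:
    """Follow handle -> DID -> current PDS URL."""
    did = HANDLE_TO_DID[handle]
    return DID_DIRECTORY[did]["pds"]

def migrate(did: str, new_pds: str) -> None:
    """Moving providers is just updating the DID document; the DID (and
    therefore the identity and social graph) never changes."""
    DID_DIRECTORY[did]["pds"] = new_pds
```

    The key property is that `migrate` changes only the pointer: after moving, the same handle and DID resolve to the new location, with nothing to rebuild.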

    By contrast, end-to-end-encrypted (E2EE) networks are a different breed entirely. They certainly can be decentralized… but that introduces new challenges.

    In an E2EE network, everything is encrypted at the endpoints before being transmitted. No middle points along the path can read the traffic. That’s already a benefit. Of course, metadata and traffic flow analysis can be used to infer other data about the communications, and so E2EE is often also implemented in a completely decentralized fashion.

    But one of the biggest challenges of E2EE is key management. That is, if the encryption keys are only managed at the endpoints… how do new endpoints exchange those keys? If I only accept encrypted traffic, how do you send me a message to ask for my encryption keys? I would have to post my keys somewhere, and before you could try to talk to me, you would have to find wherever I put my keys.
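    The bootstrap problem can be illustrated with a minimal Diffie-Hellman sketch: before either side can derive a shared secret, each must somehow obtain the other’s *public* value–which is exactly what a keyserver exists to distribute. The parameters below are toy-sized for illustration only; real systems use much larger groups or elliptic curves.

```python
# Minimal Diffie-Hellman sketch of the key-exchange bootstrap problem.
# Toy parameters -- far too small for real use.
import hashlib
import secrets

P = 0xFFFFFFFB  # a small prime (2**32 - 5); real systems use 2048+ bits
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)   # this public value must be published somewhere
    return priv, pub

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its own private key with the *other's* public key...
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b   # ...and both arrive at the same secret

# Derive a symmetric key from the shared secret.
key = hashlib.sha256(str(shared_a).encode()).digest()
```

    Note that nothing in the math solves the discovery step: distributing `a_pub` and `b_pub` in the first place is the part that pulls centralized infrastructure back in.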

    In the realm of OpenPGP, keyservers are a common answer to this quandary. But a keyserver is a point of centralization–the bigger a keyserver becomes, the more useful it is to users, and the more the network will de facto depend on it. So already, we see where centralization still plays a very important role in making E2EE networks convenient for users.

    Signal is an example of an E2EE network, but it still maintains a centralized service to facilitate key exchange and introduction. And because keys are essentially tied to the endpoint devices, the loss or rotation of a device can mean the entire messaging history is lost. This would not be acceptable in a social network. More recently, a newer E2EE protocol (based on the Signal protocol) has been developed to try to extend the benefits of E2EE to social networking applications as well.


    Decentralized and E2EE social computing is an area of social computing that has held my fascination for a long time, and probably will in future as well! Hopefully this was an interesting overview of this area for you, too!

    References:

    Basem, O., Ullah, A., & Hassen, H. R. (2023). Stick: An end-to-end encryption protocol tailored for social network platforms. IEEE Transactions on Dependable and Secure Computing, 20(2), 1258-1269. https://doi.org/10.1109/TDSC.2022.3152256

    Jeong, U., Ng, L. H. X., Carley, K. M., & Liu, H. (2025, March 31). Navigating decentralized online social networks: An overview of technical and societal challenges in architectural choices. arXiv preprint. https://doi.org/10.48550/arXiv.2504.00071

    Masnick, M. (2019, August 21). Protocols, not platforms: A technological approach to free speech, 19-05. Knight First Amendment Institute. https://perma.cc/MBR2-BDNE

  • Week 8: Technical Standards, APIs, and Protocols

    I’m going to consider these protocols:

    • OpenID (with OAuth for comparison)
    • ActivityPub
    • AT Protocol
    • Signal Protocol

    I was actually a user on LiveJournal back when Brad Fitzpatrick launched OpenID. And for a while, it was quite nice to be able to use a single identity provider for everything. The promise of convenience and being able to have one canonical digital “you” that everything could be linked to was quite alluring.

    Of course, as Facebook and Google grew ever larger and encompassed more and more of the Web’s active social services… well. The attraction died. And I realized something important–perhaps it was more secure to simply create many sharded identities at the various sites. Because I didn’t want Facebook, or Google, or indeed any identity provider, to be the sole key to my digital identity. What happened if that provider went away–bankrupted, or sold off, or just blown offline by some large-scale attack? And as long as every site used a different set of credentials, compromise at one could not compromise all. So using individual logins was a win for both security and resiliency.

    I like that the Wikipedia article for OpenID explicitly draws some comparisons to OAuth, because I think OAuth has also replaced OpenID to a large extent. Both involve authenticating through an external identity provider (IdP). However, behind the scenes, OpenID has both sites talking to each other and verifying the identity, while OAuth essentially hands a token to the user, who hands it to the second site, which uses the token to query the first site through whatever API it offers.

    In some ways, OAuth is applicable in more contexts, because it’s designed to allow users to delegate access even to non-interactive devices. However, OAuth has a major issue if it’s not set up correctly with HTTPS: a token is not typically single-use. If either the end device or the connection flow is compromised, the token can be stolen and re-used to gain access–and possibly even to persist access, by using the stolen token to obtain a new token even when the original application is no longer active. The same issue would, in fairness, arise if OpenID were used without HTTPS, but the use of a nonce and live authentication would at least pose a somewhat more complicated challenge to replay later.
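    The replay risk comes down to bearer semantics: the resource server validates the token, not the party presenting it. A toy sketch (not a real OAuth library–the token format and function names here are made up for illustration):

```python
# Toy illustration of bearer-token replay: the server checks only the token,
# so a thief who copies it off an unencrypted connection gets the same access.
import time

ISSUED_TOKENS = {}  # token -> (subject, expiry timestamp)

def issue_token(subject: str, ttl: int = 3600) -> str:
    token = f"tok-{subject}-{len(ISSUED_TOKENS)}"  # opaque in real systems
    ISSUED_TOKENS[token] = (subject, time.time() + ttl)
    return token

def access_resource(presented_token: str) -> str:
    """Anyone holding a valid, unexpired token gets access -- the server
    cannot tell the legitimate client from an attacker replaying it."""
    subject, expiry = ISSUED_TOKENS.get(presented_token, (None, 0.0))
    if subject is None or time.time() > expiry:
        return "401 Unauthorized"
    return f"200 OK: data for {subject}"

token = issue_token("alice")
stolen = token  # attacker copies the token in transit
assert access_resource(stolen).startswith("200")  # replay succeeds
```

    This is why HTTPS (and, increasingly, sender-constrained tokens) matters so much for OAuth deployments.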


    ActivityPub is clearly designed off a general social media platform paradigm. That “followers,” “following,” and “liked” are required attributes of any “actor” in the protocol says a lot, in my mind. Other attributes are optional, but those three are always required. So the assumption that “likes” are a thing is baked right in there, as are the assumptions that people using this protocol want to “follow” each other and allow others to follow them.

    The discussion around how servers essentially take messages from actors’ outboxes and post them into other actors’ inboxes is reminiscent of SMTP, of course. But then there’s the line that says that “Attempts to deliver to an inbox on a non-federated server SHOULD result in a 405 Method Not Allowed response.” So… the ability to deliver a message to an inbox depends on the two servers being federated. But I don’t see anything in this protocol description as to how this federation occurs. Worse, I see a note that while ActivityPub uses authentication in server-to-server federation, there’s no standard for it–only a list of best practices! If you’re going to say that non-federated deliveries should fail, you should definitely have a spec for how federation occurs. Without a spec, people are going to bodge that stuff, and it’s not always going to be pretty!
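    The delivery rule quoted above can be sketched in a few lines. Note that the `FEDERATED_PEERS` set below is my own stand-in for whatever federation mechanism an implementation invents–which is exactly the part the spec leaves undefined:

```python
# Rough sketch of ActivityPub's server-to-server delivery rule: delivery to
# an inbox on a non-federated server should produce 405 Method Not Allowed.
# How a server decides who it federates with is unspecified; here it is just
# a hard-coded set, standing in for that missing mechanism.
FEDERATED_PEERS = {"social.example"}  # hypothetical peer list

def deliver(activity: dict, inbox_host: str) -> int:
    """Return an HTTP-style status code for a delivery attempt."""
    if inbox_host not in FEDERATED_PEERS:
        return 405  # Method Not Allowed, per the spec
    # ... authenticate, then POST the activity to the remote inbox ...
    return 202      # accepted for processing

assert deliver({"type": "Create"}, "social.example") == 202
assert deliver({"type": "Create"}, "unknown.example") == 405
```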

    In fact, I currently think that’s possibly the biggest weakness in ActivityPub (that I see), at least as far as interoperability goes. Theoretically, compatible servers should be able to use the protocol to send between implementations–and the protocol already notes that clients may or may not be able to edit / interpret the original “source” used to create the rendered “content” in the messages. But without a standard for federation… aren’t we left in a situation where everybody does what seems good to them, with maybe a nod to “best practices” in concept, but with each implementation adding its own specific twist along the way? That seems like a recipe for a very fragmented ecosystem, where various implementations can only reliably talk to themselves, using whatever setup method is deemed acceptable for establishing federation.


    Now let’s talk about AT Protocol! The more I read, the more I like it… the entire design of the protocol revolves around a different concept entirely. Rather than focusing on a user-centric model of an inbox and an outbox, AT Protocol conceives of the entire social network from the perspective of all the various services needed to power the network in a decentralized fashion. Handles are linked to domain names and keys published via domain names or web servers. All the data in the network is stored within personal data servers (PDSs) for the various accounts. Updates to the network are posted to the various PDSs. These updates are gathered and collated by relays. App views can then display the collected content of the network, with authoritative data always available through the PDS which originated any specific post. I touched on this back in Week 5 (https://comp650.nextdoornetadmin.ca/2025/05/28/week-5-analyzing-social-systems-part-2/), but the app views can also use different algorithms to alter the view of the network at the user’s request.

    This structure is designed to allow any part to be swapped out or replaced at any point. Users can change their handles to use their own domain names, and pull their entire store of data into another PDS at will. Importantly, the PDS contains all the user’s data, including posts, friends, and blocks. The user’s social graph is always ultimately under their control, and cannot be shut down because the user can always retrieve their data (which they control with their own cryptographic keys) and move it to another PDS. Similarly, if a particular relay decides to censor the network by refusing to crawl particular PDSs, anybody can set up another relay to collate the network’s data. Applications offering an app view of the network can be swapped out, and so can the algorithms responsible for ordering and displaying the data.

    At a technical level, this strikes me as a censorship-resistant protocol, but rather than attempting to counter a network-level adversary that controls and filters network access, the goal here is to counter corporate or organization-level control of the server infrastructure that drives the network. (This is complementary to something like Tor, and you could even theoretically extend this to running PDSs at .onion hidden services.) This also means that the network cannot easily be enshittified, because by design there can be no lock-in–all components can be swapped out if they no longer serve the user’s interests.

    What does stick out to me right now is that by design, all data in the PDS is public. There are no access controls. Users can choose to block individuals, but that only prevents them seeing the blocked user–their own content is still visible to the blocked user. I suspect this is, to a large degree, a function of making the network so resistant to centralized control. In that setup, no user can unilaterally impose their will on another, and the same is true for any organization that would try to moderate content. There are publicly-sourced block lists, and moderation tools that can add some metadata to posts, but it’s up to individual app views or relays to respect those configuration choices; it can’t be completely enforced within the social network.


    I’ve been using Signal for quite some time, and I have a massive amount of respect for the Signal developers. It is important to note that the Signal protocol itself is different from Signal the application; the protocol can be (and has been) implemented within other applications (such as WhatsApp).

    The Signal protocol is also somewhat censorship-resistant, but it has been designed more for security than for other purposes. To this end, the protocol ensures that messages being sent between participants are encrypted end-to-end, so that the service operator cannot see what is being exchanged. Of course, this still leaves visible the metadata of who is doing the communicating. A further development was to allow “sealed sender”, where the sender’s identity can be obscured from the server, which need only know the recipient’s identity. The use of the Axolotl ratcheting algorithm (later renamed the Double Ratchet Algorithm) also ensures that even if part of the message stream is compromised, the damage is contained, because the keys are continually rotated and re-exchanged between the endpoints.
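    The symmetric half of that ratcheting idea can be sketched as a simple hash chain (the real Double Ratchet also interleaves fresh Diffie-Hellman exchanges, which this toy omits entirely):

```python
# Stripped-down symmetric ratchet in the spirit of the Double Ratchet: each
# message key is derived from a chain key, and the chain key is immediately
# advanced. Because the hash cannot be run backwards, capturing today's chain
# key reveals nothing about earlier message keys (forward secrecy).
import hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    return hashlib.sha256(key + label).digest()

def ratchet_step(chain_key: bytes):
    message_key = kdf(chain_key, b"msg")   # encrypt one message with this
    next_chain = kdf(chain_key, b"chain")  # then discard and advance
    return message_key, next_chain

ck = b"\x00" * 32  # toy initial chain key; really negotiated via DH
keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    keys.append(mk)

assert len(set(keys)) == 3  # every message gets a distinct key
```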

    Importantly, while the protocol is verifiably secure, the applications and end devices are “out of scope” for those assurances. It’s entirely possible for a secure protocol to be used in a very insecure application. It’s also very possible for it to be used in a messaging application run by an untrustworthy company that makes its money by tracking its users and their communication patterns (*cough* Meta *cough*). As is usually the case with an E2EE protocol, the centralization is used for necessary purposes including exchanging public keys and relaying messages when either end device is temporarily offline. No facility is provided for a user to move to a different application using the protocol while bringing their entire social graph with them; Signal Protocol is only concerned with securing messages between endpoints.


    Now… how would I design a distributed / federated social platform? Well… let’s start with establishing design principles. In no particular order,

    1. User data should be controllable by the user. Similar to AT Protocol, this should include their unique preferences, friend list, and block list.
    2. Users can establish a list of preferred or mandatory data stores. This can be a data store under their personal control, or it might be a store administered by a trusted contact. By default, the network will attempt to store “posted” data to three separate stores for redundancy. Reducing the number of data stores below this number should trigger a warning to the user.
    3. Data stores have a configurable storage limit per user. Users cannot post into the network unless they have sufficient space remaining.
    4. Every user has a unique handle and public/private keypair. This is used to sign outbound posts into the network, asserting the user’s ownership. Public posts and media are signed but otherwise left in cleartext. Posts may also be restricted to specific viewers by encrypting those posts and media to the receivers’ public keys.
    5. When data is transmitted into storage, the client also encrypts the data to the store’s public key. This is necessary to ensure that data store operators can scan for and remove illegal materials. All material is still signed with the owner’s keys, so it can be verified that a data store has not altered or tampered with the stored material–it should be either entirely present, or not at all.
    6. The first post into the network should be the user’s profile, including their public key and list of data stores.
    7. Existing posts can be edited, including for key rotation, by providing proof of knowledge of the previous private key used to post them.
    8. When posting into the network, clients may also send a short message indicating a new post was submitted from the user (and bound for “public” or alternately a list of specified handles), along with a unique ID for that post, to a relay server. Relays gossip amongst themselves to share updates which they know about. Posts which are not sent to a relay are quasi-“private”, but are still available if directly retrieved (though their access controls remain intact via the list of recipients for whom they are encrypted).
    9. Relay servers serve as an index of user profiles, and sender/receiver/post ID triplets.

    This is conceptually similar to the AT Protocol, except with additional access controls for post visibility, as well as added redundancy for the network’s content. This protects against failure or unavailability of a personal data store.
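    Design principle 2–store to three separate data stores and warn below that threshold–could look something like this (the `store_fn` callable shape is my own invention for the sketch):

```python
# Sketch of the redundancy rule: attempt to place a post on every listed
# store, and warn the user when fewer than three copies succeed.
TARGET_REDUNDANCY = 3

def store_post(post: str, stores: list):
    """Each entry in `stores` is a callable returning True on success.
    Returns (number_of_successes, warning_or_None)."""
    successes = sum(1 for store_fn in stores if store_fn(post))
    warning = None
    if successes < TARGET_REDUNDANCY:
        warning = f"post stored on only {successes} store(s); target is 3"
    return successes, warning

ok = lambda post: True     # a reachable store
down = lambda post: False  # an unavailable store

n, warn = store_post("hello network", [ok, ok, down])
assert n == 2 and warn is not None  # below target -> user is warned
```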

    A PDS installation should be configurable to allow whitelisted content (for private / friend group storage) or public content. Servers may allocate storage to users based on defaults or based on configured amounts linked to the handle / keypair. This may even allow users to establish commercial relationships with storage providers if desired (but never because it’s required–users can always run their own PDS, and can choose to have fewer than three storage instances if desired, though that reduces reliability).

    Because posts and other material aren’t directly accessed by the relay servers, they should be relatively fast at serving the indexes described. Clients can perform a lookup of a user, a user’s posts, or posts destined for them, caching the data retrieved. Clients may display inbound posts by locating the PDS for the sending user and then retrieving the post content from any of the listed PDSs that can provide it. Clients may also be explicitly given a PDS link to retrieve, if a post is not announced to a relay. Clients posting content will attempt to store their content directly on all authoritative PDSs; if one is unavailable, an alert should be raised and the client should retry the storage operation later, or request a functional PDS to perform a sync to the unavailable PDS later.

    Locating a user’s profile will also grant access to the user’s public key, allowing posts to be directed toward that user (provided the receiving user has not instructed their client to block display of posts from the sending user). If the user has chosen to make this knowledge publicly available, their list of friends may also be posted, allowing others to traverse the network and find additional connections.

    Key rotation is explicitly encouraged, as is data transfer. Servers should be prepared to locate all stored information by key and accept edits (including a new signature over the post) or export it to the client or a new PDS. Such commands or “system posts” are, like regular posts, signed with the handle’s known keypair to authenticate the command.

    Relays not only gossip information about new posts, but (by default) about connections to other relays as well. This should allow a new relay to “bootstrap” with only a few connections to existing relays.
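    The bootstrap process is essentially a transitive merge of peer lists. A toy version (relay names hypothetical):

```python
# Sketch of relay peer gossip: a new relay starts with one known peer and
# learns the rest of the relay network by merging gossiped peer lists.
RELAY_PEERS = {
    "relay-a": {"relay-b"},
    "relay-b": {"relay-a", "relay-c"},
    "relay-c": {"relay-b"},
}

def bootstrap(seed: str) -> set:
    """Breadth-first merge of peer lists, starting from a single seed relay."""
    known, frontier = {seed}, [seed]
    while frontier:
        peer = frontier.pop()
        for neighbour in RELAY_PEERS.get(peer, set()):
            if neighbour not in known:
                known.add(neighbour)
                frontier.append(neighbour)
    return known

assert bootstrap("relay-a") == {"relay-a", "relay-b", "relay-c"}
```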

    Groups may be created by a special “system post” which creates a new handle and keypair for multicast encryption. The creating user is registered as the “owner” of the group, which then creates a first post as the group’s “profile” as with a regular account, including a list of PDSs which the group will use. Group modifications, including to the list of owners and moderators, are approved by the group owner(s). Moderators are given “edit” access to the group’s posts. The group account is then instantiated on the PDSs as a clustered service, using appropriate primary / backup election methods to ensure that the group can respond to incoming input autonomously.

    Group members may post to the group by sending a regular post destined only to the group; posts with a mix of group and regular recipients must be rejected. When the group account receives a post destined for it, the group account “wraps” the inbound post and rebroadcasts it using the group’s multicast encryption key. Moderators may delete the post if it is deemed inappropriate, which causes the group to blacklist that specific post ID (to prevent re-transmission). The posting user may also edit their message as normal, and the group should re-wrap and re-broadcast it as normal when the edit is detected. If the original user deletes their message, the group deletes its copy as well.
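    The wrap / rebroadcast / blacklist flow can be modeled without the cryptography (which is elided here–the string “wrapping” below stands in for re-encryption under the group’s multicast key):

```python
# Crypto-free sketch of the group flow: inbound member posts are "wrapped"
# and rebroadcast under the group's identity; a moderator deletion
# blacklists the post ID so it can never be re-transmitted.
class Group:
    def __init__(self):
        self.broadcasts = {}   # post_id -> wrapped post
        self.blacklist = set()

    def receive(self, post_id: str, body: str, sender: str):
        if post_id in self.blacklist:
            return  # deleted by a moderator; never re-wrap
        self.broadcasts[post_id] = f"[group-wrapped from {sender}] {body}"

    def moderate_delete(self, post_id: str):
        self.broadcasts.pop(post_id, None)
        self.blacklist.add(post_id)

g = Group()
g.receive("p1", "hello all", "alice")
g.moderate_delete("p1")
g.receive("p1", "hello all", "alice")  # re-transmission is ignored
assert "p1" not in g.broadcasts
```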

    Because the group’s key is established for multicast encryption, users who have joined the group can instruct their clients to include posts from the group account, and messages from that group will be decrypted using multicast encryption rather than the user’s personal keypair.

    It is anticipated that very popular groups may require their own set of private PDSs specifically to handle group content, particularly as this must be “re-broadcasted” under the group’s own account. This will, however, take the load of running the “group” account off any public PDSs, as well as alleviating the storage burden and possibly increasing performance.

    A completely self-contained network can be created by initiating a relay with external connections disabled. If a client posts to its own private PDS, and notifies only the disconnected relay, then a social network of one user is the result. A second relay can be added by permitting external connections only to a specified target, and the self-contained network can grow from there.

    I explicitly envision this being able to run within Tor as a (set of) hidden service(s). Though I’m sure the devil is in the details… it’s kind of a rough outline, I know.

  • Week 7: Legal and Ethical Issues (part 1)

    Mance, H. (2019, July 18). Is privacy dead? Financial Times. https://www.ft.com/content/c4288d72-a7d0-11e9-984c-fac8325aaa04

    • The problem with “mutually assured surveillance” is that power is not equally shared. More and more, those with power bend the laws to their own whim–or simply ignore them as inconvenient gnats to be swatted aside. Surveillance may capture police misdeeds, but how many of those actually face the same repercussions that regular citizens would? And how many of them get some form of immunity or deference to the assumption that they were “just trying to do their job”? Turn and witness how many people in the US currently are being deported against court orders, or being grabbed off the streets and out of courtrooms without any chance to invoke due process! Power is distributed asymmetrically, and without power, surveillance cannot be acted upon.
    • Privacy is a hindsight problem only for those who have always had it. I grew up in a house where my parents decided to remove my bedroom door so that I would have no privacy in my own room, where showers could be interrupted by somebody flinging aside the curtain with no warning, where all access to outside media (newspapers, books, TV, movies, Internet, radio) was strictly monitored and controlled, where even schooling was delivered in the home and friends were generally only seen at church. Where even imagination was monitored and adjudicated to be permissible or evidence of demonic possession. Privacy is not a hindsight problem to me. It is a foresight problem. From my perspective, everybody else is suffering from a lack of foresight, imagination, experience, or all of the above.
    • Google may say the right words about us being in control of our data, but notably, they reserve the right to use that data at their whim among their own properties. Don’t listen to their words; read their terms of service.
    • “Informed consent” is often not informed at all. How many people just click “Accept” to move on without reading anything? That they chose to move forward didn’t make their consent “informed”! Companies don’t have to put a pile of legalese in front of us; it’s entirely possible to have a plain language terms of service saying what kinds of things you will and won’t do, and why. “We need the right to ‘republish’ the content you submit so that we can display your uploaded avatar and journal posts to others. We won’t do anything else with it without your express permission.” That would be simple and honest! Doesn’t take five pages of legal boilerplate.
    • You know what could be done? Mandatory de-identification upon data export from any system, or any time the purpose changes. If you have home surveillance cameras, you should be able to view the feed yourself, within that system. Export the data to post something on YouTube, and all faces are automatically blurred, timestamps scrubbed, etc. Any time a company is bought out by a competitor, all customer data is automatically wiped (and customers can choose whether to re-enter their data in the new company system). If police need to see something from your home cameras, they can come over and videotape the screen.

    You know, I administer systems absolutely stuffed with personal data. Not just in a business context! I also work with a non-profit serving a community with many people who would face incredibly adverse consequences if their membership in the community became known. That could be loss of housing, loss of employment, or worse. We need personal data in order to secure events and address problem individuals, but privacy is fiercely guarded. Our community entrusts us with their information–sometimes very reluctantly–and it’s a responsibility I have to take seriously.

    I’ve occasionally considered my ethical obligations if I were to be directed to move that data to an open, insecure, unaudited platform, or if the organization decided to share that data for marketing purposes or something of that sort. Is my greater obligation to the organization who has legally collected that data and on whose behalf I manage the system, or to the community who entrusted their personal information to us for a specific purpose in a specific context? Would I be prepared to unilaterally erase the database and face whatever consequences came from that action? Can I hide behind the cloak of “Well, I didn’t know for sure that this would happen…” or would that just be a convenient lie to try and make myself feel better?

    The answer will always depend on the exact circumstances, and so I can never know for sure in advance what my decision would be. But it’s something I regularly ask myself, even so.

  • Week 6: The Problem of Evil (part 3)

    So, for improving a social system, I’m going to look at Telegram. First, though, let’s state the context, our problems, and our goals.

    Telegram is most often used in a direct-message manner, but it also offers “channels” or group messaging. I’m specifically going to look at a channel with a group of people who are all involved in a specific task. Everybody has the ability to post messages.

    In this context, the problems I see mostly revolve around what people post. This is a non-exhaustive list, but they may:

    • post hurtful content without prompting
    • post hurtful content in reaction to the posts of others
    • spread rumors (outside of the channel) to instigate a mob response within the channel

    The ideal, of course, would be to stop or trap harmful content before it can be seen. But in my experience, an automated solution will not be as effective here as could be desired–the context of the channel may not fit the assumptions of whoever coded the bot or trained the model, and that could lead to a mismatch between the automated solution and what should be permitted or blocked.

    My goal, then, is to allow participants to, in essence, selectively withdraw their commentary from other participants in the group. I would like to do this in a way that isn’t as catastrophic as leaving the entire group, and ideally in a way that doesn’t try to force the group to “choose sides” (which could lead to the group fracturing repeatedly).

    One solution I might propose is what I will call “silencing.” This works similarly to blocking somebody, except that ideally, the block goes both ways–if Participant A silences Participant B, then not only does A stop seeing B, but B should also stop seeing A’s posts and responses. This handily addresses the first two points on my list of problems, as B’s hurtful content will no longer be visible to A, and B will also be deprived of A’s posts–there is no need to provide A’s information to B for B to use in other hurtful ways. No notification of this action need be provided to either participant, or to the channel at large, to avoid the action creating more agitation.
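    The symmetric-visibility rule is simple to express (a sketch; participant names are placeholders):

```python
# Sketch of the proposed "silencing" rule: unlike a one-way block, a silence
# hides both participants from each other, with no notification to anyone.
SILENCES = set()  # unordered pairs of participants

def silence(a: str, b: str) -> None:
    SILENCES.add(frozenset((a, b)))

def can_see(viewer: str, author: str) -> bool:
    """Visibility is symmetric: if either side silenced the other, posts
    are hidden in both directions."""
    return frozenset((viewer, author)) not in SILENCES

silence("A", "B")
assert not can_see("A", "B")  # A no longer sees B's posts...
assert not can_see("B", "A")  # ...and B no longer sees A's
assert can_see("A", "C")      # everyone else is unaffected
```

    Using an unordered pair (`frozenset`) is what makes the rule symmetric regardless of who initiated the silence.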

    A second solution I might propose is a “time out” function. In concept, this is similar to muting a channel (by disallowing notifications for a general time period) or leaving a channel altogether. The difference is, even when muted, the temptation remains to check the channel and see what people are doing–only notifications have ceased. In “time out,” the channel is no longer even visible or accessible. This function should be available to individual users (to put themselves in time out) and to channel admins (to put any user into time out). Where an individual has put themselves into time out, they should be able to navigate into their settings menu and explicitly choose to “time in”. Where an admin has placed somebody into time out, user override is not permitted.

    This creates a stage in between disabling notifications and having to depart the channel altogether, and can be used in different ways:

    1. The individual user is feeling “ganged up on” or mobbed, and chooses to time out with the intention of coming back later. Other channel participants can see that the user has “departed” and can no longer be messaged via the channel. The client should not prompt the user or automatically time back in of its own accord. Given the user’s curiosity might lead to trying to “sneak a peek,” the “time in” option is placed in the settings menu to ensure that the user is fully ready to re-enter participation. Upon timing back in, the user resumes participation as desired. Other channel participants can see the user has returned, but no channel-wide “join” announcement is made.
    2. An admin sees somebody posting harmful content, and they will not be dissuaded. From past experience, the admin knows this is atypical behavior for the participant, and suspects the participant is suffering some form of altered experience (whether by substance intoxication, mental strain, or otherwise). They place the participant into “time out” as a way of isolating the participant until they are ready to return without causing further damage. This is less damaging than outright evicting the participant from the channel. Other channel participants see the same messages as described above in scenario #1. The participant who was placed into time out sees an administrative message indicating that they were placed into time out by an admin, with the ability to directly contact that admin. Any channel admin should be able to time the user back in, but the originating admin’s info is given for accountability and to allow the timed-out user to be able to speak directly to the person who placed them into that state.
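    The two scenarios above imply a small state machine, which might be sketched as follows. All names here are hypothetical, and this is not any real client’s API:

```python
SELF, ADMIN = "self", "admin"

class TimeoutState:
    """Toy model of the "time out" rules described above (illustrative only)."""

    def __init__(self):
        self.timed_out = {}  # user -> (origin, placed_by_admin_or_None)

    def time_out(self, user, origin, placed_by=None):
        # origin is SELF (the user chose it) or ADMIN (an admin imposed it).
        self.timed_out[user] = (origin, placed_by)

    def time_in(self, user, requested_by, is_admin=False):
        origin, _ = self.timed_out.get(user, (None, None))
        if origin is None:
            return False
        # Self-imposed time out: only the user themselves may end it.
        if origin == SELF and requested_by == user:
            del self.timed_out[user]
            return True
        # Admin-imposed: any admin may end it; the user cannot override.
        if origin == ADMIN and is_admin:
            del self.timed_out[user]
            return True
        return False

    def channel_visible_to(self, user):
        # A timed-out user cannot see or access the channel at all.
        return user not in self.timed_out
```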

    There are, of course, some loopholes that remain. Even if A silences B, A’s posts are still visible to Participant C, who can act as a relay. If C does act as a relay, this act of relaying may also not be visible to A. This is, therefore, not a complete solution on its own–group norms should be established to ask before relaying information that somebody “missed” or “didn’t see.” Even so, this still doesn’t solve the problem of actual malevolence, but I judge that dealing with actual malevolence would likely require much stronger measures anyhow.

    “Time out” is vulnerable to abuse by admins. It might be tempting to use this as a sort of group-wide “silencing,” short of actually kicking somebody out. But by its design, the message shown to other group participants is that the user in time out has “departed” the channel; there’s no benefit to an admin in putting somebody in time out if their intent is to silence them permanently, as it looks the same as having kicked them out fully. Potential abuse is also limited by allowing other admins to time the user back in, and by making the originating admin’s identity clear to the timed-out user (with that admin also being able to be directly contacted).

  • Week 6: The Problem of Evil (part 1)

    Dibbell, J. (1993, December 23). A rape in cyberspace: How an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society. The Village Voice. https://web.archive.org/web/19970612100454/http://www.levity.com/julian/bungle.html

    • Let’s start with multiverse theory, yes? Let us say that there is an infinite number of universes out there. Then anything we can think of or imagine has in fact happened, in some universe, somewhere.
    • In many ways, I view creativity and imagination as portals to another world, almost quite literally. Under multiverse theory, it has happened–we could be said to merely be recording that which happened elsewhere, in another place, in another reality.
    • With that in mind, I have therefore always viewed words and thoughts as so much more than “just” words and thoughts. They have deep impact. They have meaning. If you want to imagine somebody causing grievous bodily harm, know that they have. If you want to emote doing something quite nasty to somebody else online, you have good as actually done it–because somewhere, you actually have.
    • I know that my perspective on this is unique, but it means that I have never approached the virtual world of cyberspace as a realm without consequence. (Even offline imagination, after all, has a consequence–for the person imagining, if nobody else.) The identities we construct online have weight and being. If an identity becomes attached to the self as an alternate, pseudonymous persona, then the actions taken towards and words said to that identity now directly impact the person behind them. They are one and the same.
    • I also believe quite strongly that communities must be able to defend themselves from those who would inflict harm upon them. Particularly when those “evildoers” don’t believe that the harm even exists, and treat their actions as though they have no consequence.

    Jhaver, S., Birman, I., Gilbert, E., & Bruckman, A. (2019, July). Human-machine collaboration for content regulation: The case of Reddit Automoderator. ACM Trans. Comput.-Hum. Interact., 26(5). https://doi.org/10.1145/3338243

    • Before I get too far into the paper, I would say that I consider Reddit to have one of the better content regulation models in terms of the feedback available and number of “levels” to regulation. Users can upvote and downvote, or flag content. Subreddits can have multiple moderators looking after just that particular subreddit, and these are often volunteers with deep experience in their area, meaning they better understand what is allowable or undesirable in that specific subreddit. The signals being sent by the other redditors can influence a moderator’s view on a specific comment or post, if it even gets that far–some comments or posts may be “downvoted into oblivion” by redditors before a mod ever gets involved! Reddit’s paid staffers are much further up the stack, and need only be involved relatively infrequently.
    • I would say that this is a general rule: trust increases relative to transparency. Greater transparency usually leads to greater trust. And while human mods might start with the benefit of the doubt from human readers, bots generally do not. Including transparency on exactly why an action was taken is an extremely important feature to have, and I would say the same is even more true for “AI” tools as those become more commonplace. (Even more so because a neural net may not be “debuggable” in the same way as a list of regexes; the explanation generated may be the human mod’s only clue as to why something happened the way it did.)
    • It’s correct that those who are technically proficient in a particular skill will usually be called upon to use that proficiency more often. Other mods may have the access, but not the same skill. I’ve been in that situation, and I’ll usually willingly accept being the single point of control for many of the same reasons highlighted in the paper–it’s easier, I already know everything that’s happening, and I can debug much faster. But I hadn’t recognized explicitly that this creates additional burdens for those of us taking on that responsibility, with no specific recognition or reward for doing so.

    Mannell, K. & Smith, E. T. (2022, September 14). It’s hard to imagine better social media alternatives, but Scuttlebutt shows change is possible. The Conversation. https://theconversation.com/its-hard-to-imagine-better-social-media-alternatives-but-scuttlebutt-shows-change-is-possible-190351

    • It’s always been possible to build a platform (or tools, more generally) for public benefit rather than profit. The tricky bit, in my mind, has always been actually running the damn thing, never mind governing it! If it’s too technical to set up (and let’s face it, people who want to build a platform for public benefit tend to be very technically-oriented), you’re going to have problems with adoption, even among those who would like to use the system for that public benefit! If it’s too simple to set up, you’ve probably either made the system insecure by using too open a default, or made it secure by default but restricted the ability for other people to build on that platform and make changes (because the vast majority of default installs will not accept those changes).
    • In some ways, this “fediverse” problem is also seen in Mastodon and Bluesky. In Mastodon, setting up a server and determining how to federate it with others is a massive undertaking; in Bluesky, setting up your own personal data server or relay may similarly be a technical challenge to accomplish. Without a centralized system to bind everything together, having to always “roll your own” is an ongoing challenge that can honestly become quite a drag!
    • Governance also becomes a much greater challenge in a public-benefit platform. It’s not impossible, of course. In a for-profit model, you can pay people of diverse perspectives to bring their expertise into the corporation. In a public benefit model, you’re probably looking for volunteers. Not just that, but volunteers with good character and determination–you don’t have profit as a motivator, so you have to be more careful that they’re working for the same ultimate goal and won’t disappear if the correct decisions aren’t profitable enough. And as if that wasn’t difficult enough to find, you also would like it if they had needed technical skills, decent social skills, and maybe brought some diversity to the table as well?

    Santana, A. D. (2014). Virtuous or vitriolic. Journalism Practice, 8(1), 18-33. https://doi.org/10.1080/17512786.2013.813194

    • For some people, accountability acts as a check on their wayward nature. Some do not require as much accountability, but it is a fool who would solely trust another’s nature! And it is usually those who are least in need of accountability who best understand the necessity of it! The problem we see writ large online is that… most people do not have the moral core to comport themselves justly in the shadows as well as in the light of day.
    • The EFF makes an excellent point here–the sacrifice of anonymity (or pseudonymity) may not be necessary if there are more effective alternatives at hand. Techdirt, for example, allows both anonymous and pseudonymous comments, and while some people are still less than civil, most of the commenters there behave themselves reasonably well. The community polices itself, and does not need to unmask its members to do so.
    • Interesting that, of the six factors listed which make uncivil discourse more likely, the two I lack are dissociative anonymity (because I believe that online actions, even when anonymous, are inextricably linked to the offline self because they are the product of the soul) and dissociative imagination (because I believe that online actions have real consequences, and closing the browser cannot / should not shield you from the reality of those consequences).
    • It’s always possible, of course, that smaller communities are more capable of self-policing, as each contributing member’s efforts will be more visible and not diluted against a background avalanche of uncivil commentary. Perhaps that should also be taken into consideration…

    Sherchan, W., Nepal, S., & Paris, C. (2013, August). A survey of trust in social networks. ACM Comput. Surv. 45(4). http://dx.doi.org/10.1145/2501654.2501661

    • This “social trust” thing sounds reasonable, and yet my immediate response is, “What do you do about individuals who hold a fair amount of social capital, and yet display themselves to be inherently socially untrustworthy?” That is, they and their actions clearly have weight, and yet they will always betray confidences and take whatever course of actions benefits them most in the moment. Speaking from experience, I cannot and do not trust these people, no matter how much social capital they hold, no matter how positive the interactions may be. And any channel in which they exist is, by their presence, marked as untrustworthy.
    • I will note that this paper explicitly defines trust as the expectation that somebody will behave as expected. You could argue that an untrustworthy individual who acts in an untrustworthy fashion is actually trusted–because they are acting as expected! And yet the entire facade is designed to lull people into believing that they will behave one way, before they reveal their true selves and take a different action. I have learned, through painful experience, to expect this, but their actions actively seek to create a false expectation. And it feels very odd to say that they are trusted in spite of their efforts to act in an untrustworthy fashion, explicitly because I ignore what they are trying to do and instead choose to (correctly) anticipate their next heel turn.
    • I would agree that the Internet has had neither a utopian nor a dystopian effect in the social context. However, I do disagree that we have experienced “a fundamental transformation in the nature of community from groups to social networks.” A social network also exists within an offline group; all we have done is extend that network via the new connections we can create (or existing connections we can reinforce–constructively or destructively!) online.
    • The “Web of Trust” model is best known to me from OpenPGP, where I would determine whether I had full trust, partial trust, or no trust in the holder of another key. This was separate from whether I had verified the key myself–it’s entirely possible to have verified a key and trust that the key itself belongs to a particular individual, but have no reason to trust any decisions that individual makes about what other keys to trust!
    • The biggest problem with the Web of Trust is that it requires a lot of effort to do correctly. It’s rather normal to find people who shortcut their decisions and choose to trust basically everybody’s assessments blindly. That’s a huge, huge hole in the model’s assumptions, and it comes of the model being so difficult to use in practice that the utility gained is, in most cases, very small. Too small to bother with.
    • Along with that idea, I’d love to see a trust visualization that could visualize your key in the Web of Trust and predict / recommend to whom you should establish a connection in order to increase the overall trust level of your own keys. It’s not possible, because trust isn’t an overall status that can be calculated–every person makes their own decisions about who to trust and to what degree. 😛
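    As context for the effort involved, the OpenPGP-style separation between owner trust and key validity can be roughly sketched as below. The thresholds mirror GnuPG’s classic defaults (one fully trusted signature, or three marginally trusted ones), but the data structures are invented for illustration and are not GnuPG’s real internals:

```python
# "Owner trust" is my judgement of a signer's certification decisions;
# "validity" is whether enough trusted signers have certified a key.
FULL, MARGINAL = "full", "marginal"

def key_is_valid(key, signatures, owner_trust,
                 full_needed=1, marginal_needed=3):
    """signatures: key -> list of signer keys that certified it.
    owner_trust: signer key -> FULL, MARGINAL, or absent (no trust)."""
    signers = signatures.get(key, [])
    full = sum(1 for s in signers if owner_trust.get(s) == FULL)
    marginal = sum(1 for s in signers if owner_trust.get(s) == MARGINAL)
    return full >= full_needed or marginal >= marginal_needed
```

    Even this toy version shows why shortcuts are tempting: validity depends entirely on how diligently each user assigns owner trust in the first place.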

    Gorman, G. (2015, February 26). Interviews with the trolls: ‘We go after women because they are easier to hurt’. News.com.au. https://www.news.com.au/technology/online/social/interviews-with-the-trolls-we-go-after-women-because-they-are-easier-to-hurt/news-story/c02bb2a5f8d7247d3fdd9aabe0f3ad26

    • Honestly, this entire article just made me mad. Like, really mad. I generally have a fairly mild outlook on the entire human race–sometimes negative, but usually tending towards a mean somewhere just on the mildly positive side of the scale. Trolls, however, drive me crazy. Words and reactions are pointless when dealing with a troll–as the interview makes clear, it’s the reactions that the trolls are seeking!
    • And yet brushing off a troll is no longer enough. They move on to SWATting or other creepy behaviors, seeking that reaction, trying to do everything they can to make you break and give them that sweet, sweet sensation of being in control and having power over another person. There’s only one way to deal with such a cancer, and that is to erase it.
    • This also happens in communities, when you have an individual who is ostensibly a member of the group, and yet whose goal is to get reactions. It may start with their focus being mostly turned to other groups, but eventually it turns inward to the other members of the community who they believe aren’t giving them their fair recognition (because they disapprove of the behaviors being exhibited). It becomes a cancer in the community, and there’s really no recourse to it other than eviction of such an individual.
    • Sadly, matters are quickly compounded because such an individual will quickly turn other members of the community against the purported “decision-makers” and “elites” who actually implemented and effectuated the decision of the community’s majority. Second chances are begged, then third and fourth chances. And in the end, all that is accomplished is more people being hurt, including those attempting to protect the community by excising the problem.
    • There is no class of people that I hate–actually hate–more than the trolls.

    Dron, J., & Anderson, T. (2014, March 21). Agoraphobia and the modern learner. Journal of Interactive Media in Education, 2014(1). https://doi.org/10.5334/2014-03

    • If an open environment in learning exposes us to a continuum of opportunity and threats, this strikes me as not unlike the continuum of love and pain. To open yourself to loving somebody is to simultaneously open yourself to experiencing pain because of that love–whether the pain of loss, or betrayal, or just a bad argument. The common factor in both of these continuums is vulnerability. It is vulnerability–openness–that enables greater positives, but also greater negatives. But without an open mind, we learn nothing.
    • I quibble with the statement that safety is a prerequisite for survival. I would probably personally go back to a model such as Maslow’s hierarchy of needs–survival is usually one of our first goals! Safety is a condition that allows us to move our minds off the necessity of ensuring survival and move up the pyramid. In that sense, I would consider that safety is essentially a precondition to enabling “higher” learning (anything beyond the lessons of hard-knock school).
    • I don’t necessarily expect safety within a group. A group, after all, does have those formalized inclusion requirements, and “rituals of entry or exit.” What this most often means to me personally is that I am now part of a group which I may not be able to leave voluntarily (without sacrifice), and am now therefore trapped with others who may turn on me and either attempt to torment or otherwise socially shun me. Within a group, I will always stand out, whether by choice or by chance. My unique perspective or approach to things often means that I may unwittingly breach (or never even notice) expected norms of conduct among the group’s members. But what a group does offer is the hierarchy. Any hierarchy must have rules of some form or another, and if I can learn those rules, I have something I can navigate and authorities to whom I can appeal for assistance in learning or guidance or accomplishing a task.
    • Conversely, I feel much safer in the linkages of the network. I know whom I can approach for each contextual matter, but I can observe everything. My upbringing was such that the concept of context was drilled into me as a bedrock foundation; information cannot be shared across a context (network) boundary unless either permission is given, or the given network has recognizably altered to include a new participant (usually including tacit permission being given by the originator of the information at issue). Therefore, I can observe and integrate information from all networks in which I participate, but my exploration of individual topics must remain closeted to the network from which they originated. If topics can be discussed in an alternate network, the source of any information must be anonymized or disguised. Therefore I have less concern about my personal safety in these environments, because my default stance is to prevent information leakage and (somewhat) rigorously compartmentalize any information which is shared in any given network.
    • Interestingly, I don’t trust to the anonymity of a set, except when I am in a read-only role. If I move into a read-write (or inquiring) role, then I am already committing to converting a set into a network by making some form of interpersonal connection, however fleeting or ephemeral. I would argue that it is misleading to expect any sort of anonymity in a set if you choose to post anything to which people can respond and thereby complete a social connection. (One-way posts to give out information without allowing response cannot complete a social connection and thereby allows one to maintain membership in a set without conversion into a network.) Without true anonymity (as a potential target) and without the more developed connections people seek to cultivate within a network (before raising sensitive topics), of course trolls and other miscreants become more of a problem!
    • The emergent behavior of a collective is only useful insofar as it actually reflects the behavior of the collective. If a corporation puts a thumb on the output of the recommendation engine, the recommendation is no longer the product of the collective, even if the output is indistinguishable in appearance from the actual output of the collective. In that fashion, business interests are actively incentivized to sabotage the learning that could result from examining the collective’s output.
    • The Landing’s “circles” are very reminiscent of LiveJournal’s friends groups, and function in much the same way–allowing selective disclosure on an item-level basis. But the Landing was also far more complicated than LiveJournal by its very nature. LJ was about your journal, first and foremost. You could join a group, certainly! You could have a list of friends, of people who had friended you, include yourself in various sets by adding interest-based tags to your profile... all of these things, and yet the site was primarily about the journal. The content shared in groups was the journal entries of the group’s members, directed specifically to the group. By contrast, the Landing is... well, what is it not? It’s a wiki, it’s a profile page, it’s group forums for classes, it’s a “notebook” with ancient information from 2000 that we’re supposed to study as “state-of-the-art,” it’s subgroups of groups for specific terms of a specific class... it’s everything everywhere all at once, and that’s too much.

      (I am digressing here, but I find I prefer Moodle or Brightspace specifically because it’s more structured. Flexibility is a good thing, but I would suggest that there needs to be some sort of commonality of purpose or presentation to form a structure that can then be fleshed out by the learners. I also acknowledge that I have not had a lot of experience with the Landing, and it’s possible that I may have been able to adapt to it if I had more years to do so. But… if it takes years to be able to understand and work within a single website…)
  • Week 5: Analyzing Social Systems (part 3)

    For this one, I needed to do a bit of a social network map. I did this by hand, and will continue referring to it below.

    This is an egocentric map, as that’s a projection that makes the most sense to me for this example. “Strength” of the relationship (as assessed by me) is measured by the type of line, moving from a solid line for the strongest connections, through a dashed line, to a dotted line for the weakest connections. Connections between nodes other than the central node are assessed by observation and supposition; I haven’t taken a poll of anybody on this! Link reciprocity is not directly assessed or indicated here.

    Generally speaking, most of the people in my house are located in the lower-left corner, with immediate biological family in the mid-left. Friends and family in the Seattle area are located in the lower-right corner, with some crossover in the lower middle created by long mutual involvement in a non-profit. (There is one exception–the placement of the node labelled “F” on the lower-to-middle right was an oversight; it should more correctly be clustered with the other nodes on the lower-left!)

    The upper-right section is a very partial graph of connections into my workplace, and the upper-left holds two connections to good friends online who are not otherwise connected to other nodes on the graph.

    One of the first things to stand out to me is the fact that most of the connections I have drawn to myself are considered to be highly or moderately strong. On reflection, this does make sense–these are the connections most likely to stand out and therefore most likely to be drawn (except where the receiving nodes are already drawn into the graph, in which case adding a link is of trivial effort).

    Another point that is relatively clear to be seen is that there is a fair amount of crossover between two otherwise-distinct clusters–a house in Vancouver and a house in Seattle. While those clusters were drawn using physical location as an organizing principle, the friendships created over a mutual interest and activity have created many links between these two locations in social space.

    Conversely, my biological family, workplace, and notable Internet-only connections have no connections between each other or to the other groups. This reflects definite intention on my part, as some of these areas of life are deliberately kept partitioned from each other. (In the case of the Internet-only connections, this is actually less intention and more simply a case of circumstance, but the effect, as with the others, is to have them partitioned away from everything else in the network nearly completely.)

    In every direction, the graph could certainly be extended, with direct relationships growing weaker and more remote as I went. (My workplace alone would probably double the size of the overall graph.) I expect that, even as these groups continued to grow and display more interconnections with each other, the partitioned groups would stay partitioned, and the connected groups would grow ever more so.

  • Week 5: Analyzing Social Systems (part 2)

    Bluesky purports to offer algorithmic choice to its users. Some of the design outlined by Graber (2023) includes treating algorithms as general aggregator services, allowing the user to swap between aggregators (or indeed, create an aggregation of aggregators!) at will. Graber also notes correctly that even a simple “just the posts of people I follow, in chronological order” is itself an algorithm.

    This is far from the only algorithm available, however. Slachmuijlder (2024) notes that Bluesky also offers algorithms such as “Popular with Friends,” “Science,” “Blacksky,” and “Quiet Posters.” These algorithms are all quite different.

    “Popular with Friends” showcases popular content from the people you follow and the people they follow. This relies on signals such as likes–the more liked a post is, the stronger the signal of “popularity.” This is limited to two levels of follows–your direct follows, and their follows–so that you aren’t simply browsing the most popular content on the entire service.

    “Science” is “a curated feed from Bluesky professional scientists, science communicators, and science/nature photographer/artists.” This therefore is less a machine algorithm and much more of a human algorithm in operation, relying on the judgement of the curator(s) to determine posts which “fit” the category.

    “Blacksky” is a feed for showcasing black voices. This is poster-determined, as using a specific hashtag can either include a single post into the feed or add the poster into the feed permanently. (Manual removal of posts or posters is available if necessary to clean up the feed.)

    “Quiet Posters” includes posts from people who follow you who don’t post often, ensuring that infrequent posts aren’t drowned out in a large or busy feed.

    The ability for users to select their own algorithms, or indeed choose several of them (if science is an interest, why not include Science as one of the options, as well as something which looks more directly at your own follows and followers?) is a powerful feature. The feedback loops in operation are different for each of these algorithms, but the user is not locked into any of them.

    A feed such as Blacksky, which can be manipulated by simply adding the appropriate hashtag to a post, is simple to join to make content more visible. This ease of use also makes it relatively easy to abuse, particularly as removal of a post or poster from the feed is a manual affair. Perhaps to some extent (and I am theorizing heavily here!) this is a risk that is judged acceptable to some marginalized communities–the risk of a non-marginalized poster voluntarily marking themself as marginalized is comparatively low, versus the risk that an automated method of removing marginalized posters from the feed could be abused by non-marginalized viewers. The algorithm’s design in this case could be understood to offer maximal ease of inclusion and minimal risk of being unfairly removed by abuse of automation.

    “Popular with Friends” seems more vulnerable to persistent abuse, however. As this is based on how “popular” a post is, any automated influence operation that artificially inflates a post’s like count could drive up a post’s visibility in the feed. This is somewhat mitigated by the limitation to any single user’s immediate follows and their follows. Still, for a user with an expansive social network, this might still be something of a concern.

    The largest risk I can see with Blacksky’s function is what I highlighted earlier–a risk of external hijacking of the feed, by external actors adding the required tags to automatically include themselves and then broadcast to everybody using that algorithm. The design of manual intervention being required to reverse that is the corresponding weakness (even though good reasons may exist for that design decision).

    If I were to design a modification to this, I would seek to incorporate signals from the users of the feed. If a significant proportion of the feed’s subscribers were to block or otherwise downvote a given post, the feed might respond to this signal by deprioritizing or hiding that post. Of course, this also creates a weakness of external actors subscribing to the feed in sufficient numbers to block any target post they choose. For marginalized communities, this may be perceived as a much bigger risk than the risk of the occasional broadcast from inappropriate sources (until manual intervention arrives).

    A refinement of my modification might also gate the accounts whose blocking signals are counted, such that users who have only recently subscribed to the feed cannot influence automated moderation, with various cutoffs (such as a week or a month) available to be tested and selected as necessary to make the feed more or less resilient to exploitation in response to changing conditions.
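    A minimal sketch of that gated block-signal rule might look like the following. The threshold, tenure cutoff, and function names are all hypothetical design parameters of my own, not anything in Bluesky’s actual feed-generator API:

```python
from datetime import datetime, timedelta

# Hypothetical tuning knobs for the proposed moderation rule.
MIN_TENURE = timedelta(days=30)  # ignore signals from brand-new subscribers
HIDE_FRACTION = 0.10             # hide once 10% of eligible subscribers block

def should_hide(post_blockers, subscribers, subscribed_since, now):
    """post_blockers: users who blocked/downvoted the post.
    subscribed_since: user -> datetime they subscribed to the feed."""
    # Only long-tenured subscribers' signals count, so a hostile
    # mass-subscription can't immediately censor targeted posts.
    eligible = {u for u in subscribers
                if now - subscribed_since[u] >= MIN_TENURE}
    if not eligible:
        return False
    blocks = len(post_blockers & eligible)
    return blocks / len(eligible) >= HIDE_FRACTION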

    In a similar fashion, Popular with Friends could possibly be adjusted to only count like signals from an account’s own follows. The limitation of only showing popular content from direct follows and their follows is probably acceptable, as this allows for exploration of content that is not directly connected, but nearly directly connected, to the user. Even so, only counting like signals from direct follows would make external influence operations nearly useless. Further adjustment could weight like signals more heavily if the target poster is not a direct follow. A direct follow has more chance of being part of the same immediate circle, and may therefore be more likely to receive more likes within that circle than a user who is only indirectly connected to it. Thus, amplification of these weaker “at-a-distance” like signals may be appropriate to compensate for the expected lower volume of likes from direct follows.
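    That weighting idea can be sketched in a few lines. The weight values are invented for illustration, and nothing here reflects Bluesky’s real scoring:

```python
def popularity_score(post_author, likers, direct_follows, indirect_follows):
    """Score a post for one viewer's "Popular with Friends"-style feed
    (hypothetical design sketch, not Bluesky's actual algorithm)."""
    # Only likes from accounts the viewer directly follows count at all,
    # which makes externally farmed likes nearly useless.
    counted = likers & direct_follows
    # Likes on posts by indirectly connected authors are amplified to
    # compensate for the smaller pool of likely likers in the circle.
    weight = 2.0 if post_author in indirect_follows else 1.0
    return weight * len(counted)
```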

    References:

    Graber, J. (2023, March 30). Algorithmic choice. Bluesky. https://bsky.social/about/blog/3-30-2023-algorithmic-choice

    Slachmuijlder, L. (2024, November 30). Bluesky lets you choose your algorithm. Tech and Social Cohesion. Substack. https://techandsocialcohesion.substack.com/p/bluesky-lets-you-choose-your-algorithm

  • Week 5: Analyzing Social Systems (part 1)

    Donath, J. (2020). 2. Visualizing Social Landscapes. In The Social Machine. https://web.archive.org/web/20240817102406/https://covid-19.mitpress.mit.edu/pub/ljr3x1qq/release/1

    • Immediately, the discussion about visualizing social interaction online brings to mind some of the MMOs I spent time playing in my undergraduate days! Interacting with each other’s avatars was sometimes just a silly prank (particularly where voice communication was being used outside of the game), but it was just as expressive as pranking each other in real life. Where text communication was used, it was also possible for other players to “eavesdrop” on the conversation in passing, just as if they were walking past an energetic discussion on the street.
    • Visualizations are something we dealt with quite a bit in COMP683. There’s a definite knack to selecting a good (rather than a merely adequate) visualization for each instance where one is useful.
    • Maps are a useful thing, and it is correct to say that they gain much of their utility from abstraction and simplification. I would consider legends to be a similar tool, and (at least to my mind) more applicable in a wider variety of contexts. If I am visualizing something, it is just as important to know what I am looking at, as it is to be seeing it in the first place. And just as with a map, the decision on what to highlight and how to do so is key to enabling viewer understanding.
    • An algorithmic map is also somewhat subjective in what it displays, because it encodes the subjective judgements of the algorithm’s author(s) to determine what should be abstracted and under what conditions each level of abstraction should be employed. But once the algorithm is coded, those judgements will be implemented evenly and without (further) bias. I would suggest that this is not only faster to update and run, but could also be a “fairer” map generator overall?
    • Map interactivity is a feature which enables autonomy, which means that an interactive map can help stimulate intrinsic motivation to play with and learn from it, no? Shout-out to SDT from last week!
    • The point about maps enabling asocial navigation is a good one. Wayfaring is a more romantic thought, certainly, but in practice… I prefer a map for various reasons–not least of which is not having to work to comprehend the other person’s directions. That doesn’t necessarily mean I want to avoid interacting with people entirely, however. Still, it’s a good point to keep in mind, that even something as simple as a “reference” like a map can function to change a community dramatically, such as when we shift focus to information retrieval rather than discussion.

    Donath, J. (2020). 4. Mapping Social Networks. In The Social Machine. https://web.archive.org/web/20240820021904/https://covid-19.mitpress.mit.edu/pub/ngsi0mxz/release/1

    • I’m amused by the example of choosing who to speak to by going through a list of names versus picking a network cluster. In my LiveJournal days, I essentially had separate filters for those “rings” of closeness–close friends, good friends, acquaintances, loose ties, etc. I could (and did) also filter by interest, or by physical proximity. There was always a common thread that those filters were constructed upon, and that’s what made those filters such an intuitive tool to use.
    • It also comes to my mind that a network is precisely how I saw and understood social groups as I first entered university. I mapped out who was connected to whom, who was the common link between different groups, and who spoke with authority both within and between groups. This was something I did rather formally at the time, largely because dealing with social groups was a skill I was still learning. These days I don’t think of things quite so rigidly… but the concepts are still accurate, it seems.
    • Knowing that every map has omissions, and knowing that there are many ways people may answer a single question (such as that about “close ties”), I’m reminded that in order to understand the answer, we must first understand the question. Part of that process of understanding will always involve asking how the question was formed, for what purpose, why it was phrased as such, how it was heard, how it was perceived… there are so many factors at work. No wonder researchers must usually ask more narrowly-defined questions!
    • Back to what I said previously about mapping some of the social networks around me when I had first entered university… I wanted to know who the “authorities” were, and who connected which group… but I never actually considered that there are more roles. I mean, I probably considered “rebroadcasting” (which could be seen as amplifying?) but beyond that… filtering of information, tuning of information (to emphasize or highlight particular parts)… there’s more we can do with information than just stopping it or rebroadcasting it. Fascinating.
    • I was never on MySpace, but LiveJournal definitely encouraged connections among strangers. Part of that was because you could add interest-based tags to your own profile, and people could browse those tags… since your profile also listed your friends and people who had friended you–friendships were not necessarily reciprocal!–that was another avenue for people to find you. And, of course, then there was the content of each person’s journal, plus any additional links they had on their profile.
    • More importantly, while you could go to anybody’s journal and read their posts, you could also click the “Friends” tab and read all of their friends’ posts. This was, in fact, where I spent most of my time–I didn’t need to read my own journal, I wanted to read those of my friends! Privacy levels were integrated, so even if a friend posted something very private and I could see it on my friends page, strangers browsing my friends page could not see it.
    • In that way, the friends page offered people a public view of who the journal owner wanted to follow and read. It was an excellent way of discovering social links, but it didn’t necessarily show you everything… it was a sanitized, public view by default.
    • Comments on the journal entries could also be public or private (between the journal owner and the commenter). The comments were the other half of the conversation, and they said much in aggregate about who was reading and responding, and on what topics.

    Ardito, G. & Dron, J. (2024). The emergence of autonomy in intertwingled learning environments: a model of teaching and learning. Asia-Pacific Journal of Teacher Education, 52(2), 241-264. https://doi.org/10.1080/1359866X.2024.2325746

    • Why on earth would anybody perceive “traditional” education to be a linear system, no matter how complex? Identical inputs will never yield prescribed or predictable outputs–the human species is not that simple. It’s all about probabilities, but even then, a successful teacher will need to consider how to address the outliers. That’s part of the whole deal!
    • I like the description of keeping the system “on the edge of chaos.” It’s the balance between enough chaos to enable further progress, and enough order to prevent the entire system from collapsing on itself.
    • This, incidentally, is also why I dislike corporations that pursue “disruption” for its own sake. Chaos must be balanced with order. It is not enough to unmake the previous system; there must be a replacement. Those who destroy without a plan for creation are only halfway there, and 50% isn’t good enough.
    • Understanding that the graphs are quite possibly not “complete,” I’m amused by the omission of a key resource in Figure 7–the people! Not just teachers, but most importantly, the fellow students who were able to pass on their knowledge via peer mentoring! Isn’t that kind of the whole point, really? Creating an environment where your resources are not just oriented vertically in teacher-student dimension, but horizontally in student-student dimension? (And likewise in faculty-faculty interactions as well, I might add.)
    • Nobody who trains another, yet withholds the ability for them to train those who come later, has finished the job. Part of teaching any subject is enabling those who have taught to turn around and pass their knowledge on in turn. With that in mind, how blind would we have to be to believe that teacher-to-student interactions are the only interactions of value in a learning environment? Students will teach others as they go; would it not make sense to help them practice doing so effectively, as part of learning the material?

    Martin, A. (2013, May 1). The web’s ‘echo chamber’ leaves us none the wiser. WIRED. https://www.wired.com/story/online-stubbornness/

    • I would accept that language may not be evidence of thought, but is evidence of how we think. The structure of our thinking is encoded into the structure of our language. And in a similar sense, learning a new language and its structure will also change the structure of our thoughts.
    • What does this say about people who are fluent in (and regularly use!) more than one language? If they have learned to move seamlessly from one language to another, does this mean they hold two separate patterns of thought–or does it mean that they have constructed a hybrid of the two, different from either that came before it?
    • Now suppose someone learned the patois or pidgin language that was originally a combination of two other languages. It would follow that they have learned the hybrid patterns of thought as well, but not necessarily the methods of thought behind the predecessor languages. To them, the other two source languages may appear vaguely familiar, but also not; the same may be true of the accompanying structure of thinking.
    • What does this mean in the context of an echo chamber? It’s comforting to hear people who speak in the same “language” as you do, with the accompanying shorthands and abbreviations of concept. But it doesn’t expand your thinking or make you think critically about your own thought and how you could “translate” from one to the other by finding points of commonality and breaking down the points of difference.
    • This article also hits on two other points that I find important: the Internet makes finding similar viewpoints easy because it removes geographical boundaries, and online networks are no more of an echo-chamber than real-life social networks. Put these two together, and you could rightly suggest that the reason the Internet has facilitated the effects of an echo chamber, a filter bubble, or a more strongly-polarized culture is because of the removal of geographical boundaries. That is, when we had a geographically restricted set of people with whom we would have most interactions through our lives, that was our available network. To some extent, the necessity of interacting with people of differing viewpoints was built into this, because of the restrictions on the network. While it was possible to hold extreme views, it was more likely that each person would have to understand how to work with people of diverse perspectives, and the effect of an echo chamber was more limited because of that. When those geographic boundaries were removed, humans had more freedom to seek out interactions which were most comfortable, rather than most practical. I don’t discount the culpability of corporate interests tuning their algorithms to maximize “engagement with the machine,” but let’s not ignore our own fallibility as a species–given the opportunity, most of us would happily retreat to the comfort of people who think and speak like us. When we do not actively prioritize learning and growing outside our “native” environments, we end up constructing our own echo chambers.

  • Week 4: Sociological and Psychological Foundations (part 1)

    Ryan, R. M., & Deci, E. L. (2020, April). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61. https://doi.org/10.1016/j.cedpsych.2020.101860

    – Right away, I note that SDT indicates that the (proposed to be) innate human drive for self-determination can only be robust if the three basic psychological needs of autonomy, competence, and relatedness are met. Yet I also immediately consider that, in this context, we’re already operating at the tip of Maslow’s hierarchy of needs–we should remember that there are many other preconditions that may need to be met before an innate drive for development and self-determination will exhibit itself.

    – Fascinating to see the difference between autonomous extrinsic motivation (based on value) versus intrinsic motivation (based on interest / enjoyment). So many of my activities fall into the former now, versus the latter…

    – Good to see the distinction made between control vs. structure! I always want to provide a “skeleton” or (in the paper’s words) scaffold that people can then fill in or flesh out as they wish. There has always been a good bit of assertion that this is “controlling” and that the better approach is to set no rules whatsoever so people can just do whatever they want. I’ve never found that to be productive, and sometimes it’s outright dangerous.

    – I wonder if some of these “autonomy-supportive” behaviors such as “resist[ing] giving answers” and “offering progress-enabling hints when students seem stuck” might be confused with (or be complementary to!) the Socratic method of asking open-ended questions as a way of encouraging students to think critically and gain insights on their own. I’ve had several people simply refuse to engage with questions altogether, accusing me of using the Socratic method as if that’s inherently a bad thing.

    – Grading! My expectation through elementary and secondary school was for all grading to be purely criterion-based, so it shocked me when I entered undergraduate courses and discovered comparative grading in use (namely, “grading on a curve” according to a normal distribution). As a lifelong overachiever, I’m usually the person busting the curve, and it annoys me significantly when I can’t be graded “fairly” for my actual performance. And yet most of my fellow students preferred grading on a curve (when I wasn’t in the same class as them) and explained it as being a counterweight for “unfairly hard” assessments. As much as that may be true, I wonder if criterion-based grading isn’t both more accurate and also “better” in the sense of offering informational value about one’s accomplishments without pushing people to “just be better than somebody else.”

    – It’s interesting to reflect on supporting processes versus outcomes. In some ways, focusing on the end result could be construed as supporting autonomy–there are many ways to reach the desired outcome. At the same time, several thinkers have advocated the opposite. (For example, a company that does “the right thing” by its customers will naturally be rewarded with profits, while focusing on profits above all tempts companies to take the shortest path to the goal, at the expense of their customers.) Both of these approaches seem to have points of validity to me. Clearly, doing good does not always lead to a profitable outcome; just as clearly, “teaching to the test” does not yield the desired results. It seems more productive to balance these approaches, understanding that “how did we get to the goal?” is just as important as “did we reach the goal?”.

    I also hasten to add that this requires a good understanding of what the goal is. If we failed to reach the goal of a specific grade, the temptation might be to decide that we chose the wrong path. But we may have reached a goal of learning and gaining deeper understanding, which makes the path a much more positive one.


    Mikami, A. Y., Khalis, A., & Karasavva, V. (2025). Logging out or leaning in? Social media strategies for enhancing well-being. Journal of Experimental Psychology: General, 154(1), 171–189. https://doi.org/10.1037/xge0001668

    – Is it possible that social media platform use, as with other addictive behaviors, presents the positive effects as the lure (connect with your friends! win a jackpot in the casino!) while minimizing the negative effects (depression and self-negativity, I can quit any time / I can beat the house because I’m smarter than everybody else)? Is that comparison lazy, or am I picking up on a similarity? In both cases, the negative effects are hidden or minimized because repetition of the addictive behavior leads to habituation of the positive effects and accumulation of the negative effects, but repetition is how profits are made.

    – I have been told that some personalities are more susceptible to addictive behaviors. I wonder if anybody’s correlated these personalities to social media use and prevalence of negative vs. positive effects. I imagine that if negative effects are more prevalent in specific personalities, abstinence might be a more effective treatment for those, while tutorial might be a more effective treatment for others…

    – It strikes me that a balanced approach intuitively seems to be the best way to use social media platforms–as an augmentation to offline social interaction, not a replacement. Social media definitely allows more flexibility in who we interact with, when, and under what conditions; in this, I judge it to be quite positive. At the same time, overdoing this interaction has negative effects which may be countered or diluted by having supportive offline social interaction as well. Neither online nor offline social interaction seem to replace the benefits of the other, so using them in a balanced fashion seems to be the best option to me.


    Best, P., Manktelow, R., & Taylor, B. (2014, June). Online communication, social media and adolescent wellbeing: A systematic narrative review. Children and Youth Services Review, 41, 27-36. https://doi.org/10.1016/j.childyouth.2014.03.001

    – “Evidence of a ‘rich-get-richer’ phenomenon is provided whereby young people whose offline friendship quality is perceived as ‘high’ had greater benefits from online communicative activities [than] those who did not possess high quality friendships.” Interesting. Evidence towards a balanced approach being of most benefit, as I posited above?


    Vogel, E. A., & Rose, J. P. (2016). Self-reflection and interpersonal connection: Making the most of self-presentation on social media. Translational Issues in Psychological Science, 2(3), 294–302. https://doi.org/10.1037/tps0000076

    – If there’s a general bias towards emphasizing positive self-presentation on social networking sites, and the benefits of participating on SNS accrue mainly to those who focus on their own positive self-presentation… what vanity! Is there truly any real, long-term benefit to be had from presenting a biased perspective of yourself and then focusing on and reinforcing that biased presentation? Besides, how does this work with research showing that there’s benefits to be gained from being emotionally real and open about one’s struggles?

    – “According to mood management theory, consumers select media that will help regulate their moods. When expanded to social media, the theory suggests that users select social content (such as another Facebook user’s profile) that will improve their mood.” I would suggest that this only holds so long as the user selects the content! And as we know, Facebook’s algorithms determine what content is pushed to the user in order to maximize engagement, not to be responsive to the user’s desires.


    I had another seven papers I wanted to read; time pressures force me to move onward…