Dibbell, J. (1993, December 23). A rape in cyberspace: How an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society. The Village Voice. https://web.archive.org/web/19970612100454/http://www.levity.com/julian/bungle.html
- Let’s start with multiverse theory, yes? Let us say that there is an infinite number of universes out there. Then anything we can think of or imagine has in fact happened, in some universe, somewhere.
- In many ways, I view creativity and imagination as portals to another world, almost literally. Under multiverse theory, whatever we imagine has already happened–we could be said merely to be recording that which happened elsewhere, in another place, in another reality.
- With that in mind, I have therefore always viewed words and thoughts as so much more than “just” words and thoughts. They have deep impact. They have meaning. If you want to imagine somebody causing grievous bodily harm, know that they have. If you want to emote doing something quite nasty to somebody else online, you have as good as actually done it–because somewhere, you actually have.
- I know that my perspective on this is unique, but it means that I have never approached the virtual world of cyberspace as a realm without consequence. (Even offline imagination, after all, has a consequence–for the person imagining, if nobody else.) The identities we construct online have weight and being. If an identity becomes attached to the self as an alternate, pseudonymous persona, then the actions taken towards and words said to that identity now directly impact the person behind them. They are one and the same.
- I also believe quite strongly that communities must be able to defend themselves from those who would inflict harm upon them. Particularly when those “evildoers” don’t believe that the harm even exists, and treat their actions as though they have no consequence.
Jhaver, S., Birman, I., Gilbert, E., & Bruckman, A. (2019, July). Human-machine collaboration for content regulation: The case of Reddit Automoderator. ACM Trans. Comput.-Hum. Interact., 26(5). https://doi.org/10.1145/3338243
- Before I get too far into the paper, I would say that I consider Reddit to have one of the better content regulation models in terms of the feedback available and number of “levels” to regulation. Users can upvote and downvote, or flag content. Subreddits can have multiple moderators looking after just that particular subreddit, and these are often volunteers with deep experience in their area, meaning they better understand what is allowable or undesirable in that specific subreddit. The signals being sent by the other redditors can influence a moderator’s view on a specific comment or post, if it even gets that far–some comments or posts may be “downvoted into oblivion” by redditors before a mod ever gets involved! Reddit’s paid staffers are much further up the stack, and need only be involved relatively infrequently.
- I would say that this is a general rule: trust increases relative to transparency. Greater transparency usually leads to greater trust. And while human mods might start with the benefit of the doubt from human readers, bots generally do not. Including transparency on exactly why an action was taken is an extremely important feature to have, and I would say the same is even more true for “AI” tools as those become more commonplace. (Even more so because a neural net may not be “debuggable” in the same way as a list of regexs; the explanation generated may be the human mod’s only clue as to why something happened the way it did.)
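The transparency point above can be made concrete. Below is a minimal sketch (in Python, not AutoModerator’s actual YAML rule format) of regex-based rule matching that records *which* rule fired, so that any action taken can be explained back to the affected user and the human mods. The rule names, patterns, and reasons are all hypothetical.

```python
import re

# Hypothetical rules in the spirit of a subreddit automoderator config:
# each rule pairs a pattern with a human-readable reason.
RULES = [
    ("no-referral-links", re.compile(r"ref=[A-Za-z0-9]+"),
     "Referral links are not allowed here."),
    ("no-all-caps", re.compile(r"^[A-Z\s!?]{20,}$"),
     "Please don't post in all caps."),
]

def moderate(comment: str):
    """Return (action, explanation) for a comment.

    The explanation names the rule that fired -- the transparency
    feature discussed above: everyone can see exactly why the bot acted.
    """
    for name, pattern, reason in RULES:
        if pattern.search(comment):
            return ("remove", f"Removed by rule '{name}': {reason}")
    return ("approve", "No rules matched.")
```

The key design choice is that the explanation is generated from the same data structure that drives the action, so the two can never drift apart–a property much harder to guarantee once the classifier is a neural net rather than a list of regexes.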
- It’s correct that those who are technically proficient in a particular skill will usually be called upon to use that proficiency more often. Other mods may have the access, but not the same skill. I’ve been in that situation, and I’ll usually willingly accept being the single point of control for many of the same reasons highlighted in the paper–it’s easier, I already know everything that’s happening, and I can debug much faster. But I hadn’t recognized explicitly that this creates additional burdens for those of us taking on that responsibility, with no specific recognition or reward for doing so.
Mannell, K. & Smith, E. T. (2022, September 14). It’s hard to imagine better social media alternatives, but Scuttlebutt shows change is possible. The Conversation. https://theconversation.com/its-hard-to-imagine-better-social-media-alternatives-but-scuttlebutt-shows-change-is-possible-190351
- It’s always been possible to build a platform (or tools, more generally) for public benefit rather than profit. The tricky bit, in my mind, has always been the act of running the damn thing, never mind the governance of it! If it’s too technical to set up (and let’s face it, people who want to build a platform for public benefit tend to be very technically oriented), you’re going to have problems with adoption, even among those who would like to use the system for that public benefit! If it’s too simple to set up, you’ve probably either made the system insecure by using too open a default, or made it secure by default but restricted other people’s ability to build on that platform and make changes (because the vast majority of default installs will not accept those changes).
- In some ways, this “fediverse” problem is also seen in Mastodon and Bluesky. In Mastodon, setting up a server and determining how to federate it with others is a massive undertaking; in Bluesky, setting up your own personal data server or relay may similarly be a technical challenge to accomplish. Without a centralized system to bind everything together, having to always “roll your own” is an ongoing challenge that can honestly become quite a drag!
- Governance also becomes a much greater challenge in a public-benefit platform. It’s not impossible, of course. In a for-profit model, you can pay people of diverse perspectives to bring their expertise into the corporation. In a public-benefit model, you’re probably looking for volunteers. Not just that, but volunteers with good character and determination–you don’t have profit as a motivator, so you have to be more careful that they’re working toward the same ultimate goal and won’t disappear if the correct decisions aren’t profitable enough. And as if that weren’t difficult enough to find, you would also like them to have the needed technical skills, decent social skills, and maybe bring some diversity to the table as well?
Santana, A. D. (2014). Virtuous or vitriolic: The effect of anonymity on civility in online newspaper reader comment boards. Journalism Practice, 8(1), 18-33. https://doi.org/10.1080/17512786.2013.813194
- For some people, accountability acts as a check on their wayward nature. Some do not require as much accountability, but it is a fool who would solely trust another’s nature! And it is usually those who are least in need of accountability who best understand the necessity of it! The problem we see writ large online is that… most people do not have the moral core to comport themselves justly in the shadows as well as in the light of day.
- The EFF makes an excellent point here–the sacrifice of anonymity (or pseudonymity) may not be necessary if there are more effective alternatives at hand. Techdirt, for example, allows both anonymous and pseudonymous comments, and while some people are still less than civil, most of the commenters there behave themselves reasonably well. The community polices itself, and does not need to unmask its members to do so.
- Interesting that, of the six factors listed which make uncivil discourse more likely, the two I lack are dissociative anonymity (because I believe that online actions, even when anonymous, are inextricably linked to the offline self because they are the product of the soul) and dissociative imagination (because I believe that online actions have real consequences, and closing the browser cannot / should not shield you from the reality of those consequences).
- It’s always possible, of course, that smaller communities are more capable of self-policing, as each contributing member’s efforts will be more visible and not diluted against a background avalanche of uncivil commentary. Perhaps that should also be taken into consideration…
Sherchan, W., Nepal, S., & Paris, C. (2013, August). A survey of trust in social networks. ACM Comput. Surv., 45(4). https://doi.org/10.1145/2501654.2501661
- This “social trust” thing sounds reasonable, and yet my immediate response is, “What do you do about individuals who hold a fair amount of social capital, and yet display themselves to be inherently socially untrustworthy?” That is, they and their actions clearly have weight, and yet they will always betray confidences and take whatever course of actions benefits them most in the moment. Speaking from experience, I cannot and do not trust these people, no matter how much social capital they hold, no matter how positive the interactions may be. And any channel in which they exist is, by their presence, marked as untrustworthy.
- I will note that this paper explicitly defines trust as the expectation that somebody will behave as expected. You could argue that an untrustworthy individual who acts in an untrustworthy fashion is actually trusted–because they are acting as expected! And yet the entire facade is designed to lull people into believing that they will behave one way, before they reveal their true selves and take a different action. I have learned, through painful experience, to expect this, but their actions actively seek to create a false expectation. And it feels very odd to say that they are trusted in spite of their efforts to act in an untrustworthy fashion, explicitly because I ignore what they are trying to do and instead choose to (correctly) anticipate their next heel turn.
- I would agree that the Internet has had neither a utopian nor a dystopian effect in the social context. However, I do disagree that we have experienced “a fundamental transformation in the nature of community from groups to social networks.” A social network also exists within an offline group; all we have done is extend that network via the new connections we can create (or existing connections we can reinforce–constructively or destructively!) online.
- The “Web of Trust” model is best known to me from OpenPGP, where I would determine whether I had full trust, partial trust, or no trust in the holder of another key. This was separate from whether I had verified the key myself–it’s entirely possible to have verified a key and trust that the key itself belongs to a particular individual, but have no reason to trust any decisions that individual makes about what other keys to trust!
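That separation between verifying a key and trusting its owner’s judgement can be sketched in a few lines. This is an illustrative simplification of the OpenPGP model, not GnuPG’s actual implementation; the names are hypothetical, though the default thresholds (one fully trusted introducer, or three marginally trusted ones) mirror GnuPG’s defaults.

```python
from enum import Enum

class OwnerTrust(Enum):
    """How much we trust a person's *signing decisions* -- entirely
    separate from whether their own key has been verified."""
    NONE = 0
    MARGINAL = 1
    FULL = 2

def key_is_valid(signers, owner_trust, full_needed=1, marginal_needed=3):
    """A key is considered valid if it carries signatures from enough
    keys whose owners we trust as introducers."""
    full = sum(1 for s in signers if owner_trust.get(s) == OwnerTrust.FULL)
    marginal = sum(1 for s in signers
                   if owner_trust.get(s) == OwnerTrust.MARGINAL)
    return full >= full_needed or marginal >= marginal_needed
```

Note that `owner_trust` is a per-user judgement table: two people with identical keyrings can reach different validity conclusions, which is exactly why a single global trust score cannot be computed for the whole web.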
- The biggest problem with the Web of Trust is that it requires a lot of effort to do correctly. It’s rather normal to find people who shortcut their decisions and choose to trust basically everybody’s assessments blindly. That’s a huge, huge hole in the model’s assumptions, and it comes of the model being so difficult to use in practice that the utility gained is, in most cases, very small. Too small to bother with.
- Along with that idea, I’d love to see a trust visualization that could place your key in the Web of Trust and predict / recommend to whom you should establish a connection in order to increase the overall trust level of your own keys. It’s not possible, of course, because trust isn’t a global status that can be calculated–every person makes their own decisions about whom to trust and to what degree. 😛
Gorman, G. (2015, February 26). Interviews with the trolls: ‘We go after women because they are easier to hurt’. News.com.au. https://www.news.com.au/technology/online/social/interviews-with-the-trolls-we-go-after-women-because-they-are-easier-to-hurt/news-story/c02bb2a5f8d7247d3fdd9aabe0f3ad26
- Honestly, this entire article just made me mad. Like, really mad. I generally have a fairly mild outlook on the entire human race–sometimes negative, but usually tending towards a mean somewhere just on the mildly positive side of the scale. Trolls, however, drive me crazy. Words and reactions are pointless when dealing with a troll–as the interview makes clear, it’s the reactions that the trolls are seeking!
- And yet brushing off a troll is no longer enough. They move on to SWATting or other creepy behaviors, seeking that reaction, trying to do everything they can to make you break and give them that sweet, sweet sensation of being in control and having power over another person. There’s only one way to deal with such a cancer, and that is to erase it.
- This also happens in communities, when you have an individual who is ostensibly a member of the group, and yet whose goal is to get reactions. It may start with their focus being mostly turned to other groups, but eventually it turns inward to the other members of the community who they believe aren’t giving them their fair recognition (because those members disapprove of the behaviors being exhibited). It becomes a cancer in the community, and there’s really no recourse other than the eviction of such an individual.
- Sadly, matters are quickly compounded because such an individual will quickly turn other members of the community against the purported “decision-makers” and “elites” who actually implemented and effectuated the decision of the community’s majority. Second chances are begged, then third and fourth chances. And in the end, all that is accomplished is more people being hurt, including those attempting to protect the community by excising the problem.
- There is no class of people that I hate–actually hate–more than the trolls.
Dron, J., & Anderson, T. (2014, March 21). Agoraphobia and the modern learner. Journal of Interactive Media in Education, 2014(1). https://doi.org/10.5334/2014-03
- If an open environment in learning exposes us to a continuum of opportunity and threats, this strikes me as not unlike the continuum of love and pain. To open yourself to loving somebody is to simultaneously open yourself to experiencing pain because of that love–whether the pain of loss, or betrayal, or just a bad argument. The common factor in both of these continuums is vulnerability. It is vulnerability–openness–that enables greater positives, but also greater negatives. But without an open mind, we learn nothing.
- I quibble with the statement that safety is a prerequisite for survival. I would probably personally go back to a model such as Maslow’s hierarchy of needs–survival is usually one of our first goals! Safety is a condition that allows us to move our minds off the necessity of ensuring survival and move up the pyramid. In that sense, I would consider that safety is essentially a precondition to enabling “higher” learning (anything beyond the lessons of hard-knock school).
- I certainly don’t necessarily expect safety within a group. A group, after all, does have those formalized inclusion requirements, and “rituals of entry or exit.” What this most often means to me personally is that I am now part of a group which I may not be able to leave voluntarily (without sacrifice), and am now therefore trapped with others who may turn on me and either attempt to torment or otherwise socially shun me. Within a group, I will always stand out, whether by choice or by chance. My unique perspective or approach to things often means that I may unwittingly breach (or never even notice) expected norms of conduct among the group’s members. But what a group does offer is hierarchy. Any hierarchy must have rules of some form or another, and if I can learn those rules, I have something I can navigate and authorities to whom I can appeal for assistance in learning or guidance or accomplishing a task.
- Conversely, I feel much safer in the linkages of the network. I know whom I can approach for each contextual matter, but I can observe everything. My upbringing was such that the concept of context was drilled into me as a bedrock foundation; information cannot be shared across a context (network) boundary unless either permission is given, or the given network has recognizably altered to include a new participant (usually including tacit permission being given by the originator of the information at issue). Therefore, I can observe and integrate information from all networks in which I participate, but my exploration of individual topics must remain closeted to the network from which they originated. If topics can be discussed in an alternate network, the source of any information must be anonymized or disguised. Therefore I have less concern about my personal safety in these environments, because my default stance is to prevent information leakage and (somewhat) rigorously compartmentalize any information which is shared in any given network.
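The compartmentalization rule described above amounts to a small access-control model: tag each piece of information with its originating context, and allow it to cross a boundary only with explicit permission. A toy sketch, with illustrative context names and no claim to completeness:

```python
class Compartments:
    """Track which context a piece of information came from, and
    which cross-context disclosures have been explicitly permitted."""

    def __init__(self):
        self._origin = {}    # info -> context it originated in
        self._permits = set()  # (info, target_context) pairs allowed

    def learn(self, info, context):
        """Record information along with its originating context."""
        self._origin[info] = context

    def permit(self, info, target_context):
        """The originator grants permission to share into another context."""
        self._permits.add((info, target_context))

    def may_share(self, info, target_context):
        """Sharing is allowed within the originating context, or across
        a boundary only when explicit permission exists."""
        origin = self._origin.get(info)
        return (origin == target_context
                or (info, target_context) in self._permits)
```

The default answer to any cross-boundary question is “no,” which matches the stance described above: leakage is prevented unless permission is affirmatively given.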
- Interestingly, I don’t trust to the anonymity of a set, except when I am in a read-only role. If I move into a read-write (or inquiring) role, then I am already committing to converting a set into a network by making some form of interpersonal connection, however fleeting or ephemeral. I would argue that it is misleading to expect any sort of anonymity in a set if you choose to post anything to which people can respond and thereby complete a social connection. (One-way posts to give out information without allowing response cannot complete a social connection and thereby allows one to maintain membership in a set without conversion into a network.) Without true anonymity (as a potential target) and without the more developed connections people seek to cultivate within a network (before raising sensitive topics), of course trolls and other miscreants become more of a problem!
- The emergent behavior of a collective is only useful insofar as it actually reflects the behavior of the collective. If a corporation puts a thumb on the output of the recommendation engine, the recommendation is no longer the product of the collective, even if the output is indistinguishable in appearance from the actual output of the collective. In that fashion, business interests are actively incentivized to sabotage the learning that could result from examining the collective’s output.
- The Landing’s “circles” are very reminiscent of LiveJournal’s friends groups, and function in much the same way–allowing selective disclosure on an item-level basis. But the Landing was also incredibly more complicated than LiveJournal by its very nature. LJ was about your journal, first and foremost. You could join a group, certainly! You could have a list of friends, of people who had friended you, include yourself in various sets by adding interest-based tags to your profile… all of these things, and yet the site was primarily about the journal. The content that was shared in groups was journal entries of the group’s members, directed specifically to the group. By contrast, the Landing is… well, what is it not? It’s a wiki, it’s a profile page, it’s group forums for classes, it’s a “notebook” with ancient information from 2000 that we’re supposed to study as “state-of-the-art,” it’s subgroups of groups for specific terms of a specific class… it’s everything everywhere all at once, and that’s too much.
(I am digressing here, but I find I prefer Moodle or Brightspace specifically because they’re more structured. Flexibility is a good thing, but I would suggest that there needs to be some sort of commonality of purpose or presentation to form a structure that can then be fleshed out by the learners. I also acknowledge that I have not had a lot of experience with the Landing, and it’s possible that I may have been able to adapt to it if I had more years to do so. But… if it takes years to be able to understand and work within a single website…)