Much of my reading this week kept circling back, unbidden, to the themes of the recommendation engine and our (humanity’s) motivations.
The recommendation engine is a powerful bit of software. Like most tools, it’s a double-edged sword. The ability to analyze users, extract meaningful signals, and match those signals against others in order to recommend new material is extraordinary. Imagine how much time I might spend on Wikipedia if I had a recommendation engine analyzing the articles I spent time reading and suggesting new, fascinating topics to dig into!
And therein lies the biggest problem with the recommendation engine, too. It is, after all, merely a tool. The analysis extracts signals, but nothing about the tool itself dictates which signals get extracted, or how they are connected and matched. In both cases, the tool has an “owning” user who decides how it will operate.
In some ways, I think we (humanity) have been somewhat let down by the assumptions made as earlier versions of this technology arose. “Time spent on page” became a proxy measurement for “interest;” “number of comments” became a proxy measurement for “engagement.” These proxy measurements are popular, I suspect, first because they can be captured with current technology, and second because they can be captured silently and automatically, without any input from the subject. Certainly, if we click the “Like” button on something, that’s an affirmative input to the system… but in the absence of subject-initiated input, systems would have no signal to act upon unless these proxy measurements were pulled into the mix.
Because of this, the recommendation engine’s suggestions can all too easily become warped, even without bringing motivation into the picture. Is it interest that kept me on a page, or some sort of stimulating input that preys on addictive or gambling personalities? If the metric to boost is actually “time spent on page,” one is just as good as the other. Am I happily engaged with thought-provoking content, or am I angrily hammering the comment button on something divisive? If the metric targeted is actually “comments posted,” then again, one is just as good as the other. Our use of proxy measurements has, perhaps, led us all to target the wrong things.
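To make that concrete, here’s a toy sketch (my own invented scoring function and numbers, not any real platform’s) of why a ranker that only sees proxy metrics can’t tell fascination from rage:

```python
# Toy sketch: a ranker fed only proxy metrics sees identical signals
# for very different human experiences.

def proxy_score(time_on_page_sec, comment_count):
    """Score content by the common proxies: dwell time and comment volume."""
    return time_on_page_sec / 60 + 2 * comment_count

# Ten minutes reading happily, three thoughtful comments...
engrossed_reader = proxy_score(time_on_page_sec=600, comment_count=3)

# ...or ten minutes fuming, three angry comments. Same numbers in, same score out.
rage_commenter = proxy_score(time_on_page_sec=600, comment_count=3)

print(engrossed_reader == rage_commenter)  # True: one is just as good as the other
```

The point isn’t the particular weights; any function of these two proxies alone has the same blind spot.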
And then we come to motivation! In the hands of someone well-intentioned, the recommendation engine could be tuned, adding additional signals into the mix. Sentiment analysis could find angry or upset discourse and de-prioritize that content in favor of boosting content with more neutral or positive discourse. Perhaps a user setting labelled “show me more viewpoints” could intentionally expand the window of content shown. And I doubt very much that such a system would necessarily be unprofitable… the problem, as I see it, comes when the pursuit of more profit with less work enters the picture. Balancing the requests and views of many invested users takes effort. Building and maintaining friendly features that sustain a healthy community takes effort! In our drive to always do more with less, human effort is an investment that is all too easily pared away in pursuit of more profit. (AI / LLM tools, anybody?)
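A well-intentioned owner’s tuning could be as simple as a re-ranking pass. This is a hypothetical sketch, not any real system: the sentiment values are invented inputs (a real system would get them from a sentiment model), and the penalty weight is an arbitrary choice:

```python
# Hypothetical re-ranker: down-weight content whose surrounding discourse
# skews angry, so calmer content can rise. All values here are made up.

def rerank(items):
    """items: list of (title, engagement, avg_sentiment), sentiment in [-1, 1]."""
    def adjusted(item):
        title, engagement, sentiment = item
        penalty = 0.5 if sentiment < -0.3 else 1.0  # de-prioritize angry discourse
        return engagement * penalty
    return sorted(items, key=adjusted, reverse=True)

feed = [
    ("Divisive flamebait", 900, -0.8),   # high engagement, angry comments
    ("Thoughtful explainer", 600, 0.4),  # lower engagement, positive comments
]
print([title for title, _, _ in rerank(feed)])
# ['Thoughtful explainer', 'Divisive flamebait']
```

Note what this costs the owner: the flamebait had the higher raw engagement, so choosing this ordering means deliberately leaving some proxy-metric “wins” on the table. That ongoing choice is the effort being pared away.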
More broadly speaking, I think the recommendation engine is just the most visible example of tools that held great promise and have, over time, become warped into causing great damage. I would submit that it’s not enough to simply develop an amazing technology and trust “the market” to do “the right thing” with it. Developing something great is only half the battle, maybe even only a third. Asking ourselves how best to use it, and why we use it, is just as important. And if there’s a third part of that story, I think it’s found in the ongoing “maintenance” performed by the tool’s owners. Because it’s not enough to make the decision to use one’s power for good (so to speak) only once. It’s a decision that has to be made again and again, even as others choose short-term profits over long-term societal health.
And that, too, is part of social computing, isn’t it? If only more people recognized it as such.