Personalization Algorithms, and Why They Don’t Understand Us Creative Types


“Personalization is a process of gathering, storing and analyzing information about site visitors and delivering the right information to each visitor at the right time.” – Algorithms for Web Personalization

In 2011, Eli Pariser uncovered the filter bubble. In front of our eyes, Google and Facebook have become geniuses at giving us what we “want,” based on algorithms that guess our interests and concerns. Today, nearly every digital news outlet, search engine and social media app engages in an “invisible, algorithmic editing of the web.”

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” – Mark Zuckerberg, Facebook.

According to Pariser, the idea is one of relevance – digital platforms racing to deliver the entertainment, opinions and news most relevant to you as the reader. Personal relevance has become the new watchword for internet communication companies. The “race for relevance” on the web has involved efforts to predict what consumers are going to click, watch, read or buy based on their previous web history (Netflix movie suggestions, for example), or based on what their closest friends do and share online (Facebook).

Image: “Are you sure?” by kristiewells, Flickr.com

This concept is not unlike that of relevance to the reader in news values, or the criteria of newsworthiness that journalists often use to select and produce the news based on a wide range of potential stories. In a study of British journalists specializing in science, medicine and related subjects, Hansen (1994) found that these journalists “deploy conventional news-value criteria, but emphasize in particular the importance of a ‘relevance to the reader’ criterion in the selection of science news” (p. 111). A study of American science journalists published in 1979 also revealed a trend in science journalism toward consumer-oriented coverage of science “…so that readers can answer the question ‘what does it mean to me?’” (Dennis & McCartney, 1979, p. 13).

So it appears that Facebook, Google, Netflix and even Twitter are not alone in valuing and chasing relevance to their consumers. The functions of personalization filters embedded in these sites range from “simply making the presentation more pleasing to anticipating the needs of a user and providing customized and relevant information to the user” (Algorithms for Web Personalization). But just because journalists have been using “relevance to the reader” criteria to evaluate the news value of potential stories for centuries does not mean that this news value serves the best interests of digital media audiences, or even that it was a good assumption to begin with.

“In two important ways, personalized filters can upset this cognitive balance between strengthening our existing ideas and acquiring new ones. First, the filter bubble surrounds us with ideas with which we’re already familiar (and already agree), making us overconfident in our mental frameworks. Second, it removes from our environment some of the key prompts that make us want to learn.” – Eli Pariser, Filter Bubble, p. 84

Personalization algorithms embedded into digital media search engines, news aggregators and social media platforms are not just neutral mathematical formulas for making our searches and news consumption more pleasurable and efficient. These algorithms have consequences for our views of the world: they may get in the way of creativity and innovation. According to Pariser, “the filter bubble isn’t tuned for a diversity of ideas or of people.”

Pariser quotes Siva Vaidhyanathan’s The Googlization of Everything: “Learning is by definition an encounter with what you don’t know, what you haven’t thought of, what you couldn’t conceive, and what you never understood or entertained as possible. It’s an encounter with what’s other – even with otherness as such. The kind of filter that Google interposes between an Internet searcher and what a search yields shields the searcher from such radical encounters.”

“The filter bubble has dramatically changed the informational physics that determines which ideas we come into contact with. And the new, personalized Web may no longer be as well suited for creative discovery as it once was.” – Eli Pariser, Filter Bubble, p. 103

Essentially, personalization filters amplify confirmation bias by presenting us with ideas and topics Google and Facebook have already figured out we “like,” based on various signals: what we click, where we live, who our friends are, and many more. (Confirmation bias happens in science, too, when a scientist looks only for data that confirm a previous theory or desired conclusion.) In fact, personalization algorithms are indirectly designed to amplify confirmation bias: bringing you ideas, opinions and news that confirm your prior attitudes and beliefs is the goal.
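As a toy illustration of that mechanism (a hypothetical sketch in Python, not any platform’s actual code), consider a filter that scores stories purely by their similarity to your click history. By construction, it favors what you have already engaged with:

```python
from collections import Counter

def score(item_topics, click_history):
    """Score an item by how often its topics already appear in past clicks."""
    seen = Counter(topic for clicked in click_history for topic in clicked)
    return sum(seen[topic] for topic in item_topics)

# A hypothetical reader who has only ever clicked on politics and economy stories.
history = [{"politics", "climate"}, {"politics", "economy"}]
items = {"yet another politics op-ed": {"politics"},
         "new astronomy result": {"astronomy"}}

ranked = sorted(items, key=lambda name: score(items[name], history), reverse=True)
print(ranked)  # the familiar politics piece outranks the unfamiliar science story
```

Nothing in a scheme like this ever surfaces the astronomy story; the scoring rule can only reinforce the topics already in the history.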

But this is where I believe one of the faultiest assumptions of personalization algorithms is most readily apparent. Think about it: personalization filters assume you prefer information that confirms what you already know, what you already believe or enjoy. Even Pariser reflects this idea when he writes, “[c]onsuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult” (p. 88).

But is this always true? I recently conducted interviews with science communicators for a PhD project, prompting each communicator to tell me what news values they consider important as they translate science research into news for their audiences[1]. While a majority of those I interviewed did mention personal relevance to the reader as an important consideration when producing science news, the exceptions to this rule were the most interesting. A handful of science communicators seemed to consider this tactic counterproductive – even unethical – when it comes to inspiring broad-mindedness and an interest in science among their readers.

According to Shalom H. Schwartz’s typology of universal human values, people’s fundamental goals in life can be explained by several motivational dimensions, or value orientations. One of these is a scale running from conservation/traditionalism values to openness-to-change values. People on the conservation/traditionalism end of the scale tend to value conformity, respect for tradition and security, while people at the other end value curiosity, freedom, exploration, excitement and choosing their own goals in life.

Without going into any research on the topic, it would seem that catering to people’s respect for what they already know, and to their preference for ideas that conform to their pre-existing worldviews, is a tactic that would primarily engage people on the conservation/traditionalism end of this particular value orientation. While this might be a good assumption for some demographics, for some people some of the time, it is probably a terrible assumption for people who value curiosity, open-mindedness and exploration of diverse ideas. People like me, I would hope.

So not only do personalization algorithms reduce my potential for creative thoughts, according to Pariser, but they also wrongly assume that consumption of self-confirming information is something that I value in my life. And if they get that wrong, then what is the point?

I think that the people who study and create personalization algorithms need to take a critical eye to the assumptions they make about people’s values and interests, people’s worldviews and concerns. In keeping with one of Pariser’s arguments, could incorporating more fuzzy logic and “drift” into personalization algorithms better suit more creative, open-to-experience individuals? Whether the personalization filter is ethical at all is a different story (and one that doesn’t look too promising). But on top of ethical considerations, personalization algorithms might just plain-and-simple be getting our fundamental values, especially our orientation toward openness to change, wrong.
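To make the “drift” idea concrete, here is a minimal, hypothetical sketch (my own illustration, not a description of any real system) of a recommender that usually serves the most “relevant” item but sometimes deliberately reaches into the long tail:

```python
import random

def recommend(scored_items, drift=0.2, rng=random.Random(42)):
    """scored_items: list of (item, relevance_score) pairs.

    With probability `drift`, ignore pure relevance and serve something
    from the lower half of the ranking, letting unfamiliar ideas through.
    """
    ranked = sorted(scored_items, key=lambda pair: pair[1], reverse=True)
    if rng.random() < drift:
        return rng.choice(ranked[len(ranked) // 2:])[0]
    return ranked[0][0]

stories = [("familiar op-ed", 0.9), ("new science story", 0.4),
           ("foreign affairs feature", 0.2)]
print([recommend(stories) for _ in range(10)])  # mostly the op-ed, with occasional surprises
```

The `drift` parameter is the interesting knob: a reader who scores high on openness to change might be served more of the long tail, while the current one-size-fits-all assumption effectively sets it to zero for everyone.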

References:

Dennis, E. E., & McCartney, J. (1979). Science journalists on metropolitan dailies. The Journal of Environmental Education, 10, 9-15.

Hansen, A. (1994). Journalistic practices and science reporting in the British press. Public Understanding of Science, 3(2), 111-134. doi: 10.1088/0963-6625/3/2/001


[1] These data are currently unpublished but intended for conference and/or journal publication.


4 Responses to “Personalization Algorithms, and Why They Don’t Understand Us Creative Types”

  1. Martin Holzherr

    Creative Types are, then, sensation-seeking people (Wikipedia: "Sensation seeking is a personality trait defined by the search for experiences and feelings that are 'varied, novel, complex and intense', and by the readiness to 'take physical, social, legal, and financial risks for the sake of such experiences'.")
    Why should algorithms not be able to identify sensation-seeking people? Perhaps these people are indeed only a small fraction of the mass, but they may well be worth identifying, because these people are also gamblers, are ready to invest their money in crazy projects, and so on.

    The author of this RFC (request for comment) should be glad about the missing algorithms – the algorithms that would exploit the weak points of the Creatives.

  2. Steve Smith

    Nice post, Paige.

    The "tyranny of choice" of so much content is an exponentially growing challenge that even Pariser concedes personalization must provide a role in addressing. The technology isn't going away for that very reason - but it needs to be way more transparent to people in terms of how it operates and what data is being collected.

    Personalization also doesn't need to be an all-or-nothing proposition. I believe personalization can play a valuable role in facilitating content discovery for people without undermining the value that editorial voice and human-powered curation bring to the table.

  3. Martin Holzherr

    The main problems rightly addressed in this post are
    1) that personalization is done automatically and in a non-transparent way, and
    2) that personalization serves commercial interests rather than primarily the user's interests.

    The deeper underlying problem is that users conceive of the internet as a platform which is free and serves the user, whereas in reality the provider of a service (e.g. a search engine) wants to earn money with the service.

  4. Mallory McGuinness

    Well thought out and elegantly put. If you would like to find my author G+ page, I am just beginning to flesh it out with my personal, somewhat crazy theories on social network theory. Your point about confirmation bias resonated with me the most.

    Perhaps it is due to a guilty conscience, since I use what you are probably referring to as "personalization" – predictive modeling – on a daily basis, but I did want to point out that there is nothing too murky or deceptive about these calculations: they're mostly statistical equations for calculating similarity, really, that you could find online with a quick search, and they've been around since the 70s.

    One way that some businesses make predictions about media or other attributes you may like is simply to take the average of your three closest neighbors on your social graph. In graph lingo, your closest neighbors are nodes, and they represent the friends you engage with most. Engagement is measured in links – in graph lingo, those are edges – and that's essentially why FB (at least, that's my theory) named its original algorithm EdgeRank: it probably just took the average of all of your network friends' profile attributes and "likes" and figured out what to post that way, putting weight on links that were – what was it – the most recent, from the people you interacted with most, and something else?

    Anyway, there's no real mystery; even evaluating your level of "engagement" with other users is just averaging the number of links – created through likes, profile updates and any other lifestream link-creating activity – that you have to your best friends, over your total number of friends.
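    To make that concrete, here is a toy sketch in Python (my own simplified illustration, not Facebook's actual formula): predict whether you'll like something by averaging the ratings of your three most-engaged-with friends.

    ```python
    def predict(item, friend_ratings, engagement, k=3):
        """friend_ratings: {friend: {item: rating}}; engagement: {friend: links shared with you}."""
        # Your "closest neighbors" are simply the k friends you exchange the most links with.
        closest = sorted(friend_ratings, key=lambda f: engagement.get(f, 0), reverse=True)[:k]
        ratings = [friend_ratings[f][item] for f in closest if item in friend_ratings[f]]
        return sum(ratings) / len(ratings) if ratings else None

    friend_ratings = {"ana": {"doc": 5}, "bo": {"doc": 4}, "cy": {"doc": 1}, "di": {"doc": 5}}
    engagement = {"ana": 40, "bo": 25, "cy": 10, "di": 3}  # likes, comments, etc. shared with you
    print(predict("doc", friend_ratings, engagement))  # averages ana, bo and cy: about 3.3
    ```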

    I just wanted to provide an example of how transparent recommendation systems actually are. I will show examples like this, or draw correlations between amusing datasets for the purpose of linkbaiting, but I suspect most people who calculate similarity measures, associations and the like do what I do, which is attempt to make data-driven marketing decisions for my clients.

    And as a scientist, I think that with this new light shed on the discovery process, maybe you can grow to appreciate data crunchers like me partaking in methods similar to the scientific method – forming hypotheses that are as informed as possible, observing behavior in a contained environment, drawing conclusions by identifying patterns, and finding the root of abnormal activity!

    PS – I may not have an inspirational job, but I manage to stay creative: I am an artist and musician!
