Wednesday, February 18, 2015

Data in the Museum: Experimenting on People or Improving Their Experience?

Every few months, a major news outlet does an "exposé" about data collection on museum visitors. These articles tend to portray museums as Big Brother, aggressively tracking visitors' actions and interests across their visit. Even as the reporters acknowledge that museums are trying to better understand and serve their visitors, there's a hint of menace in headlines like "The Art is Watching You."

We're trying to personalize. We're trying to adapt. We're trying to be responsive. But it can still come off as creepy. In a world of iteration, prototyping, and A/B testing, do we need a new ethical litmus test for social experimentation?

I came back to this question as I listened to the most recent RadioLab podcast about Facebook's mass social experiments on users. For years, Facebook has teamed up with social psychologists to perform social experiments through small changes to the Facebook interface. These experiments look a lot like those conducted in social psychology labs, with two big differences:
  • the sample sizes are many tens of thousands of times larger than those in the lab--and a lot more diverse across age, class, and geography. 
  • no one signs a form giving consent to participate. 
I thought this sounded great: better data, useful research. Turns out not everyone thinks this is a good way for us to learn more about humanity. Last year, there was a HUGE media kerfuffle when people were shocked to learn that they had been "lab rats" for Facebook engineers researching how News Feed content could affect people's moods.

To me, this was surprising. Sure, I get the ick factor when my personal data is used as currency. But I know (mostly) what I'm buying with it. Facebook is a completely socially-engineered environment. Facebook decides what content you see, what ads you see, and your personal ratio of puppies to snow warnings. And now people are outraged to find out that Facebook is publishing research based on their constant tweaking. It's as if we are OK with a company using and manipulating our experience as long as they don't tell us about it.

It seems that the ethical objections were loudest when the intent of the experiment was to impact someone's mood or experience. And then I started thinking: we do that all the time in museums. We change labels based on what visitors report that they learned. We change layouts based on timing and tracking studies of where people go and where they dwell. We juxtapose artifacts to evoke emotional response. We tweak language and seating and lighting--all to impact people's experience. Do we need consent forms to design an experience?

I don't think so. That seems over the top. People come to the museum to enjoy what the invisible hands of the curators have wrought. So it brings me back to my original question: when you are in the business of delivering curated experiences, where is the ethical line? 

Consider the following scenarios. Is it ethical to...
  • track the paths people take through galleries and alter museum maps based on what you learn?
  • give people different materials for visitor comments and see whether the materials change the substance of their feedback?
  • cull visitor comments to emphasize a particular perspective (or suite of perspectives)?
  • offer visitors different incentives for repeat visitation based on behavior?
  • send out two different versions of your annual membership appeal letter to see which one leads to more renewals? (there's a sketch of how you might score this test after the list)
  • classify visitors as types based on behavior and offer different content to them accordingly?
I'd say most of these are just fine--good ideas, probably. I suspect we live in an era where the perceived value of experimentation outweighs the perceived weight of the invisible hand of the experimenter. Then again, I was surprised by the lab rat reaction to the Facebook experiments.
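
To make the appeal-letter scenario concrete: here's a minimal sketch of how a museum might score that test, assuming the mailing list was split randomly and renewals were counted in each group. The counts and names below are all hypothetical; the statistic is a standard two-proportion z-test.

```python
# A minimal sketch of the appeal-letter A/B test: compare renewal rates
# between two randomly assigned groups with a two-proportion z-test.
# All numbers here are made up for illustration.
from math import sqrt, erfc

def two_proportion_z_test(renewed_a, sent_a, renewed_b, sent_b):
    """Return (z, two-sided p-value) for H0: the renewal rates are equal."""
    p_a = renewed_a / sent_a
    p_b = renewed_b / sent_b
    pooled = (renewed_a + renewed_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # erfc(|z|/sqrt(2)) is the two-sided tail probability of a standard normal.
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical mailing: 1,000 members receive each version of the letter.
z, p = two_proportion_z_test(renewed_a=230, sent_a=1000,
                             renewed_b=280, sent_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is about 0.01: likely a real difference
```

A small p-value suggests the two letters really do perform differently; a large one suggests the apparent "winner" may just be noise. Note that nothing in the arithmetic answers the ethical question--whether members would mind being split into groups without being told.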

It's sometimes hard to differentiate an experiment on humans from an experiment to improve your work for humans. As the Facebook example shows, just claiming your intent is to improve isn't enough. It matters what the humans think, too.

I guess that's what makes us more than lab rats--we can speak up and debate these issues. What do you think?
