Monitoring, Surveillance, Sousveillance

Hi. I’ve been studying human-computer interaction for a long time, and more recently I’ve been turning my eye to human-data interaction. When I say data, I mean all of the data about you as a consumer that is collected from devices and services, or the data that you’re collecting as developers from people. If you take one thing away from this talk, I want you to ask yourselves: why? Why is the data being collected? Why are you collecting the data? For what purpose? Because I think one of the things we need as data scientists is to be clear about the data we’re collecting.
Now, I’ve worked with data scientists and designers to build recommendation algorithms for merchandising, which was quite positive. I’m now working with privacy and safety experts to address some of the concerns people have, and the piece down the middle is visibility. Even in corporations we don’t always have visibility into the reasons why we collect data. Consumers certainly don’t, and that causes anxiety.
When you talk to people about data collection, they often think about surveillance. Surveillance evokes the idea of collecting data for a particular purpose. As we move into this new world, though, of the Internet of Things and smart cities and smart homes, a lot of sensor data is being collected with no particular purpose in mind. Back to the question why: because we can, just because we can. Because it might be useful, perhaps. Because we haven’t thought about not doing it. Data science is in its infancy. It’s a fetish object, a fetish topic; everyone loves data science. But we’re only at the beginning of understanding what we should collect, and what we’re not collecting as well as what we are. So it’s not surprising that for consumers and people concerned about data, sousveillance, the watched watching the watchers, is something we’re increasingly interested in. We’re asking questions about why.
I’ve done a series of interviews with people about their concerns about data. It isn’t black and white, unlike my slides. It’s about wanting a contract and a conversation. It’s about wanting to participate, wanting visibility.
A friend of mine, when she found out she was pregnant, decided that she did not want her pregnancy to be tracked, so she used incognito mode for searches. She bought nothing online, and she didn’t talk about her pregnancy on social media. She took efforts to opt out, including paying for large items, like prams, with cash and gift cards rather than credit cards. Nothing wrong with that, but something was wrong with the data model: it flagged her as potentially fraudulent, a potential criminal, and she had to unravel that. Data science is in its infancy, and our predictive models of people’s intent and actions are not always right. If we’re not careful, what we’re doing is creating a climate of distrust, which means people may walk away from our services and devices, because we’re not including them and thinking hard about this. So again, ask why you’re collecting data. Ask what models you’re building. Ask what data you’re not collecting that would make a better, more rounded model, and think about why people might choose not to participate.
Now, as my friend’s story shows, one of the big issues I think we need to think about is meaningful opt-in and opt-out. What does it mean to let people choose to opt out of data collection? In the EU this is becoming an increasingly big issue, and a regulation is about to pass.
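To make that concrete, here is a minimal sketch of what a meaningful opt-in might look like in code. Everything in it, the ConsentRecord schema, the purpose names, the record_event function, is hypothetical; the point is simply that consent is checked per named purpose, before any data leaves the device, and that the default is to collect nothing.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent: one flag per stated purpose."""
    analytics: bool = False          # opt-in: the default is to collect nothing
    personalization: bool = False

def send_to_backend(purpose: str, event: dict) -> None:
    # Stand-in for a real transport layer; a production system would
    # batch, encrypt, and ship events to a collection service.
    print(f"collected for {purpose!r}: {event}")

def record_event(consent: ConsentRecord, purpose: str, event: dict) -> bool:
    """Record an event only if this user opted in to this specific purpose."""
    if not getattr(consent, purpose, False):
        return False                 # dropped locally: never transmitted, never logged
    send_to_backend(purpose, event)
    return True

# Usage: this user consented to analytics but not personalization.
user = ConsentRecord(analytics=True)
record_event(user, "analytics", {"page": "checkout"})        # sent
record_event(user, "personalization", {"viewed": "pram"})    # silently dropped
```

Notice that you cannot even write the consent schema without naming a purpose for each flag, which forces the question this talk keeps asking: why is this data being collected?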
There is also deletion, the right to be forgotten. Data science is in its infancy; we don’t yet understand how the statistical models we build will cope with the removal of data. Deletion is a really interesting technical problem.
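One way to see why it’s interesting: deleting a row from storage does not delete its influence on a model that was already trained on it. A minimal sketch, using made-up data and an ordinary least-squares model standing in for something real:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # 1,000 users, 3 features (synthetic)
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=1000)

def fit(X, y):
    # Ordinary least squares via numpy's least-squares solver.
    return np.linalg.lstsq(X, y, rcond=None)[0]

w_deployed = fit(X, y)

# User 42 exercises their right to be forgotten. Dropping the row is easy...
keep = np.ones(len(X), dtype=bool)
keep[42] = False

# ...but the deployed weights still encode their data. Honouring the request
# means retraining from scratch (or a model built for efficient "unlearning").
w_retrained = fit(X[keep], y[keep])

print("before deletion:", w_deployed)
print("after retrain:  ", w_retrained)
```

At this scale a retrain is trivial; against production pipelines full of derived features, backups, and downstream models, it is anything but.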
I’ve mentioned forecasting. Intent modeling and forecast modeling are clearly beneficial in many instances, but we can’t just take the forecasts as given. We have to ask why and what. Our primary premise has to be building trust and collaboration with the people whose data we’re collecting. We should start with models of trust first, and then think about where we go from there.

I just want to sum up the things I’ve been thinking about. Ask why. Think about meaningful opt-in and opt-out. What does it mean to delete? What are the forecast models of the future, and how can we anticipate the uses of the data we’re collecting that’s artifactual, gathered as a byproduct? How will it be used? And build on trust. So now I’d like you to talk to me about human-data interaction from your perspective. Thank you.
