Ethan Zuckerman’s online home, since 2003

Reflections on the 2011 Quantified Self conference

I spent the past weekend in Mountain View at the Quantified Self conference, a gathering of about 400 pioneers in the space of personal tracking and citizen science. I was one of the few at the conference, along with a handful of venture capitalists and healthcare types, who wasn’t experimenting with personal tracking. I came at the invitation of the Robert Wood Johnson Foundation, who are fascinated by the movement and invited a handful of thinkers and practitioners to the event, paying our way, in exchange for our reflections on the conference and the field. Once I’d registered, Gary Wolf (one of the co-organizers) was kind enough to invite me to give a short talk on what we might learn from tracking media consumption. But despite these generous efforts at inclusion, I felt like I likely had most in common with the anthropologist who introduced herself in one of the first sessions saying, “I’m trying to figure out what motivates people to self track.”

Me too. I’ve been an involuntary self-tracker for more than 25 years by virtue of being a type 1, insulin-dependent diabetic. Tracking my blood sugars was one of my most loathed activities through junior high and high school, and my most ignored activity in college, though I’ve come to see its utility in adulthood. But it’s still hard for me to understand why people would volunteer to track their steps, calories, caffeine consumption or REM sleep. Fortunately, when I confessed this confusion on stage in front of a group of self-trackers, I got good-natured laughter and I had a general sense that most of the people at the conference understood that their behavior was exceptional. Whether they’re early adopters or outliers seems like an open question to me. I’m convinced that what’s going on with quantified self experiments is helpful for the folks who are currently undertaking experiments. I’m less sure that this movement is going to become mainstream or change principles of scientific and medical discovery.

Here are three open questions I’m trying to work through:

– What’s the relationship between self tracking and citizen science? And between citizen science and laboratory science?
– How does the quantified self interact with the internet of things, and a wider proliferation of sensors?
– Is the quantified self broader than the quantified body? And what do we learn from quantifying other aspects of the self?

Seth Roberts gave the opening talk at the conference, and I had the pleasure of talking one on one with him Sunday morning as he accompanied me on my (ultimately fruitless) search for egg and chorizo tacos en route to the conference. This gave me the chance to ask him several more questions about his working method and his discoveries. After Seth’s talk on Saturday, I found myself wondering whether he was proposing that everyone engage in the sort of self-tracking and experimentation he practices. I also wondered about the utility of his work to a broader scientific community – reacting to his talk, Ryan Calo of Stanford Center for Internet and Society described Roberts’s results as “rigorous anecdotes”, as distinguished from traditional scientific practices of double-blind studies.

A one on one conversation with Roberts helped me see that there’s a possible way in which personal science could help a much broader range of people, a tiered approach to the working method. Seth conducts novel experiments and looks for broader scientific principles that underlie his findings about his own experiences. Those underlying principles – theories of nutrition or exercise – allow him to design interventions that he tests on himself, using the large set of data he’s collected on himself as a baseline. Not everyone will be able to (or want to) design interventions based on novel understandings of nutrition and sleep science. But a larger group may accumulate baseline data from self tracking and engage in experiments suggested by professional or citizen scientists. (As Sean Ahrens’s talk on hookworm and Crohn’s disease demonstrated, it’s not as easy as you might think to do this well.) And a yet larger group might benefit from discoveries made through this process and adopt interventions, even if they don’t have data on how effective the treatments are.

Roberts’s opening talk at the conference ended with a fairly aggressive critique of science as it’s generally practiced by trained professionals in the context of a university or corporate lab. Professional scientists are forced by grant length to use short timescales in their research, and Roberts worries that their motivations aren’t sufficiently personal to get them to consider practical solutions that might initially seem silly (like standing on one bent leg until exhaustion six times a day). I thought his critique was helpful and valid, up to a point… I know a lot of scientists who care passionately about the issues they study and would be happy to look silly if it helped them find a breakthrough in treating a chronic disease that had verifiable benefits for a group of people. I worried that Roberts’s framing of citizen science in opposition to lab science was a false dichotomy. It’s possible for citizen science to inform lab science, and vice versa, and I think even Roberts acknowledges that we want more science, not a war between the citizen and professional camps.

Lots of people at the conference were conducting experiments on themselves, either designed to test the effectiveness of an intervention (does taking hookworms make my Crohn’s disease symptoms more manageable?) or to monitor and understand the dynamics of a particular indicator (am I having a hard time at my current job? What’s my mood like when I’m working versus when I’m at home?). Fewer were sharing this data. The vertical integration of companies like Zeo (which manufacture sensors, sell products and collect and analyze data from users) means that a few actors have large sets of data – Zeo likely has more data on sleep than any other sleep lab simply because their sample size is so large compared to that in most lab experiments. But most self-trackers aren’t sharing their data very widely, partly due to privacy concerns (will my health insurance provider cut me off if they discover I’m a restless sleeper? That I only walk 3000 steps a day?) and partly because sharing and aggregating data may not have easily apparent benefits.

Here’s a matrix I scrawled on the back of a napkin after the first day of the conference. (Because this is a slightly cleaned up napkin drawing, I reserve the right to modify or discard this model altogether when someone demonstrates its inadequacies to me…):

The vertical axis considers whether the data we collect is useful by itself, or whether it’s useful primarily through aggregation with lots of other data. The horizontal looks at the audience for the data: is this information primarily helpful to you or to others? Many of the projects we saw at the conference fit squarely into the lower left of the matrix. They’re personal experiments that rely on individual data for individual insights. Tracking my mood with MercuryApp when I’m writing may be very helpful in convincing me that I need a different career, but that data’s not especially useful to a broader audience. Other types of personal tracking may have aggregation benefits – information on my sleep cycle is helpful by itself, but likely more helpful if I can compare to how other people are sleeping, and especially to how happy and healthy people are sleeping. (Information that leads towards building a model has benefits for me as an individual and for a broader set of people as well, and positions on this matrix should probably be blurry, uncertain smudges, not fixed points.)

One of the coolest technologies I heard about at the conference was Asthmapolis, a tracking device attached to an asthma inhaler that sends GPS data to a central server when the inhaler is triggered. It might be useful to have a record of where my asthma attacks occurred, so I can avoid a particular part of my city, but this data is more likely to be helpful to public health officials, as they try to figure out what parts of cities are subject to asthma attacks and what factors might be mitigated. (David Van Sickle of Asthmapolis told me about an astounding piece of research that used emergency room records to track asthma attacks in the city of Barcelona and trace those attacks to the offloading and storage of soybeans. Exciting as this research is, the hope is that we could figure out a correlation like this in weeks, not years, and make changes more rapidly.) It’s appropriate that Asthmapolis refers to the data they collect from their units as “surveillance data” – the continuum from tracking to surveillance (my horizontal axis) is the continuum from choosing to track yourself to being tracked by others. An extreme of surveillance might be the sort of tracking and profiling conducted by internet advertisers – you don’t choose to be tracked, and despite promises that targeted ads will be more useful to you than untargeted ones, most of us aren’t very fond of ads that guess at our identity and our desires.

Here’s another pass at this matrix. The experiments at Quantified Self that focus most heavily on personal tracking cluster in the bottom left. Having our individual movements tracked, not for our benefit but for the benefit of a third party, is the uncomfortable sphere of surveillance. When our individual movements are less interesting than the movements of masses of people, we move to the top right, distributed sensing. I don’t quite know what to call the top left, but I think it’s perhaps the most exciting sphere for many at the conference: community science. The promise here is that we can all track our sleep, productivity or mood and aggregate the data, making discoveries that help ourselves and the world as a whole. It’s an exciting vision, though I think we’re far from realizing it, not least due to shortcomings in tools and protocols for sharing data.

As I mentioned before, I’m not sure this is the best way to understand this space, and I’m certain it’s not the only way to model the interests and motivations of participants. But it did help me understand why there’s a gap between projects that focus on individuals changing their own behavior and projects that hope to map disease incidence through collecting many points of data.

If I had a complaint about the Quantified Self conference, it was that it focused heavily on quantified health and less on other aspects of the self. The projects and ideas I found most exciting were those that moved beyond brain waves and blood pressure and sought to understand less embodied aspects of the self through quantifying behaviors. I was most aware of this tension in a session on tracking location. Many of the projects we discussed were using location as a proxy for behavior, and using this data to refine other bodily measurements: if I can see that I was in the park, I was likely walking the dog, which means I was burning this many calories per hour. I was interested in tracking location because where I go and what I see is an interesting aspect of my existence. I’m curious to see whether my paths through a city are limiting me from certain types of encounters, and potentially in building systems that help me encounter the unexpected.

The bias towards health is an understandable one – if you’re not sleeping well, you might be willing to engage in far more self examination than if you were trying to diversify your media diet, or ensure you visit unfamiliar neighborhoods in your city. And because healthcare is such a huge market in the US, it makes sense that entrepreneurs and large corporations would be paying attention to the space, especially the intersection of the health and gadget space (two profitable tastes that taste great together!). But when I quantify myself, it’s often in terms of tracking my productivity (words written per day), my influence (who retweeted me? who quoted my posts?) and my attention (what did I read? was it candy, or did it influence my work?).

I think there’s something to be learned by using some of the ideas and techniques being applied to quantified health questions and applying them to other aspects of the self. Whether or not I find myself gravitating to Quantified Self meetups in Boston, I’m hoping to meet other people interested in questions of how we might self-track and understand ourselves in ways beyond the performance of our bodies: our moods, our work, our media, our interests, our movements.

I’m grateful to RWJF for making it possible for me to attend the conference and to Gary Wolf and Kevin Kelly for being such great hosts and giving me a spot on the program. And I’m grateful to everyone exploring these ideas for opening such interesting and provocative questions.


Kevin Kelly on context for the quantified self

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Kevin Kelly, co-founder of the Quantified Self conference, offers some context for the conversations we’ve had for the past two days.

Quantified self is part of a larger trend in where we’re going. It’s part of listening to the technology and to what it wants to do, because technology is telling us where it’s going. The amount of information on this planet is increasing more quickly than anything we make or than any biological component. It grows at about 66% a year… the rate of Moore’s law.

The most rapid physical production expansions, of things like concrete or oil, are about 7%. (Obviously, there are exceptions like iPads…) Metadata – information about information – is growing even more quickly. 276 exabytes is simply a meaningless number – we know it’s really, really big, but it’s hard to even comprehend.
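Kelly’s claim that a 66% annual growth rate matches Moore’s law can be sanity-checked with a line of arithmetic (my back-of-the-envelope check, not from the talk):

```python
import math

# Kelly's figure: information grows about 66% per year.
annual_growth = 0.66

# Doubling time for compound growth: t = ln(2) / ln(1 + r)
doubling_years = math.log(2) / math.log(1 + annual_growth)

print(f"Doubling time: {doubling_years:.2f} years")  # ~1.37 years
```

A 66%-per-year growth rate doubles in about 1.4 years, squarely in the 18-to-24-month doubling range usually quoted for Moore’s law, so the comparison holds up.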

We’re in the middle of a third paradigm of metaphors and organizing principles for personal computers. We’ve moved from desktop and office metaphors, to page/link and web metaphors, to a new metaphor around streams, tags and the cloud. RSS feeds, Facebook walls, Netflix streams are our general drift at present. This accompanies a shift from the me to the we, and from pages and files to data.

What emerges in this new model are Lifestreams. That’s what we curate in the age of the quantified self. We head upstream, and we leave a wake of data behind us. Lifeloggers, who log everything they do, are pioneers in this space. Gordon Bell and others take these exercises to an extreme, and they’re sharing it, as part of the shift from me to we.

These lifestreams intersect with each other and are, in a way, creating a new media. If we organize computation around lifestreams, an intersection between our lifestreams is a communication, an event of some sort. The media we are in is these streams of data. Everything around us has a sliver of intelligence in it, and is generating bits of data. Each of those objects has a lifestream of data, from the hotel room to your shoes. This environment, with data streams and life streams, is the space where we’ll do the work of the quantified self.

What we’ll see very soon is spectacles and glasses that will let us see that data world. It might be a screen we hold up, but we’ll be able to see this overlay of the digital world embedded in the material world. There’s thinking that the digital life is disembodied bits… but it’s really about bits embedded in the physical world.

A car is a chip with wheels, a shoe is a chip with heels. This isn’t just an internet of things – it’s a database of things. All these words – semantic web, web 3.0 – are describing this data-rich environment filled with these things we make and their data streams.

A movie is made from millions of little pieces. Historically, we made them from bits of film. But now, we might make them from data. There are 151,000 shots of the Golden Gate Bridge on Flickr – there’s no reason to take one. We will reach a moment where it’s no longer useful to film a flyover shot of the bridge. Eventually, we can curate and reassemble a unique film from that database.

This is what we do when we write. We reassemble words from a database of 30,000 or so words, a dictionary. We rarely, if ever, make up words. Much as an Amazon.com page is assembled for us on the fly, we can have database-based cinema, images and writing.

Data is the new media, itself. It’s what we’re going to be swimming in. It’s what the economy is built on.

While I’m sure about that, I have some questions, Kelly tells us. We don’t really care about data, he reminds us – we care about experience. We want experiences from the data, and we may not want to make those experiences ourselves. There’s a tension between wanting that data acquisition to be active or passive. Is the act of checking in on Foursquare, or reporting your weight, a critical part of the process? Successful experiences will shift the attention, at different levels and different amounts. Teachers are good at doing this, helping us shift between the detailed view and the broad view.

Another open question is around sharing. We’re still at the beginning of sharing. Everything that can be shared will be shared, because sharing things increases their value. Any data that can be shared will be shared – we know that under the right conditions, with the right permissions and right benefits, people will be willing to share information. But there is a thin line between the quantified self and “intimate surveillance”, a term Marc Smith has coined. There may be a slider between a degree of surveillance and personalization. I want my friends to treat me as an individual with particular needs, and that can only happen if I reveal my likes and dislikes, allowing that surveillance to happen.

The cost of privacy is generic relationships – if you’re unknown to everyone, it’s hard to customize to you. On the other side, you can have high transparency and high personalization, and people seem to be willing to push the slider far in that direction. The difference between quantified self and intimate surveillance is, basically, permission.

A second metaphor is thinking about living in a small town. The woman across the street might know everything about where we’re going and what we’re doing… but we know lots about her, too. Strangers appearing at the house might trigger a call to the cops. That’s a symmetrical and beneficial relationship. It’s uncomfortable once we hit an asymmetry in the data. We may not want less data – we may want more symmetry, more benefits from what the other party knows.

The third question KK offers is who owns this data? Who owns your friendships? There’s another party involved. Who owns your genes? 99.9% are shared by other humans. Who owns your location? The knowledge that you’re in a public space is hard to own. Your reputation or history? Your conversations? The real issue is that we’re moving away from ownership altogether to access. The benefits of access are eclipsing the benefits of ownership – consumers may eventually not own anything at all. Netflix means you can stop owning movies – if you have access to all movies anytime, why would you buy movies? This may be leaking from the virtual to the material world, particularly once we have personal fabrication. It may eventually play out into data, because access is often better than ownership.

The fourth uncertainty has to do with the nature of the quantified self: self knowledge through numbers. The self part is the interesting bit. Technology leads to the extended self. Animals extend themselves through their tools. McLuhan talked about the wheel as the extension of the foot. Computers are extensions of our brains. How far can this go? The quantified self is really the quantifiable self. We are making ourselves quantifiable, wearing, implanting and swallowing gadgets to extend our quantifiability. It’s not just what’s close to us – it’s the environment we move in.

In self-tracking, people change behaviors to become more quantifiable… which in turn helps us make changes. How far will it go?

We should be able to examine our body for toxins, say after we eat something or visit an area with toxic waste. You should be sequencing your genes every hour, 24/7, in real time. We’re going to become quantifiable to the point where we’re real time and changing ourselves in real time.

Those four things, Kelly tells us, he’s not sure about. But he is sure that where technology is going is changing how we know things. Technology modifies how we know – telescopes, microscopes have made a new method of knowing possible: science. If science is how we know, technology modifies how we know. There is no written history of the scientific method. KK has been cobbling together a history of high points: the controlled experiment in 1580 (Bacon); the necessity of repeatability, invented in 1665 (Boyle); falsifiable testability (Popper, 1920); the randomized sample (Fisher, 1926); the controlled placebo (1937); computer simulations (1946); the double-blind experiment (1950). The scientific method is going to change more in the next 50 years than it has in the past 400 years.

Throwing away negative results is a crazy idea – we’ll be saving those. Triple blind trials ensure that not only do the experimenter and subjects not know the answer, you aren’t even aware an experiment is going on. Quantified Self is right in the center of what’s going on: clinical trials of 1, real-time experiments instead of batch mode, personalized pharma, participatory medicine, wikified results that are never final. Exhaustive data, the Google way of doing science, is better than having a hypothesis. Collaborative research, where many people do research in tandem on a vast scale.

Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process. Making drugs for one person won’t come purely from the quantified self, but it’s part of that process.

You might object to a decentralized, peer to peer, amateur, never finished unauthorized encyclopedia… but you’d be wrong. What happens when this happens to medicine?


Quantified Self Ignite talks, part two


Man, Ignite talks. They’re great, but they’re overwhelming to blog. Here’s another round of nine at Quantified Self – forgive the lack of links and often, surnames – this has been an informal sort of event.


Alex Chafee from Moodlog tells us that we’re happier than we think. That’s one of his discoveries from logging his own mood and looking at the mood data of others. He’s working on methods to talk about mood that are more subtle than measuring from 1-5. One tool allows people to characterize moods with colors – it turns out that hunger is a dark rust color. We might consider moods in terms of different scales and axes. One might be our feelings of sociability – we might distinguish between being lonely and being solitary in terms of how much sociability we want. Over time, the system will allow him to build a crowd-sourced scale of mood words.


Bob Evans wants to build not one, but 700 applications to track people’s moods. His platform PACO is designed to allow easy creation of new tracking applications. PACO is an acronym – personal analytics companion – but it’s also the name of his dog. And dogs are excellent companions, sensitive to your moods and behaviors.

The design philosophy of PACO is on building simple tools that can be connected together, à la UNIX. And there’s a basic philosophy of privacy – your data is yours, and you can share it if you’d like.

You can design a simple app, and it will ask for your attention in the tray on an Android phone. A participant can answer questions, and she can always access the data she’s entered. As someone who’s administering an experiment, you can see all the reported data.

In deploying the tool, he’s discovered that it’s important for surveys to be short and sweet. Three questions is about as much as anyone will answer. And you need to ask the right questions – a guy who put together a study that asked “What are you doing?” eight times a day was able to participate in his own experiment for only three days before he gave up!


Ian Eslick of the MIT Media Lab wants to learn from your QS experiments. There are hundreds of possible self-optimization experiments to try, listed on the internet. As people engage in QS experiments, there’s data to evaluate. But how do you generalize from people’s self-experiments?

He sees promise in combining information from multiple studies using techniques from recommendation engines. In these systems, thousands of people’s preferences combine to predict your possible preferences. He’s working to use techniques from research on recommendation engines to allow aggregation of data from sites like Cure Together.
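To make the recommendation-engine idea concrete, here’s a toy sketch (my illustration, not Eslick’s actual system – the users, interventions and ratings are all hypothetical): predict how well an intervention might work for you from the reports of users whose other ratings resemble yours.

```python
import math

# Hypothetical user -> {intervention: reported benefit, on a 1-5 scale}
ratings = {
    "alice": {"melatonin": 4, "caffeine_cut": 2, "exercise": 5},
    "bob":   {"melatonin": 5, "caffeine_cut": 1, "exercise": 4},
    "carol": {"melatonin": 1, "caffeine_cut": 5, "exercise": 2},
}

def cosine(u, v):
    """Cosine similarity over the items two rating dicts share."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(u[k] ** 2 for k in shared))
    nv = math.sqrt(sum(v[k] ** 2 for k in shared))
    return dot / (nu * nv)

def predict(me, item):
    # Weighted average of each user's rating for `item`, weighted by
    # how similar their remaining ratings are to mine.
    num = den = 0.0
    for their in ratings.values():
        if item in their:
            rest = {k: v for k, v in their.items() if k != item}
            w = cosine(me, rest)
            num += w * their[item]
            den += w
    return num / den if den else None

me = {"caffeine_cut": 2, "exercise": 5}
print(round(predict(me, "melatonin"), 2))  # -> 3.6
```

Because my ratings resemble alice’s and bob’s more than carol’s, the prediction leans towards their positive melatonin reports – the same mechanism, scaled up, that lets thousands of self-experiments inform one person’s next trial.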


Mei Lin Fung tells us about a weight loss study she participated in some years back. She was one of 200 participants in a study that compared two weight loss methods – lose weight and then maintain, or learn to maintain, then lose weight. She was in the group that learned to maintain weight first, for eight weeks, then focused on losing weight for 20 weeks. Her total involvement with the study was two years long, including 6 months wearing a pedometer. And for her, the most challenging part was weekly meetings of 20 women with a facilitator.

Some useful techniques she learned included landmark walks, finding local landmarks that were 1000 steps away, allowing for short, 10 minute walks towards a goal of 3000 to 6000 steps per day. Learning to articulate and track goals was helpful, as was smelling and savoring food before eating it.

Weight loss has four goals, she tells us – improved nutrition, increased physical activity, increased quality of life and weight loss, only as the fourth component. She tells us that, despite what you might learn working on your own, social support is critical. And she urges us to consider that personal and professional science can work together to make discoveries.


Uwe Erich Heiss from Dynamic Clinical Systems talks about tracking pain. Doctors ask patients, “How have you been?” People are lousy at remembering how they’ve been. And doctors give patients 22 seconds, on average, to answer that question. So it’s worth finding ways to help people track how they’ve been and get that information into the doctor’s hands in a meaningful way.

He and his team have repurposed an unsuccessful technology, the iTag – originally used to “bookmark your radio”. By clicking, you record the level of discomfort you are feeling, using multiple clicks to signify more intense pain. They’ve now moved to a browser-based system that patients can use at home, drawing on a picture of a body to show a pain map. They’ve collected a million events from 100,000 patients in 20 clinical areas using the system.

While there’s much to be learned from the data collected, there’s also much to learn about the process. He tells us that experiments would work better if there were a baseline of data collected before someone is experiencing pain. We need better standards for self-tracking, and APIs and rules for sharing data.


Kyle Machulis of OpenYou brings the house down with his Quantified Coder Project. The motto – “Putting the U back in programming, even if there was no U, even in British English”.

In his day job, Kyle writes device drivers for game controllers, and now for devices like the Mindwave, Zeo, and Fitbit, allowing him to work with data locally, rather than putting it into the cloud. OpenYou is about opening all sorts of data, including from locked-down devices like pacemakers. He takes an “any means necessary” approach, which may mean violating licenses and warranties, and publishes code allowing you to open your quantitative devices as well.

All this driver development and device analysis requires lots of sitting. Like many programmers, Kyle gets into a zone, writing code for 6-14 hours at a stretch. He wondered, “What can we do with a programming environment that would help us understand what happens in those hours?”

We could use accelerometers on our chairs, measures of strikeforce on our keys to understand whether we were fidgety, or stressed when working on particular code. And we could correlate these measures to the code written. If bugs are associated with a particular pattern of physical states, maybe we could review our code by looking at other times we were frustrated or fidgety. Apply this to other people’s code and we can figure out what libraries are frustrating to use, and which other programmers enjoy. In the long run, perhaps this becomes a piece of metadata we share on services like Google Code.


Dennis Harscoat of Quan#er introduces us to a beta iPhone app that asks the question, “How much?”

You can use Quan#er to post information like “#coffee: 4cups”. The idea is to ask people to post data that’s specific, open and enables discovery. The app asks what, how much, and what unit, and allows an optional picture. Outputs include graphs of whatever you’re tracking over time. If you choose to publish your data, others can cheer you on and compare their data to yours. Alternatively, you can track privately, but their business model is “free for all, pay for privacy”. (He reminds us that we can trust him, as he and the company are Swiss. Actually, he’s French Belgian living in Switzerland, whatever that means about trust.) At the very least, we can get our data from his system via emailed text file at any time.


Ernesto Ramirez wants to make you move. He tells us that the difference between UK bus drivers, who are more prone to heart attacks, and UK ticket takers is that the latter move and the former don’t. We get guidelines on doing 150 minutes of vigorous physical exercise a week, but the truth is, we need to move far more than 30 minutes a day. “Sitting will kill you, even if you’re physically active,” as James Levine at the Mayo Clinic has discovered.

You could walk while you work, using Steelcase’s $4400 walking treadmill desk. Or you could build your own. Ernesto bought a $100 treadmill from Craigslist and a $149 IKEA desk and built his own workstation. In 2 years, he’s logged 600 miles on it, which represents 61,000 calories or 17 pounds of weight. He shows us Fitbit data correlated with Rescue Time that demonstrates he’s done serious work while on the device.
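Ernesto’s numbers check out with a bit of arithmetic (my back-of-the-envelope, not his, using the common rule of thumb of roughly 3500 kcal per pound of body weight):

```python
# Reported figures: 600 miles and 61,000 calories over 2 years.
miles = 600
calories = 61_000

cal_per_mile = calories / miles   # ~102 kcal/mile, plausible for walking
pounds = calories / 3500          # rule of thumb: ~3500 kcal per pound
miles_per_day = miles / (2 * 365) # sustained daily pace

print(f"{cal_per_mile:.0f} kcal/mile, {pounds:.1f} lb, {miles_per_day:.2f} miles/day")
```

That works out to about 17.4 pounds – matching the 17 pounds claimed – from less than a mile of slow walking per working day.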

His advice. “Just fucking do it. No, seriously, just fucking do it.” It’s not about fitness – it’s about movement, which will make you a better person.


Mark Carranza describes himself as a former poet, “because it’s much more poetic than being a poet.” And he’s a passionate explorer of memory. His system, memex.mx, has helped him record everything he’s thought since 1984. In that time, he’s recorded 1,230,348 thoughts and 7,506,340 links between thoughts. These thoughts are simple lines, a few words at a time. He writes them down on paper and has input them into a DOS computer program he hasn’t altered since 1992. He averages 232.13 new thoughts per day and 1506 new connections a day.

This collection of links and crosslinks is his hypomnema, a giant chapbook of the “accumulated treasure” of thought. He draws analogies to Vannevar Bush’s Memex system, proposed in 1945, and notes that Bush saw the human mind as unparalleled in making associations, but needing machine augmentation to improve memory. The goal of his system is building a “find engine” rather than a search engine, a form of retrieval and remembering from prompts.

We start on his DOS tool from the word “chicken”. One possible association is “epistemological chicken”, which takes us to a book by Harry M. Collins called Artificial Knowing and then to thoughts on feminist epistemology. As we go through the set of links, it’s a way for Mark to remember what he got from that experience of reading the book. Books often have hundreds or thousands of associations within the system, annotated as he’s come back to the text.

People ask whether he has any time to do anything else. He calculates that he spends no more than 45 minutes transferring thoughts from paper to computer, and that in his lifetime he’s spent far more time watching television than creating an augmented memory.

A next step might be a system that lets others store their thoughts this way, and an iPhone app that could let you compare your web of associations with other people’s. (And I didn’t even know they had iPhones in 1992…)


Short talks at Quantified Self

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Ted Vicky is trying to answer the question, “Can Twitter make you fit?” He’s had a long path to becoming a PhD researcher in Galway, Ireland. For eleven years, he ran the online fitness center at the White House, under Presidents George H.W. Bush, Clinton and George W. Bush. With his background in exercise physiology, he’s taken on questions of “connected health”, asking how social media might help people combat obesity and other chronic health issues. He points out that in only one country in the world – Japan – is obesity decreasing.

He’s collecting data from mobile fitness applications – Runkeeper, MyFitnessPal, DailyMile, Nike’s system, Endomondo – by looking at their social media presence. While these tools have a social media component, it’s easiest to collect data when people share their information on Twitter. He’s gathered over a million tweets, and is starting to understand patterns, both by measuring what tools people are using and by sending data into analytics engines like Klout. Perhaps unsurprisingly, the people most using these technologies thus far are categorized by Klout as “explorers”.

And Vicky is starting to put together a model to understand what people talk about when they tweet about fitness – reports on activity, “blarney”, and conversations between participants. There’s a closely analyzed set of 36 days of data representing 234k tweets and 57k unique users. He hopes we’re going to learn why people share their exercise information, what reinforcement and motivation we get from each other’s behavior and how this information could turn fitness into a broader social trend.


Vipul Gupta is building sensors at Sun/Oracle research. The Sun SPOT is an Arduino-like sensor platform, built around a more powerful 32-bit ARM CPU that can run Java natively, rather than a clunky variant of C. There are 20,000 devices out there, collecting data using a variety of sensors, including tilt sensors and radios. They’re being used by researchers, hobbyists and the educational market.

Gupta notes that mobile phones have lots of sensors, including GPS. He wrote an app for an Android phone that tracked his movements as he drove his car, posting his location every 30 seconds to sensor.network. A simple output is a Google map mashup of where he’s been. This is potentially sensitive data, so Gupta has written tools that can blur the data to allow it to be published, smearing position so it’s less personally revealing. You could also obscure data in terms of time series – perhaps you don’t want anyone to know you’re out of town while you’re traveling. You might be more comfortable publishing that data after the fact. You could also release averaged data – your electricity consumption can reveal when you wake and sleep, but an averaged figure might help your building manager understand load needs at different times of day.
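To make the spatial blurring concrete, here’s a sketch under my own assumptions (not Gupta’s actual tools): snapping coordinates to a coarse grid and adding a little random jitter keeps a published track useful in aggregate while hiding exact positions.

```python
import random

def blur_position(lat, lon, grid=0.01, jitter=0.005):
    """Coarsen a GPS fix so it's less personally revealing.

    grid   -- snap coordinates to a grid of this many degrees
              (0.01 degrees of latitude is roughly a kilometre)
    jitter -- then add uniform random noise of up to this many degrees
    """
    blurred_lat = round(lat / grid) * grid + random.uniform(-jitter, jitter)
    blurred_lon = round(lon / grid) * grid + random.uniform(-jitter, jitter)
    return blurred_lat, blurred_lon

# A fix near Mountain View, smeared before publishing
print(blur_position(37.3894, -122.0819))
```

The same idea extends to the time-series case he mentions: delay publication, or average readings over a window, before releasing them.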

Gupta is now experimenting with adding additional sensors to his devices – air quality, ambient noise, radiological sensing. In the long run, he suspects we’ll not be connected to dedicated sensor devices, but to mobile phones, as they subsume the roles of other computing platforms in our lives.


Dave Marvit with Fujitsu believes that we’re moving from a world where companies create a vertically integrated ecosystem around sensing, to one where we have sensing, analysis and service as separate components offered by separate actors. His experiments with the Sprout platform are designed to show what this new ecosystem might look like.

The platform links an ARM processor running Linux with an Apache webserver, and offers 5 mini-USB ports for sensors. He tells us that it will eventually fold into a mobile phone, just like everything else. In the meantime, it’s an excellent tool for integrating multiple sensor streams. To track stress, he’s analyzing data from a fingertip pulse monitor and an accelerometer. The former is a pretty noisy signal – if you move, the data gets messy pretty quickly. But the accelerometer data can tell you when you’re moving and let you throw out the noisy readings. He’s used this system to watch himself playing speed chess and is able to correlate moments of measured stress with particularly stressful moments within the game.
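One simple way to implement that movement-gating (my own sketch, not Fujitsu’s algorithm) is to discard pulse samples taken while the accelerometer reports significant motion:

```python
import math

def filter_pulse(samples, motion_threshold=0.2):
    """Drop pulse readings taken while the wearer was moving.

    samples -- list of (pulse_bpm, (ax, ay, az)) pairs, where the
               accelerometer vector is in g's (magnitude ~1.0 at rest)
    """
    clean = []
    for bpm, (ax, ay, az) in samples:
        # Deviation of acceleration magnitude from 1 g indicates movement.
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - 1.0) < motion_threshold:
            clean.append(bpm)
    return clean

readings = [(72, (0.0, 0.0, 1.0)),    # at rest: keep
            (95, (0.6, 0.3, 1.4)),    # moving: pulse signal is noisy, drop
            (74, (0.05, 0.0, 0.98))]  # nearly at rest: keep
print(filter_pulse(readings))  # [72, 74]
```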

This ability to integrate multiple sensor streams could lead to better measures of ambulatory blood pressure or sleep apnea monitoring. The goal in the long run is to build a platform that supports many modular metasensors and their synchronization and interaction.


Quantified Self – mood tracking


Margie Morris leads a session on mood tracking during the final breakout session at QS2011. In the room are developers of five different mood tracking tools, and many of the people in the room track moods either on an ongoing basis or occasionally. She starts us with an exercise in understanding our moods, asking us to write down a project that was important to us, and to describe our emotions about the project now, when we began the project, when it started to get difficult and when we were near completion.

The vast majority of people in the session used words to articulate their feelings, often a pair of words. Some drew pictures. No one chose numbers, which is somewhat interesting at a quantified self conference. Margie suggested that we think about our emotional states in terms of a two-dimensional matrix, a mood map, with axes of arousal (intensity) and valence (positivity versus negativity). We talk about the difficulty of representing some emotions with one or more numbers. Is shame a form of sadness? What if it’s the sort of slight, smug shame that comes from doing what you wanted to do instead of what you were supposed to do?

It’s difficult to reduce mood to a number. But it’s useful to do so, because we’re very bad at remembering our moods. We have recall biases that help us remember the most recent and most intense moods – a general sense of happiness and well-being in the past might be masked by a brief, sharp period of unhappiness as we’re asked to recall how we felt. At the same time, there’s a concern expressed by some in the room that technology and culture can bias us against negative emotion. We might be pushed to underreport sadness from a sense that self-help books want us to acknowledge positive emotions and let go of the negative ones, or by an application that subtly encourages us to consider and soften our emotions before reporting them. Morris notes that there’s a bias against extremes – ask people to rate their moods from 1-10 and you’ll see few reports at the extreme ends of the scale. Give users two dimensions to report in and you’ll see that bias soften or disappear.

The application designers briefly discuss whether their tools show you the past emotions you’ve recorded before allowing your most recent input – is it better to allow someone to report their emotion without prompting, or do we benefit from showing people the patterns they’re documenting as they’re reporting?

Jon Cousins told his story about Moodscope, a personal project he started to work on bipolar disorder. Cousins started experiencing an intense depression and went to seek medical help. When the British National Health Service wasn’t able to schedule an appointment with an appropriate psychiatrist for some weeks, he found himself feeling very desperate, and looked for a constructive way to handle his intense emotions. Tracking his emotions, using a modified version of the PANAS (positive and negative affect schedule) to put a single number on his mood, helped lessen his dips in mood and stabilize his emotions.

Cousins unrolls a vast paper scroll that tracks the last few years of his emotional life. The ribbon of paper stretches the width of the room, and we see – far to the left – a red line oscillating sharply between a deep blue at the bottom of the graph and a sunny yellow at the top. About a year into the graph, the line rises into positive territory and stays there, with very little variation. Cousins explains that this first long period of positive stability came from taking a simple action: sharing his mood with one other person.

A friend knew that Cousins was tracking his mood and asked him to share his scores on a daily basis. He now shares scores with five close friends. The connection between sharing scores and higher reported scores prompts a number of participants to wonder if Cousins began reporting higher scores for fear of worrying his friends. He explains that he doesn’t believe this is the case. Sometimes he precedes a low score report to his five friends with a note saying, “This is going to look bad, but I’m okay,” knowing they’d otherwise be likely to try to intervene. But he agrees that there’s an effect that comes from reflection: “My friend sends a message that’s as simple as ‘?’ in response to a low score I’ve posted, and I’m compelled to write him a note of explanation. The act of considering and reporting the emotion can be enough to help find a way out of the trap of negative emotion.”

Someone suggests that the sheer fact that someone cares enough about Cousins to ask to be privy to his mood might be enough to raise his scores… and the daily reminder that that person cares is important as well. Others point to the value of journaling – considering one’s mood carefully enough to put a score on it, and readying a possible justification, may allow for a “cognitive reappraisal” of a situation that helps improve mood. Sharing mood information does seem to be powerful, though – one participant talks about how she began reciprocally sharing mood information with a friend, someone she wasn’t especially close to. The two have grown much closer in the process – sharing this information and the reasons behind it has led them to form a deep bond.

