My heart’s in Accra: Ethan Zuckerman’s online home, since 2003

June 2, 2011

Reflections on the 2011 Quantified Self conference

Filed under: quantified self 2011 — Ethan @ 3:45 pm

I spent the past weekend in Mountain View at the Quantified Self conference, a gathering of about 400 pioneers in the space of personal tracking and citizen science. I was one of the few at the conference, along with a handful of venture capitalists and healthcare types, who wasn’t experimenting with personal tracking. I came at the invitation of the Robert Wood Johnson Foundation, who are fascinated by the movement and invited a handful of thinkers and practitioners to the event, paying our way, in exchange for our reflections on the conference and the field. Once I’d registered, Gary Wolf (one of the co-organizers) was kind enough to invite me to give a short talk on what we might learn from tracking media consumption. But despite these generous efforts at inclusion, I felt like I likely had most in common with the anthropologist who introduced herself in one of the first sessions saying, “I’m trying to figure out what motivates people to self track.”

Me too. I’ve been an involuntary self-tracker for more than 25 years by virtue of being a type 1, insulin-dependent diabetic. Tracking my blood sugars was one of my most loathed activities through junior high and high school, and my most ignored activity in college, though I’ve come to see its utility in adulthood. But it’s still hard for me to understand why people would volunteer to track their steps, calories, caffeine consumption or REM sleep. Fortunately, when I confessed this confusion on stage in front of a group of self-trackers, I got good-natured laughter and I had a general sense that most of the people at the conference understood that their behavior was exceptional. Whether they’re early adopters or outliers seems like an open question to me. I’m convinced that what’s going on with quantified self experiments is helpful for the folks who are currently undertaking experiments. I’m less sure that this movement is going to become mainstream or change principles of scientific and medical discovery.

Here are three open questions I’m trying to work through:

– What’s the relationship between self tracking and citizen science? And between citizen science and laboratory science?
– How does the quantified self interact with the internet of things, and a wider proliferation of sensors?
– Is the quantified self broader than the quantified body? And what do we learn from quantifying other aspects of the self?

Seth Roberts gave the opening talk at the conference, and I had the pleasure of talking one on one with him Sunday morning as he accompanied me on my (ultimately fruitless) search for egg and chorizo tacos en route to the conference. This gave me the chance to ask him several more questions about his working method and his discoveries. After Seth’s talk on Saturday, I found myself wondering whether he was proposing that everyone engage in the sort of self-tracking and experimentation he practices. I also wondered about the utility of his work to a broader scientific community – reacting to his talk, Ryan Calo of the Stanford Center for Internet and Society described Roberts’s results as “rigorous anecdotes”, as distinguished from traditional scientific practices like double-blind studies.

A one on one conversation with Roberts helped me see that there’s a possible way in which personal science could help a much broader range of people, a tiered approach to the working method. Seth conducts novel experiments and looks for broader scientific principles that underlie his findings about his own experiences. Those underlying principles – theories of nutrition or exercise – allow him to design interventions that he tests on himself, using the large set of data he’s collected on himself as a baseline. Not everyone will be able to (or want to) design interventions based on novel understandings of nutrition and sleep science. But a larger group may accumulate baseline data from self tracking and engage in experiments suggested by professional or citizen scientists. (As Sean Ahrens’s talk on hookworm and Crohn’s disease demonstrated, it’s not as easy as you might think to do this well.) And a yet larger group might benefit from discoveries made through this process and adopt interventions, even if they don’t have data on how effective the treatments are.

Roberts’s opening talk at the conference ended with a fairly aggressive critique of science as it’s generally practiced by trained professionals in the context of a university or corporate lab. Professional scientists are forced by grant length to use short timescales in their research, and Roberts worries that their motivations aren’t sufficiently personal to get them to consider practical solutions that might initially seem silly (like standing on one bent leg until exhaustion six times a day). I thought his critique was helpful and valid, up to a point… I know a lot of scientists who care passionately about the issues they study and would be happy to look silly if it helped them find a breakthrough in treating a chronic disease that had verifiable benefits for a group of people. I worried that Roberts’s framing of citizen science in opposition to lab science was a false dichotomy. It’s possible for citizen science to inform lab science, and vice versa, and I think even Roberts acknowledges that we want more science, not a war between the citizen and professional camps.

Lots of people at the conference were conducting experiments on themselves, either designed to test the effectiveness of an intervention (does taking hookworms make my Crohn’s disease symptoms more manageable?) or to monitor and understand the dynamics of a particular indicator (am I having a hard time at my current job? What’s my mood like when I’m working versus when I’m at home?). Fewer were sharing this data. The vertical integration of companies like Zeo (which manufacture sensors, sell products and collect and analyze data from users) means that a few actors have large sets of data – Zeo likely has more data on sleep than any other sleep lab simply because their sample size is so large compared to that in most lab experiments. But most self-trackers aren’t sharing their data very widely, partly due to privacy concerns (will my health insurance provider cut me off if they discover I’m a restless sleeper? That I only walk 3000 steps a day?) and partly because sharing and aggregating data may not have easily apparent benefits.

Here’s a matrix I scrawled on the back of a napkin after the first day of the conference. (Because this is a slightly cleaned up napkin drawing, I reserve the right to modify or discard this model altogether when someone demonstrates its inadequacies to me…):

The vertical axis considers whether the data we collect is useful by itself, or whether it’s useful primarily through aggregation with lots of other data. The horizontal axis looks at the audience for the data: is this information primarily helpful to you or to others? Many of the projects we saw at the conference fit squarely into the lower left of the matrix. They’re personal experiments that rely on individual data for individual insights. Tracking my mood with MercuryApp when I’m writing may be very helpful in convincing me that I need a different career, but that data’s not especially useful to a broader audience. Other types of personal tracking may have aggregation benefits – information on my sleep cycle is helpful by itself, but likely more helpful if I can compare it to how other people are sleeping, and especially to how happy and healthy people are sleeping. (Information that leads towards building a model has benefits for me as an individual and for a broader set of people as well, and positions on this matrix should probably be blurry, uncertain smudges, not fixed points.)

One of the coolest technologies I heard about at the conference was Asthmapolis, a tracking device attached to an asthma inhaler that sends GPS data to a central server when the inhaler is triggered. It might be useful to have a record of where my asthma attacks occurred, so I can avoid a particular part of my city, but this data is more likely to be helpful to public health officials, as they try to figure out what parts of cities are subject to asthma attacks and what factors might be mitigated. (David Van Sickle of Asthmapolis told me about this astounding piece of research that used emergency room records to map asthma attacks in the city of Barcelona and trace those attacks to the offloading and storage of soybeans. Exciting as this research is, the hope is that we could figure out a correlation like this in weeks, not years, and make changes more rapidly.) It’s appropriate that Asthmapolis refers to the data they collect from their units as “surveillance data” – the continuum from tracking to surveillance (my horizontal axis) is the continuum from choosing to track yourself to being tracked by others. An extreme of surveillance might be the sort of tracking and profiling conducted by internet advertisers – you don’t choose to be tracked, and despite promises that targeted ads will be more useful to you than untargeted ones, most of us aren’t very fond of ads that guess at our identity and our desires.

Here’s another pass at this matrix. The experiments at Quantified Self that focus most heavily on personal tracking cluster in the bottom left. Having our individual movements tracked, not for our benefit but for the benefit of a third party, is the uncomfortable sphere of surveillance. When our individual movements are less interesting than the movements of masses of people, we move to the top right, distributed sensing. I don’t quite know what to call the top left, but I think it’s perhaps the most exciting sphere for many at the conference: community science. The promise here is that we can all track our sleep, productivity or mood and aggregate the data, making discoveries that help ourselves and the world as a whole. It’s an exciting vision, though I think we’re far from realizing it, and not just because of shortcomings in tools and protocols for sharing data.
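Read as a lookup table, the four quadrants might be sketched like this. The quadrant names come from the text; the axis labels are my paraphrase of the napkin drawing, not Ethan’s exact wording:

```python
# Napkin matrix as a lookup table.
# Horizontal axis: who benefits from the data ("self" or "others").
# Vertical axis: where the data is useful ("individually" or "in aggregate").
QUADRANTS = {
    ("self", "individually"): "personal tracking",
    ("self", "in aggregate"): "community science",
    ("others", "individually"): "surveillance",
    ("others", "in aggregate"): "distributed sensing",
}


def classify(audience, scale):
    """Place a project on the matrix; raises KeyError for unknown labels."""
    return QUADRANTS[(audience, scale)]
```

So Asthmapolis, where aggregate data mostly benefits public health officials, lands at `classify("others", "in aggregate")`, while a MercuryApp mood log lands at `classify("self", "individually")`.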

As I mentioned before, I’m not sure this is the best way to understand this space, and I’m certain it’s not the only way to model the interests and motivations of participants. But it did help me understand why there’s a gap between projects that focus on individuals changing their own behavior and projects that hope to map disease incidence through collecting many points of data.

If I had a complaint about the Quantified Self conference, it was that it focused heavily on quantified health and less on other aspects of the self. The projects and ideas I found most exciting were those that moved beyond brain waves and blood pressure and sought to understand less embodied aspects of the self through quantifying behaviors. I was most aware of this tension in a session on tracking location. Many of the projects we discussed were using location as a proxy for behavior, and using this data to refine other bodily measurements: if I can see that I was in the park, I was likely walking the dog, which means I was burning this many calories per hour. I was interested in tracking location because where I go and what I see is an interesting aspect of my existence. I’m curious to see whether my paths through a city are limiting me from certain types of encounters, and potentially in building systems that help me encounter the unexpected.

The bias towards health is an understandable one – if you’re not sleeping well, you might be willing to engage in far more self examination than if you were trying to diversify your media diet, or ensure you visit unfamiliar neighborhoods in your city. And because healthcare is such a huge market in the US, it makes sense that entrepreneurs and large corporations would be paying attention to the space, especially the intersection of the health and gadget space (two profitable tastes that taste great together!) But when I quantify myself, it’s often in terms of tracking my productivity (words written per day), my influence (who retweeted me? who quoted my posts?) and my attention (what did I read? was it candy, or did it influence my work?)

I think there’s something to be learned by using some of the ideas and techniques being applied to quantified health questions and applying them to other aspects of the self. Whether or not I find myself gravitating to Quantified Self meetups in Boston, I’m hoping to meet other people interested in questions of how we might self-track and understand ourselves in ways beyond the performance of our bodies: our moods, our work, our media, our interests, our movements.

I’m grateful to RWJF for making it possible for me to attend the conference and to Gary Wolf and Kevin Kelly for being such great hosts and giving me a spot on the program. And I’m grateful to everyone exploring these ideas for opening such interesting and provocative questions.

May 29, 2011

Kevin Kelly on context for the quantified self

Filed under: quantified self 2011 — Ethan @ 8:19 pm

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Kevin Kelly, co-founder of the Quantified Self conference, offers some context for the conversations we’ve had for the past two days.

Quantified self is part of a larger trend in where we’re going. It’s part of listening to the technology and to what it wants to do, because technology is telling us where it’s going. The amount of information on this planet is increasing more quickly than anything we make or than any biological component. It grows at about 66% a year… the rate of Moore’s law.

The most rapid physical production expansions, of things like concrete or oil, are about 7% a year. (Obviously, there are exceptions like iPads…) Metadata – information about information – is growing even more quickly. 276 exabytes is simply a meaningless number – we know it’s really, really big, but it’s hard to even comprehend.
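Kelly’s 66%-a-year figure can be sanity-checked with a line of arithmetic: at that rate, the stock of information doubles roughly every 16 months, which is indeed in the neighborhood of Moore’s-law doubling times. Taking the 66% figure as given:

```python
from math import log

growth = 1.66  # 66% annual growth, Kelly's figure

doubling_years = log(2) / log(growth)   # ~1.37 years, about 16 months
tenfold_years = log(10) / log(growth)   # ~4.5 years to grow tenfold
```

By contrast, a 7% annual expansion (the concrete-and-oil rate) doubles only about every ten years.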

We’re in the middle of a third paradigm of metaphors and organizing principles for personal computers. We’ve moved from desktop and office metaphors, to page/link and web metaphors, to a new metaphor around streams, tags and the cloud. RSS feeds, Facebook walls, Netflix streams are our general drift at present. This accompanies a shift from the me to the we, and from pages and files to data.

What emerges in this new model are Lifestreams. That’s what we curate in the age of the quantified self. We head upstream, and we leave a wake of data behind us. Lifeloggers, who log everything they do, are pioneers in this space. Gordon Bell and others take these exercises to an extreme, and they’re sharing it, as part of the shift from me to we.

These lifestreams intersect with each other and are, in a way, creating a new media. If we organize computation around lifestreams, an intersection between our lifestreams is a communication, an event of some sort. The media we are in is these streams of data. Everything around us has a sliver of intelligence in it, and is generating bits of data. Each of those objects has a lifestream of data, from the hotel room to your shoes. This environment, with data streams and life streams, is the space where we’ll do the work of the quantified self.

What we’ll see very soon is spectacles and glasses that will let us see that data world. It might be a screen we hold up, but we’ll be able to see this overlay of the digital world embedded in the material world. There’s thinking that the digital life is disembodied bits… but it’s really about bits embedded in the physical world.

A car is a chip with wheels, a shoe is a chip with heels. This isn’t just an internet of things – it’s a database of things. All these words – semantic web, web 3.0 – are describing this data-rich environment filled with these things we make and their data streams.

A movie is made from millions of little pieces. Historically, we made them from bits of film. But now, we might make them from data. There are 151,000 shots of the Golden Gate Bridge on Flickr – there’s no reason to take one. We will reach a moment where it’s no longer useful to film a flyover shot of the bridge. Eventually, we can curate and reassemble a unique film from that database.

This is what we do when we write. We reassemble words from a database of 30,000 or so words, a dictionary. We rarely, if ever, make up words. Much as an Amazon.com page is assembled for us on the fly, we can have database-based cinema, images and writing.

Data is the new media, itself. It’s what we’re going to be swimming in. It’s what the economy is built on.

While I’m sure about that, I have some questions, Kelly tells us. We don’t really care about data, he reminds us – we care about experience. We want experiences from the data, and we may not want to make those experiences ourselves. There’s a tension between wanting that data acquisition to be active or passive. Is the act of checking in on Foursquare, or reporting your weight, a critical part of the process? Successful experiences will shift the attention, at different levels and different amounts. Teachers are good at doing this, helping us shift between the detailed view and the broad view.

Another question is around sharing. We’re still at the beginning of sharing. Everything that can be shared will be shared, because sharing things increases their value. Any data that can be shared will be shared – we know that under the right conditions, with the right permissions and right benefits, people will be willing to share information. But there is a thin line between the quantified self and “intimate surveillance”, a term Marc Smith has coined. There may be a slider between a degree of surveillance and personalization. I want my friends to treat me as an individual with particular needs, and that can only happen if I reveal my likes and dislikes, allowing that surveillance to happen.

The cost of privacy is generic relationships – if you’re unknown to everyone, it’s hard to customize to you. On the other side, you can have high transparency and high personalization, and people seem to be willing to push the slider far in that direction. The difference between quantified self and intimate surveillance is, basically, permission.

A second metaphor is thinking about living in a small town. The woman across the street might know everything about where we’re going and what we’re doing… but we know lots about her, too. Strangers appearing at the house might trigger a call to the cops. That’s a symmetrical and beneficial relationship. It’s uncomfortable once we hit an asymmetry in the data. We may not want less data – we may want more symmetry, more benefits from what the other party knows.

The third question KK offers is who owns this data? Who owns your friendships? There’s another party involved. Who owns your genes? 99.9% are shared by other humans. Who owns your location? The knowledge that you’re in a public space is hard to own. Your reputation or history? Your conversations? The real issue is that we’re moving away from ownership altogether to access. The benefits of access are eclipsing the benefits of ownership – consumers may eventually not own anything at all. Netflix means you can stop owning movies – if you have access to all movies anytime, why would you buy movies? This may be leaking from the virtual to the material world, particularly once we have personal fabrication. It may eventually play out into data, because access is often better than ownership.

The fourth uncertainty has to do with the nature of the quantified self: self knowledge through numbers. The self part is the interesting bit. Technology leads to the extended self. Animals extend themselves through their tools. McLuhan talked about the wheel as the extension of the foot. Computers are extensions of our brains. How far can this go? The quantified self is really the quantifiable self. We are making ourselves quantifiable, wearing, implanting and swallowing gadgets to extend our quantifiability. It’s not just what’s close to us – it’s the environment we move in.

In self-tracking, people change behaviors to become more quantifiable… which in turn helps us make changes. How far will it go?

We should be able to examine our body for toxins, say after we eat something or visit an area with toxic waste. You should be sequencing your genes every hour, 24/7, in real time. We’re going to become quantifiable to the point where we’re real time and changing ourselves in real time.

Those four things, Kelly tells us, he’s not sure about. But he is sure that where technology is going is changing how we know things. Technology modifies how we know – telescopes and microscopes have made a new method of knowing possible: science. If science is how we know, technology modifies how we know. There is no written history of the scientific method. KK has been cobbling together a history of high points: the controlled experiment in 1580 (Bacon); the necessity of repeatability, invented in 1665 (Boyle); falsifiable testability (Popper, 1920); randomized sample (Fisher, 1926); controlled placebo (1937); computer simulations (1946); double-blind experiment (1950). The scientific method is going to change more in the next 50 years than it has in the past 400 years.

Throwing away negative results is a crazy idea – we’ll be saving those. Triple-blind trials ensure that not only do the experimenter and subjects not know the answer, you aren’t even aware an experiment is going on. Quantified Self is right in the center of what’s going on: clinical trials of 1, real-time experiments instead of batch mode, personalized pharma, participatory medicine, wikified results that are never final. Exhaustive data, the Google way of doing science, is better than having a hypothesis. Collaborative research, where many people do research in tandem on a vast scale.

Your trivial-seeming self-tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process. Making drugs for one person won’t come purely from the quantified self, but it’s part of that process.

You might object to a decentralized, peer-to-peer, amateur, never-finished, unauthorized encyclopedia… but you’d be wrong. What happens when this happens to medicine?

Quantified Self Ignite talks, part two

Filed under: quantified self 2011 — Ethan @ 5:29 pm


Man, Ignite talks. They’re great, but they’re overwhelming to blog. Here’s another round of nine at Quantified Self – forgive the lack of links and, often, surnames – this has been an informal sort of event.

Alex Chafee from Moodlog tells us that we’re happier than we think. That’s one of his discoveries from logging his own mood and looking at the mood data of others. He’s working on methods to talk about mood that are more subtle than measuring from 1-5. One tool allows people to characterize moods with colors – it turns out that hunger is a dark rust color. We might consider moods in terms of different scales and axes. One might be our feelings of sociability – we might distinguish between being lonely and being solitary in terms of how much sociability we want. Over time, the system will allow him to build a crowd-sourced scale of mood words.

Bob Evans wants to build not one, but 700 applications to track people’s moods. His platform PACO is designed to allow easy creation of new tracking applications. PACO is an acronym – personal analytics companion – but it’s also the name of his dog. And dogs are excellent companions, sensitive to your moods and behaviors.

The design philosophy of PACO centers on building simple tools that can be connected together, à la UNIX. And there’s a basic philosophy of privacy – your data is yours, and you can share it if you’d like.

You can design a simple app, and it will ask for your attention in the tray on an Android phone. A participant can answer questions, and she can always access the data she’s entered. As someone who’s administering an experiment, you can see all the reported data.

In deploying the tool, he’s discovered that it’s important for surveys to be short and sweet. Three questions is about as much as anyone will answer. And you need to ask the right questions – a guy who put together a study that asked “What are you doing?” eight times a day was able to participate in his own experiment for only three days before he gave up!

Ian Eslick of the MIT Media Lab wants to learn from your QS experiments. There are hundreds of possible self-optimization experiments to try, listed on the internet. As people engage in QS experiments, there’s data to evaluate. But how do you generalize from people’s self-experiments?

He sees promise in combining information from multiple studies using techniques from recommendation engines. In these systems, thousands of people’s preferences combine to predict your possible preferences. He’s working to use techniques from research on recommendation engines to allow aggregation of data from sites like Cure Together.
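A minimal sketch of that idea, not Eslick’s actual system: treat each self-experimenter as a “user” and each intervention as an “item”, then predict how an untried intervention might work for you as a similarity-weighted average of other people’s reported results. All names and numbers below are hypothetical.

```python
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two users' outcome ratings,
    computed over the interventions both have tried."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[k] * b[k] for k in shared)
    na = sqrt(sum(a[k] ** 2 for k in shared))
    nb = sqrt(sum(b[k] ** 2 for k in shared))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)


def predict(target, others, intervention):
    """Similarity-weighted average of other users' outcomes for an
    intervention the target user hasn't tried; None if no data."""
    num = den = 0.0
    for other in others:
        if intervention in other:
            w = cosine(target, other)
            num += w * other[intervention]
            den += w
    return num / den if den else None
```

Real recommendation engines add normalization, confidence weighting and much larger samples, but the shape of the aggregation is the same.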

Mei Lin Fung tells us about a weight loss study she participated in some years back. She was one of 200 participants in a study that compared two weight loss methods – lose weight and then maintain, or learn to maintain, then lose weight. She was in the group that learned to maintain weight first, for eight weeks, then focused on losing weight for 20 weeks. Her total involvement with the study was two years long, including 6 months wearing a pedometer. And for her, the most challenging part was weekly meetings of 20 women with a facilitator.

Some useful techniques she learned included landmark walks, finding local landmarks that were 1000 steps away, allowing for short, 10 minute walks towards a goal of 3000 to 6000 steps per day. Learning to articulate and track goals was helpful, as was smelling and savoring food before eating it.

Weight loss has four goals, she tells us – improved nutrition, increased physical activity, increased quality of life and weight loss, only as the fourth component. She tells us that, despite what you might learn working on your own, social support is critical. And she urges us to consider that personal and professional science can work together to make discoveries.

Uwe Erich Heiss from Dynamic Clinical Systems talks about tracking pain. Doctors ask patients, “How have you been?” People are lousy at remembering how they’ve been. And doctors give patients 22 seconds, on average, to answer that question. So it’s worth finding ways to help people track how they’ve been and get that information into the doctor’s hands in a meaningful way.

He and his team have repurposed an unsuccessful technology, the iTag – originally used to “bookmark your radio”. By clicking, you record the level of discomfort you are feeling, using multiple clicks to signify more intense pain. They’ve now moved to a browser-based system that patients can use at home, drawing on a picture of a body to show a pain map. They’ve collected a million events from 100,000 patients in 20 clinical areas using the system.

While there’s much to be learned from the data collected, there’s also much to learn about the process. He tells us that experiments would work better if there were a baseline of data collected before someone is experiencing pain. We need better standards for self-tracking, and APIs and rules for sharing data.

Kyle Machulis of OpenYou brings the house down with his Quantified Coder Project. The motto – “Putting the U back in programming, even if there was no U, even in British English”.

In his day job, Kyle writes device drivers for game controllers, and now for devices like the Mindwave, Zeo, and Fitbit, allowing him to work with data locally, rather than putting it into the cloud. OpenYou is about opening all sorts of data, including from locked-down devices like pacemakers. He takes an “any means necessary” approach, which may mean violating licenses and warranties, and publishes code allowing you to open your quantitative devices as well.

All this driver development and device analysis requires lots of sitting. Like many programmers, Kyle gets into a zone, writing code for 6-14 hours at a stretch. He wondered, “What can we do with a programming environment that would help us understand what happens in those hours?”

We could use accelerometers on our chairs, measures of strikeforce on our keys to understand whether we were fidgety, or stressed when working on particular code. And we could correlate these measures to the code written. If bugs are associated with a particular pattern of physical states, maybe we could review our code by looking at other times we were frustrated or fidgety. Apply this to other people’s code and we can figure out what libraries are frustrating to use, and which other programmers enjoy. In the long run, perhaps this becomes a piece of metadata we share on services like Google Code.

Dennis Harscoat of Quan#er introduces us to a beta iPhone app that asks the question, “How much?”

You can use Quan#er to post information like “#coffee: 4cups”. The idea is to ask people to post data that’s specific, open and enables discovery. The app asks what, how much, and what unit, and allows an optional picture. Outputs include graphs of whatever you’re tracking over time. If you choose to publish your data, others can cheer you on and compare their data to yours. Alternatively, you can track privately, but their business model is “free for all, pay for privacy”. (He reminds us that we can trust him, as he and the company are Swiss. Actually, he’s a French Belgian living in Switzerland, whatever that means about trust.) At the very least, we can get our data from his system via emailed text file at any time.
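The post format suggests a very small grammar: a hashtag, a quantity, a unit. Here is a guess at a parser for it; the “#coffee: 4cups” example is from the talk, but the exact syntax (and this regex) is my assumption, not the app’s documented format.

```python
import re

# Guessed from the "#coffee: 4cups" example: a hashtag, a colon,
# a numeric amount, and an optional unit glued to the number.
PATTERN = re.compile(r"#(?P<what>\w+):\s*(?P<amount>\d+(?:\.\d+)?)(?P<unit>\w*)")


def parse(post):
    """Extract (what, amount, unit) triples from a post; [] if none."""
    return [
        (m.group("what"), float(m.group("amount")), m.group("unit"))
        for m in PATTERN.finditer(post)
    ]
```

The appeal of a format this rigid is exactly the “specific, open, enables discovery” goal: once everyone’s posts parse into (what, amount, unit), graphing and comparing become trivial.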

Ernesto Ramirez wants to make you move. He tells us that the difference between UK bus drivers, who are more prone to heart attacks, than UK ticket takers, is that the latter move and the former don’t. We get guidelines on doing 150 minutes of vigorous physical exercise a week, but the truth is, we need to move far more than 30 minutes a day. “Sitting will kill you, even if you’re physically active,” as James Levine at the Mayo Clinic has discovered.

You could walk while you work, using Steelcase’s $4400 walking treadmill desk. Or you could build your own. Ernesto bought a $100 treadmill from Craigslist and a $149 IKEA desk and built his own workstation. In 2 years, he’s logged 600 miles on it, which represents 61,000 calories or 17 pounds of weight. He shows us Fitbit data correlated with RescueTime that demonstrates he’s done serious work while on the device.
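Ernesto’s numbers check out against two common rules of thumb, roughly 100 calories per mile walked and 3,500 calories per pound of fat. Both conversion factors are my assumptions here, not figures from his talk:

```python
miles = 600
cal_per_mile = 100    # rough rule of thumb for walking
cal_per_pound = 3500  # conventional estimate for a pound of body fat

calories = miles * cal_per_mile    # 60,000, close to the 61,000 cited
pounds = calories / cal_per_pound  # about 17 pounds
```

So the 600 miles, 61,000 calories and 17 pounds are mutually consistent to within rounding.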

His advice. “Just fucking do it. No, seriously, just fucking do it.” It’s not about fitness – it’s about movement, which will make you a better person.

Mark Carranza describes himself as a former poet, “because it’s much more poetic than being a poet.” And he’s a passionate explorer of memory. His system, memex.mx, has helped him record everything he’s thought since 1984. In that time, he’s recorded 1,230,348 thoughts and 7,506,340 links between thoughts. These thoughts are simple lines, a few words at a time. He writes them down on paper and has input them into a DOS computer program he hasn’t altered since 1992. He averages 232.13 new thoughts per day and 1506 new connections a day.

This collection of links and crosslinks is his hypomnema, a giant chapbook of the “accumulated treasure” of thought. He draws analogies to Vannevar Bush’s Memex system, proposed in 1945, and notes that Bush saw the human mind as unparalleled in making associations, but needing machine augmentation to improve memory. The goal of his system is building a “find engine” rather than a search engine, a form of retrieval and remembering from prompts.

We start on his DOS tool from the word “chicken”. One possible association is “epistemological chicken”, which takes us to a book by Harry M. Collins called Artificial Knowing and then to thoughts on feminist epistemology. As we go through the set of links, it’s a way for Mark to remember what he got from that experience of reading the book. Books often have hundreds or thousands of associations within the system, annotated as he’s come back to the text.
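The “find engine” idea — walk outward from a prompt through recorded associations — reduces to a very small graph structure. Here’s a toy sketch in the spirit of memex.mx, using the chicken example above; the real system’s data model isn’t public, so this is purely illustrative:

```python
from collections import defaultdict

class FindEngine:
    """A toy association store: thoughts are short lines of text,
    links are undirected associations between them. A sketch of
    the idea, not Carranza's actual DOS program."""

    def __init__(self):
        self.links = defaultdict(set)

    def associate(self, a, b):
        """Record a two-way association between two thoughts."""
        self.links[a].add(b)
        self.links[b].add(a)

    def find(self, prompt):
        """Return the thoughts directly associated with a prompt --
        remembering from a cue, rather than searching a corpus."""
        return sorted(self.links[prompt])

m = FindEngine()
m.associate("chicken", "epistemological chicken")
m.associate("epistemological chicken", "Artificial Knowing")
m.associate("Artificial Knowing", "feminist epistemology")
print(m.find("epistemological chicken"))
# ['Artificial Knowing', 'chicken']
```

At Carranza’s scale — over a million thoughts and seven million links — the structure is the same; only the traversal history gets richer.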

People ask whether he has any time to do anything else. He calculates that he spends no more than 45 minutes a day transferring thoughts from paper to computer, and that in his lifetime, he’s watched far more television than he’s invested time in creating an augmented memory.

A next step might be a system that lets others store their thoughts this way, and an iPhone app that could allow you to compare your web of associations with others’. (And I didn’t even know they had iPhones in 1992…)

Short talks at Quantified Self

Filed under: quantified self 2011 — Ethan @ 4:32 pm

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Ted Vicky is trying to answer the question, “Can Twitter make you fit?” He’s had a long path to becoming a PhD researcher in Galway, Ireland. For eleven years, he ran the online fitness center at the White House, under Presidents George H.W. Bush, Clinton and George W. Bush. With his background in exercise physiology, he’s taken on questions of “connected health”, asking how social media might be helping people combat obesity and other chronic health issues. He points out that in only one country in the world – Japan – is obesity decreasing.

He’s collecting data from mobile fitness applications – Runkeeper, MyFitnessPal, DailyMile, Nike’s system, Endomondo – by looking at their social media presence. While these tools have a social media component, it’s easiest to collect data when people share their information on Twitter. He’s gotten over a million tweets, and is starting to understand patterns, both by measuring what tools people are using and by sending data into analytics engines like Klout. Perhaps unsurprisingly, the people most using these technologies thus far are categorized by Klout as “explorers”.

And Vicky is starting to put together a model to understand what people talk about when they tweet about fitness – reports on activity, “blarney”, and conversations between participants. There’s a closely analyzed set of 36 days of data representing 234k tweets and 57k unique users. He hopes we’re going to learn why people share their exercise information, what reinforcement and motivation we get from each other’s behavior and how this information could turn fitness into a broader social trend.

Vipul Gupta is building sensors at Sun/Oracle research. The Sun SPOT is an Arduino-like sensor platform, built around a more powerful 32-bit ARM CPU that can run Java natively rather than a clunky variant of C. There are 20,000 devices out there, collecting data using a variety of sensors, including tilt sensors and radios. They’re being used by researchers, hobbyists and the educational market.

Gupta notes that mobile phones have lots of sensors, including GPS. He wrote an app for an Android phone that tracked his movements as he drove his car, posting his location every 30 seconds to sensor.network. A simple output is a Google map mashup of where he’s been. This is potentially sensitive data, so Gupta has written tools that can blur the data to allow it to be published, smearing position so it’s less personally revealing. You could also obscure data in terms of time series – perhaps you don’t want anyone to know you’re out of town while you’re traveling. You might be more comfortable publishing that data after the fact. You could also release averaged data – your electricity consumption can reveal when you wake and sleep, but an averaged figure might help your building manager understand load needs at different times of day.
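Two of the privacy transforms Gupta describes — spatial blurring and delayed publication — are simple to sketch. This is an illustration of the ideas, not his actual code; the grid size and delay are arbitrary assumptions:

```python
def blur_position(lat, lon, grid_deg=0.01):
    """Snap a GPS fix to a coarse grid (0.01 degrees is roughly a
    kilometer of latitude), so a published trace reveals only a
    neighborhood, not an address."""
    snap = lambda x: round(x / grid_deg) * grid_deg
    return snap(lat), snap(lon)

def publishable(points, now, delay_hours=24):
    """Release only fixes older than `delay_hours`, so a live feed
    never shows that you're currently away from home.
    `points` is a list of (unix_timestamp, fix) pairs."""
    return [(t, p) for t, p in points if now - t >= delay_hours * 3600]

print(blur_position(37.3861, -122.0839))   # roughly (37.39, -122.08)
```

Averaging works the same way: publish the mean of a window of readings rather than the readings themselves, trading resolution for privacy.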

Gupta is now experimenting with adding additional sensors to his devices – air quality, ambient noise, radiological sensing. In the long run, he suspects we’ll not be connected to dedicated sensor devices, but to mobile phones, as they subsume the roles of other computing platforms in our lives.

Dave Marvit with Fujitsu believes that we’re moving from a world where companies create a vertically integrated ecosystem around sensing, to one where we have sensing, analysis and service as separate components offered by separate actors. His experiments with the Sprout platform are designed to show what this new ecosystem might look like.

The platform links an ARM processor running Linux with an Apache webserver and five mini-USB ports for sensors. He tells us that it will eventually fold into a mobile phone, just like everything else. In the meantime, it’s an excellent tool for integrating multiple sensor streams. To track stress, he’s analyzing data from a fingertip pulse monitor and an accelerometer. The former is a pretty noisy signal – if you move, the data gets messy pretty quickly. But accelerometer data can tell you when you’re moving and let you throw out that noisy data. He’s used this system to watch himself playing speed chess and is able to correlate moments of stress to particularly stressful moments within the game.
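The accelerometer-gating trick is straightforward: keep pulse readings only when movement is low. A minimal sketch of that idea — the threshold is an illustrative assumption, and this is not Fujitsu’s implementation:

```python
def clean_pulse(samples, motion_threshold=1.2):
    """Discard pulse readings taken during movement.

    `samples` is a list of (pulse_bpm, accel_magnitude_g) pairs; any
    reading recorded while the accelerometer saw more than
    `motion_threshold` g is dropped as motion artifact.
    """
    return [bpm for bpm, g in samples if g <= motion_threshold]

readings = [(72, 1.0), (140, 2.5), (75, 1.05), (133, 3.1)]
print(clean_pulse(readings))
# [72, 75] -- the spikes recorded during movement are dropped
```

The cost is gaps in the record during activity, which is why pairing the noisy sensor with the cheap, reliable one is the interesting part.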

This ability to integrate multiple sensor streams could lead to better measures of ambulatory blood pressure or sleep apnea monitoring. The goal in the long run is to build a platform that supports many modular metasensors and their synchronization and interaction.

Quantified Self – mood tracking

Filed under: quantified self 2011 — Ethan @ 3:39 pm

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Margie Morris leads a session on mood tracking during the final breakout session at QS2011. In the room are developers of five different mood tracking tools, and many of the people in the room track moods either on an ongoing basis or occasionally. She starts us with an exercise in understanding our moods, asking us to write down a project that was important to us, and to describe our emotions about the project now, when we began the project, when it started to get difficult and when we were near completion.

The vast majority of people in the session used words to articulate their feelings, often a pair of words. Some drew pictures. No one chose numbers, which is somewhat interesting for a quantified self conference. Margie suggested that we think about our emotional states in terms of a two dimensional matrix, a mood map, with axes of arousal (intensity) and valence (positivity versus negativity). We talk about the difficulty of representing some emotions with one or more numbers. Is shame a form of sadness? What if it’s the sort of slight, smug shame that comes from doing what you wanted to do instead of what you were supposed to do?

It’s difficult to reduce mood to a number. But it’s useful to do so, because we’re very bad at remembering our moods. We have recall biases that help us remember the most recent and most intense moods – a general sense of happiness and well-being in the past might be masked by a brief, sharp period of unhappiness as we’re asked to recall how we felt. At the same time, there’s a concern expressed by some in the room that technology and culture can bias us against negative emotion. We might be pushed to underreport sadness from a sense that self-help books want us to acknowledge positive emotions and let go of the negative ones, or by an application that subtly encourages us to consider and soften our emotions before reporting them. Morris notes that there’s a bias against extremes – ask people to rate their moods from 1-10 and you’ll see few reports at the extreme ends of the scale. Give users two dimensions to report in and you’ll see that bias soften or disappear.
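As a data model, Morris’s mood map just means recording a (valence, arousal) pair instead of a single 1–10 score. A sketch of what such a log might look like — this is an illustration of the idea, not any particular app’s schema:

```python
from dataclasses import dataclass

@dataclass
class MoodEntry:
    """One report on a two-dimensional mood map.

    valence: -1.0 (negative) .. +1.0 (positive)
    arousal:  0.0 (calm)     ..  1.0 (intense)
    """
    valence: float
    arousal: float
    note: str = ""

log = [
    MoodEntry(valence=0.6, arousal=0.8, note="project launch"),
    MoodEntry(valence=-0.4, arousal=0.2, note="slow afternoon"),
    # "smug shame": mildly negative, low-intensity -- a point a
    # single 1-10 happiness scale has no way to express
    MoodEntry(valence=-0.2, arousal=0.3, note="smug shame"),
]
```

The extra dimension is precisely what lets reports escape the compressed middle of a one-number scale.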

The application designers briefly discuss whether their tools show you the past emotions you’ve recorded before allowing your most recent input – is it better to allow someone to report their emotion without prompting, or do we benefit from showing people the patterns they’re documenting as they’re reporting?

Jon Cousins told his story about Moodscope, a personal project he started to work on bipolar disorder. Cousins started experiencing an intense depression and went to seek medical help. When the British National Health Service wasn’t able to schedule an appointment with an appropriate psychiatrist for some weeks, he found himself feeling very desperate, and looked for a constructive way to handle his intense emotions. Tracking his emotions, using a modified version of the PANAS (positive and negative affect schedule) to put a single number on his mood, helped lessen his dips in mood and stabilize his emotions.
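Collapsing a PANAS-style report into one daily number can be done in several ways; standard PANAS rates 10 positive and 10 negative adjectives on a 1–5 scale. Here’s one plausible reduction to a 0–100 mood score — an illustrative assumption, not necessarily Moodscope’s actual formula:

```python
def panas_single_score(positive_ratings, negative_ratings):
    """Collapse PANAS-style ratings into a single 0-100 mood score.

    Each list holds ten 1-5 ratings. Positive affect adds to the
    score, negative affect subtracts; the result is rescaled so 50
    is neutral. One plausible reduction, not Moodscope's formula.
    """
    pos = sum(positive_ratings)   # 10..50
    neg = sum(negative_ratings)   # 10..50
    raw = pos - neg               # -40..40
    return round((raw + 40) / 80 * 100)

print(panas_single_score([4] * 10, [1] * 10))  # 88: mostly positive affect
print(panas_single_score([3] * 10, [3] * 10))  # 50: neutral
```

Whatever the exact formula, the point of the single number is trendability: it’s the line on Cousins’s scroll, not the individual adjectives, that made the pattern visible.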

Cousins unrolled a vast paper scroll that tracked the last few years of his emotional life. The ribbon of paper stretches the width of the room, and we see – far to the left – a red line oscillating sharply between a deep blue at the bottom of a graph and a sunny yellow at the top. About a year into the graph, the line rises into positive territory and stays there, with very little variation. Cousins explains that this first, long period of positive stability came from taking a simple action: sharing his mood with one other person.

A friend knew that Cousins was tracking his mood and asked him to share his scores on a daily basis. He now shares scores with five close friends. The connection between sharing scores and higher reported scores prompts a number of participants to wonder if Cousins began reporting higher scores for fear of worrying his friends. He explains that he doesn’t believe this is the case. Sometimes he precedes a low score report to his five friends with a note saying, “This is going to look bad, but I’m okay,” knowing they’d otherwise be likely to try to intervene. But he agrees that there’s an effect that comes from reflection: “My friend sends a message that’s as simple as ‘?’ in a response to a low score I’ve posted, and I’m compelled to write him a note of explanation. The act of considering and reporting the emotion can be enough to help find a way out of the trap of negative emotion.”

Someone suggests that the sheer fact that someone cares enough for Cousins to ask to be privy to his mood might be enough to raise his scores… and the daily reminder that that person cares is important as well. Others point to the value of journaling – considering one’s mood carefully enough to put a score on it, and to ready a possible justification, may allow for a “cognitive reappraisal” of a situation that helps improve mood. Sharing mood information does seem to be powerful, though – one participant talks about how she began reciprocally sharing mood information with a friend, someone she wasn’t especially close to. The two have grown much closer in the process – sharing this information and the reasons behind it has led the two to form a deep bond.

New sensors and the Quantified Self

Filed under: quantified self 2011 — Ethan @ 1:06 pm

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Day two of the Quantified Self conference begins with a talk by Eric Boyd on New Sensors and the future of Self Tracking. Eric describes himself as “mostly a hacker”, someone who explores new technology and the capabilities they can provide us with.

He’s produced two very cool projects:

– Heart Spark, a pendant that flashes when your heart beats. This is less a quantified self project than a social communication one – what do we learn when we watch someone else’s heart rate increase as we’re talking?
– North Paw, a wearable compass that gives you a sense of north by tingling on your ankle. This helps give you a perpetual sense of direction. Over time, he tells us, you lose the sense of vibration – you simply have a new sense.

Boyd’s talk is an overview of different sensor systems and what they might mean for personal tracking. He notes that not only have sensors gotten smaller and cheaper, but wireless and battery technologies have improved radically. As a result, it’s possible for companies like Green Goose to provide sensors in stickers. They’ve got accelerometers, low-power wifi and high density LiPo batteries that last three years. This means you can put a sticker on a pill bottle and tell whether you’ve taken your medicine, or put sensors on an asthma inhaler to measure where outbreaks are taking place, as Asthmapolis does to map areas in a community where asthma attacks are common. As we think through the potential of these new sensors, there are lots of questions about open standards, and ensuring that this space remains interoperable.

But that’s not the focus of this talk – instead, it’s a tour of new sensors and capabilities.

Boyd begins with EMG – electromyography – the use of small skin-based electrodes to detect muscle activation. When a muscle contracts, it creates a small electric field. Neurosky, a brainwave monitoring headset, uses this sensor type. Amy Drill gave a talk at Quantified Self New York that showed off a pair of shorts with electrodes in it, used to track and optimize the performance of Olympic-calibre athletes. While these systems are currently expensive, they could easily come down in price, and would allow serious athletes and coaches to study the movement of individual muscle groups during activity.

Galvanic skin response sensors detect skin resistance, how much electricity flows across a gap on skin. Basically, this measures sweat levels. On a gross level, this is a way of sensing physical exertion. At lower levels, it can detect slight nervousness, agitation, and excitement. Paired with accelerometers and heart monitors, it might be possible to match mood information to physical activity.

Boyd is interested in glucometers, in part because he’s trying to debug a personal problem with low energy levels in the afternoon. Glucometers are pretty miserable at present, he notes – you spend $1 per test for a sensor that requires a drop of blood. What we hope for is a continuous monitor, but even bloodstream monitors at present need to be replaced every couple of days. The hope is that microneedles – patches of tiny needles with the texture of velcro – might be a solution for delivering vaccines through a skin patch, and eventually for continuous blood or fluid monitoring. Exciting, but these technologies are still in the lab.

Cameras are getting smaller and cheaper, and it’s worth asking whether a picture, traditionally worth a thousand words, could also be worth a thousand data points. Looxcie is a wearable camera that continually records. Press a button and the camera will store the previous 30 seconds. (I’ve wanted this functionality for years, and am thrilled someone’s actually built it.) Boyd talks about his dream camera-based tool, one that looks at faces of people who should be familiar to you and prompts you with their identity, perhaps via earbud. That’s a way off, but there are tools like Foodsnap that try to estimate your caloric load by allowing you to upload photos of your food. It’s hard to know how accurate these systems are – some are using Mechanical Turk to help with estimation. But even if you don’t look at the photos of food you’re eating, the act of photographing has a tendency to shape your diet.

Microphones are a sensor we tend to forget about. They’re cheap – often $2 – and can be used in interesting ways. One hacker put an air pillow in his bed with a microphone in it, and used the sound of airflow to measure his sleep movement and breathing. He got a huge amount of interesting data about sleep cycle from a sensor that was incredibly cheap. Boyd wonders what we might do with new sensors that detect ultrasonics, frequencies that humans can’t hear, but are used by bats and other animals. And he points to the Lena baby monitor, a $700 tool that listens to your child’s attempts at speech and tells you where your child is in the cycle of language development.

We’re seeing more sensors in our physical environment – the quantified world. Electricity monitors can actually tell us a lot about our personal behavior. Midnight bathroom breaks are visible as power fluctuations, as are the beginning and end of your time in bed. Automobiles are filled with an array of sensors, and using products like the Carchip Pro, which downloads automotive sensor data via the OBD-II port, you can access everything your car knows about itself, like tire pressure, speed, and engine RPM. Perhaps you could use this information as a way of detecting stress, if fast acceleration is a proxy for that behavior.
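The acceleration-as-stress-proxy idea is easy to sketch from the kind of timestamped speed log an OBD-II reader produces. The threshold here is an arbitrary illustrative value, and this isn’t code from any actual product:

```python
def hard_accelerations(samples, threshold_mph_per_s=7.0):
    """Flag aggressive acceleration events in a speed log.

    `samples` is a list of (seconds, mph) pairs; any interval whose
    rate of speed gain exceeds the threshold is flagged, as a crude
    proxy for stressed or aggressive driving.
    """
    events = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rate = (v1 - v0) / (t1 - t0)
        if rate > threshold_mph_per_s:
            events.append((t1, rate))
    return events

log = [(0, 0), (2, 5), (4, 30), (6, 34)]
print(hard_accelerations(log))
# [(4, 12.5)] -- 5 to 30 mph in two seconds gets flagged
```

Correlating those flagged moments with calendar entries or heart-rate data is where the quantified-self angle would come in.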

We’re seeing exciting challenges put on the table, like the Tricorder X Prize, recently launched with Qualcomm. It’s a $10m prize for a handheld device with multiple diagnostic capabilities. Boyd tells us that it’s really unlikely to be a separate, handheld unit like a Tricorder – it’s likely to be something strapped to the body. But we’re very early on in the idea, and it’s unclear what strategies might win out. The exciting nature of this moment in time is that there are lots of opportunities, both to play with sensors and to push the DIY aspect of the quantified self movement.

The business of Quantified Self

Filed under: quantified self 2011 — Ethan @ 1:29 am

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

The closing session for the first day at the Quantified Self conference offers some glimpses of the future of the movement. Paul Tarini of the Robert Wood Johnson Foundation has been instrumental in sponsoring some of QS’s early work (and the presence of several folks, myself included, at the conference – thanks, Paul!) He introduces a new feature on Quantified Self’s website, a directory of products and services called The Complete QS Guide to Self Tracking, which features 400 tools at present.

One of the sexiest new tools is the Basis pulse-tracking wristwatch, presented by Nadeem Kassam. He left the entertainment industry for health care and has been fascinated by the challenge of producing devices that are easy for many people to wear and enjoy, rather than existing for the obsessive folks who come to conferences like this one. The Basis device uses an optical blood flow sensor to measure bloodflow speed, which it can translate into a measure of heart rate. By combining this information with data from an accelerometer, galvanic response and temperature sensors, the “watch” can provide a very broad range of data on caloric burn, activity and sleep status.

The device isn’t the thing, Kassam tells us – it’s the way we present data to users. We need data that’s deep enough to be insightful, but simple and engaging. We need to be able to share it with other companies and systems. And we need to make it easy for users to wear, sync, learn and share, or we won’t find a way to extend personal tracking beyond the early adopter market.

The rest of the closing session focuses on precisely this challenge: turning personal tracking into a consumer product. Ben Rubin of Zeo (a sleep-tracking sensor), Jason Jacobs of FitnessKeeper, and Brian Krejcarek of GreenGoose are wrestling with similar challenges. They’re building products based on their personal passions, but trying to sell to a broad audience. In the process, they’re learning a great deal about what might work – and not work – in bringing QS ideas to a wider audience.

Ben tells us that, while there are a few truly dedicated users who’ve used Zeo (a sensor that tracks sleep behavior) virtually every night since it was released, most users buy the product, use it intently for 3-4 weeks, and then fall off. What’s interesting is that they don’t stop using it entirely – the usage goes down, but six months after purchase, 70% were using it at least once a week. The hope is that by making a product convenient and easy to use, users will pick it up any time they have a sleep issue.

Jason, whose company makes software to track physical activity, has discovered that users who share their data on Facebook are more likely to stay engaged. So are users who integrate data from other tracking devices with data from their running or biking. And users who stop using the tools can often be lured back in by email prompts that ask whether a user is “taking a break” and suggest that they get back to tracking by setting and reaching a goal.

Brian, whose company manufactures inexpensive and small sensors that can record movement on objects like toothbrushes, pill bottles or water pipes, urges the audience to consider passive sensors rather than tools that require active data collection. The problem with a sensor you choose to use, he tells us, is that you end up with zeros for the days you didn’t participate. By making sensors pervasive, you can choose to ignore the data they transmit, but it’s there if and when you want it.

Ben – whose product is a sensor you need to choose to wear – allows that he’s a fan of ubiquitous, passive sensors. In the long term, we’ll have sensors in our beds, cars and phones… but it’s going to take a while. In the meantime, it makes sense to target sensors at problems people are having, like the need to get better sleep. And until there’s a richer ecosystem around these tools, a manufacturer may need to be highly vertically integrated. Zeo produces the physical sensor, the tools for visualizing sleep data, and the community that allows you to compare your sleep to that of others. It’s possible that, going forward, we’ll have a whole ecosystem of providers, but in the meantime, it makes sense to develop everything a user needs.

Gary Wolf, who’s moderating the session, asks all participants “What’s missing in the ecosystem?”

Ben suggests that stress is a market where there aren’t many good tools to analyze and understand a problem many individuals suffer from. In a more general sense, mass consumer awareness is still missing from the market as a whole. Brian suggests that personal tracking isn’t much fun – what’s missing is games that bring happiness to the process. Jason doesn’t believe anything in particular is missing from a QS ecosystem – instead, we just need more time to collect data and develop our tools.

Quantified Self – Location Tracking

Filed under: quantified self 2011 — Ethan @ 12:58 am

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

Robin Barooah leads a session on location tracking at Quantified Self. He’s been developing a tool called Location Swap that is similar in functionality to Google Latitude – it allows you to track your location and share it with others. (He explains that he was working on the project well before Google launched the service.) He describes the ability to know where your partner or your friends are at all times as “an augmented sense”, an awareness that wasn’t possible before technological changes. The change, he offers, might be like the changes in behavior that come from having mobile phones and not being tied to landline phones.

The conversation quickly turns into a discussion of whether people really want to share their location, 24/7, with partners or others. Many of the people in the room would be willing to do so, and a few strongly object to the idea. One participant notes, “I used to live in Amsterdam, and culturally, no one closes their curtains. But there’s a strong cultural norm – you don’t look in the window.” Some of those who object to the idea of sharing location information aren’t worried about their movements, but don’t want their partner or friends feeling surveilled. Barooah admits that this information can be quite personal – he designed the tool initially to track and share his own behaviors, and ended up concluding that he wasn’t comfortable sharing that data.

Could we use location information to access sensor networks available in the physical world? What could we learn from tapping what’s being recorded around us? Josh Kaufmann recommends Asthmapolis, a system that maps the places in which asthma attacks are triggered by attaching a GPS tag to inhalers and sending location information to a server. What results is a map of areas in a city with particularly high levels of lung irritants, which might trigger protests against pollution.

There’s a great deal of concern about tracking systems that can’t be turned off. Mary Hodder talks about systems she’s built with telephone companies – it’s critical to have the ability to turn a system off. Tracking trucks for a trucking company is a legitimate activity while drivers are on duty, but the system needs to shut off when they’re off duty. On any of these systems, we need the ability to mediate who the information is shared with.

We talk about what tools people could use to track their data. Several people point out that using GPS continually on your phone tends to run down your batteries – Google Latitude may have done some smart thinking about this, and might be an option for some applications, even if the tool is designed primarily for sharing your location information with others. Mary points to some of the limitations of getting accurate tracking data – GPS is quite accurate, but not always available. On non-smartphones, AGPS – triangulation between towers – gets accuracy to a couple hundred feet. She notes that you can either collect your own data via applications or buy the data from carriers, which Loopt is evidently doing.

I asked people to talk about what they’re hoping to get out of tracking their location. Robin noted that location can be a proxy for behavior – if I’m in the park, I’m probably walking the dog. In the spirit of collecting as much data as possible, it seems silly not to collect this data, since it’s a form of passive sensing that’s perpetually available.

One participant is working on an app – tentatively named “Tripography” – that extrapolates what means of transportation you’re using based on your speed and calculates either calories burned (if you’re walking or biking) or CO2 emitted. The goal is to celebrate people for using low-carbon transport and to suggest alternatives. Another participant studies face to face social interaction and is interested in tracking location as a proxy for interaction. A third hopes to track location and correlate it to financial information – how much money do I spend per day when I’m in San Francisco, versus in Davis, CA? Josh suggests that we might learn from Mark Shepard, whose Sentient City Survival Kit includes an iPhone app – Serendipitor – that will allow you to calculate a circuitous route between two locations in the hopes of having an unexpected encounter with something surprising and wonderful.
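The Tripography idea implies a simple speed-based heuristic for guessing transport mode, then applying a per-mile figure for calories or CO2. Here’s a sketch of that logic — the cutoffs and per-mile figures are illustrative assumptions, not the app’s actual numbers:

```python
def classify_trip(avg_speed_mph):
    """Guess transport mode from average speed.
    Cutoffs are rough, illustrative values."""
    if avg_speed_mph < 4:
        return "walking"
    if avg_speed_mph < 16:
        return "cycling"
    return "driving"

def trip_footprint(avg_speed_mph, miles):
    """Report calories burned (human-powered modes) or CO2 emitted
    (driving). Per-mile figures are ballpark assumptions."""
    mode = classify_trip(avg_speed_mph)
    if mode == "walking":
        return mode, f"{miles * 100:.0f} calories burned"   # ~100 cal/mile
    if mode == "cycling":
        return mode, f"{miles * 40:.0f} calories burned"    # ~40 cal/mile
    return mode, f"{miles * 0.9:.1f} lb CO2 emitted"        # ~0.9 lb/mile

print(trip_footprint(3, 2))    # ('walking', '200 calories burned')
print(trip_footprint(30, 10))  # ('driving', '9.0 lb CO2 emitted')
```

The celebratory framing the participant describes would sit on top of output like this: low-carbon trips earn praise, driving earns a suggested alternative.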

I was a little surprised that more individuals weren’t engaged with tracking their own location data. As at least one person in the session put it, “Well, I know where I am.” Of course, the whole premise of the quantified self movement is that you often don’t know where you are until you have the data.

I’m fascinated by location tracking, because I suspect many of us inhabit much narrower physical spaces than we suspect or believe. (See my CHI talk on serendipity for more on this.) This session left me with the sense that the QS movement, in large part, is a personal health movement, at this point, rather than a personal data movement.

May 28, 2011

Targeted marketing at Quantified Self?

Filed under: quantified self 2011 — Ethan @ 6:15 pm

People at this conference are tracking aspects of their lives in far more detail than I currently am. I’m feeling very much like a late adopter and thinking I may need to run out and buy a Fitbit, a Zeo and start tracking my moods in terms of smiling panda bears.

But there are other forms of tracking that I’m just not ready to sign up for yet.

This ad, or a variant of it, appears over the urinals at the venue for the quantified self conference. I looked up BodyKey, of course, and discovered that they’re manufacturing a tool for monitoring your health and education through your urine.

Perhaps there’s no such thing as too much data, but I’m pretty sure I’m not yet the target for this particular application…

Ignite Talks at Quantified Self

Filed under: quantified self 2011 — Ethan @ 5:25 pm

This post is part of my liveblogged account of a conference. Two disclaimers: Liveblogging is hard, and I often get things wrong. If I did, please feel free to correct me via email or in the comments and I’ll make changes when appropriate. Second, the opinions expressed in these sorts of posts are those of the speakers, rather than mine.

There’s a quick lunch break at Quantified Self followed by a series of five-minute Ignite talks. I find these virtually impossible to blog because of the speed, but here’s an attempt.

Rick Smolan wants us to know that big data is not big brother. The producer of projects like A Day in the Life of America, Smolan is focusing his next project on showing us the human side of big data and what we can learn from it. He mentions Ushahidi as an example of data being used to save lives. As we collect data from thousands of individuals, we can map the need for water or healthcare in parts of Haiti.

For a project that looks at a day in the life of big data, he’s going to set up 10 million human sensors, many of whom will download smartphone apps that track GPS location, steps taken and mental state, and ask questions each hour to help people map their experiences and environment. This will be complemented by inputs from 1000 journalists in 50 countries. The goal is to understand how reflecting on data we collect can change our behavior, much like we change our driving by monitoring through the dashboard of a Prius. “These are reflections in a digital mirror – we can use big data to take the pulse of the planet.”

Misha David Chellam from Scanadu tells us that he didn’t know what a tricorder was until his 50-something business partner mentioned the Star Trek device as a metaphor for what they could build together. Misha is a geek twentysomething who likes cool devices, and his partner doesn’t want to die, so together, they’re trying to build the medical tricorder.

There’s lots of health data available these days: from self-tracking devices like the Fitbit and the Zeo, from “macro-scanning” tools like full body scans and genomic analysis services like 23andMe, and from digital, “nomadic” health record systems like Practice Fusion, Google Health and Microsoft’s HealthVault. We could add to this “sequencing human lifestyles”, data that helps us understand behaviors on a population level.

The next step is interpreting this data. For starters, we can try to do interpretation using doctors in a Mechanical Turk fashion, perhaps using doctors who are solely in private practice and carrying a lower patient load. In the long run, we might do AI – and Watson’s victory on Jeopardy is an inspiration.

The tricorder is the metaphor because it does so many things… and the contemporary tricorder is the mobile phone. It’s got a vast number of sensors that are helpful to us, and we can add to it with interfaces like microfluidics readers. The vision behind Scanadu is developing a strategy that can win an XPrize, focused on building a tricorder that can evaluate a patient better than a board of physicians. Scanadu is taking first steps to this, collecting blood from alpha users and using Wolfram Alpha to contextualize this data. It’s a first step, but they’ll have lots more to work with if they can partner with tracking device developers.

Alan Gale of Bio-Logic Health is interested in life extension through food and supplement tracking. In the past, Gale tells us, our methods for life extension were pretty weak: mummification, drinking blood, freezing Ted Williams’s head. Current approaches, advocated by people like Aubrey de Grey and Ray Kurzweil, focus either on future technologies or on new developments in regenerative medicine and hormone replacement.

At present, the best preventative techniques we know about are caloric restriction and supplements, a regimen that requires massive lifestyle changes. You have to take hundreds of supplements, some of which can be toxic in high doses. Managing this process requires lots of careful adherence and tracking.

Gale views the human body from an engineering point of view. It’s composed of subsystems, each with inputs and outputs. Systems are regulated both via automatic feedback mechanisms and through conscious intervention: when it gets cold, we can shiver (activating endocrine and muscular systems) or we can put on a sweater. The same is true with food. With mobile tools, we can track inputs like what we eat, and outputs like our blood tests. Over time we can build a model of these feedback mechanisms, guiding people toward correcting their deficiencies and meeting their goals.

Sarah Gray tells a story about tracking her mood that starts, as most good stories do, with a boy. The boy lived in a different city, and she found herself unable to decide whether she should move to be with him, continue a long-distance relationship or move on. So she built a website that allowed her to track her feelings: each day she rated her mood from 1 to 5 and looked for patterns. After a few months, her understanding of the situation was much clearer, and she decided to separate from the guy in question.

The app she built is the root of MercuryApp, a mood tracking website designed for easy use with smartphones. She suggests it works because it establishes a ritual, where we track every day; encourages reflection, where we stop and think about what’s going on; and helps us find a story, an arc, to our behavior.
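Gray doesn’t walk through MercuryApp’s internals, but the pattern-finding she describes is simple enough to sketch. Here’s an illustrative Python sketch, not MercuryApp code, with invented dates and scores: daily 1-to-5 ratings rolled up into weekly averages, so a downward drift in mood becomes easy to see.

```python
from datetime import date, timedelta
from statistics import mean

# Hypothetical daily mood ratings (1-5), keyed by date -- invented for illustration.
ratings = {date(2011, 5, 1) + timedelta(days=i): r
           for i, r in enumerate([4, 3, 2, 2, 3, 2, 1, 2, 2, 3, 1, 2, 2, 1])}

def weekly_averages(ratings):
    """Group daily 1-5 mood ratings by ISO week and average each week."""
    weeks = {}
    for day, score in ratings.items():
        weeks.setdefault(day.isocalendar()[1], []).append(score)
    return {week: round(mean(scores), 2) for week, scores in sorted(weeks.items())}

print(weekly_averages(ratings))
```

Against these invented ratings, the weekly average drifts downward, exactly the kind of pattern a string of daily check-ins makes visible.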

She offers examples of individuals using the app:

– Sebastian, a pathological optimist who thinks situations are always going to get better. After discovering that he was unhappy at work, week after week, he decided to quit his job and move back to Spain with his wife. “You can write off one sad panda, but not a string of them.”

– Dave, who manages an embedded software team. The team members use the tool to track their morale, and Dave has a real-time health check on the mood of the team.

The goal is to merge hard and soft data to help individuals become happier. Answering the question, “When are you happiest?” requires both quantifiable data and the data of your gut.

Marcy Swenson and Dale Larson offer a skit to explain what agile development might teach us about personal tracking. Dale’s worried about falling asleep during a session this afternoon. So he plans an experiment to use his Zeo, measure his sleep against caffeine consumption, mood, food and exercise data, then graph it all and engage in multivariate analysis to solve the problem!

Marcy observes that Dale seems to be more focused on data than on solving the problem. If we learned from software development, we might try weekly sprints, information radiators and a tight build/measure/learn cycle, which might let us figure out what we really needed to know before investing months in a particular process. They’ve expanded on some of these thoughts at startuphappiness.com.

Ron Gutman wants us to know about the untapped power of the smile. He’s a serious runner, and discovered that when he hits the wall in a long run (75 minutes in!), he often feels better when he smiles. He began tracking the data closely and discovered it was an unambiguous correlation for him. So he began a wide-ranging study of the power of the smile.

A 30-year longitudinal study tracked the relationship between people’s high school yearbook photos and their later happiness (on a test of well-being) and the success of their marriages. Based on people’s smiles, researchers could make very accurate predictions of the future of these students. Another study looked at pre-1950 baseball cards: the span of a player’s smile could predict the span of the player’s life, with bigger smiles predicting longevity.

Fewer than 15% of people smile fewer than five times a day; at the same time, fewer than a third smile more than 20 times. It’s certainly possible to smile more: children can smile up to 400 times a day. And it’s possible that smiling can, in and of itself, make us feel better. Charles Darwin speculated, “Even the simulation of an emotion tends to arouse it in our minds.”

Gutman tells us that one smile can create the same brain stimulation as 2000 bars of chocolate (with fewer calories). If you smile, others see your smile and feel good. In turn, they smile and you feel good. He closes with a quote from Mother Teresa: “I will never understand all the good that a simple smile can accomplish.”

Sean Ahrens has Crohn’s disease, an inflammation of the digestive tract caused by a dysregulated immune system. He’s coped with the disease for 13 years, and recently decided to take some worms. He took pig whipworm, and a friend took human hookworm. It’s not a cheap thing to do – he spent $3000 to purchase worm eggs from Germany and Thailand, which he took every two weeks for five months. The eggs appeared, under a $12 microscope, to be worm eggs. And Ahrens monitored his symptoms – gut pain and bowel movements – closely for the months he took the eggs and the months afterwards.

It wasn’t a very successful experiment, both because his symptoms didn’t get much better after taking the eggs and because he can’t definitively say whether the eggs failed. First, he didn’t have much baseline data. Second, there were other changes during the time he tracked: a change of diet, other medications, and stress from participating in Y Combinator. He ended up concluding that he didn’t have enough background in math to figure out causality in the data. The talk ends up being a cautionary tale about establishing baselines and controlling experiments… which can be hard to do when you want pain to go away. In the meantime, Ahrens is working on a company, Crohnology, that’s a supportive social network for people with the disease.

Tina Park is a designer for Johnson and Johnson who worked on Project Health Design, an effort from the Robert Wood Johnson Foundation’s Pioneers Program to help teenagers with chronic conditions transition from pediatric to adult health care. This tends to be a difficult transition: teenagers go through lots of life changes, teens forget medications, and they can get sick and die.

It’s possible, she argues, to track teens’ moods through texts. And since teens identify health with mood, it’s possible to identify moments where teens may be unwell by reading their text messages. Many teens send hundreds of messages a day. Her project graphed the intensity of message sending on timelines – she shows us a visualization of one teen’s data over six months. You can tell when she’s asleep based on the flat periods in the timelines. And you can see what words are common at different times in the series, which can help a teen see what she was talking about and, perhaps, what stressors are happening in her life.
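Park doesn’t detail her pipeline, but both views she describes (message volume over time, and the words common in a period) are easy to sketch. This is an illustrative Python sketch with invented timestamps and messages, not her project’s code; a real system would parse an actual SMS export.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, text) pairs standing in for a teen's outgoing texts.
messages = [
    ("2011-05-21 07:45", "so tired before school"),
    ("2011-05-21 12:10", "lunch was ok i guess"),
    ("2011-05-21 15:30", "forgot my meds again ugh"),
    ("2011-05-21 15:32", "seriously forgot again"),
    ("2011-05-21 22:05", "cant sleep"),
]

def hourly_volume(messages):
    """Count messages per hour of day; long runs of zeros suggest sleep."""
    counts = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
                     for ts, _ in messages)
    return [counts.get(h, 0) for h in range(24)]

def top_words(messages, n=3):
    """Most common words across all texts: a crude stand-in for stressor cues."""
    words = Counter(w for _, text in messages for w in text.lower().split())
    return words.most_common(n)

print(hourly_volume(messages))
print(top_words(messages))
```

A long run of zeros in the hourly counts marks the flat sleep periods she mentions, and the frequent words in this toy data (“forgot”, “again”) hint at the kind of stressor a teen or caregiver might notice.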

The benefit – this is data that already exists – perhaps we can get insights on mood without collecting any additional data.
