Some of us are putting together an informal half-day conference focused on what’s new and innovative in user research methods. The idea is to find out what cool ideas and methods others have been trying out, and to share what we have been up to ourselves. It’s a free half-day conference, and we chose a Friday afternoon to make it easy for you to come by. The location is the Bolt & Peters office in downtown San Francisco. The setting will be informal (there will be food and drinks!), followed by a cocktail hour at 5:30 PM.
Talks are 20 minutes long and will cover a wide range of topics related to trends in user research. Speakers include Wendy Castleman from Intuit, Ravit Lichtenberg from HP, Lane Becker from Adaptive Path, and more. Go to the User Research Wiki to learn more or to sign up.
It should be a fun afternoon. We look forward to seeing you there!
Terry Winograd runs an excellent series of talks on HCI at Stanford. This Friday, he is featuring a talk by Blake Ross and Asa Dotzler of the Mozilla Foundation. This is a topic that has come up in numerous discussions with my open-source friends: how does Firefox do it (create user-friendly software), and what can other open-source projects learn from it? I look forward to finally learning more. More information about the class is here. And here are the title and abstract:
One reason it’s exciting to finally be out with MindCanvas is the feedback you receive. For a while now, MindCanvas was something we talked about with friends, but not in a public forum (I think it’s good to release early, but not so early that your design vision is not yet communicated).
One of the first pieces of feedback we have received is a query about MindCanvas as a never-ending game linked to from someone’s website or software. This is a model we have talked about on and off (and it’s interesting to see it come up so early in discussions).
For the past year, if someone asked me what I was up to, I told them consulting, which is the truth, but only half the truth. The other half, which has kept me up at night (both literally and figuratively), is MindCanvas. No, don’t click the URL yet – first hear the story.
We started working on it a year ago, but I have been dreaming of it for a long time. It probably started with seeing the boredom in the eyes of my research subjects (sorry, participants) in graduate school. And it continued with consulting, with every computer-based study I ever did. We do qualitative research as well, but truth be told, everyone resorts to surveys at some time or another. The idea of basing product decisions on bored respondents robotically checking HTML boxes and choosing from dropdowns really bothers me.
So, what is the solution? A better survey application with some AJAX peppered in? No, we wanted to go far beyond that. Our goal was to reimagine what online research can be. MindCanvas was the answer. (A side note about the name: it satisfies my geeky desire for it to be something about the mind. The words “brain” and “neuro” were summarily rejected by the rest of the team!)
MindCanvas is a research service to help companies gather insights about customers’ thoughts & feelings. We use Game-like Elicitation Methods (GEMs) to let online users participate in answering the complex questions that you face in designing a product or service.
The latest article from StepTwo raises an interesting question – should you finalize site structure based on card sorting or other types of classification exercises?
Broadly, I agree – site structure cannot be final until you consider page layout and other aspects of the design. Card-sorting results are merely suggestions; you need to add in other design and business considerations.
But the problems with creating structures based on card sorting, mentioned in the article, are not really problems with card sorting itself. They stem more from a half-baked understanding or application of the technique. For example, the article mentions that browser pages cannot accommodate too many top-level headings, long titles, etc., and how this impacts structural decisions. But these and other issues can easily be handled with good card-sorting practices and better analysis.
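To make "better analysis" concrete: most quantitative card-sort analysis starts from a pairwise similarity (co-occurrence) matrix – how often participants placed two cards in the same group – which can then feed clustering or be inspected directly. Here is a minimal sketch of that first step; the card names and data are hypothetical, not from any real study.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same group,
    across all participants' sorts."""
    counts = defaultdict(int)
    for groups in sorts:            # one participant's sort
        for group in groups:        # one pile of cards
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical data: each participant's sort is a list of groups of cards.
sorts = [
    [["pricing", "plans"], ["help", "faq", "contact"]],
    [["pricing", "plans", "faq"], ["help", "contact"]],
    [["pricing", "plans"], ["help", "faq"], ["contact"]],
]

pairs = cooccurrence(sorts)
# "pricing" and "plans" were grouped together by all three participants:
print(pairs[("plans", "pricing")])  # -> 3
```

Pairs with high counts are strong candidates to sit together in the site structure; low or ambiguous counts flag exactly the places where design and business judgment, not the sort alone, should decide.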
Two recent articles about anthropology in the corporate environment caught my eye.
The article in Fortune magazine focuses on anthropological work at Microsoft and contrasts modern corporate anthropology with its origins:
“Their fieldwork is far removed from the popular perception of the anthropologist as lantern-jawed adventurer in baggy shorts and pith helmet, canoeing up the Amazon in search of the proverbial lost tribe. But there is a certain correspondence between Microsoft’s research agenda and the work of those old-time anthropologists, many of whom were funded by colonial governments that needed to understand their native subjects in order to rule them more effectively. The modern version of this knowledge-power dynamic is Microsoft, a multinational technology colossus that hires anthropologists who study the natives in order to sell them more software.”
Now that I am working as a practitioner, I am always on the lookout for journals and conferences that highlight “reflective practice”. Case studies do not have enough “reflection”, while academic journals often have little to do with the questions one faces during practice.
I heard about a new journal, the Journal of Research Practice, from one of the editors himself. From the description, it sounds interesting.
Some of the points in their editorial focus statement resonate.
Institutions have flourished across the globe to nurture this kind of activity that has come to be known as research. Research has always remained partly unmanageable, partly deviant, despite historic tendencies to co-opt it into the so-called disciplines, professions, research centres, etc. That propensity of research, to maintain a degree of autonomy…
I would go beyond that and suggest that in recent years, the internet has enabled the rise of a new type of independent researcher. I first realized this during my last year at Berkeley – that I could now do the same research on my own. Between subscriptions to a few academic journals, Google, and a few trips to the Berkeley/Stanford libraries, I can have access to everything I had while at Brown University or UC Berkeley. I would probably not have left the safe cocoon of academia if not for this realization.
Another thing I like about this new journal is its explicitly multi-disciplinary focus. And its commitment to open access.
Maybe this will inspire me to finally write down my thoughts about how the concept of validity is broken and a huge bottleneck for applied fields. We need to think about something called Implementable Validity.