The Evolution of The Archivist
Jul 12, 2010 | In Process | By Tim Aidlin
In an earlier post I talked about working on The Archivist Desktop version, and how the software began its evolution to the web and the cloud. This post can be considered part two of that post; it tries to explain the process we at MIX Online went through in concepting and executing The Archivist Web (alpha), starting before we ever wrote a bit of code or colored a pixel.
Once we decided to take on the project, we turned to our already established group of experts for guidance. Because we had a prototype of a similar application in The Archivist Desktop, we also had a ready cadre of fellow Microsoft employees, friends and colleagues at other companies, and current users whom we could tap for solid user testing. That testing provided invaluable guidance on the importance of certain features over others and on the direction the project needed to head. While relying on people we already knew might skew the results, we felt that with tight timelines and a minimal budget, talking to current users and technologists with whom we already had relationships was the way to go. We focused on two primary groups of users: our internal Microsoft team and users of The Archivist Desktop.
I talk a little about doing "User Testing On The Cheap" in another Opinion piece. There are plenty of easy and inexpensive ways to gather information on how users are actually using your software or site. Check out the Opinion when you have time.
Project stakeholders and internal Microsoft team-members
It was imperative that before building The Archivist Web, we fully discussed the project and its intricacies with internal team-members and stakeholders to ensure that their goals were being met. Simply setting up short individual meetings was an easy way to gather feedback on how people used The Archivist Desktop, and what new features they might be looking for in The Archivist Web. Additionally, by communicating clearly with team-members, we made good contacts in other parts of the company, which let us leverage existing technologies and take advantage of a great knowledge-base we might not otherwise have had access to. For instance, by discussing our plans with the Windows Azure team, we were able to overcome some serious hurdles in our planned methodology for data-storage. Without that team's help, we might have been derailed at the beginning.
“Conversation” with current users
Again, because we had already released The Archivist Desktop, we had an already-established user-base. Over the years there had been a few “vocal” users – who either wrote in to thank us, or report problems and failures – who we felt could provide valuable feedback from real-world use-cases. At the beginning stages of building The Archivist Web we simply emailed a few of the users with whom we had been in contact and asked them a simple set of questions surrounding specific topics such as types of visualizations they would use, how often the tool needed to update, the way they were using the data and the like. By asking specific questions rather than asking for general feedback, users were able to focus on key issues and needs that we could act upon. Often, by asking a generalized question such as “What would you like to change,” you get very generalized answers that are hard to turn into actionable tasks.
From our sit-downs and email conversations, we were able to group our potential audience into three general categories. By segmenting our audience into “personas” we were able to focus our efforts around specific scenarios and the needs of these specific characters. In meeting the needs of these three groups, it was our hope that we would not become myopic on satisfying one group at the expense of another, or, contrarily, find ourselves trying to please everyone who might possibly use the service. Our goal was to sate the needs of the majority of our audience and provide our core users with the best experience possible. To this end, we arrived at three core personas.
Personas are a representation of real users, including their habits, goals and motivation.
I’ll note here for a second that there are many ways designers approach User-Personas. Some user/human-centered designers can spend a long time fleshing out the behaviours and market-segmentations of a target audience. We took a pretty rudimentary approach. Smashing Magazine defines it well. In fact, the whole article is pretty awesome and worth a read in general.
We knew that most users would come to the site once, do a quick search, check out some visualizations, and then leave. That’s just the nature of building websites, especially one targeted at as specific a niche as The Archivist’s. For the casual user we wanted to provide an engaging experience with the fewest barriers to entry.
However, we also knew that we had to concentrate quite clearly on the potential return-users – the people that would keep coming back and really taking advantage of the great features of the product. It is to these particular users that we thought we could provide the most value.
The Marketing Manager
“Social” has become an integral part of any brand-strategy, and until now, there had been only a few ways to gain access to specifically-targeted Twitter data. As well, that data has generally been presented in raw form, leaving the Marketing Manager to extract meaning on their own. The Archivist needed to be designed to provide a high-level way to draw meaningful conclusions from the thousands of tweets that could now easily be kept in an archive. Additionally, all of this would have to be easily shareable with colleagues, clients, or friends.
Tracking general “sentiment” is of great importance to the Marketing Manager, too. In addition to seeing who is tweeting, and how often people are tweeting in general, Marketing Managers expressed interest in quickly being able to identify “how people are feeling” about a brand or site over time.
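To make the “sentiment over time” need concrete, here is a deliberately naive sketch: scoring each tweet against small positive/negative word lists and netting the scores per day. The post doesn’t describe how The Archivist actually computed sentiment, so the word lists, scoring, and tweet structure below are purely hypothetical illustrations, not the product’s method.

```python
# Naive word-list sentiment, netted per day. Hypothetical example only;
# not The Archivist's actual sentiment implementation.
from collections import defaultdict
from datetime import date

POSITIVE = {"love", "great", "awesome", "happy"}
NEGATIVE = {"hate", "broken", "awful", "sad"}

def score(text):
    # +1 for each positive word present, -1 for each negative word.
    words = {w.lower().strip(".,!?") for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

tweets = [
    (date(2010, 7, 1), "I love this brand, the new site is great!"),
    (date(2010, 7, 1), "Checkout flow is broken again. Awful."),
    (date(2010, 7, 2), "Happy with the support team today"),
]

daily = defaultdict(int)
for day, text in tweets:
    daily[day] += score(text)

for day in sorted(daily):
    print(day, daily[day])  # net sentiment per day
```

Even this toy version shows why the Marketing Manager cares about the time axis: the same brand can net out flat one day and positive the next.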
The Academic Researcher
Because of our previous experience with The Archivist Desktop, we knew there was a surprisingly large demand for this tool in academia. Over the last year or so we’ve had the opportunity to communicate with numerous professors and students who found The Archivist Desktop useful for their research. One subject that I believe helped foster interest in The Archivist was the Iranian election of 2009. At that time many news outlets were discussing the way Twitter was being used to communicate support for fair elections and to report the unrest during the controversy over the election results. Students and professors began using The Archivist to do custom searches and build archives of over 200,000 tweets for data-analysis.
For this user, we had to consider the accuracy of the data we were providing, and make it easy to access the raw data. For the academic user, accuracy generally mattered more than it did for a Marketing Manager looking to spot general trends and sentiment.
I thought it might be useful to provide the full PowerPoint walk-through of the wireframe set. It was produced to help give the team an understanding of the full user-experience of coming to the site under different circumstances and conditions. One thing you may note is the exploration of an extensive array of filters. Unfortunately, that feature, above all others, is the one I personally wish we had been able to fit into the scope of this release.
What we could do
- Tweets over time
- Top Users
- Top Words
- Top URLs
- Tweet vs. Retweet
- Top Sources (software)
- Use ASP.Net Charting Controls
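Each of the visualizations above is, at heart, a simple aggregation over an archive of tweets. As an illustration only (The Archivist Web was built on ASP.NET, not Python, and the tweet fields `user`, `text`, and `created_at` here are assumptions for the example), the core counting might look like this:

```python
# Illustrative aggregations behind "Top Users", "Top Words",
# "Tweets over time", and "Tweet vs. Retweet". Hypothetical data model;
# not The Archivist's actual code.
from collections import Counter
from datetime import date

archive = [
    {"user": "alice", "text": "Loving the new #mix10 demos", "created_at": date(2010, 3, 15)},
    {"user": "bob",   "text": "RT @alice: Loving the new #mix10 demos", "created_at": date(2010, 3, 15)},
    {"user": "alice", "text": "Slides from my #mix10 talk are up", "created_at": date(2010, 3, 16)},
]

# Top Users: who tweets most often in this archive.
top_users = Counter(t["user"] for t in archive).most_common(2)

# Top Words: crude tokenization, stripping common punctuation and symbols.
top_words = Counter(
    w.lower().strip("#@:,.") for t in archive for w in t["text"].split()
).most_common(3)

# Tweets over time: volume bucketed per day.
tweets_per_day = Counter(t["created_at"] for t in archive)

# Tweet vs. Retweet: old-style "RT " prefix as a rough retweet marker.
retweets = sum(1 for t in archive if t["text"].startswith("RT "))

print(top_users)  # [('alice', 2), ('bob', 1)]
print(retweets, "retweet(s) vs", len(archive) - retweets, "original tweet(s)")
```

The charting controls then only have to render these counts; the hard part in production is doing the same aggregations over hundreds of thousands of archived tweets rather than three.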
What we couldn’t do (Feature Scoping)
- Filters & Geolocation
As I noted earlier, I spent a bit of time thinking about the frequent requests for filtering by date, secondary keywords, and other items. Quite simply, with the way Twitter works, some of this data, like Geolocation, wasn’t consistently available, and other filters were just so gnarly we couldn’t scope them into this release.
- Silverlight Charting Controls
We had a heck of a time skinning them.
- Telerik Charting Controls
Really easy to skin and implement from a design-perspective, but we couldn’t use them due to distribution problems.