Shiny New Spreadsheets

When my boss explained his plan for tracking individual performance metrics, it seemed like a great idea.

Our 15-person team of Communications Directors engaged in a wide variety of activities — meeting journalists, pitching stories, designing media strategies for clients — and not every activity led to a tangible outcome. Directors might work for months to build a relationship with a key journalist, only to realize that the outlet wouldn’t run the piece at all. Nevertheless, this behind-the-scenes work was crucial to building the company’s brand.

Recently, a few team members had complained that they weren’t getting credit for all this unseen work. Shouldn’t their performance be measured by whether they targeted and pitched journalists in smart ways — not whether they were lucky in landing stories?

My boss’ solution? A point system in which both activities and outcomes were tallied into a single score. Team members would record all their activities — meetings, events, pitches, and the rest. With performance quantified as a score, both they and my boss could track changes from quarter to quarter and year to year.

As his assistant, I took over project management of this spreadsheet and was soon coming up with all sorts of formulas to calculate a total score. The end result was a personalized spreadsheet with a tab for each type of activity and a summary tab tracking scores over time.
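For a sense of the scoring logic, here’s a minimal sketch in Python rather than spreadsheet formulas. The activity names and weights are hypothetical (the real tabs and values differed), but the structure is the same: per-activity counts multiplied by weights, rolled up into one number.

```python
# Hypothetical weights; the real spreadsheet's categories and values differed.
WEIGHTS = {
    "journalist_meeting": 2,  # relationship-building activity
    "pitch_sent": 1,          # activity, counted regardless of outcome
    "event_hosted": 3,
    "story_landed": 5,        # tangible outcome
}

def quarterly_score(activity_counts: dict[str, int]) -> int:
    """Sum weighted counts of activities and outcomes into a single score."""
    return sum(WEIGHTS[activity] * count
               for activity, count in activity_counts.items())

# One Director's quarter: lots of groundwork, one landed story.
q1 = {"journalist_meeting": 12, "pitch_sent": 20, "event_hosted": 2, "story_landed": 1}
print(quarterly_score(q1))  # 12*2 + 20*1 + 2*3 + 1*5 = 55
```

The hard part, of course, is the weights: every choice encodes a judgment about what “counts,” which is exactly the formula we would later promise to keep tinkering with.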

It was a disaster.

What had started as an attempt to give Directors “credit” had turned into a system for micromanaging their movements. As they recorded each media mention, meeting, and event, they wondered why their boss didn’t trust them to do the basic work of their jobs. Why all the hand-holding? And instead of being able to give them “credit,” we felt like Big Brother. We used the tracking system for another six months, until my boss came up with a replacement and everyone breathed a sigh of relief.

Looking back on this episode, there are several reasons I think we failed, both in conceiving the project and in iterating on it once it became clear the spreadsheet wasn’t working. The concepts of user-centered design and iterative testing are particularly useful frameworks for analyzing these mistakes.

At the time, we thought we were being user-centric. Someone had raised a problem, and we were building a spreadsheet to solve it! What a responsive management team! But because we didn’t do our user research well, in reality we were using this “user problem” as cover for our own end goal: more insight into Directors’ work.

How we messed it up

Extrapolating a large problem from a few complaints

Two out of our 15 team members had complained. Was that evidence of a pattern, or had we simply found outliers? We didn’t bother to find out. Surely all Directors needed the same tools to do their jobs!

Failing to consider the “pivot” or “perish” options

Since the spreadsheet was my boss’ brilliant new idea, we never tested its basic premise. Would tracking all their activities help Directors feel more acknowledged? Would it create any new problems?

It was pretty clear from the beginning that the new metrics tracker wasn’t taking off. In fact, it was leading to less insight into Directors’ work, as they felt less trusted and became more cagey about their activities. But we didn’t change the system for another six months — and only then because my boss had thought of something better. With his reputation tied to the spreadsheet, we couldn’t pivot until the replacement was his idea. That meant limited input from others who might have improved on it, while we waited on a busy manager’s inspiration.

Our attempt to solve a user’s concern had turned into a project to prove ourselves. The result was a system designed for us but passed off as a solution for them.

Questioning our users’ motives

That’s not to say that we didn’t consult Directors. But instead of asking our users what they wanted out of a metrics tracker, we presented them with the unfinished spreadsheet and asked them how they liked it. Naturally, they said it was overly complicated and overwhelming. “I just don’t think it’ll work,” one said. Because we treated the spreadsheet as a given, we read the negative feedback as a sign we needed to reduce its complexity, not go back to the drawing board. All our further “testing” revolved around how to make the spreadsheet easier to use.

Given that the tracker was a performance measurement tool, it was easy to dismiss negative feedback as staffers worrying that a change would lead to unfairly bad performance reviews. What if this “score” just didn’t add up fairly? they asked. No, no, no, we assured them. We’ll tinker with the score formula until it works! All we want is to give you a fair outcome for all this work you’re doing!

Technology for technology’s sake

Perhaps worst of all, this issue probably didn’t need the “technology” of a spreadsheet in the first place. In hindsight, the “not getting credit” complaint likely hid a deeper problem — one that may well not have been about metrics at all. Were the Directors feeling insecure? Overworked? Ineffective? All of these would be reasons to complain about a lack of acknowledgement; none of them were best served by a flashy new spreadsheet. In fact, the spreadsheet likely exacerbated the very issues of trust that the Directors were surfacing in their complaint.

How I would do it all over

If I could do it all over, I would dig deeper into the underlying issues behind these complaints. Rarely is a user able to articulate exactly what he or she really needs, particularly without further questioning. We shouldn’t have taken an offhand complaint as license to launch a months-long project without exploring the issue further.

But assuming that we then decided to redesign the metrics system — and that we wanted to solve a user problem, rather than mandating our own version of metrics collection — I would have come up with a scheme for gradual testing of the concept and then the design.

The first step would be to talk to the two Directors who complained, as well as others who hadn’t. What did they look for in a metrics tool? What kinds of features helped them plan and improve their work?

The next phase could test the score system as a concept: if we came up with a spreadsheet that calculated a metrics score, would that be useful in getting credit from your boss? What activities should count toward that score?

Next, it would be useful to ask a few Directors to record their activities for a week on paper or in a simplified spreadsheet. If they stopped tracking by the end of the week, that would be a warning sign that any tracking system would be hard to keep up with. No matter how simplified the spreadsheet got, it would never be simpler than writing on a piece of paper. So if this test in particular failed, we should have strongly considered scrapping the idea.

All these steps would expand the solution-creating team from my boss and me to our full team. The key would be to avoid attaching my boss’ reputation to the spreadsheet as the solution. Had we tested the idea before presenting it as the solution, we could have separated the success of the spreadsheet from the success of my boss’ management more broadly. This would have allowed us to scrap the idea altogether — or to change it significantly to make it work.

We could have built a system that worked for us, focusing on getting buy-in and making the case for it. Or we could have actually paid attention to our users and tried to solve their problem, building trust in the process. Believing — and insisting — that we were solving a user problem while pushing through our own vision was a surefire way to break that trust.
