Like many English majors, I am not a big fan of numbers, especially when it comes to my work. There are, however, constructive ways to apply metrics to your work as a technical communicator and get results that make everyone happy–not just the people who love metrics and reports, but the writers as well.
Why Metrics Rankle
The reason my inner English major rebels against numbers being applied to my work is that it feels akin to passing a value judgment on the literary quality of that work (though let’s be honest: Ernest Hemingway didn’t turn out a winner with every novel, either). Then again, published authors–of novels or websites–have only a few metrics to measure “success,” such as the number of books sold or the number of followers on a Twitter account. Are those numbers supposed to define your value as a writer?
Some metrics are, in my opinion, BS (bovine scatology for my more sensitive readers). For instance, while an organization often depends on the new work and revenue that come from proposals, tracking proposal win percentages is, for me, a BS metric because the outcome is largely beyond the control of the writer and the proposal team.
Another BS metric–though it can sound impressive on the annual report–is the number of documents produced. This is a nonsense statistic because the need for documents can vary by the job, organization, time of year, or customer. One person might be turning out multiple presentations every week while another spends an entire quarter writing and editing a single document.
There are managers and organizations that like metrics for everything. As the materials for a class I once edited put it, “That which gets measured tends to improve.” And not all metrics are necessarily bad. So consider this a challenge rather than an insult. There are ways to measure quality without getting too subjective.
Example 1: Manager A keeps returning documents because they don’t like the content. Some managers are pickier or more indecisive than others. And some of them like to play “Bring Me a Rock” just to watch writers jump through hoops and to feel powerful.
Example 2: Manager A (or Editor B) returns documents with specific grammatical, punctuation, or formatting corrections.
If you’re negotiating with a customer about what metrics would be considered acceptable, Example 1 is not something I would accept. Manager tastes and behaviors vary, and it is not the writer’s fault if the manager doesn’t know what s/he wants or changes the content in an arbitrary manner. If your documents contain the content and “angle” discussed with the customer and they still keep getting returned, you and your fellow writers will always look like you’re “in trouble” if you accept document returns as a metric.
I would argue that Example 2 is a situation you can do something about–you can measure the number of errors or number of times documents are returned to the writer for corrections. Another behavior that can be measured is factual accuracy: how many factual errors were found in the documents you produce?
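To make that kind of counting concrete, here is a minimal sketch, in Python, of how a team might track the errors reviewers catch. The document names, word counts, and error counts are invented for illustration; the one real technique is normalizing by length, so a long manual isn’t penalized simply for containing more raw errors than a short release note.

```python
# Hypothetical log of editorial review results. The document names,
# word counts, and error counts are invented for illustration.
reviews = [
    {"doc": "user-guide",    "words": 12000, "errors": 9},
    {"doc": "release-notes", "words": 1500,  "errors": 4},
    {"doc": "api-reference", "words": 8000,  "errors": 2},
]

# Normalize by length so documents of different sizes can be
# compared fairly: errors per 1,000 words.
rates = {r["doc"]: r["errors"] / r["words"] * 1000 for r in reviews}

# Report the documents from most to least error-prone.
for doc, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{doc}: {rate:.2f} errors per 1,000 words")
```

Tracked over successive review cycles, a simple rate like this shows whether a writer’s accuracy is actually improving, which is exactly the kind of trend a quality-based metric should capture.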
Concrete, quality-based measurements enjoy several advantages:
- They take the emotional attachment or literary judgments out of the equation.
- They require more attention to detail on the part of the writer.
- They can be measured accurately.
- They can lead to measurable improvements.
A good mnemonic for these types of metrics is SMART, meaning are the metrics…
- Specific (e.g., tied to a defined behavior, such as grammatical errors per document)?
- Measurable (e.g., countable in an objective way)?
- Achievable (e.g., within the writer’s control)?
- Relevant (e.g., connected to the quality of the work)?
- Time-bound (e.g., measurable over a given amount of time)?
The bottom line is that metrics need not be your enemy. Yes, the reports can take a little extra time; however, they can be employed in a way that prevents abuse and leads to real improvement.