Documentation Productivity Metrics
by Gurpreet Singh
Put simply, metrics are the methods or procedures a company uses to judge the success or failure of its work standards, through the systematic measurement of data gathered from specific tasks such as production, testing, and documentation.
Documentation is a complex task that includes several subtasks, such as writing, editing, and reviewing. If we gather the common errors in each of these processes, we can remove them from the entire documentation cycle. This error removal automatically improves the overall performance of the organization or individual creating the documentation.
Quality vs. Productivity Metrics
Quality metrics, as their name suggests, measure the quality aspect of documentation. Productivity metrics are concerned with productivity, that is, the rate at which words are produced, rather than with quality.
The two are quite different, as they map two distinct areas of documentation: quality and quantity. Every organization or individual already works with some kind of metrics, whether they know it or not. There is an entire discipline devoted to gathering and analyzing such statistics, known as Operations Research (OR).
OR is a very interesting field and is a must for anyone interested in the use and benefits of metrics of any kind, not just documentation metrics. The power of these statistical analyses can only be appreciated properly once you perform a real analysis of your own organization.
Documentation Metrics Formula
Now, coming back to the original subject of quality and quantity (aka productivity) in documentation, we use these metrics all the time. If you estimate, based on your previous experience, that a user manual will take x number of hours, then that data has come (indirectly) from your productivity metrics.
If you record your experience across different projects, such as: a user manual of this kind takes x1 hours per page, an installation guide of y type takes x2 hours to complete, a help file of z type takes x3 hours, and so on, then that record constitutes a simple documentation productivity metric for you.
This metric simply reflects how soon the work will be finished depending upon the complexity of the task. This is a universal method used for estimating all kinds of projects, and we can apply it to documentation projects as well.
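The record described above can be sketched in a few lines of code. Note that the document types, per-unit rates, and the review buffer below are hypothetical values invented for illustration; your own metric would use rates recorded from your past projects.

```python
# A minimal sketch of a personal productivity metric: hours-per-unit
# rates recorded from past projects (the rates below are made up for
# illustration) are used to estimate a new project's effort.
RATES = {
    "user_manual": 1.5,    # hours per page
    "install_guide": 0.8,  # hours per page
    "help_file": 0.5,      # hours per topic
}

def estimate_hours(doc_type: str, units: int, buffer: float = 0.15) -> float:
    """Estimate effort for `units` pages/topics, padded by a review buffer."""
    base = RATES[doc_type] * units
    return round(base * (1 + buffer), 1)

# Example: a 40-page user manual at 1.5 hours/page, plus a 15% buffer.
print(estimate_hours("user_manual", 40))  # 69.0
```

The buffer parameter is one way to fold the error margin discussed below into the estimate itself, rather than leaving it to guesswork.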
Benefits (or The Catch!)
The benefits of such productivity metrics are incredible. Let’s say that you are working as an independent consultant like me and get a request for providing quotes for several different kinds of projects every month. One method is to analyze the kind of project and then place a bid based on your experience.
This can be quite dangerous, particularly if you are inexperienced, as such estimates can easily come in far below the actual time needed to complete the project. Even experienced consultants often misjudge the effort a document requires, which means you end up working more for the same money because of the faulty estimate. This method is not scientific enough to apply to projects, as the error margin is quite high.
On the other hand, suppose you create productivity metrics for yourself based on different aspects of documentation and your working speed. You can then estimate the time needed to complete a project scientifically, using the metrics you built earlier.
Yes, a considerable amount of time is needed to record the data, analyze the statistics, and create the metrics in the first place, but it is a wise investment that reduces the future work of preparing estimates for different projects.
Quality Metrics & Reading Scores
A quality metric is simply a measure of how useful a document is for its intended audience. Note the word “intended”: a document that scores high on your quality metrics for IT managers may fall below the passing mark for, say, 12th-grade students. Each quality metric is limited to the audience it refers to, so most companies generate different quality metrics for different audiences.
The quality metric is usually based on a number of different aspects of documentation such as:
- Readability of the overall document (Flesch reading ease)
- Grade level required to read the document (Flesch-Kincaid grade level)
- Time required by a typical audience subset to read and understand the content of document
- Usability in terms of problem-solving techniques
- Number of editing cycles required to finalize the document
- Spelling/typo errors
- Time spent in hours to complete the document from scratch
- Human resources required in the preparation of the documents
- Compliance with style guides such as MMOS, APA, MLA, Harvard, and so on
So if your intended audience is the general public, you should aim for a score of 50 or above on the Flesch reading ease scale. Flesch scores denote the complexity of the document and the grade-level education required to read and understand it. The values are as follows:
- 90-100: Very easy (5th grade)
- 80-90: Easy (6th grade)
- 70-80: Fairly easy (7th grade)
- 60-70: Standard (8th-9th grade)
- 50-60: Fairly difficult (10th-12th grade)
- 30-50: Difficult (college)
- 0-30: Very confusing (college graduate)
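The Flesch reading ease and Flesch-Kincaid grade level are computed from words per sentence and syllables per word. The sketch below implements the standard published formulas; the syllable counter is a rough vowel-group heuristic of my own, not the dictionary-based counting that word processors use, so its scores will differ slightly from theirs.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables by counting vowel groups, trimming a
    trailing silent 'e'. A heuristic for illustration only."""
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(groups))

def flesch_scores(text: str):
    """Return (reading ease, grade level) for a passage of text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return round(reading_ease, 1), round(grade_level, 1)
```

Short sentences of one-syllable words score well above 100 (very easy), while long sentences of polysyllabic words can score below 0, which is why the scale above stops at "very confusing" near zero.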
So, if you are writing user manuals for 12th-grade students, you must make your documents less complex by using simple words instead of complex ones, aiming for a Flesch reading score of 35-50, preferably above 40.
I see many listserv messages with a Flesch score of less than 15, which is not good for the average reader. Messages full of difficult words are not very effective, since few readers may understand the content. It is worth checking the reading ease of your posts or documents for the reader's benefit.
This entire post has a Flesch reading score of 46.7, which means that it can be understood by a majority of people who have a high school education. There are several other factors that you can put into quality metrics, but for now these points will guide you to search for more complex methods to be used in such metrics.
About the Article
This article originally appeared in Vol. 9, Issue 4 (October 2005) of Directives, the newsletter of the Management Special Interest Group (SIG) of the Society for Technical Communication (STC). It is reprinted here with slight modifications.