My last column focused on the potential for data to operate as a tool or a burden, as a motivator or a downer. Organizations often begin their measurement journey with productivity as a metric. There are many good reasons for this, not least of which is that in a fee-for-service environment, the productivity of direct service staff drives revenue. How this metric is implemented can determine attitudes toward the use of information within the organization. From the outset, this measure embodies the tool-or-burden, motivator-or-downer dilemma, so handling its development and implementation effectively can have a lasting impact on the organization’s development of a robust information system.
All measures and data points share critical elements: definition, collection method, accuracy, reliability, and importance to the organization (the “Do I care?” element mentioned in the last column). Different members of the organization often have different definitions of terms like productivity, so clear communication is needed about what is being measured, how it is being measured (and why in this particular way), and why it is important to measure. If a metric is important, then so are targets and benchmarks. It is important to set a goal, but to set it realistically. Targets require a basis of comparison (e.g., peer or industry standards) and a path that takes the starting point into account – if the organization is well below the industry norm, the norm may not be a realistic starting target.
So, what is productivity? Clinicians often reference their schedule when asked how busy they are and how full their caseload is. Managers, and fiscal folks in particular, see only those scheduled appointments that result in a billable hour as “productivity.” The gap between those two views can be huge. The first step in measuring productivity, then, is defining it, communicating the definition, and learning where you are at the start of the process.
Language matters, and when talking about productivity, it can matter a lot. If my hours are low, am I unproductive (read: lazy)? I found “service volume” a more useful way to talk about what we are measuring: the number of services we are providing. Initially, the definition of a countable service may be a bit over-inclusive – it may encompass all activities, billable or not. The definition may not stay the same as the information system takes root and matures. For example, countable services may be narrowed to include only those that are potentially billable (any services payers define as reimbursable, regardless of whether all billing requirements are met), or narrower still, to those that are actually billable (meeting all billing requirements), or even to only paid service units.
The definition will clearly affect how the data is collected and aggregated. The sophistication required of the data system increases as the definition narrows, because certain units must be excluded from the total. As the criteria narrow, staff need to understand the relevance of the narrower definition, and benchmarks or targets may need to be adjusted. For example, if all services (billable and not) are counted, the expected volume will likely be higher than when only services that are ready to bill are counted.
Accuracy is always essential. If this measure is the first one an organization implements – and one that defines a staff person’s effectiveness – then expect initial distrust of the data. In my experience, staff challenged the reports, often bringing their appointment books as proof that the report was incorrect. It was essential to take these concerns seriously and to take the time to make sure that staff understood the metric and the methodology. Otherwise, the reports feel like spam – a burden and an irritation, both of which are demotivating. It is also the case that sometimes (hopefully very infrequently) the data is wrong. Getting ahead of errors and acknowledging them is critical. The data must also be reliable: accurate and timely. Once staff are used to getting and using data, it must come out predictably.
Last and most important, the data must be important. It must matter at every staff level it affects. It must be useful and actionable. Definition, collection, accuracy, and reliability are necessary but not sufficient for the successful implementation of a measure. Importance is what makes the measure a tool rather than a burden, and a motivator rather than a downer. So how is measuring productivity useful and motivating?
Knowing how many units of service my organization, my team, and I need to provide in order to be successful allows for better planning of workload at the team level. Do we need to address issues like no-shows? How do we plan for times when referrals peak? For when they are low? How big a caseload can clinicians manage? What is the viable minimum? Are there ways to streamline to reduce the time spent on activities that are not billable or are clerical? Answering these questions can make collecting the data worthwhile and help clinicians function at the top of their credential, as the valuable professionals that they are.
Reporting this information can be a measure of our success – we can see how much help we are providing. At the individual level, staff can better gauge their schedule, manage when to take vacation, and know where they stand vis-à-vis expectations. The expectation is that supervisors and the organization will work to facilitate successful attainment and celebrate that attainment. Knowing the volume of service also informs external stakeholders about the organization’s value. When staff and teams are helped to use the data, they will see it as a helpful tool and feel more motivated.