Monday, February 20, 2012

Evaluations in Cooperative Groups

Over the weekend, I sat in on selected sessions of Dancing Rabbit's annual retreat (see my previous blog, Stepping Back to Look Farther Ahead for more on that). Deep into Day Three we were 60 minutes into a conversation about paid staff evaluations and the protocol for firing people doing substandard work. After lining out the desired elements of a thorough job evaluation and spending some time dwelling on all the ways that things can go south (to make sure the Human Resources Committee has the chops it needs to handle whatever wonkiness comes up), community founder Tony Sirna sighed and reflected: "I'm not sure. Maybe we need all this structure, but it feels awfully corporate."

The subtext of Tony's lament was that Dancing Rabbit (or any intentional community, for that matter) was purposefully created to be different from corporate culture. Hearing the anguish in his reflection got me thinking about the heart of evaluation...

• • •
As a process consultant I've worked with perhaps 100 cooperative groups over a 25-year career—mostly intentional communities, with a smattering of nonprofits, schools, and church groups. I've found that the vast majority of these did not have a rigorous evaluation process. In fact, most had none. Those that did any evaluating mostly did so in response to a problem, where a beleaguered manager or team was being called on the carpet. (And after a few experiences like that, it's easy to see why no one is eager to do it more.)

My sense is that groups avoid evaluations mainly because it feels too judgmental or too onerous. If there's not a problem, why bother? If there is a problem, nobody wants a witch hunt, yet there's considerable nervousness about how to avoid it becoming one if the group actually talks about the real issues. Not having confidence that they can do it well, they don't do it at all. I'm not saying that's good thinking; I'm only making the case that it's understandable.

To their credit, folks at DR are past the point where they need to be convinced to do evaluations. Now they're struggling with the more advanced issues of how to do them fairly, how to do them efficiently, how to do them deeply enough to surface the problems, and how to be constructive—all of which are not easy.

They want to make sure they're identifying and addressing problems in job performance before they get worse. Note that if problems go unaddressed, several bad things can happen, not just work not getting done or getting done poorly:

o It undercuts morale among other workers (why should they bother to be more diligent if slacker or martinet behavior is deemed acceptable?).

o Co-workers who might be inclined to bring up issues directly with the person who is performing poorly will be more hesitant to do so, because the message being conveyed by the culture is that you're on your own. This is especially true of subordinates with critical feedback for managers. Lacking clear institutional support for keeping feedback channels open, in most cases they will naturally constrict. (If you don't regularly dredge waterway channels, they tend to clog with undissolved sediments; with feedback channels they can clog with unresolved sentiments.)

o On a larger scale, it tends to erode the cooperative culture you intended in the first place. Problems fester, trust weakens, and before you know it you're back in the adversarial dynamics you were expressly trying to leave behind. Yuck.

OK, so where is the sweet spot? How can you have a robust tool while avoiding robotic implementation? Nobody wants regular evaluations to transform HR into the performance police, nor do you want the life squeezed out of the process under the press of interminable questionnaires and an endless gauntlet of backroom interviews to ascertain whether someone has sufficient facility with gender-neutral third-person pronouns. Bureaucratic fatigue can kill the process just as effectively as a few poorly wrangled shootouts at the I'm-OK-you're-not-OK Corral.

I think there are three main objectives in developing an effective evaluation process: a) minimal impediments to surfacing critical feedback; b) maximal safety for all stakeholders; and c) dedication to creating and maintaining a constructive container in which information is exchanged.

Let's walk through these one at a time.

A. Minimal Impediments
The work here is understanding what it takes to put people at ease around naming hard things. In some cases the hurdles to jump are related to the person whose behavior is being called into question. If there's a history of such exchanges going poorly, or there's otherwise low trust between the speaker and the listener, it's going to be harder.

It could also be family-of-origin issues. If a person was raised in a household where it was considered rude and axiomatically disrespectful to speak critically of another, that conditioning is likely to affect their comfort level in naming issues as an adult.

In addition, people are going to be more hesitant to speak up if they feel they're exposing themselves in the process, which brings us to safety...

B. Maximal Safety
Safety can be a tricky thing. While almost everyone is in favor of people having it, what does it mean? While structure (clarity about sanctioned ways to go about expressing concerns) helps some folks relax, it's constricting for others (limiting options). For some, safety equates to giving or getting feedback in a small group; for others it's the opposite (safety in numbers). Some need an ally present; others need no extraneous witnesses. Some want good minutes (and don't trust that they can take notes themselves in such moments) or even an audio recording, the better to capture agreements and commitments.

Without trying to lay out all the ways in which people have varying preferences regarding safety, there are three main points I want to make: a) do not assume that greater structure will universally translate into increased safety; b) do assume that people will have a wide variety of preferences about safety (in fact, the same individual will want different things in different circumstances) and that the group is well served by creating the widest possible menu of choices to select from; and c) the things that will make the greatest difference in people feeling safe are:
—confidence that they will be accurately heard and understood;
—confidence that they will not be run over by Person B's aggression when Person B is expressing distress;
—belief that their input will be taken seriously.

C. Constructive Container
This has a couple of components. First, it may make sense to have the delivery of the feedback facilitated, both to make sure that it's heard accurately and that there's the spaciousness and capacity to process any significant emotional responses before moving on to problem solving.

While the point of the feedback is to be constructive—not punishing—some people automatically equate the expression of strong feelings directed their way with being punished, and it can be excruciating to open yourself to hearing it. Unfortunately, if the triggered person doesn't feel heard around their emotional experience, they often don't trust that the recipient is taking them seriously.

In my experience, feedback has a much better chance of landing constructively if it is given directly (don't sugarcoat it); is behavior-specific (give clear examples); avoids interpretation of why the person acted as they did (no amateur psychoanalysis); and includes a clear statement of specific, measurable corrective behaviors that would be seen as responsive (give the person a way to make it better and to show that they care).

It can further help if you can: be specific about how much time you're willing to give the person to effect changes; delineate the potential consequences of persistent non-compliance; and make clear the ways in which the inappropriate behavior is seen as out of bounds based on job descriptions or group agreements (rather than on personal distaste).
• • •
The good side of a thorough process (which produced the structural overlay under discussion at DR that so dismayed Tony) is that it defines what the group means by its commitment to due process. Players will know the sequence that will be followed, and the potential consequences of coloring outside the lines. As mind-numbing as it can be to do the work of laying all this out in the abstract, not having it spelled out ahead of need is an absolute bitch. If you delay developing protocol until you're in the midst of a problem, it's almost a guarantee that the process will be viewed as a lynch mob by the person sitting in the cross hairs.

Having a known and established process for handling critical evaluation does not, fortunately, mean that you need to use the entire orchestra every time you want to hear some music. While fear of lawsuits may require corporate HR departments to conduct all evaluations by the book, cooperative HR committees can be more flexible. If they've successfully established that they can deliver on safety and constructiveness, then the HR folks can be much more informal in checking for concerns, and the full going-down-the-checklist-of-all-questions evaluation process need only be trotted out at need, or for a 50,000-mile checkup.

Think of it like going to the dentist. If you don't have any decay, the checkup proceeds fairly smoothly and quickly—your plaque gets scraped, your teeth get polished, and you're out of there. If, however, there's a cavity, then everything slows down and the examination proceeds more deliberately. I'm proposing that HR do most of its evaluation work in that vein.

The nuance here is how often you need to offer evaluation opportunities in order to catch problems soon enough, versus the danger of evaluation fatigue, where responses become wooden and are viewed more as a bureaucratic nuisance than a personnel life ring. For my suggestion to work (where HR does most of its work through informal checking until and unless it discovers a problem), you'd need people on HR who have been selected for sensitivity to nuance (able to pick up clues about discomfort from people who are reluctant or unable to articulate their concerns without help), discretion (such that people feel it's safe to surface concerns), and the ability to work energetically (reading accurately what's happening in a given moment, not freaking out in the presence of serious distress, and having good instincts about how to proceed constructively when the shit hits the fan).

If you've got that kind of savvy HR group, I don't think a comprehensive evaluation process will need to be invoked that often.
• • •
Finally, I don't want to leave the topic of evaluations without naming an added bonus. Many cooperative groups are weak when it comes to appreciation (not because they think it's a bad idea; rather because it's often the people who take initiative who deserve it, and you can't reasonably count on those folks to toot their own horn). The process of doing evaluations is as much an opportunity to celebrate what's working well as it is to make mid-course corrections.

While I appreciate that most of us don't go to the dentist looking for an ice cream sundae, think about how much easier it will be for people to keep their heart rate down when HR comes calling if such a visit is just as apt to lead to gold stars as cold stares. As Frank Cicela sagely pointed out at the DR meeting on this topic yesterday, it will tend to work much better if you offer Rabbits a carrot rather than a stick.
