When I used to work on high-level credit card complaints, I was sent on a customer service course to learn more about dealing with difficult customers. One of the techniques covered on the course was the broken record technique.
I had of course heard people say things like "they sound like a broken record", usually in a negative way when someone is banging on about something that no one else wants to hear. It had never occurred to me that it could be a useful technique for communication.
what is the broken record technique?
The idea is that if you have a position that you are unable or unwilling to budge on, then you calmly repeat the position over and over, whilst avoiding over-explanation. This was very helpful when dealing with nightmare customers if there was a "take it or leave it" offer to get across. From what I remember of the course, which isn't much, the suggested approach was:

1. Acknowledge what the customer has said.
2. Calmly restate your position without over-explaining.
3. Make it clear that the offer is final.
Point three was a valid approach as I was the last line of complaints — the customer was free to contact the Financial Ombudsman Service if they disagreed with the resolution.
For example:
"Yes, I understand that you're (irrationally) angry at my colleague in the offshore contact centre (for no apparent reason). I've listened to the call recording and found no evidence to uphold your complaint. I'm willing to offer 50p to cover the cost of being on hold, but I will not be providing any compensation for distress and inconvenience".
does it work in quality engineering?
Absolutely not. Things are rarely, if ever, as black and white in quality engineering. Nobody wants to work with someone who is so stubborn that they're unwilling to change their mind — especially when there are valid arguments against the point being made.
I have continued to make use of the first two points from the broken record technique as they served me well, but I've adapted the rest to provide a little more flexibility. I should probably think of a name; I'll update this post if anything comes to mind. The adapted approach looks something like:

1. Acknowledge what the other person has said.
2. Calmly restate your position without over-explaining.
3. Offer one or more paths forward and invite alternative suggestions.

The key change here is to make it clear that there are different options available to get out of the deadlock. It's worth noting that offering a path forward forms part of the overall technique, as a decision might not be made straight away.
when should the adapted technique be used?
Not very often.
I typically use the technique as a last resort when all other attempts to reason with people have failed. Even in the adapted form, this isn't something that should be overused, otherwise you risk damaging relationships with colleagues, which is probably not a good idea.
The trigger for me is usually when I start to feel like I'm in a war of attrition, when something I perceive to be very important isn't being taken seriously, or when the same issues with quality culture or processes are being repeated over and over like a broken record.
examples
we need more UI tests
I've worked in a few teams where there's a push to create more and more UI or end-to-end automated tests rather than increasing unit or integration test coverage. This results in an inverted test pyramid, which is a nightmare to maintain if it gets out of hand. Here's one way to approach a situation like this:
I understand what you're saying as UI tests add a lot of value, but we're better off increasing test coverage at the unit and integration layers instead. I can facilitate a workshop around writing effective unit tests, and I'm happy to pair with people to help create meaningful integration tests. Should we try this out or does anyone have other suggestions?
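To make the trade-off concrete: most UI tests ultimately exercise a business rule that can be pinned down far more cheaply and reliably at the unit layer. Here's a minimal sketch (the rule and all names are invented for illustration), written as pytest-style tests:

```python
# Hypothetical example: a validation rule checked at the unit level instead
# of through a slow, brittle UI journey. The rule itself is made up.

def is_valid_discount(code: str) -> bool:
    """Business rule: discount codes are 8 uppercase alphanumeric characters."""
    return len(code) == 8 and code.isalnum() and code.isupper()

def test_accepts_well_formed_code():
    assert is_valid_discount("SAVE2024")

def test_rejects_lowercase_code():
    assert not is_valid_discount("save2024")

def test_rejects_short_code():
    assert not is_valid_discount("SAVE")
```

Dozens of edge cases like these run in milliseconds at the unit layer; a single UI test covering the happy path on top is usually plenty.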
can you test this empty story pls
There are a lot of bad quality processes that can lead teams to a point where stories have a semi-descriptive title but lack any form of description or acceptance criteria.
It's easy to fall into a pattern where you have enough tacit knowledge to figure it out, but this is generally a bad idea when you consider that acceptance criteria are a form of requirements. Well-written acceptance criteria are important for creating a shared understanding between product, development, and QA. Without shared understanding, assumptions are made.
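One way to keep acceptance criteria unambiguous is to phrase them as Given/When/Then and encode them directly as tests, so the shared understanding is executable rather than tacit. A minimal sketch, where the story, the rule, and every name are invented purely for illustration:

```python
# Hypothetical sketch: an acceptance criterion written as Given/When/Then
# and encoded as a test. Balances are held in pence to avoid float rounding.

def apply_late_fee(balance_pence: int, days_overdue: int) -> int:
    """Rule under test: a 5% late fee applies once an account is 30+ days overdue."""
    if days_overdue >= 30:
        return balance_pence + balance_pence * 5 // 100
    return balance_pence

def test_late_fee_applied_at_30_days():
    # Given an account with a £100.00 balance
    balance = 10_000
    # When it becomes 30 days overdue
    result = apply_late_fee(balance, days_overdue=30)
    # Then a 5% fee has been added
    assert result == 10_500
```

When a criterion can't be written in this shape, that's usually a sign the story isn't ready to be tested, or developed, yet.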
One way to address this using the adapted technique could be:
I realise it's a priority (like everything else, amirite?), but we shouldn't be testing, or developing, against stories with no acceptance criteria. I am going to start reviewing user story testability using a prompt based on this awesome blog post to get us started on the right path. Please take a look and let me know what you think.
and what about you?
Have you ever planted a flag on something and not budged on it? What led you to that point and how did you end up breaking the deadlock? Can you think of a name for the adapted technique? Let me know if you fancy a chat about it!