A manager gives an instruction and entrusts an assignment. When the time comes to present the work or debrief the results, the employee steps forward confident and satisfied, if not more, with what he has done.
Then disaster strikes! Nothing goes right, the work doesn't live up to expectations, and the employee is roundly reprimanded. This is all the more unpleasant because he wasn't expecting it at all; he was even expecting to be congratulated.
It’s a situation we’ve all either observed or personally experienced, or both. Very unpleasant when you’re directly involved, embarrassing when you witness it.
Manager too demanding? Incompetent employee?
That would be too easy a shortcut, because more often than not, they simply don't understand each other.
The idea here is to think about how you can be sure of being understood when you make a request, and you'll see that ChatGPT can be, if not a good teacher, at least a good guinea pig for testing your ability to communicate clearly and comprehensibly.
Manager/employee communication has plenty of room for improvement
Here are a few examples.
The memo wasn't detailed or polished enough, but no one ever said it would be used in front of the board, or that the manager had a particular idea in mind.
The document arrived late, but the employee didn't know that the manager only wanted a few bullet points to get a feel for a subject, and that the document was never meant for distribution.
Asked to resolve a conflict with a customer, a supplier, or even an internal one, the employee simply added fuel to the fire, or defended the company's interests poorly, or defended them so well that the customer was lost. But the room for negotiation was never explained, nor was the expected outcome beyond "solving the problem".
On the face of it, the deliverable was of good quality, just not thorough enough. Then again, all the employee had been told was "give me a briefing on it".
The employee doesn't know every figure of a project in detail, rates of return, utilization of each resource and so on, just the big-picture numbers, and gets a dressing-down. Yet all he was told was "let's have a quick update".
The office manager organizes the back-to-school party, but it isn't classy enough, or there isn't enough to go around, or it's too expensive. Yet all she was told was "organize the party".
The same office manager organizes an executive's trip. The train is too early, the return clashes with a meeting, the hotel is not to his liking. She gets an earful, but all she was told was to organize the trip.
We all have dozens or more examples of this in our memories.
What actually happens is that the manager or director knows exactly what he or she wants, thinks "this needs doing", and then delegates it to someone else, formulating the task in exactly the same words.
Haven't you ever noticed? Whether it's during or at the end of a meeting, or in a terse email or chat message (the famous "thank you from …."), the task is handed over in one sentence, one line, with no further indication of context.
Understandable: the manager knows what he wants, but forgets that the person he is delegating to is not him.
Understandable, but totally ineffective, because the person is left with two options:
1) Try to guess everything that hasn't been said.
2) Ask for clarification, at the risk of suggesting they haven't understood or aren't competent, or, worse still, of being told off because the manager "doesn't have time".
The manager vs. ChatGPT
I hear a lot of managers rejoicing at the arrival of AI, because it will let them quickly get the information and analysis they can't get from employees who, in their eyes, never understand anything and are morons.
If they haven’t done so, I’d advise them to try it now.
Anyone who has started to experiment with AI, even for simple things, quickly understands one thing: if the request isn't clear, you get something of questionable quality, and you need several iterations to refine it into something acceptable.
I read a very interesting analogy about AIs not long ago:
"When I first began playing with ChatGPT, I wasn't sure how I would use it. Then, one day while working remotely, I began typing questions into it, speaking as I might to a colleague who was sitting in front of me, helping me think through new ideas. What resulted was a fascinating conversation that cut down on my work time and improved my end product. After becoming more familiar with the tool, I started to think of it as an ever-present thought partner, which got me thinking about how to 'onboard' GenAI with nontechnical teams. I realized that integrating GenAI is not so different from adding a new team member to the mix."
Fast Company – How to transition nontechnical teams to use GenAI
I found the idea of acting as if we were adding a new member to the team very apt and relevant.
There's also the implicit idea that the user understands they have to make themselves understood by the AI.
Well, your employee or trainee isn’t all that different from ChatGPT: if you don’t make yourself understood, he won’t be able to do anything good for you.
But for many managers, there's a difference in their heads: they've understood that they need to tame ChatGPT, whereas they assume their staff will naturally understand them without being told the ins and outs.
It could even be said that the absence of any hierarchical link with the AI creates a more balanced, not to say healthier, relationship.
And to come back to those managers who think all their staff are bad: if anything, it's proof that their presumed incompetence is proportional to the manager's inability to make himself understood.
Know how to prompt your staff
So the manager who understood that it was up to him to make the effort to be understood by ChatGPT learned to speak to it, to give it instructions. In the AI world, this is known as writing a prompt.
Without claiming to be an expert on the subject, which is far less simple than it seems at first glance, here are a few rules for a good prompt, with a short sketch after the list to put them together:
Clarity and specificity: The more specific the prompt, the more relevant the response. It's important to be clear about the context and expectations. For example, instead of asking "Explain AI", you could say "Explain how AI improves productivity in business".
Context: Include contextual information that helps refine the answer. For example, "Describe the benefits of AI in the healthcare sector in 2024" sets a more precise frame.
Objective: Saying what the answer will be used for helps make it more relevant in both form and content. For example, "it's for a presentation to my boss" or "it's for a popularization article on LinkedIn".
Desired format: Indicating the type of response expected (paragraph, bulleted list, table, etc.) as well as the length helps shape the response. For example, "Make a list of the advantages and disadvantages of teleworking".
Disambiguation: Avoid vague or ambiguous wording to reduce the risk of receiving an irrelevant answer.
Tone and style: Specify tone (formal, informal, technical) or style (persuasive, descriptive) if relevant to the context.
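To make this concrete, here is a minimal sketch in Python of how those rules combine into a single structured request. The helper function, field names, and example values are my own illustration, not a standard API:

```python
# A minimal sketch: assembling the rules above (task, context, objective,
# format, tone) into one structured prompt. All names and values here are
# invented for illustration.

def build_prompt(task: str, context: str, objective: str,
                 output_format: str, tone: str) -> str:
    """Combine the elements of a clear request into a single prompt."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Objective: {objective}\n"
        f"Expected format: {output_format}\n"
        f"Tone: {tone}"
    )

print(build_prompt(
    task="Explain how AI improves productivity in business",
    context="A presentation to my boss, who is not technical",
    objective="Convince him to fund a small pilot project",
    output_format="A bulleted list of five points, one line each",
    tone="Formal but accessible",
))
```

Read the arguments again: they are exactly the questions an employee would otherwise have to guess at when the instruction arrives as a single line.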
I'm sure others can provide a much more precise technical guide to writing the perfect prompt, but my deep conviction is that if, instead of giving a terse one-line instruction, the manager took the time to give the context, spell out his expectations in terms of content and form, his intentions, and any constraints, he'd be far less often disappointed by the work of his staff.
The author of the article said we should treat the AI as a new member of the team, but in a way we should perhaps treat certain employees as AIs, especially the least experienced ones and those with whom we have the least shared history.
But whatever the frame of reference we use, the idea is that, between human beings, we often overestimate either the ability of others to guess our thoughts or our own ability to be clear.
Lessons from agility
I was referring to ChatGPT because it’s a trendy subject, but in the past I’d been inspired by something older when it came to formalizing instructions: user stories.
They are short descriptions or scenarios expressing the needs of a user or customer in a project, most often in software development, focused on the added value a feature or service should bring.
Although I was first exposed to these concepts in the development industry, I quickly realized that they could be applied to other areas.
When I was setting up a continuous improvement approach that had nothing to do with development, I identified agility as a condition for the approach to be accepted by the people it would affect: moving forward quickly in small iterations, building the target vision together.
But when it came to distributing tasks to everyone (including myself), it was important that we were truly aligned on expectations. As all the people involved practiced agility in their jobs, we logically ended up with a formulation close to that of a user story, even though the objective was not to develop anything but rather to change processes and aspects of organization and working methods.
For those unfamiliar with the concept, let me remind you of the characteristics of a user story; an example follows the list.
User-centered perspective: They are written from the point of view of the end user (customer, employee, etc.) to ensure that the end result meets his or her real needs.
Simple and concise: User stories are intentionally short and clear, without going into technical detail. They describe what the user wants to achieve and why it’s important to them.
Focused on business value: The emphasis is on the value that the feature or enhancement will bring to the user or organization.
Iterative and evolving: User stories are often added to or adjusted as the project progresses. They evolve as the team develops a better understanding of the user’s needs and priorities.
Acceptance criteria: Each user story is usually accompanied by acceptance criteria, which are specific conditions that must be met for the story to be considered complete.
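To make this concrete, here is what the executive's trip from earlier might look like as a user story (the details are invented for illustration): "As a traveling executive, I want my return train booked after my afternoon client meeting ends, so that I don't have to choose between the meeting and the train." Acceptance criteria: outbound departure no earlier than 8 a.m., hotel within walking distance of the meeting venue, total cost within the travel policy.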
This may sound very formal, but it has the merit of being exhaustive, and if you formulate a request in this way, the risk of getting the opposite of what you expected is low, whatever the domain you apply it to.
But as much as we intuitively feel that it's up to us to make the effort to be understood by an AI, and that a project with multiple stakeholders from different professions needs a simple but structured way of formulating expectations, when we address another person we tend to toss them the hot potato without making any effort and, unsurprisingly, we're often disappointed by the result.
Conclusion
For lack of time, respect, or attention, or for whatever other reason, too many managers are hasty in their requests, even if it means being misunderstood, being disappointed by the result, and ultimately passing the blame on to others.
Or maybe they just don’t know what they want and hope that someone will find it for them?
Whatever the case, an effort must be made to ensure that instructions and requests are formulated clearly and unambiguously.
I’m not saying that if you can write a good prompt you’ll be a good manager, but I doubt that someone who can’t will be able to make themselves understood and give clear instructions.
Image: manager using ChatGPT by Celia Ong via Shutterstock