Cheating, Institutional Knowledge, and AI Boundaries: Lessons for Leaders

by Hannah Brown | Oct 31, 2025 | Leadership

[Image: Three people smile while using the “Failure Toy” during a team session, exploring lessons from failure to build psychological safety and a learning culture. The exercise was led by Hannah Brown.]

I recently facilitated a two-day workshop with a group of leaders who were finishing an 18-month development program. They had been developing skills in adaptability, financial governance and problem-solving, and exploring how they each approach leadership. We wanted an experiential activity that pulled together themes from the entire program, and agreed the Failure Toy would work beautifully.

I usually use the Failure Toy to help leaders learn about psychological safety and create a culture of learning in their teams. For teams to learn and grow, they need to feel comfortable taking risks, experimenting and failing. This relies on people feeling safe. Amy Edmondson and others have defined psychological safety as:

“Psychological safety is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns or mistakes.”

The Failure Toy is versatile, though: it works as a focus for creating psychological safety, and it also worked well here, bringing together the threads of this larger program.

In the lead-up to the activity, we discussed failure:  

  • How do you define failure? 
  • What emotions do you associate with failure, and what does that reveal about your relationship with failure?  
  • When is failure helpful?  
  • How can you get better at failure? 

One leader had an almost visceral reaction: “I never fail. It’s just… unacceptable!” This led to a great discussion on failure versus making mistakes versus experimenting. How are they different, and are some more ‘okay’ than others?

The objective of the activity is to balance pieces on a wheel so the structure remains steady for three minutes. Teams with free-standing structures earn points, and the team with the tallest free-standing structure earns extra points.

In the first round, the leader who never fails was in a group whose structure fell over; they received zero points. A great example of failure. In round two, this team spread two tables apart and placed the wheel in the gap, which stabilized it. Was this cheating? One team member had a sheepish grin and looked noticeably uncomfortable. Other teams noticed and called the team out. Perhaps the vocal leader who never fails couldn’t bear another round with zero points.

In the debrief that followed, we talked about what constitutes cheating.  

  • Is cheating something to avoid when it impacts others, but okay if it’s isolated or has a limited impact?
  • Is it cheating only when there’s an explicit rule that’s being broken? 
  • What about ‘loopholes’? Is it okay to bend the rules?
  • If we loosen the definition of cheating and consider ‘loopholes’, do we encourage creativity and innovation?  

How do you respond to mistakes on your team? Are they perceived as failures that elicit strong emotional responses, or viewed as experiments and something to learn from?

As we continue to navigate constantly changing work environments, our teams need to adapt, learn and grow. To do this, they need to feel comfortable taking risks, experimenting and failing. We need more, not less, psychological safety on our teams.


[Image: A turtle rests on a sunlit log in a calm, reflective marsh surrounded by green reeds, symbolizing the importance of slowing down to observe and learn from our environment.]

Every year, I go camping with my daughter for her birthday. This year, I noticed that not all the taps in the wash house were functioning. Some worked great, others produced a trickle of water, and some nothing at all. As I was brushing my teeth, I overheard a conversation between two other women. One had worked at the campground years ago and said it used to have a plumber, who had since retired. Ever since, no one has been able to figure out how the plumbing works. No one knows the nuances of how the pipes run underground or how the taps are configured. So taps become leaky and stop working altogether.

For years, I’ve been working with the Facilities department at the Region of Waterloo. These are the people who work behind the scenes to make sure lights are on, roofs (and taps) aren’t leaking, and sidewalks at Region buildings are cleared of snow. They make sure police vehicles are properly outfitted and get regular oil changes so they function at top performance. We’ve created onboarding and development programs for their staff, and part of the initial challenge, as at the campground, was capturing all the knowledge existing staff have and documenting it for new folks.

It feels like there’s a paradox right now. Some industries are in a recession and shedding jobs. Others are still scrambling to hire top talent with the skills to help organizations navigate the impacts and implementation of AI. Amid all of this, it’s important to recognize the value of institutional knowledge: how the plumbing pipes are configured, the exception process for an incorrect financial payment on a stock purchase, or the steps a hospital takes to prioritize patients during an emergency when ICU beds are scarce.

Yes, AI can capture the data and document the processes, but how can we leverage humans to pass on their experience to the next generation of employees? What success have you had in setting up mentoring programs between senior and more junior employees?


In my previous newsletter, I wrote about what AI can do and introduced the concept of ‘AI Agents’ – autonomous systems that can reason, plan and act to complete tasks or even entire workflows. I shared the idea that employees need to become AI Bosses, shifting from doing the work themselves to managing and guiding these agents. To define the boundary between what AI Agents can do and what people can do, I touched on breaking roles into tasks, the ROI of agents versus people doing the work, and the need to balance routine and non-routine tasks.

This month, I want to focus on what AI should do: how we set limits and decide what’s appropriate. This relates to human preferences. Often we don’t have strong feelings about whether a human or an AI produces an outcome, but we care deeply about the process.

My father-in-law was diagnosed with bladder cancer in early 2024. His doctors have been fabulous as he received chemotherapy, had surgery to remove his bladder, and now receives immunotherapy. AI could have correctly diagnosed his cancer, but it’s the human relationships with his doctors that have made such a difference in his experience. Of course, he wanted the correct outcome, an accurate diagnosis and treatment plan, which AI could have provided. But AI would have fallen woefully short in the process: guiding him through the treatments.

Process Preferences

How something is done matters to us. Our process preferences generally fall into three categories:

  • Aesthetic – We value the beauty and craftsmanship of something created by human hands.
    In 2019, Dave, our two kids and I went to Italy. I remember crowding in with all the other tourists, captivated by Michelangelo’s “David”. I had read that Michelangelo felt David was already in the large block of marble; it was his job to carve the stone and reveal him.
    [Image: A large crowd gathers in Florence’s Galleria dell’Accademia to view Michelangelo’s marble statue of David, standing tall under a domed ceiling, capturing the awe of visitors drawn to this iconic symbol of human potential and artistic mastery.]
  • Achievement – We admire the effort and skill behind human accomplishments. AI can easily win chess matches, even against chess masters. But we value the effort of a chess master’s long history of learning the game, and we value their victories more than we value AI’s.
  • Empathy – We seek emotional connection and support, especially in moments that matter. This is worth a closer look because empathy has different components.
    • Cognitive empathy – This is the ‘knowing’ part of empathy, or perspective taking. It’s the ability to logically understand another person’s mental state, thoughts and feelings without necessarily feeling them yourself. AI can be trained in cognitive empathy by analyzing patterns and language; it can respond appropriately and ‘say’ the right things, but it doesn’t ‘feel’.
    • Affective empathy – This is the ‘feeling’ part of empathy: the ability to share and mirror the emotional experience of another person. Affective empathy is uniquely human. AI can’t be trained to feel and will never be able to experience emotions.
    Let me nerd out for a moment. There are three moments in Terminator 2: Judgment Day that highlight this distinction.
    • Early in the movie, there’s a scene where young John Connor tears up in the back of a car and the Terminator asks, “What’s wrong with your eyes?”
    • Later in the movie, there’s a scene where John tries to explain why people cry.
    • At the end of the movie, in the final scene, the Terminator tells John that he now knows why humans cry, but it’s something he can never do.

I’ve shared in the past that I love the Terminator movies. Thanks for reading along and indulging my fascination.

Moral Preferences

Another area where we set limits on what AI should do relates to our morals. In addition to process preferences, we care about the morality of what AI does. Moral limits are based on our beliefs about what is right and wrong, and about what should remain under human control, even when AI is technically capable of doing the task.

For example, most of us agree that decisions about life and death or the use of weapons should never be fully automated. Similarly, many believe that teachers should remain in classrooms, not because AI can’t deliver content, but because education is about human connection, mentorship, and values.

These boundaries reflect a deeper principle: some roles require human judgment, accountability, and presence – qualities that technology can’t replicate.

So, the question isn’t just “Can AI do this?” but “Should AI do this?” Where do you draw the line?

As a senior leader in HR, how do you guide your organization in defining what AI should be doing? How have you defined boundaries that align with your organization’s objectives and human values?


I’ve shared some additional posts online. Here they are, in case you missed them.  

  • The Bridge to Real Learning (video link)
  • Lead, Learn, Grow conversation cards: Emotional Intelligence (video link)

Hannah Brown

I help close the gap between formal training from learning and development and leaders fostering learning on their teams, embedding it into their DNA.