Future-Ready Leadership: Building Safe Teams, Embracing AI, and Knowing When to Step Back

by | Sep 30, 2025 | Leadership

I lead workshops for leaders and teams on psychological safety, helping them create a culture of learning and growth. Invariably, the discussion turns to teams and how their dynamics form. I draw Tuckman’s model of group development, which outlines four stages:

  • Forming – Group members get to know each other, establish initial relationships, and seek clarity on their roles and goals.
  • Storming – The group experiences tension and conflict as members assert opinions and challenge one another’s approaches.
  • Norming – The group begins to find rhythm and cohesion, establishing shared norms, trust, and effective ways of working together.
  • Performing – The team functions at a high level, collaborating with confidence, independence, and a strong sense of purpose.

Note that in later years, Tuckman added another stage, Adjourning, in which the team prepares to disband, reflecting on achievements, celebrating successes, and transitioning responsibilities or relationships.

While Tuckman’s stages are presented linearly, it’s commonly understood that the process isn’t linear and is rarely as distinct as the stages suggest. Teams pass through different stages and may get ‘stuck’ in one stage and never advance. Psychological safety is one of the factors that influence whether and how a team progresses.

A simple line diagram showing the five stages of group development over time: Forming, Storming, Norming, Performing, and Conforming.

As I illustrate in my workshops and above, psychological safety influences the norms the group creates. To move to high performance, the group members need a sense of belonging and the safety to learn and grow through asking questions and making mistakes. Group members need to be able to contribute to the group without feeling penalized or reprimanded for their ideas. Finally, in the highest performing teams, members can challenge the status quo. When these components of psychological safety are present, teams reach the ‘performing’ stage. When these are missing, people conform. They develop norms that reinforce storming and dysfunctional behaviour. Individuals do the minimum that’s needed and conform to the basic requirements. Over time, team performance looks like this.

The same Tuckman model diagram with an added orange arrow pointing from Norming to Performing, labelled “Psychological Safety,” emphasizing its role in team performance.

At the end of my workshops, I invite participants to identify commitments for how they’ll apply what they’ve learned. About four to six weeks after the workshop, I facilitate an Impact session to check in and ask about their progress on their commitments. I facilitated one of these last week, and the participants talked about the apathy they’re seeing. They work with teachers in the United States, and they talked about how this summer has been unique in that the teachers they support are more withdrawn, less enthusiastic, and indifferent to starting up their school year in the fall. There’s a lot going on for these teachers, and I wouldn’t attribute all of their apathy to a lack of psychological safety, but the participants and I agreed that it is a contributing factor.

What are the norms in your team, and which contribute to performing vs conforming behaviour? 

So often when I read about AI, I oscillate between excitement about what it can do – the novelty and ease of a thinking partner to organize my thoughts – and concern, or even fear, about the change that is upon us and the impact AI will have on our jobs and our children’s jobs.

The fear stems from the idea of “encroachment” – that technology or robots will gradually and relentlessly take over tasks that are currently performed by people.

If you imagine the future, do you see a version of Terminator where computers have taken over the world and humans are on the brink and fighting back? I have loved the Terminator movie series since it came out in the 1980s. But not until recently have I felt that this could be our future.

Or are you more hopeful about the future and see the opportunities that AI brings? Do you see a future where people work alongside robots or “AI Agents”? An AI Agent is a system that can reason, plan and act to autonomously complete tasks or entire workflows. AI Agents are already encroaching on the work people currently do and will continue to do so. The opportunity is for people, for employees, to become “AI Bosses”, where they become managers of one or more AI Agents. It’s a new concept and a role that provides oversight and parameters for AI Agents. Employees need to develop an AI Boss mindset so they can delegate, guide, and oversee AI with confidence. It’s like managing a person, but it’s technology. It’s a shift from doing the work to orchestrating it. Developing skills to become AI Bosses allows us to avoid a Terminator-type future.

If we have AI Agents working alongside people, and when employees have embraced becoming AI Bosses, there’s still the question of how much AI Agents do – how far do they encroach on the work people do?

[1] This was inspired by two articles: 2025: The Year the Frontier Firm is Born by Microsoft (link) and What Will Remain for People to Do? by Daniel Susskind (link)

What can AI do?

There are a few perspectives through which to understand encroachment and what’s appropriate for AI Agents to do.

First and foremost, we need to view employees’ work as tasks, not roles. This allows us to identify the tasks that can and should be delegated to AI.

If I go back to my roots in instructional design, and recall the I4PL Competencies, there is Task Analysis as part of the Assessing Performance Needs competency. When doing a task analysis, instructional designers itemize and sequence all the steps to perform a larger task. In a learning context, this becomes the building blocks for a step-by-step job aid, or a course. In an operational context, this becomes a workflow or standard operating procedure.

Identifying all the employee tasks across an entire organization is obviously a significantly larger undertaking, but the core concept remains the same – break down roles into tasks and sub-tasks.

Historically, we’ve focused on identifying routine and non-routine tasks to determine which tasks AI should do. AI can perform routine, repeatable tasks – for example, generating standard reports by pulling data from systems and creating a summary or performance dashboard – while people are better suited for non-routine tasks. This school of thought draws a fixed boundary between the tasks AI can and can’t do, as shown below.

Diagram titled “Task-Based Approach” showing routine tasks (done by AI) and non-routine tasks (done by people) separated by a solid vertical line labeled “Fixed Boundary”—highlighting the rigid divide between the tasks AI can and can’t do.

However, as AI models advance, AI Agents can be trained to also complete non-routine tasks. For example, designing a marketing strategy or coaching an employee.

Diagram titled “Task-Spectrum Approach” showing tasks arranged along a spectrum from routine to non-routine, divided by a dashed vertical line labeled “Flexible Boundary”—illustrating how the line between AI and human tasks can shift over time.

The boundary between what people do and what AI Agents do is flexible and changes over time as AI Agents are trained on increasingly complex, non-routine tasks.

The opportunity for encroachment remains alive and well – as does my concern for my kids’ career prospects and futures.

Task Return on Investment (ROI)

Task ROI is another criterion for determining which tasks AI Agents complete, how far encroachment goes, and where to draw the line in the task-spectrum approach above. The ROI of a task considers both the cost of automating it and the productivity gains it delivers, helping organizations decide whether automation creates real value. Even if AI can technically perform a task, it only makes sense to automate when the efficiency gains outweigh the cost and complexity of doing so. For example, there are robots that can fold laundry more productively than humans, but they’re so expensive that it’s still cheaper to have a person do it. The same principle applies to knowledge work: the decision isn’t just about capability, but about value.
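To make the trade-off concrete, here is a minimal sketch of the reasoning in code. The function names and all the dollar figures are illustrative assumptions, not numbers from this article or any real automation pricing:

```python
def task_roi(hours_saved_per_year: float,
             hourly_labour_cost: float,
             automation_cost: float) -> float:
    """Return ROI as (annual savings - automation cost) / automation cost."""
    savings = hours_saved_per_year * hourly_labour_cost
    return (savings - automation_cost) / automation_cost


def should_automate(hours_saved_per_year: float,
                    hourly_labour_cost: float,
                    automation_cost: float) -> bool:
    """Automate only when the task's ROI is positive."""
    return task_roi(hours_saved_per_year, hourly_labour_cost, automation_cost) > 0


# A routine reporting task: 200 hours/year saved at $50/hour,
# $6,000 to automate -> $10,000 in savings exceeds the cost.
print(should_automate(200, 50, 6_000))   # True

# The laundry-folding robot: modest savings, very expensive machine ->
# $2,000 in savings doesn't come close to a $30,000 price tag.
print(should_automate(100, 20, 30_000))  # False
```

The point of the sketch is the shape of the decision, not the numbers: capability ("can AI do it?") is a separate question from value ("is it worth automating?").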

We’re left with an understanding of AI Agents’ task encroachment and the mindset shift people need to make to become AI Bosses.

In Part 2 of this series, I will examine the tasks AI should do. There are lots of tasks AI Agents can do, even when we factor in ROI, but there are arguments for designating some tasks that AI shouldn’t do.

In Part 3, I’ll focus on the critical role Human Resources and Learning and Development play in supporting employees and organizations in getting ready for AI and through the implementation process.


Two people in red life jackets paddle a canoe through calm water surrounded by lush greenery. Purple wildflowers bloom along the shoreline, with dense evergreen trees in the background under a clear blue sky.

I went camping with my daughter and her friends this summer. It’s an annual tradition that we started when she was eight. She’s 17 now! As the summers have gone by, I’ve intentionally stepped back and tried to have a smaller, less hands-on role so they can become more independent and take on more responsibility. This year, they did all the packing, which means that we didn’t have a towel, toothbrush, or maple syrup for pancakes. The chicken for the tin foil dinners remained in the freezer at home.

They’re all in the process of getting their driver’s license, which opens up the possibility of camping on their own. They talked about trying a short canoe trip to a remote site. I had visions of them forgetting to hang their food and having their campsite visited by raccoons (hopefully not bears) who eat all their meals.

I think parenting is a lot like leadership. We want to lead our team members to work independently and function autonomously so they can excel and thrive in their work, freeing leaders to focus on leading. To do that, leaders need to step back and allow them to experiment, make mistakes, and learn from failures. It’s a balancing act. When should leaders step in, and when should they step back? I recently facilitated a workshop with leaders on curiosity and how it drives growth and innovation. To encourage curiosity, though, leaders need to ask questions instead of providing answers. We talked about creating space for curiosity, asking questions, and experimenting. To find the balance between stepping back and stepping in, I think of puddles.

If the employee steps into a puddle and isn’t going to drown, the leader’s role is to ask questions so they can figure it out. If the employee could drown, the leader needs to be more hands-on.
A large puddle of murky water covers a section of sidewalk and asphalt curb, partially submerging interlocking bricks. Fallen leaves and green grass border the water's edge, with overcast skies reflected in the surface.

When camping with my daughter, the deep puddles are raccoons eating our food. So, I made sure all the food and garbage were in the trunk before we settled down for the night. The shallow puddles were maple syrup and frozen chicken.

What are the shallow puddles where your employees can step in alone to experiment and grow?

I’ve shared some additional posts online. Here they are, in case you missed them.  

  • HR’s Strategic Role in AI Implementation (video link)
  • Curiosity to drive innovation and growth (video link)
  • Conversation card: Learning from success and failure (video link)

Hannah Brown

I help close the gap between formal training from Learning and Development and leaders fostering learning on their teams, embedding it into their DNA.