
The Blind Spot Coaches Miss—Until AI Shows Them

In 2019, WeWork's Adam Neumann sat across from his executive coach in what would be one of their final sessions before the company's spectacular IPO implosion. The coach later told The Wall Street Journal they'd spent months working on Neumann's "vision communication" and "stakeholder engagement." Meanwhile, three board members had already spotted what the coaching engagement never surfaced: Neumann's decision-making showed sophisticated pattern recognition in real estate deals but collapsed into magical thinking when integrating financial constraints, governance requirements, and market timing—all at once (Brown & Farrell, 2019; 2021).

The coach wasn't incompetent. The board wasn't clairvoyant. The difference was what they were looking at—and where they were looking.

Executive coaching works. Meta-analyses confirm it reliably improves performance, well-being, and goal attainment (Jones, Woods, & Guillaume, 2016; Theeboom, Beersma, & Van Vianen, 2014). Yet a troubling pattern appears in post-mortems of leadership failures: coaches often work on the presenting issue (communication style, delegation habits, executive presence) while the developmental issue—the precise capability ceiling that will fracture under next quarter's load—hides in plain sight.

Why? Because we're all vulnerable to inattentional blindness: when attention is occupied by one demanding task, we fail to notice unexpected but critical stimuli, even when they appear directly in our visual field (Simons & Chabris, 1999). For coaches, the "demanding task" is often the conversation itself—listening, tracking themes, formulating the next question. The "unexpected stimulus" is the micro-signal that the client just hit their developmental edge: the half-second pause before answering "How do you prioritize competing board requests?", the shift from confident to vague language when describing cross-functional trade-offs, the sudden jump to abstraction when a concrete example is requested.

Miss that signal, and you miss the Zone of Proximal Development (ZPD)—the sweet spot where the right scaffold or challenge unlocks the next level of capability (Wood, Bruner, & Ross, 1976). Ask a generic question, and you get a polished answer. Ask a domain-relevant, edge-calibrated question, and you get diagnostic gold.

Why Context Is the Difference Between Coaching and Guessing

Here's what the research shows: leadership capability is not a trait you have; it's a skill you construct—and that construction is exquisitely context-sensitive (Fischer, 1980; Dawson, 2002). A founder who demonstrates systems-level thinking when architecting a go-to-market motion may revert to binary, either-or reasoning when navigating co-founder conflict. A CFO who integrates five variables fluidly in capital allocation may freeze when those same variables appear in a people-planning conversation.

This isn't inconsistency; it's how human development works. We build skills in specific domains, and transfer is neither automatic nor complete (Fischer & Bidell, 2006). Yet most coaching assessments treat context as scenery—a backdrop to the "real" work of exploring values, strengths, or behavioral tendencies. The result: we assess leaders in the wrong situations and develop them for the last challenge, not the next one.

Three forces conspire to keep context invisible:

  1. Cognitive load narrows the attentional beam.
    Coaches juggle multiple tasks simultaneously: tracking the client's narrative, noting emotional shifts, holding the coaching framework, planning the next intervention. Each task consumes working memory, and under load, attention narrows to the most salient or expected cues (Lavie, 2005; Sweller, 1988). The unexpected cue—"Wait, she just described three strategic options but didn't mention how they interact"—gets filtered out.
  2. Motivated reasoning steers attention toward confirmatory evidence.
    Once a coach forms a hypothesis ("This leader needs to delegate more"), subsequent attention gravitates toward evidence that confirms it and away from disconfirming signals (Kunda, 1990; Nickerson, 1998). If the real issue is that the leader delegates appropriately in operations but under-delegates in strategy because they lack confidence integrating market, product, and financial variables simultaneously, the hypothesis blinds the coach to the domain-specific pattern.
  3. Organizational pressures prioritize performance over development.
    Sponsors want visible outcomes: faster decisions, better meetings, stronger exec presence. These are performance goals. But sustainable capability growth requires learning goals—and learning goals demand time, psychological safety, and tolerance for productive struggle (Payne, Youngcourt, & Beaubien, 2007). Under time pressure and role conflict (hindrance stressors), coaches narrow focus to the sponsor's KPIs, missing the developmental leverage hiding in domain-specific skill gaps (LePine, Podsakoff, & LePine, 2005).

The cost isn't just missed opportunities. It's mis-calibrated development. When you assess a leader's "strategic thinking" in a 360 survey or a personality instrument, you're measuring their reputation or preference, not their capability in context. You don't learn whether they can integrate six interdependent variables under time pressure in a domain where they lack pattern libraries. You don't discover that they can build elegant financial models but struggle to layer stakeholder politics onto those models in real time. And you certainly don't find the precise edge where a single well-placed question would unlock the next order of complexity.

The Anatomy of a Powerful Question: Domain, Order, and Edge

Not all questions are created equal. "What's your vision for the next quarter?" is pleasant. "Walk me through the last three times you said no to a board request—what variables did you weigh, and which did you park?" is diagnostic.

The difference lies in three dimensions:

Domain specificity.
Powerful questions anchor in the actual situations where the leader must perform. Not "How do you handle conflict?" but "In last Tuesday's exec team meeting, when the VP of Product and VP of Sales disagreed on roadmap sequencing, what did you notice, what did you decide, and what did you not decide?" Domain-specific questions surface the leader's constructed skill in that context, not their espoused skill in the abstract (Dawson, 2002).

Hierarchical order.
Development proceeds by increasing the order of hierarchical complexity—the number of variables a person can coordinate simultaneously (Commons, Trudeau, Stein, Richards, & Krause, 1998). A leader operating at "abstract mappings" can compare two abstractions (e.g., "We need both speed and quality"). A leader at "abstract systems" can integrate multiple abstractions into a dynamic model (e.g., "Speed, quality, and team capacity interact differently depending on whether we're in discovery, scaling, or optimization mode—and the board's risk appetite shifts the weighting"). Powerful questions invite the next order: "If you layered investor sentiment onto your capacity model, what trade-offs emerge that you haven't yet tested?"

Proximity to the edge.
Fischer's dynamic skill theory distinguishes between a person's functional level (what they do alone) and optimal level (what they do with appropriate support) (Fischer, 1980). The ZPD is the gap between the two. Powerful questions land just beyond functional—close enough to activate existing schemas, far enough to require new coordination. Too easy, and you get rehearsed answers. Too hard, and you get cognitive overload and shutdown. Right at the edge, and you get productive struggle—the engine of growth (Wood et al., 1976).

Most coaching questions miss on all three. They're generic (not domain-specific), they elaborate within the current order (not raising it), and they're pitched to yesterday's performance (not today's optimal-with-support level). The result: conversations that feel good but don't move the needle.

What Venture Capitalists Already Know (and Coaches Should Borrow)

The best VCs don't assess founders with surveys. They watch them in context. They sit in on a board meeting and note whether the founder integrates financial, product, and people trade-offs in real time or toggles between them sequentially. They listen to a customer call and track whether the founder updates their mental model mid-conversation or defends it. They observe a co-founder disagreement and see whether the founder can hold multiple valid-but-conflicting frames simultaneously or collapses into binary thinking.

In other words, they assess performance variability across situations—the signature of developmental level (Dawson, 2002; Fischer & Bidell, 2006). And they do it because they've learned the expensive way: a founder's capability in one domain (e.g., product intuition) does not predict capability in another (e.g., organizational design). Context is not noise; it's signal.

Now imagine bringing that same rigor to coaching. Instead of asking, "What's your leadership style?" you map the situations that matter—the recurring, high-stakes contexts where this leader's success or failure will be determined. Then you assess and develop in those contexts, using domain-relevant probes that reveal current order and edge.

This is not hypothetical. The DIAMONDS framework (Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, Sociality) and its organizational cousin CAPTION (Collaboration, Adversity, Power, Threat, Identity, Objectives, Novelty) provide a systematic map of the situations that activate different psychological systems and reveal different capability patterns (Buss, 2009; Lukaszewski et al., 2020). A leader may show integrated systems thinking in Collaboration contexts (cross-functional alignment) but revert to simpler mappings in Power contexts (board negotiations) or Threat contexts (competitive disruption). Without situational assessment, you'll never see the pattern—and you'll coach the wrong thing.
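To make the pattern-spotting concrete, here is a minimal Python sketch of the core idea—observed capability levels keyed by situation type, with contexts flagged wherever the level drops below the leader's peak. The level names are simplified from the hierarchical-complexity literature, and all data and function names are hypothetical illustrations, not the VectorLead API.

```python
# Ordered ladder of complexity levels (simplified; higher index = higher order)
LEVELS = ["single abstractions", "abstract mappings", "abstract systems", "meta-systems"]

# Hypothetical observations keyed by the CAPTION contexts used in this article
observations = {
    "Collaboration": "abstract systems",   # integrates cross-functional trade-offs
    "Power":         "abstract mappings",  # board talks collapse to pairwise trade-offs
    "Threat":        "abstract mappings",  # competitive shocks trigger binary framing
    "Objectives":    "abstract systems",
}

def capability_gaps(obs):
    """Return contexts where the observed level sits below the leader's peak."""
    ranks = {name: i for i, name in enumerate(LEVELS)}
    peak = max(ranks[level] for level in obs.values())
    return sorted(ctx for ctx, level in obs.items() if ranks[level] < peak)

print(capability_gaps(observations))  # -> ['Power', 'Threat']
```

The design choice mirrors the argument: capability is scored per situation, not as one global trait, so the output is a list of contexts to coach rather than a single number.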

A 20-Minute Protocol That Changes What You See

Try this in your next engagement:

Step 1: Map the situations (5 minutes).
With the client and sponsor, identify the 3–5 recurring, high-stakes situations where this leader's capability will determine outcomes over the next 6–12 months. Use CAPTION as a checklist: Which situations involve Collaboration across silos? Adversity or setbacks? Power dynamics with boards or investors? Threat from competitors or market shifts? Identity (culture, values)? Objectives (strategy, prioritization)? Novelty (new business models, markets)?

Step 2: Micro-diagnose in one situation (10 minutes).
Pick one situation. Ask the client to walk you through a recent real example in granular detail: "What did you notice? What variables did you consider? Which did you integrate, and which did you set aside? What would you do differently now?" Listen for order: Are they coordinating single variables, mapping pairs, or integrating systems? Listen for edge: Where does the explanation become vague, abstract, or defensive?

Step 3: Raise order by one step (5 minutes).
Craft a question that invites the next level of complexity in that domain. If they're mapping pairs ("We balanced speed vs. quality"), invite a system ("If you layered team capacity and investor expectations onto that trade-off, what dynamic emerges?"). If they're integrating a system, invite a meta-system ("How would you explain to your board why that system works in this market but might fail in the next?").
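Step 3 can be sketched as a lookup: given the order a client demonstrated in Step 2 and the situation mapped in Step 1, pick a question stem pitched exactly one level up. This is an illustrative sketch under stated assumptions—the level ladder is simplified, and the question stems and function names are hypothetical, not a prescribed script.

```python
# Simplified ladder of hierarchical-complexity levels (higher index = higher order)
LEVELS = ["single abstractions", "abstract mappings", "abstract systems", "meta-systems"]

# Hypothetical question stems per *target* level; {domain} is the Step-1 situation
STEMS = {
    "abstract mappings": "In {domain}, which two considerations pull against each other most?",
    "abstract systems": "If you layered a third variable onto that trade-off in {domain}, what dynamic emerges?",
    "meta-systems": "How would you explain why that system works in {domain} but might fail elsewhere?",
}

def next_order_question(observed_level, domain):
    """Return an edge question pitched one order above the observed level."""
    i = LEVELS.index(observed_level)
    if i + 1 >= len(LEVELS):
        return None  # already at the top of this simplified ladder
    return STEMS[LEVELS[i + 1]].format(domain=domain)

print(next_order_question("abstract mappings", "roadmap sequencing"))
# -> If you layered a third variable onto that trade-off in roadmap sequencing, what dynamic emerges?
```

The point of the sketch is the calibration rule itself: the question targets `observed + 1`, never the coach's favorite level—too low yields rehearsed answers, too high yields overload.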

Notice what happens. The client pauses. They think. They often say, "Huh—I haven't thought about it that way." That pause is the ZPD. That's where growth lives.

The Blind Spot You Didn't Know You Had

Here's the uncomfortable truth: if you're coaching without systematic situational assessment, you're operating partially blind. You're seeing the leader's espoused capability (what they say they do), not their constructed capability (what they actually coordinate in high-stakes moments). You're asking questions that feel incisive but land in the wrong domain or at the wrong order. And you're missing the micro-signals—the pauses, the vague language, the topic shifts—that mark the edge of their ZPD.

This isn't a character flaw. It's an attention problem, a motivation problem, and a tools problem. Under cognitive load, with motivated reasoning, and without situational diagnostics, even expert coaches default to generic questions and confirmatory listening (Lavie, 2005; Kunda, 1990; Simons & Chabris, 1999).

The solution isn't to try harder. It's to look differently. Situational assessment reveals what personality assessments and 360s cannot: the specific contexts where a leader's capability breaks down, the precise order of complexity where they hit their ceiling, and the domain-relevant edge questions that will scaffold the next level.

For venture capitalists evaluating founders, this is due diligence. For executive coaches developing leaders, it should be standard practice. And for leaders themselves, it's the difference between incremental improvement and genuine capability expansion.

A New Standard for Leader Development

The future of executive coaching will be won by those who solve the context problem. Not with more frameworks, longer engagements, or deeper rapport—but with precision. Precision in situational mapping. Precision in developmental diagnosis. Precision in edge-calibrated questioning.

VectorLead by TruMind.ai was built for exactly this. It combines situational assessment (using the DIAMONDS/CAPTION framework) with developmental scoring (tracking hierarchical complexity and domain-specific skill construction) to surface both the coach's and client's blind spots in real time. Instead of guessing where the ZPD is, you see it. Instead of asking generic questions, the platform frames powerful, domain-relevant questions tailored to the client's actual edge in the situations that matter. And instead of hoping transfer happens, you design for it—by developing capability in the contexts where it will be used.

The result: coaching that doesn't just feel good—it moves the needle. Because you're finally looking at the right thing, in the right place, at the right time.


References

Brown, E., & Farrell, M. (2019, December 14). The money men who enabled Adam Neumann and the WeWork debacle. The Wall Street Journal. https://www.wsj.com/articles/the-money-men-who-enabled-adam-neumann-and-the-wework-debacle-11576299616

Brown, E., & Farrell, M. (2021). The cult of We: WeWork, Adam Neumann, and the great startup delusion. Crown Publishing Group.

Buss, D. M. (2009). How can evolutionary psychology successfully explain personality and individual differences? Perspectives on Psychological Science, 4(4), 359–366. https://doi.org/10.1111/j.1745-6924.2009.01138.x

Commons, M. L., Trudeau, E. J., Stein, S. A., Richards, F. A., & Krause, S. R. (1998). Hierarchical complexity of tasks shows the existence of developmental stages. Developmental Review, 18(3), 237–278. https://doi.org/10.1006/drev.1998.0467

Dawson, T. L. (2002). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3(2), 146–189.

Fischer, K. W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87(6), 477–531. https://doi.org/10.1037/0033-295X.87.6.477

Fischer, K. W., & Bidell, T. R. (2006). Dynamic development of action, thought, and emotion. In R. M. Lerner (Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human development (6th ed., pp. 313–399). Wiley.

Jones, R. J., Woods, S. A., & Guillaume, Y. R. F. (2016). The effectiveness of workplace coaching: A meta-analysis of learning and performance outcomes from coaching. Journal of Occupational and Organizational Psychology, 89(2), 249–277. https://doi.org/10.1111/joop.12119

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480

Lavie, N. (2005). Distracted and confused? Selective attention under load. Trends in Cognitive Sciences, 9(2), 75–82. https://doi.org/10.1016/j.tics.2004.12.004

LePine, J. A., Podsakoff, N. P., & LePine, M. A. (2005). A meta-analytic test of the challenge stressor-hindrance stressor framework: An explanation for inconsistent relationships among stressors and performance. Academy of Management Journal, 48(5), 764–775. https://doi.org/10.5465/AMJ.2005.18803921

Lukaszewski, A. W., Lewis, D. M. G., Durkee, P. K., Sell, A. N., Sznycer, D., & Buss, D. M. (2020). An adaptationist framework for personality science. European Journal of Personality, 34(6), 1151–1174. https://doi.org/10.1002/per.2292

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Payne, S. C., Youngcourt, S. S., & Beaubien, J. M. (2007). A meta-analytic examination of the goal orientation nomological net. Journal of Applied Psychology, 92(1), 128–150. https://doi.org/10.1037/0021-9010.92.1.128

Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059–1074. https://doi.org/10.1068/p281059

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4

Theeboom, T., Beersma, B., & Van Vianen, A. E. M. (2014). Does coaching work? A meta-analysis on the effects of coaching on individual level outcomes in an organizational context. The Journal of Positive Psychology, 9(1), 1–18. https://doi.org/10.1080/17439760.2013.837499

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x