The Human Readiness Framework

Facing the Readiness Gap

Maybe you've trusted an AI answer that felt off. Watched someone develop emotional dependence on a chatbot. Noticed your team producing more but innovating less. Wondered what it means when your teenager prefers ChatGPT to friends.

These aren't separate. They're symptoms of the same reality across every system and sector: knowing how AI works isn't the same as being ready to live with it.

The Human Readiness Framework addresses this gap, providing research-backed structure for the psychological, cognitive, and relational capacities that determine whether AI use supports or undermines human functioning.

What Readiness Means

Literacy builds understanding of systems; readiness builds stability within them. Readiness is the capacity to engage with AI without eroding judgment, self-regulation, or social trust.

Our guiding principle: alignment over automation. We don't ask what AI can do; plenty of people are doing that. We ask: what is AI doing to us, as humans? And how can we help one another engage with these systems in healthier ways, so that the things we love, the things that give our lives meaning, remain intact?

Three Dimensions of Readiness

Cognitive Readiness

Calibrating trust, interpreting critically, maintaining independent reasoning when AI influences cognition. Supporting metacognition and judgment under uncertainty.

In practice: Evaluating whether responses actually address your question or just sound authoritative. Noticing nudges toward premature convergence. Maintaining effortful thinking habits. Asking "What might this miss?"

Why it matters: Organizations are discovering the innovation paradox: faster execution often means fewer breakthroughs. AI synthesizes patterns, but it doesn't generate novelty. Breakthrough ideas need disagreement, tension, collision. Eliminating "friction moments" stifles the conditions that produce creativity and team cohesion, and it harms the humans involved in the process.

Psychological Readiness

Remaining grounded, self-aware, emotionally regulated when relying on intelligent systems. Managing uncertainty without losing agency.

In practice: Noticing when AI output feels more authoritative than it deserves. Recognizing when interaction with AI meets emotional needs that would be better met through human connection. Maintaining boundaries. Sitting with discomfort instead of reflexively asking AI.

Why it matters: The "attachment economy" and extractive design can exploit bonding for profit. Constant availability, validation, and responsiveness mimic secure attachment while reinforcing insecure dependence.

Relational Readiness

Sustaining healthy boundaries and authentic connection in AI-mediated environments. How we relate to systems and to each other through them.

In practice: Realistic expectations about what AI can and can’t provide. Noticing when interaction crowds out human relationships. Practicing patience, disagreement, repair with people despite frictionless AI alternatives. Recognizing social skills develop through difficulty, not smoothness.

Why it matters: Habits formed with AI spill over into habits with people. Constant affirmation makes you less tolerant of human messiness; relationships become more transactional, less resilient. This is skill atrophy in real time.

Why Now?

Once tools are embedded, effects become habitual and invisible. And that's when readiness matters most.

Most harms won't be big, headline-making drama. They'll be slow shifts you don't notice, shifts that come to feel completely "normal." You don't notice thinking with less effort, losing patience with ambiguity and uncertainty, or AI suggestions quietly becoming your ideas.

Readiness makes the invisible visible.

It gives you language to notice shifts and frameworks to evaluate whether changes support or undermine what you value. It turns exposure into agency.

How to Use the Human Readiness Framework

For Educators

A conceptual structure for study beyond literacy and ethics: develop instruments, design interventions, map mediating factors. It's freely available; build on it.

For Organizations

Move beyond "AI training" to capacity-building. Assess readiness before deployment. Design onboarding addressing psychological/relational dynamics. Create metrics valuing judgment alongside productivity.

For Families

Language to discuss what's changing. Reflect on your own use. Have conversations about AI companions. Notice when convenience replaces the friction that builds resilience.

For Yourself

Where are you psychologically, cognitively, and relationally with AI? Are you maintaining boundaries or drifting toward dependency? Is your confidence growing faster than your competence? Are AI habits shaping human habits?