Mastering the Art of Knowledge Flow for Breakthrough Innovation

The hidden costs of relying on AI in knowledge work

It’s almost 11 PM. Sarah, a senior partner, is still in the office. She’s polishing a brief for tomorrow’s client meeting, reflecting on the strategic angles, the choice of words, and the narrative framing. Her younger colleagues, including Alex, who works with her on that topic, have already gone home. Different rhythm, different mindset. For these juniors, it often seems like AI gives them the power to have everything at hand. A surface-level investigation appears to be enough. Getting a quick answer has become the norm. Often, it feels like a copy-paste culture. Sarah sees it in her inbox. Same style. Same phrasing. Nothing personal. And it frustrates her. More than once, she has to correct basic mistakes in the documents they send. Instead of saving time, she spends it fixing what should have been carefully thought through. They rely on their AI prompting skills more than on acquiring and developing knowledge. Sarah? She uses AI too, but only after thinking through the core arguments herself. She has enough tacit knowledge to challenge what GenAI suggests, using it more as a sparring partner for brainstorming than as a source of final answers.

She remembers a time when preparing a case meant digging into dusty archives, reviewing annotations, cross-checking jurisprudence. She didn’t need to do it in the physical library like her predecessors, since she had digital access to many of those files. More knowledge was available, but she still had to search, compile, and combine information manually. The process was slower, but it taught her deeply. That kind of learning, where meaning is built through integration and reflection, is what cognitive psychologists call constructive learning. Slower, but richer. AI has changed all that. The work is faster, cleaner, more complete. But also more passive. She wonders what happens to knowledge when it becomes too easy to retrieve, too tempting to trust. Does this passivity come at a hidden cost?

In a world shaped by AI, will the next generation learn to think critically, or just to retrieve what seems right? 

We’ve been here before, but this time feels different

Forty years ago, my French teacher made us work in class on a newspaper article warning us about the dangers of television. It was a hand-cut piece of news, duplicated on one of those old rolling copiers; its inky smell still lingers in my memory. The article warned that TV would hijack our attention, replacing books and creative thought with passive consumption. Twenty years later, we faced another fear: the internet, declared to be the end of memory and deep thinking. A decade after that, smartphones shrank both TV and internet into our pockets, making us slaves to our screens even outside the walls of our homes. We were, according to the headlines, “doomed.”

Then came social networks, redefining what it means to connect, trading physical presence for digital signals. And video games? They were accused of hijacking teenage brains through addictive reward loops.

See? Nothing new. Each wave of technology sparked its share of fears and ink (when ink was still on paper, not e-ink on screens). We laughed, we adapted. Were the fears overblown? Often, yes. But they weren’t wrong. We did change, maybe not always for the better, but undeniably, our daily lives shifted.

Today, however, the technology feels different. It has permeated every layer of life in under two years, an unprecedented velocity. The world hasn’t had time to digest, to assess. We’re not adapting; we’re being swept along, urged to embrace before we understand.

What sets AI apart is its illusion of understanding. Unlike older technologies that merely broadcast, entertain, or connect, AI performs reasoning. It adopts a human tone through natural language processing (NLP), mimics logical structure, and outputs answers that feel coherent and intelligent. It creates what Luciano Floridi calls ‘cognitive realism’: the sense that the system actually knows something.

And that’s where the shift happens. When a machine sounds human, we start treating its output as thought. Especially for those still forming judgment, this is dangerous terrain. We stop questioning. We skip the struggle. We confuse confidence with correctness. It’s not just that AI answers faster; it starts to shape what kinds of questions we even dare to ask.

This technology saves us time, but at what cost to our cognitive development?

The changing nature of knowing

As Firestone highlighted in his KM theory, creating knowledge isn’t about retrieving ‘prepackaged’ answers. It’s about the deeper process of formulating, validating, and applying insights to lived practice. This is the difference between knowing something and knowing how to use it meaningfully in context. Of course, this process still exists today, and many professionals, researchers, and thinkers continue to create knowledge in meaningful, rigorous ways. But this habit of thinking could erode if replaced by tools that only simulate understanding.

AI doesn’t truly create knowledge. It assists, mirrors, and accelerates workflows, but it doesn’t grasp meaning. As Luciano Floridi articulates, AI can act, but it lacks the grounding, the awareness, and the intentionality of human cognition. It generates plausible outputs without contextual depth. You need to challenge those answers with your expertise, with a cognitive mindset trained through years of capturing, digesting, and processing information. This is not an obsolete practice; it’s a mindset we must preserve and pass on. If we don’t, we risk raising a generation that knows how to prompt, but not how to think.

Take a junior lawyer using AI. The machine drafts a solid brief. The law is correct. The tone? Off. The context? Missed. No awareness of the judge’s background, the company’s history, or the cultural nuance. Polished output. Poor judgment. And Alex believes he’s done. Alex must understand that we expect more from his way of thinking: his ability to analyze, judge, and reflect. It’s not just about producing output; it’s about demonstrating the cognitive mindset that this profession demands.

The risk of surface-level thinking

AI excels at surface synthesis. For those with judgment, it’s a boost. For those still building judgment, it’s a risk. You get coherence without confrontation. That friction, the messy part of learning, is where real insight comes from.

But something else is at risk too. AI may also replace the human and personal interactions that are essential to the ‘melting pot’ of tacit knowledge. Junior professionals may start relying more on AI because it feels safer, less judgmental, more efficient. They might feel they no longer need guidance from senior colleagues. But that interpersonal tension, those debates and mentoring conversations, are what build professional identity and critical depth. Lose that, and we don’t just lose knowledge; we lose the will to pursue it together.

In recent years, I’ve observed the quiet fading of the professional communities we once called ‘tribes’, those vibrant, informal networks where peers challenged and inspired one another. We’re heading toward a form of isolation where collaboration feels optional and independent output is falsely equated with mastery. We may forget how to think together. And without that shared intellectual friction, knowledge may become static. Efficient, maybe, but lifeless.

As the SECI model reminds us, especially in its ‘Socialization’ phase, tacit knowledge is best exchanged through human-to-human interaction. AI cannot replicate the richness of informal conversations, shared experiences, or unspoken cues that drive deep learning. When juniors turn to AI instead of senior mentors, it’s not just a preference for efficiency; it can also reflect a deeper discomfort. AI doesn’t judge. It doesn’t challenge. It gives answers without pushing back. It is there on demand, and it never mocks the questioner. But that illusion of independence can isolate emerging professionals from the very people who could forge their thinking most profoundly and build the mental framework they need to face their missions.

Without it, we lose something deeper: epistemic humility. The sense that truth is messy, layered, and worth the effort. AI offers answers. But it rarely reminds you to challenge them.

The echo chamber of probability

LLMs are trained on dominant narratives. They predict what’s most likely to sound right, not necessarily what’s most true. If you’re not careful, you get a stream of average thinking. Safe. Familiar. And subtly biased. Without critical distance, you’re not informed; you’re enclosed.

Confine engineers to the same office for years and a subtle shift begins. Their diversity of thought fades. They gravitate toward the same theories, echo the same assumptions, and reinforce each other’s conclusions. This is the echo chamber effect, well documented in organizational behavior and social psychology. It’s why, in Social Network Theory (SNT), which predates digital platforms, weak ties and potential ties are deemed more valuable than strong ties. Trusted circles bring safety, but weak ties bring challenge, novelty, and cognitive expansion.

That’s why I always defend my motto: #getoutofthebuilding. Stirring the pot of knowledge (brasser de la connaissance) isn’t just poetic, it’s essential. Without this constant flux, the dynamic of knowledge creation stagnates. These flows are invisible to most of us, but their absence is deeply felt. The same applies to LLMs. These systems regurgitate what they’ve seen. They don’t create knowledge; they remix it. If we only consume and republish, without adding or challenging, we risk a downward spiral in the richness of our collective intelligence.

What’s the way forward? Inject specificity. Infuse LLMs with contextual knowledge through Retrieval-Augmented Generation (RAG) systems tailored to your organization’s own intellectual capital. And don’t forget the human blend: pair juniors with experienced staff. Let those who know where knowledge sleeps teach those who still seek it. When used with care, LLMs aren’t a threat; they’re a bridge.
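To make that concrete, here is a minimal sketch of the RAG pattern in Python. It is illustrative only: the bag-of-words retriever stands in for a real vector store and embedding model, and call_llm is a hypothetical hook to be wired to whatever model your organization uses.

```python
# Minimal RAG sketch: retrieve internal documents relevant to a query,
# then ask the model to answer ONLY from that retrieved context.
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    """Bag-of-words term-frequency vector (stand-in for real embeddings)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    qv = vectorize(query)
    return sorted(corpus, key=lambda doc: cosine(qv, vectorize(doc)), reverse=True)[:k]


def call_llm(prompt: str) -> str:
    """Hypothetical hook: wire this to your organization's model client."""
    raise NotImplementedError


def answer(query: str, corpus: list[str]) -> str:
    """Ground the model's answer in retrieved internal knowledge."""
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = (
        "Answer using ONLY the internal context below. "
        "If the context is insufficient, say so explicitly.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    corpus = [
        "Commercial court briefs follow the template validated by the litigation team.",
        "Client engagement letters require partner sign-off before sending.",
        "Holiday requests are handled through the HR portal.",
    ]
    # The retriever runs standalone; the model call is left to you.
    print(retrieve("How do I draft a brief for the commercial court?", corpus, k=1))
```

The design choice worth noting is in the prompt: the model is told to answer only from your retrieved intellectual capital and to admit when that context falls short. The grounding, and the obligation to verify, stay with the humans.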

From caution to craft: embracing AI without eroding expertise

AI is here. That’s not up for debate. But how we use it is, especially in knowledge-heavy fields like law, engineering, or research. We don’t need rules; we need rituals, good habits practiced deliberately.

  • Mentorship is critical: Guide juniors not just in content, but in thought. Process knowledge, how things actually get done inside an organization, is the real key.
  • Think before you prompt: Try first, then consult AI. Learn to collect and organize your own information. Brainstorm your ideas before asking the machine. Use AI to strengthen your knowledge, not to shortcut it or to project an illusion of expertise without understanding.
  • Teach critical friction: Validate. Cross-check. Don’t accept passively. Encourage intellectual debate and respectful disagreement. 
  • Let them get it wrong: Learning sticks when it’s earned. It’s in the process of making mistakes that real growth occurs, not just in content knowledge, but in critical thinking, resilience, and self-awareness. Encouraging juniors to experiment, reflect, and iterate helps build long-term confidence and autonomy. Rather than shielding them from failure, create safe environments for thoughtful trial and error because learning how to learn is one of the most enduring skills we can foster.
  • Use guardrails, not gates: Define practical use cases. Encourage smart application by showcasing positive examples. AI isn’t going away; we must learn how to harness it with intention. Focus on internal knowledge: enrich LLMs with your organization’s unique expertise, and use AI to amplify, not replace, human insight. Too often, I hear managers talk about replacing an expert simply by combining emails and documents with an LLM. The reality is not that simple, and it is a dangerous shortcut. Nothing replaces the cognitive mindset of an experienced professional, especially one who carries years of tacit knowledge. AI may assist, but it cannot replicate depth, judgment, or human nuance.

Conclusion

AI won’t kill knowledge. But it could erode the deeper layers of how knowledge is created and sustained, especially if we let it substitute output for process.

Firestone reminds us that knowledge management is not only about storing content, but about preserving the ways knowledge is constructed, tested, and transferred. It is this process knowledge, the embedded routines, decision patterns, and critical reflections, that defines how an organization learns and adapts. AI can retrieve and recombine facts, but it cannot create or process knowledge in the full human sense. It lacks the intentionality, dialogue, and shared understanding that underlie real insight.

The true risk is not ignorance, but the erosion of our ability to question, synthesize, and learn together. Knowledge does not live in databases. It lives in the interplay of human minds, in collective meaning-making, in the rituals of thinking and doing.

So let us not try to minimize AI. Let us humanize it. Let us use it deliberately to amplify cognitive depth, to support thoughtful mentorship, and to keep alive the shared journey of knowledge creation. If we do that, AI will not dilute our craft. It will enrich it.


Sources & References mentioned in my article:

Luciano Floridi, “AI as Agency without Intelligence: On Artificial Intelligence as a New Form of Artificial Agency and the Multiple Realisability of Agency Thesis”. https://link.springer.com/article/10.1007/s13347-025-00858-9

Joseph M. Firestone, Enterprise Information Portals and Knowledge Management. A must-read. https://www.amazon.com/Enterprise-Information-Portals-Knowledge-Management/dp/0750674741
