How would you like to be a Guest Blogger for KMI? Email us at: info@kminstitute.org and let us know your topic(s)!

What’s in Your KM Go Bag? (Spoiler: It’s Not a Chatbot)

March 17, 2026

A “go-bag” is the pre-packed emergency backpack you grab when everything goes sideways. It’s filled with water, documents, a flashlight, maybe a granola bar if you planned well. But what if one of the tools in your emergency kit was knowledge?

This was the premise of my presentation at the 2025 Knowledge Summit Dublin.



During the session, I asked participants to reflect on their personal KM go-bag: what is the one thing they would want in their knowledge go-bag during a crisis? They broke into groups, discussed, and chose one essential KM tool (e.g., a lessons-learned database, community of practice, chatbot, or playbook) to pitch back to the group.

What do you think the top tool was? Here’s a hint: it didn’t involve fancy technology.

One group suggested an AI chatbot. The others proposed establishing communities of practice or mapping expertise.

So when the proverbial chips were down, most people decided to reach for their experts. For connection and collaboration. For people.

I have three ideas as to why this might be:


1️⃣ Humans are wired for connection.

Ever wondered why your first reaction when faced with a problem is usually to “phone a friend”? Numerous studies have pointed to social connection being as critical to human survival as food, water, and shelter.


2️⃣ Communities are cost-effective.

When budgets shrink and needs become greater, there’s often little appetite for splashy solutions. Launching and convening a community of practice or similar learning network is a no-cost or very low-cost intervention. Which is great considering #3…


3️⃣ There is high ROI.

I’ve seen firsthand how powerful communities and people networks can be as catalysts for collaboration, especially across functions and regions. They’re spaces where learning is shared, where people connect, and where knowledge actually gets re-applied. They’re not a silver bullet, but when done well, they can move the needle in areas like knowledge retention, collaboration, visibility of expertise, even culture.

Leveraging our Knowledge Management go-bags as practitioners is increasingly a necessity and not an option, especially in the rapidly changing international development space. Sharing insights and learning from each other has never been more critical. Technology still gets a lot of attention thanks to advancements in AI, and it’s true that technology can enhance our people networks. But in times of crisis and unprecedented change, when every resource counts, we cannot discount the value of peer-to-peer connection.

__________________

Conversational Leadership: Expanding the Future of Knowledge Management

March 13, 2026

For decades, Knowledge Management (KM) has helped organizations answer a vital question: How do we know what we know?

Through lessons learned, Communities of Practice, taxonomies, collaboration technology, expertise location, and countless more approaches, KM has strengthened how knowledge flows around organizations. Long-time KM practitioners have shown how to design ecosystems that prevent reinvention and enable expertise to travel across boundaries.

But today, a deeper question is emerging:

How do we work together when what we already know is not enough?

This is where Conversational Leadership enters, not as a replacement for KM, but as its expansion.

From Knowledge Assets to Knowledge Flow

Traditional KM often emphasizes artifacts: documents, playbooks, databases, dashboards. These are essential; they stabilize information and extend organizational memory. A mature KM practice adds cultural and process-improvement dimensions on top of these artifacts.

Yet any knowledge is deeply contextual. What one person “knows” cannot be fully captured or transferred as static content. Something always remains tacit, embedded in experience, judgment, intuition, and interpretation.

Tacit knowledge does not travel well in files. It travels in conversation.

KM practices such as Peer Assists, Knowledge Cafés, After Action Reviews, and Communities of Practice succeed not because they produce documentation, but because they create dialogue. The real value is not the report; it is the reasoning, sense-making, and meaning-making that unfolds between people.

Conversational Leadership builds on this insight. It shifts attention from managing knowledge as content to cultivating knowledge as a relational, emergent flow.

The Flow of Tacit Knowledge

Tacit knowledge includes pattern recognition, ethical stance, cultural awareness, emotional intelligence, and practical wisdom, and it often lives in networks as much as in any individual. It is the individual and collective lived dimension of knowing.

Tacit knowledge flows when people:

  • Trust one another
  • Listen deeply
  • Ask deep questions
  • Surface assumptions
  • Engage in heightened dialogue

Conversational Leadership treats conversation not merely as a channel for sharing knowledge, but as the medium through which collective intelligence forms.

In complex environments, no individual holds the full answer. Meaning emerges through interaction. People reason together. They test interpretations. They challenge and refine assumptions. Through conversation, shared understanding has the potential to be created.

Knowledge is not only transferred; it is generated. And it is not only generated; it is relational and pressure-tested. It is ever evolving.

Collective Reasoning and Sensemaking

Modern organizations operate in conditions of ambiguity and interdependence. Under these conditions, stored knowledge alone is insufficient.

KM provides an environment for organizational memory. Conversational Leadership provides adaptive capacity for deep organizational learning, sense-making, and meaning-making.

When teams face novel challenges, they cannot simply retrieve a best practice or even a novel practice. They must interpret signals, weigh competing perspectives, surface unspoken concerns, and decide together.

This is collective sensemaking.

Conversational skill becomes a strategic capability. The quality of reasoning in an organization depends on:

  • How safely dissent can be voiced
  • How rigorously assumptions are examined
  • How clearly distinctions are made
  • How aware people are of power, group dynamics, and conversational dynamics

Poor conversational habits distort knowledge flow. Unchecked power can silence insight. Speed can override reflection. Data and information too often substitute for understanding.

Conversational Leadership strengthens the micro-skills that enable better macro-decisions. It develops environments where thinking is visible and meaning can evolve.

The Next Horizon for KM

If early KM focused on repositories, and later KM emphasized networks and collaboration, the next horizon may be conversational awareness and skills.

KM practitioners are uniquely positioned to lead this shift. You already understand knowledge flows, barriers to sharing, and the importance of trust. You’ve worked hard to learn how to get buy-in and measure the immeasurable. Conversational Leadership furthers this momentum by focusing on how people reason together in real time. How people truly move things forward at the speed of need and understanding.

In an era shaped by rapid change and AI-enabled information abundance, the differentiator is not access to data. It is the ability to make sense of it together and take action from there.

The future of KM is not less human. It is more conversational.

Conversational Leadership does not replace Knowledge Management. It animates it, ensuring that knowledge remains alive, relational, and capable of guiding wise collective action.

 ___________________________________________

Improving the Front-End Experience of Your Knowledge Systems

February 12, 2026
Guest Blogger Devin Partida


The success of a knowledge system depends on how easily people can find and use that information in their everyday work. The front-end experience — which includes the interface and overall usability of the system — helps bridge the stored knowledge and the employees who use it to create value.

Why Front-End Design Is Critical for Knowledge Systems

A knowledge management system is often only as effective as its user interface. When the front end is cluttered or slow, users may disengage. This disengagement then becomes a direct barrier to knowledge adoption, regardless of the content's accuracy. Research shows that user interface design can significantly influence engagement through factors like visual aesthetics, accessibility, usability and personalization.

The benefits of a well-designed front-end experience are both practical and psychological. A user-friendly front end allows workers to find and use information essential to their everyday work. It reduces friction and frustration, boosting productivity and trust in the knowledge system itself.

Strategies for a User-Centric Front-End

Improving the front-end experience requires intentionally shifting toward user-centric thinking. Instead of organizing information around internal structures or legacy systems, effective knowledge system design reflects how team members actually search for and use information.

Simplify Navigation

An intuitive information architecture is essential to a usable knowledge system. Navigation should support existing workflows, helping users understand where they are and how to move forward with minimal confusion. Clear hierarchies and consistent terminology reduce the mental effort required to interact with the system.

Best practices in knowledge base UX design include minimizing unnecessary decision points. Just as business phone auto-attendants typically offer only three to five menu options, knowledge base front-end designers should strive for similar simplicity. When users can reach their desired content in fewer steps, the system becomes a natural part of daily workflows.

Optimize Search Functionality

For many users, search is the primary mode of interaction with the knowledge system. When navigation is unfamiliar, or the system contains a large amount of information spanning multiple categories, search becomes the easiest and fastest way to find answers. Inaccurate or disorganized results erode user confidence in the system.

While keyword matching is important, effective search functionality design considers user intent. Advanced systems can use natural language processing to interpret queries, while filtering options allow users to refine results according to attributes like content type or date. Optimized search functionality turns the knowledge system into a responsive support tool for everyday workflows.
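As a rough illustration only (the `Article` fields and `search` function here are hypothetical, not from any particular knowledge base product), a minimal keyword search that also honors metadata filters such as content type and date might look like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    title: str
    body: str
    content_type: str   # e.g., "how-to", "policy", "faq"
    updated: date

def search(articles, query, content_type=None, updated_after=None):
    """Rank articles by simple keyword overlap, then apply metadata filters."""
    terms = set(query.lower().split())
    results = []
    for a in articles:
        if content_type and a.content_type != content_type:
            continue  # filter by attribute before scoring
        if updated_after and a.updated < updated_after:
            continue
        score = sum(t in (a.title + " " + a.body).lower() for t in terms)
        if score:
            results.append((score, a))
    return [a for score, a in sorted(results, key=lambda p: -p[0])]

kb = [
    Article("Reset your VPN password", "Step-by-step guide...", "how-to", date(2025, 11, 2)),
    Article("Remote work policy", "Policy covering...", "policy", date(2024, 6, 1)),
]
hits = search(kb, "vpn password reset", content_type="how-to")
```

In a production system, the keyword-overlap score would typically be replaced by a proper search engine or natural language processing layer; the point of the sketch is that filtering and ranking are separate, composable steps.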

Personalize the Content Experience

Personalization helps reduce information overload, especially in comprehensive knowledge systems. Different team members often only need access to specific files or information at certain times. A front end that treats all users identically may seem equitable, but it can also overwhelm people with irrelevant content.

Tailoring experiences by role or department enables organizations to deliver knowledge that aligns with immediate needs. Personalized dashboards or contextual recommendations help improve the system’s usability and reinforce its value as a trusted, time-saving resource.

Implement an Organized Content Creation Template

Consistent content presentation is another factor influencing usability. Standardized content creation templates improve scannability and help staff quickly assess whether a resource meets their needs.

A well-structured template usually contains a clear summary and headings, organizing content into a clear visual hierarchy. Each file should also have defined ownership and a regular review cycle to ensure accuracy and timeliness.
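To make the ownership and review-cycle idea concrete, here is a small sketch (field names and the six-month interval are illustrative assumptions, not a prescribed standard) of a template with an automated staleness check:

```python
from datetime import date, timedelta

# Hypothetical template fields for a knowledge article
TEMPLATE = {
    "title": "",
    "summary": "",          # one-paragraph overview shown first
    "sections": [],         # ordered headings forming the visual hierarchy
    "owner": "",            # person or team accountable for accuracy
    "last_reviewed": None,  # date of the most recent review
}

REVIEW_INTERVAL = timedelta(days=180)  # assumed six-month review cycle

def needs_review(article, today=None):
    """Flag articles whose last review is older than the agreed interval."""
    today = today or date.today()
    return (article["last_reviewed"] is None
            or today - article["last_reviewed"] > REVIEW_INTERVAL)

doc = dict(TEMPLATE, title="Expense policy", owner="finance-team",
           last_reviewed=date(2025, 1, 10))
stale = needs_review(doc, today=date(2026, 2, 1))
```

A nightly job over such records can notify each owner of stale content, turning "regular reviews" from a policy statement into a routine.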

Setting Up for Continuous Improvement

Front-end design requires intention and consistent effort. As priorities and user behaviors change, the knowledge system’s interface must adapt accordingly to stay effective.

Actively Solicit User Feedback

The most reliable insights into front-end performance come from the people who interact with the system daily. Actively collecting user feedback ensures improvements come from the demands of lived experience instead of general assumptions.

Standard methods include quantitative research like surveys and analytics and qualitative techniques like focus groups and interviews. Teams may also conduct moderated testing sessions for a hands-on look at the interface’s functionality. Intentionally collecting and analyzing user feedback allows teams to identify friction points early and prioritize the changes that deliver the most positive impact.

Embrace Iterative Design

Front-end experiences should evolve through iterative design informed by feedback and usage data. Small, continuous changes reduce disruption while allowing employees to test design decisions in real conditions.

An iterative approach also supports agility and competitive advantage, allowing knowledge management teams to respond to change without requiring large-scale overhauls. Over time, this practice results in a responsive and relevant front end that aligns with real people’s working styles.

Establish a Cross-Functional Governance Team

A cross-functional governance team ensures there is defined ownership over the creation and maintenance of the knowledge system experience. This team should include representatives from key business departments such as IT and HR, along with a dedicated knowledge system manager.

They should regularly review user feedback and implement improvements. Formalizing governance allows companies to ensure consistency and create a more cohesive user experience for all workers.

The Value of User-Centered Design

Improving the front-end experience is necessary to facilitate knowledge adoption and application effectively. Knowledge management teams can use intuitive navigation and continuous improvement to ensure their systems stay comprehensive and usable, powering innovation and sustainable growth.

____________________________________

How to Build a Knowledge Management Strategy for a New Venture

February 11, 2026


Startups generate knowledge quickly: early decisions, improvised processes and rapid experimentation all outpace formal documentation. The moment a company reaches a certain scale, or people start switching roles, that knowledge thins out. It can disappear entirely if leadership doesn't have the proper knowledge management (KM) safeguards in place.

The trick is to create a system that retains it early without breaking the flow or slowing progress.

High-impact knowledge management in a startup treats these insights as an asset for growth: priorities are based on present and future needs, and coordination stays flexible. The organization pays attention to the present and to its anticipated future.

Why Knowledge Management Matters at the Venture Stage

In new ventures, there is little margin for error, and decisions build on previous choices. Without documentation, companies repeat work and drift out of alignment. Knowledge management is most valuable when people, priorities or funding change, which happens frequently in the first year of a business's life.

The United States Bureau of Labor Statistics cites the difficulty of starting and running new businesses. Only 34.7% of private-sector ventures established in 2013 remained operational in 2023. Continuity of decision-making, clearly defined processes, and retained institutional knowledge separate companies capable of evolving to accommodate change from those that stagnate due to team and priority changes.

A lightweight, low-friction KM strategy encourages teams to capture institutional knowledge, enabling speed and scalability. The goal is to provide a foundation for governance, onboarding and strategy alignment as the startup grows.

How Can Companies Ensure KM Strategy Keeps up With Growth?

As organizations grow, they create more knowledge than many systems can process, and as that knowledge changes, it becomes less clear where to find the information needed. Realigning the KM strategy keeps the focus on what knowledge is necessary, how to capture it and whether its use still supports decision-making at scale. The following practices bolster continuity and help the KM approach mature alongside the business.

1. Identify Critical Knowledge Assets Early

It is essential to ensure that the organization captures the proper knowledge, since KM systems should not try to catalog everything. Early efforts should focus on information that has the most significant impact or carries the greatest risk.

Founders and early-stage executives often believe a decision will be memorable or easy to explain later. Experience shows otherwise: capturing the reasoning behind a decision at the time it is made preserves context that quickly fades.

Candidates for documentation include product and service choices, customer feedback from testing or pilots, core operational and compliance requirements, and the rationale for pricing or partnership decisions. Documenting the reasons for critical decisions is just as vital as recording the outcomes. Attention to context helps improve future processes as conditions change.

2. Embed Knowledge Management Into Venture Governance

Considering governance at the beginning might seem early, but a light structure here helps avoid conflict later. It establishes knowledge ownership, quality norms and life cycle expectations without bureaucratizing the process.

Straightforward, practical answers to practical questions can make a difference over time. Who owns core knowledge assets? How often should leadership review and update information?

Documentation lapses are often discovered when companies reach major milestones such as incorporation, audits, financing and regulatory inspections, resulting in rework and increased risk of compliance issues. Embedding KM into governance early ensures credibility, improves functionality and prepares for future transitions.

3. Establish Knowledge Capture and Sharing Processes

Once priority knowledge is identified, its acquisition and distribution should be clear. In the context of startup companies, this means creating simple, repeatable practices that do not add burden to employees' existing tasks.

Make knowledge capture a regular practice, such as during onboarding or reviews. Ownership should be clear for the task, such as HR completing a form for each employee and management having access to the details. Consistency is crucial. As the venture matures, leadership can implement these processes without diminishing velocity.
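As one possible shape for such a repeatable practice (the record fields and the HR/owner split below are illustrative assumptions, not a prescribed schema), a capture step at a routine checkpoint could be as simple as:

```python
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical minimal capture record; field names are illustrative
@dataclass
class KnowledgeCapture:
    author: str          # who holds the knowledge
    owner: str           # who is accountable for keeping it current (e.g., HR)
    topic: str
    decision: str        # what was decided or learned
    rationale: str       # why: the context that fades fastest
    captured_on: date

def capture(author, owner, topic, decision, rationale):
    """Create a record as part of a routine checkpoint (onboarding, review)."""
    return KnowledgeCapture(author, owner, topic, decision, rationale, date.today())

entry = capture("j.doe", "hr-ops", "pricing", "Tiered pricing adopted",
                "Pilot customers balked at a flat fee")
record = asdict(entry)  # plain dict, ready to store in the team's knowledge base
```

The value is less in the data structure than in the trigger: tying the capture call to an existing event (onboarding, a review) keeps the habit consistent without adding a new task.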

4. Select KM Tools That Scale With the Business

Choosing the right tools matters, but focusing on them too heavily, too early, creates friction. New companies need KM tools that support collaboration, search and versioning without overwhelming administration.

Start with a core knowledge base, collaborative tools integrated with existing workflows and access controls to avoid silos. Value excellent usability and simplicity over a collection of features.

In 2024, 56% of business leaders reported productivity gains from collaboration and artificial intelligence tools, suggesting that the right ones can significantly improve efficiency if widely adopted.

With digital knowledge systems, adoption is the key determinant of impact. KM strategies are unsuccessful if teams resist or sabotage them. Managers can introduce early KM tools when the organization is ready, keeping in mind that it’s easier to migrate content than to lose it. Choosing the right time varies from company to company.

5. Adapt the Strategy as the Venture Evolves

KM strategies should not be static. As organizations grow, more knowledge is created, tasks are specialized and risk appetite changes. Regular reassessment keeps the strategy aligned with operational reality.

Slow onboarding, the same questions being asked repeatedly, and multiple versions of the truth are all signals that it may be time to introduce more structure, taxonomy or tooling. Measurements can guide those adjustments.

In some market settings, AI-powered retrieval and memory systems are routinely deployed to enable personalization and responsiveness. Research has found that 80% of consumers prefer personalized shopping experiences enhanced by these data management and retrieval capabilities.

A sound KM system improves retrieval, onboarding time and decision quality. It is also flexible: its relevance adjusts as the organization changes.

What Endures Determines What Scales

The way an organization learns and what it retains will become the dominant characteristic of its future. Knowledge management professionals contribute to this by capturing, sharing and evolving critical information as the organization and its systems grow. The best strategies are human, practical and adaptable, and companies that embrace them build a strong foundation for the future.

____________________________

Sparking the Knowledge Management Engine with an AI Centre of Excellence

January 31, 2026
Rooven Pakkiri


For the first time in the history of enterprise technology, the people using the technology know more about its potential than the people buying it.

Let that sink in for a moment. Because it inverts everything we know about organizational change management - and it's why your traditional approach to building a Centre of Excellence will fail when it comes to AI.

The ChatGPT Moment

Dr. Debbie Qaqish, in her white paper on AI Centres of Excellence (2024), captures this perfectly. She describes watching every major tech evolution of the past four decades - from rotary phones to smartphones, from dial-up internet to cloud computing, from on-premise servers to SaaS platforms. Nothing, she says, was as earth-shaking as the release of ChatGPT on November 30, 2022.

Why? Because every previous technology came with a predictable evolution path. You could see where it was going. You could plan for it. You could define use cases upfront with reasonable accuracy and execute against them.

AI shatters that predictability. We are in unknown territory. And that changes everything about how organisations must adapt.

How We've Always Done Tech Implementation

Let me show you what I mean with a concrete example.

Think about a CRM rollout in the 2010s - let's say Salesforce:

  • Leadership identifies the problem: "Our sales pipeline visibility is terrible; deals are falling through cracks"
  • Leadership selects the solution: They evaluate vendors and choose Salesforce
  • Leadership defines the use cases: Lead tracking, opportunity management, forecasting reports - all documented upfront in requirements
  • Workers execute the plan: Sales reps get trained on defined fields, follow mandatory processes, use standardized reports
  • Knowledge flows DOWN: "Here's how you'll use it, here's the dashboard you'll look at, here's the fields you'll fill in"

The Centre of Excellence's role in this world? Implementation, training, and optimisation of those predetermined use cases.

This model worked beautifully for decades. The technology was stable. The use cases were knowable. The path was clear.

Enter AI - And Everything Breaks

Now let me show you what's actually happening with AI in organisations today.

I recently worked with a European Customer Support team on AI integration. Here's what we discovered:

Support agents started using AI to draft responses. Nothing revolutionary there - that was the planned use case. But then something interesting happened. Agents began noticing that the AI was identifying sentiment patterns they had never formally tracked. One agent said, "Wait - this AI noticed that customers who use certain phrases are actually asking about X, not Y."

Then they discovered the AI could predict escalation risk based on subtle language cues that nobody had ever documented. These weren't use cases we planned for. These were discoveries made by front-line workers experimenting with the technology.

The knowledge didn't flow down. It flowed up.

The AI CoE's role became capturing these emergent insights and scaling them across teams. Not training people on predetermined workflows but harvesting what workers discovered about AI's capabilities.

The Tacit Knowledge Goldmine

But here's where it gets really interesting - where AI and knowledge management converge in a way that's never been possible before.

Consider financial advisors. I recently delivered a customised program for an insurance client, working with their nationwide team of senior advisors. These advisors hold extraordinary tacit knowledge - the kind that traditional technology could never capture:

Pattern Recognition: "I can tell from a conversation if someone's underinsured." That's not in any manual. That's 20 years of experience reading between the lines.

Client Psychology: "How to explain complex coverage in simple terms. When to push and when to back off. How to have difficult conversations about underinsurance." No CRM workflow can teach this. It's intuitive, contextual emotional intelligence built over thousands of client interactions.

Local/Regional Expertise: Understanding flood zones, weather patterns, crime rates, local business ecosystems, community relationships and networks. This is place-based tacit knowledge that exists in advisors' heads, not in databases.

Claims Wisdom: How to guide clients through claims processes, what to document at the scene, how to advocate for clients with claims teams. Real-world responses to "that's too expensive." How to explain the value of coverage.

Creative Problem-Solving: Which products naturally go together, how to package solutions for different life stages, creative solutions for unique client situations. Each client is different. Senior advisors have a mental library of "I once had a client who..." scenarios that saved the day.

Underwriting Judgment: When to escalate a risk versus handle it, how to present a borderline risk to underwriters, what information underwriters really need.

The traditional tech approach would have built workflows for standard cases, created dropdown menus for common scenarios, documented "best practices" in a manual nobody reads - and missed 80% of the actual value in those advisors' heads.

But here's what we discovered with AI:

When advisors experiment with AI in Communities of Practice, something remarkable can happen: the AI helps them articulate their tacit knowledge. A veteran advisor might say: "The AI just explained the pattern I've been following unconsciously for 15 years. I never knew how to teach this to newer advisors, but now I can see it."

AI becomes the externalisation engine - converting "I just know" into "Here's why I know."

And the AI CoE's role in this brave new world? Systematically capturing these discoveries flowing UP from practitioners and scaling them across the full advisor network.

This Is Pure SECI in Action

If you're familiar with knowledge management theory, you'll recognize Nonaka's SECI model at work:

  • Socialisation: Practitioners in Communities of Practice sharing "hey, I tried this with AI and it worked"
  • Externalisation: The CoE capturing those tacit discoveries and converting them into documented use cases
  • Combination: The CoE synthesising patterns across experiments into frameworks and best practices
  • Internalisation: Organisation-wide learning and capability building

The AI Centre of Excellence becomes the knowledge conversion engine - transforming frontline tacit knowledge about AI's emergent capabilities into organisational strategic advantage.

This has never been possible before. Traditional technology couldn't access tacit knowledge. It could only automate explicit processes. AI can help surface, articulate, and scale what people know but couldn't explain.

Why AI CoEs Are Completely Different

Dr. Qaqish identifies three key differences that make AI Centres of Excellence unlike any CoE you've built before:

1. Continuous big changes vs. incremental improvement

Traditional tech followed a "pilot, test, deploy, optimise" model. You implemented once, then made incremental improvements. AI doesn't work that way. It requires ongoing adaptation to rapid, sometimes disruptive changes. Your CoE isn't optimising a stable platform - it's managing continuous experimentation and change.

2. Bottom-up vs. top-down

This is the game-changer. Because nobody can predict AI's evolution, initiatives must come from hands-on users experimenting and learning, not from leadership defining use cases upfront. The insights flow up from practitioners, not down from executives.

This inverts traditional change management. Your workers know more about AI's potential applications than your leadership does. The CoE's job is to harvest that knowledge and convert it into organisational capability.

3. Requires more leadership, resourcing, and budget

Unlike other technology CoEs that could operate as "nice to have" side projects staffed by people in their free time, the AI CoE needs dedicated time, real budget, executive clout, new incentives, and structured support.

Why? Because this isn't about implementing a predetermined solution. It's about creating an organisational learning system that can adapt at the speed of AI evolution.

The Two Functions Your AI CoE Must Integrate

Some frameworks separate the AI Council (governance, risk, compliance) from the AI Centre of Excellence (innovation, experimentation, capability building). I've found this creates unnecessary friction and slows everything down.

Your AI CoE needs to integrate both functions:

Governance Function: Policy development, risk assessment, ethical frameworks, compliance. The "don't screw up" guardrails.

Innovation Function: Managed experimentation, capability building, training, best practices. The "make cool stuff happen" engine.

Why keep them together? Because effective experimentation requires governance guardrails. You can't separate "try new things" from "do it safely" without creating either chaos or paralysis. One integrated team moves faster than two teams coordinating.

What This Means For Your Organization

The implications are profound:

Traditional tech CoE role: Train people to use the platform as designed.

AI CoE role: Harvest what people discover about AI's capabilities and convert it into strategic advantage.

Traditional knowledge flow: Leadership → "Here's the system" → Workers use it

AI knowledge flow: Workers → "Here's what we discovered" → CoE → Organisational transformation

Traditional CoE success metric: Adoption rates, process compliance, efficiency gains

AI CoE success metric: Rate of knowledge capture, speed of capability scaling, tacit knowledge externalisation

Companies that treat their AI CoE like a traditional implementation team will lose to companies that treat it like a knowledge creation system.

Getting Started

If you're building or reimagining your AI Centre of Excellence, here's where to focus:

1. Establish Communities of Practice - Create structured spaces for hands-on workers to experiment and share discoveries. This is your knowledge generation engine.

2. Build knowledge capture systems - Don't just let experiments happen. Systematically document what's being learned, especially tacit knowledge that AI helps surface.

3. Ensure executive clout - Your CoE leader needs power to move quickly on discoveries. When front-line workers find a game-changing application, you need to scale it fast.

4. Resource it properly - This isn't a side project. People need dedicated time to experiment, reflect, and collaborate. Budget for tools, training, and incentives.

5. Integrate governance and innovation - Don't separate them. Build one CoE that can experiment safely and scale learnings responsibly.

The Bottom Line

For the first time in enterprise technology history, the knowledge about what's possible flows from the bottom up, not the top down. Your front-line workers, experimenting with AI in their daily work, are discovering capabilities and applications that leadership couldn't have predicted.

The AI Centre of Excellence isn't about deploying technology. It's about harvesting tacit knowledge, converting discoveries into capabilities, and building organisational learning systems that can adapt at the speed of AI evolution.

This is where AI and knowledge management meet. And it changes everything about how we think about Centres of Excellence.

The question isn't whether to build an AI CoE. The question is: Are you building a traditional implementation team or a knowledge conversion engine?

Because only one of those will succeed in the AI era.

 ______________________________________________________