From Slop to Signal: The Capability Infrastructure Behind AI That Works

Monday, May 18, 2026 11:15 AM to 11:45 AM · 30 min. (America/Los_Angeles)
Learning Stage 5, EXPO, South Hall, Level One
Learning Stage (Product Demonstrations)

Information

AI is supposed to fix learning. Faster content. Smarter development plans. Personalized paths at scale. For a handful of organizations, that's exactly what's happening. For the rest, AI is making the same problems worse, faster — more content, less trust, and no way to tell whether any of it is building capability.
And there's a conversation most people aren't having out loud. Adoption is low, spend is high, and the returns aren't there yet — but raising that in most organizations right now means being labeled an AI skeptic, which isn't great for your career and doesn't align with the prevailing narrative. So the gap between what people are seeing and what they're willing to say keeps getting wider.
This session uses original data from the 2026 State of Capability Development to tell the honest story of what happens when AI gets deployed on top of broken learning infrastructure. We'll share findings on learner engagement trends in AI-heavy programs, the growing gap between what AI vendors promised and what organizations actually experienced, and the trust erosion that accelerates when AI-generated content lands with employees who've already stopped believing development means anything.
The crash: AI without a capability framework produces plausible but imprecise content. It looks like development. It doesn't build capability. And because it moves fast, the damage scales before anyone notices.
The lesson: The organizations seeing real outcomes didn't start with AI. They built the infrastructure first — capability frameworks tied to business OKRs, structured development plans, and role-level definitions for how AI should be used in different jobs. That foundation gives leaders something concrete to work with when they sit down with their people, instead of a blank page and a chatbot.
Attendees will leave with a clear understanding of what separates AI signal from AI slop. They'll also take away a practical framework for embedding role-level AI expectations into job descriptions and competencies, a diagnostic for evaluating whether their own AI investments are building capability or just generating volume, and access to the same survey instrument used in the 2026 State of Capability Development research — ready to run with their own teams to benchmark where they stand.
Learning Objective 1:
Define AI slop across its three forms — generic content at scale, vendor overclaiming, and learner fatigue — and identify where each is showing up in enterprise L&D.
Learning Objective 2:
Assess your organization's AI readiness against the structural prerequisites that separate effective AI deployment from noise: capability frameworks, structured development plans, and business OKR alignment.
Learning Objective 3:
Write role-level job descriptions and competency definitions that specify how AI should be used in each job — and use that output as the foundation for manager-led conversations about AI expectations with their people.
Format
In-Person
Schedule-At-A-Glance
EXPO Happenings
