
Launching Uber’s GenAI Knowledge Hub
How we centralized guidance for 10,000+ employees during a year of rapid AI adoption
In 2025, Uber committed to adopting Generative AI across the company at an unprecedented pace. From ChatGPT and Gemini to Zoom AI Companion and more than twenty additional tools, teams across engineering, operations, support, and enterprise applications were suddenly working with powerful new capabilities.
The acceleration brought challenges just as quickly. Employees did not always know which tools they had access to, what data they could share, how to stay compliant with security policies, or how to troubleshoot issues. Service desk teams were receiving a growing volume of tickets that often involved sensitive questions about data privacy, access eligibility, and tool limitations. And the engineering teams building and integrating these tools needed a way to communicate updates clearly and consistently.
To address this, my team in IT Knowledge Management partnered with global Engineering, Security, IT, and Compliance to build the Uber GenAI Knowledge Hub, a centralized, end-to-end system for AI tool onboarding, guidance, and troubleshooting. By the time we reached full launch, the hub had already become the single source of truth for more than 10,000 employees, powering over 120,000 internal views across ServiceNow and Confluence.
This is the story of how we built it.
Why a GenAI Knowledge Hub Was Needed
A rapid rollout of more than twenty GenAI tools created confusion around access, policies, and troubleshooting. The Knowledge Hub was built to centralize guidance, reduce friction, and support both employees and service teams.

Throughout 2025, Uber teams were integrating GenAI tools into critical workflows at extraordinary speed. Each new tool brought its own:
- access path
- onboarding steps
- data privacy rules
- security limitations
- usage restrictions
- integrations with internal systems
As the Senior Technical Writer leading this initiative, I repeatedly saw two problems unfolding at once:
- Employees did not know where to go for accurate guidance. Information was scattered across Slack threads, email announcements, Jira tickets, and team-specific documents.
- Support teams lacked standardized, authoritative resources. L1/L2 agents were spending time troubleshooting issues that could be solved through clearer documentation or proactive guidance.
Meanwhile, Uber’s Global EngSec teams were defining strict policies governing what data could be shared with AI tools, who could request access, and how employees should evaluate risk. These policies were essential, but they were also complex and evolving.
We needed a single system that could bring clarity to all of it.
My Role: Building the Knowledge System Behind Uber’s AI Rollout
As the lead writer and documentation owner for this initiative, I was the primary link between employees, support agents, and technical teams, responsible for the end-to-end creation of a global knowledge base that captured:
- tool-specific onboarding instructions
- security and data handling policies
- limitations for each GenAI vendor
- troubleshooting guidance
- workflow examples and best practices
- escalation paths for complex issues
This meant working closely with:
- engineering teams integrating tools like ChatGPT, Gemini, Zoom AI, and dozens of others
- Global EngSec, who defined usage and data restrictions
- Privacy, Compliance, and Legal reviewers
- L1/L2 teams responsible for supporting employees
- Product and BizTech leadership who were driving GenAI adoption strategy
I personally authored more than 80 knowledge articles, ranging from user guides to deep technical explanations. These resources became foundational for both employees and service agents, and they formed the backbone of the GenAI Knowledge Hub.
As documentation matured, I became the primary point of contact between:
- employees and support teams, clarifying questions about usage, access, risk, and troubleshooting
- engineering teams, translating technical and security requirements into clear, actionable content
I contributed directly to resolving hundreds of support tickets, often dealing with complex issues involving AI, SaaS, identity, and enterprise applications.
How the Hub Supported Uber’s GenAI Gateway
This project also intersected with Uber’s launch of the GenAI Gateway, the platform that allows teams to integrate AI directly into internal applications through a unified, secure API. The Knowledge Hub complemented the Gateway by helping developers understand security, data handling, compliant prompting, and provider behaviors, enabling consistent AI adoption across internal tools.
The Gateway enables developers to:
- connect multiple GenAI providers behind a single interface
- apply Uber’s safety, auditing, and compliance controls automatically
- build features like AI summaries or natural language workflows
- use the same security guardrails across all AI experiences
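To make that abstraction concrete, here is a minimal sketch of what a call through a unified gateway can look like. The endpoint, payload fields, and provider names are hypothetical stand-ins I am using for illustration, not Uber's actual Gateway API.

```python
# Illustrative sketch only: the endpoint, payload fields, and provider names
# below are hypothetical stand-ins, not Uber's actual Gateway API.
import requests

GATEWAY_URL = "https://genai-gateway.example.internal/v1/completions"  # hypothetical

def summarize(text: str, provider: str = "openai") -> str:
    """Send a prompt through a single gateway endpoint.

    The gateway, not the caller, is responsible for routing to the chosen
    provider and applying safety, auditing, and compliance controls.
    """
    response = requests.post(
        GATEWAY_URL,
        json={
            "provider": provider,  # e.g. "openai", "gemini"
            "prompt": f"Summarize the following for an internal audience:\n{text}",
            "metadata": {"use_case": "internal-summary"},
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]
```

The value of the pattern is that switching providers is a parameter change for the developer, while the guardrails live in one place.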
My documentation work supported the Gateway rollout by helping developers understand:
- how the Gateway handled data
- what could and could not be routed through providers
- how to build compliant prompts
- how to interpret provider-specific behaviors
- how the Gateway aligned with Uber’s global EngSec policies
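Much of that guidance boiled down to simple, repeatable patterns developers could apply before a prompt ever left Uber's boundary. The sketch below shows the general shape of such a pre-flight check; the regexes and policy wording are simplified examples for this article, not the actual rules we documented.

```python
# Simplified illustration of a pre-flight prompt check; the patterns and
# policy wording are examples for this article, not Uber's actual rules.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal data with placeholders before routing."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def build_compliant_prompt(user_text: str) -> str:
    """Prepend usage guidance and redact sensitive tokens from user input."""
    return (
        "You are assisting an internal employee. Do not reproduce personal "
        "or confidential data in your response.\n\n" + redact(user_text)
    )

print(build_compliant_prompt("Rider jane.doe@example.com called from +1 415 555 0100."))
```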
Supporting both the Gateway and the employee-facing GenAI Knowledge Hub gave me a deeper view into the technical and organizational alignment needed to adopt AI at scale.
A Global Project With High Complexity
With dozens of stakeholders, evolving policies, and org-wide impact, this project required strong coordination, clarity, and constant iteration across ServiceNow, Confluence, and engineering channels.

The GenAI Knowledge Hub was not a simple documentation refresh. It required:
- coordination with dozens of stakeholders across continents
- constant updates as policies evolved
- deep understanding of internal systems across security, identity, access, vendor limitations, and app management
- careful balance between usability and compliance
- systematic rollout across ServiceNow, Confluence, and internal communications channels
The project lasted from Q1 to Q3 of 2025 and touched nearly every enterprise team in some way.
By the time we stabilized the system, the hub had become a reliable part of Uber’s AI adoption strategy.
What I Learned
Leading this project strengthened some of the most important skills I use as a technical writer and knowledge systems owner:
Translating complex requirements into clear user guidance
Security and engineering teams speak a different language than frontline employees. Turning strict policies into simple, actionable steps was essential.
Driving alignment across global stakeholders
With teams in the US, LATAM, EMEA, and APAC, clear communication and a steady review cadence were crucial.
Investigating and resolving complex issues
AI-related tickets often required deep problem-solving across data access, identity, vendor limitations, and user error.
Managing constant change without creating chaos
GenAI policies evolved almost monthly. Documentation had to adapt quickly while still remaining trustworthy.
Building durable knowledge systems
The hub now requires far less maintenance because the foundation is strong and the processes are standardized.
Where the Hub Stands Today
The GenAI Knowledge Hub is now a mature, stable system supporting employees, engineering teams, and helpdesk agents. It remains one of my most impactful projects at Uber, not only because of its scale but because of the clarity and alignment it helped bring during a chaotic and transformative year.
Building it taught me how to navigate complex AI ecosystems, collaborate across many technical and non-technical teams, and lead documentation efforts that truly improve the user experience at scale.