Optimize knowledge articles for internal AI search engines

Developed and piloted an agentic workflow that automatically analyzes, scores, and optimizes internal knowledge content for AI-readiness across Uber’s enterprise knowledge systems.

3 min read
Published October 5, 2025
Uber
Agentic Workflow
AI
SEO
ServiceNow
ChatGPT
HTML

Overview

As part of Uber’s Enterprise Applications Knowledge Management team, I led the design and implementation of an agentic workflow to measure and improve the AI-readiness of internal content.

The project aimed to make enterprise knowledge more accessible and actionable for internal AI search engines—particularly Slack-based AI assistants and AI knowledge retrieval models—by ensuring that content is consistently structured for machine interpretation.


Problem

Throughout 2025, Uber piloted several AI Assistants that helped employees find information across internal systems like UberHub, Google Drive, Confluence, and ServiceNow.

Our team observed that these AI Assistants often returned incomplete, inaccurate, or irrelevant results, confusing users.

One root cause was the content itself: due to variance across internal platforms, many knowledge resources had inconsistent structure, unclear metadata, poor information chunking, and other defects that made it difficult for LLM-based tools to extract context appropriately and generate accurate responses.

Our challenge was to quantify AI SEO friendliness and then create a way to systematically improve it at scale.


Goal

Create a scalable, repeatable workflow that could:

  1. Analyze internal content against a formalized set of AI SEO parameters — structure, metadata, chunking, promptability, semantic density, etc.
  2. Score content based on weighted metrics defined in a new AI SEO Guidelines document.
  3. Provide recommendations for improvement and help content owners revise their pages.
  4. Ensure repeatability, so that re-running the workflow on the same content always yields the same score.
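The weighted, deterministic scoring in goals 2 and 4 can be sketched as a pure function of per-parameter check results. The parameter names and weights below are illustrative placeholders, not the actual values from the guidelines:

```python
# Hypothetical weighted scoring against AI SEO parameters.
# Parameter names and weights are illustrative, not real guideline values.
WEIGHTS = {
    "structure": 0.30,         # clear heading hierarchy, one topic per section
    "metadata": 0.20,          # title, summary, and tags present and accurate
    "chunking": 0.20,          # sections short enough to embed as single chunks
    "promptability": 0.15,     # answers phrased so an LLM can quote them directly
    "semantic_density": 0.15,  # low filler, high information per sentence
}

def score_article(checks: dict[str, float]) -> float:
    """Combine per-parameter scores (0.0-1.0) into one weighted score (0-100).

    Because this is a pure function of its input, re-running it on the
    same content always yields the same score.
    """
    total = sum(WEIGHTS[name] * checks.get(name, 0.0) for name in WEIGHTS)
    return round(total * 100, 1)

checks = {"structure": 0.8, "metadata": 1.0, "chunking": 0.6,
          "promptability": 0.5, "semantic_density": 0.7}
print(score_article(checks))  # → 74.0
```

Keeping the weights in a single table also makes the scoring auditable: changing a weight is a visible, reviewable edit rather than a hidden change in prompt wording.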

Solution

I proposed and developed an agentic workflow using Uber's internal AI tools. The workflow orchestrated multiple specialized AI agents to:

  • Extract content and metadata from various internal knowledge repositories.
  • Evaluate each article against weighted guideline parameters.
  • Generate deterministic scores and qualitative feedback.
  • Output improvement recommendations, such as restructuring sections, clarifying metadata, and improving headings or summaries.
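The steps above can be sketched as a linear pipeline of agents. This is a minimal illustration with the agents stubbed as plain functions; the real agents called Uber's internal AI tools, and every name and heuristic below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str
    metadata: dict

def extract_agent(raw: dict) -> Article:
    """Pull content and metadata out of a repository export."""
    return Article(raw.get("title", ""), raw.get("body", ""),
                   {k: v for k, v in raw.items() if k not in ("title", "body")})

def evaluate_agent(article: Article) -> dict:
    """Check the article against guideline parameters (toy heuristics here)."""
    return {
        "has_title": bool(article.title),
        "has_summary": "summary" in article.metadata,
        "short_sections": len(article.body) < 4000,
    }

def recommend_agent(findings: dict) -> list[str]:
    """Turn failed checks into improvement recommendations."""
    fixes = {
        "has_title": "Add a descriptive title.",
        "has_summary": "Add a one-paragraph summary to the metadata.",
        "short_sections": "Split long sections into smaller chunks.",
    }
    return [fixes[name] for name, passed in findings.items() if not passed]

def run_pipeline(raw: dict) -> list[str]:
    """Extract -> evaluate -> recommend, one article at a time."""
    return recommend_agent(evaluate_agent(extract_agent(raw)))

print(run_pipeline({"title": "VPN setup", "body": "Short how-to."}))
# → ['Add a one-paragraph summary to the metadata.']
```

Structuring each stage as an independent agent with a typed hand-off makes it straightforward to swap in new repositories at the extraction stage or new guideline checks at the evaluation stage without touching the rest of the pipeline.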

Key differentiator

The workflow produces repeatable, deterministic scores, ensuring consistency and traceability across content updates.


Deliverables

  • Impact: 70+ knowledge articles improved during the pilot phase.
  • Guideline creation: Contributed to the authoring of Uber’s first AI SEO Guidelines Document, defining measurable standards and weights.
  • Agentic workflow: Designed a multi-agent pipeline for content ingestion, analysis, and consistent scoring.
  • Improved user satisfaction: Boosted the performance of internal AI assistants, increasing both answer precision and user confidence in AI-generated responses.

Next Steps

We are currently expanding the scope of the project to achieve the following:

  • Automate ingestion of updated articles.
  • Extend scoring functionality to more knowledge repositories.
  • Embed real-time AI-friendliness feedback within authoring tools.
  • Partner with other knowledge management teams to help them adopt the workflow in their documentation lifecycle.
  • Build internal dashboards to visualize content improvements over time.

Key Learnings

This project demonstrated how technical writing, AI literacy, and workflow automation converge to improve enterprise knowledge quality.

By treating documentation as a data product, we enabled scalable, measurable, and repeatable improvement — setting a foundation for AI-driven documentation quality frameworks across Uber.


If you made it all the way down here, thank you for reading! - Santiago