Initial request data
Input text
"The Great Stories" system has two main processing flows:

Asynchronous pipeline (Kafka + workers): For full job processing with webhooks, polling, and API-key quotas. The worker processes jobs, segments text, generates images and audio, and stores assets in S3.

Synchronous agents (gRPC + MCP): A separate agents service exposes segmentation, audio, and image generation over gRPC and MCP (Model Context Protocol) with API key auth. External systems can call these directly:

gRPC provides all agents (segmentation, audio narration + TTS, image prompt + generation) with protobuf contracts.
MCP (JSON-RPC 2.0 over HTTP) exposes segment_text, generate_image_prompt, and generate_image as tools with schema discovery.
Large assets (audio, images) are automatically uploaded to S3 with user-scoped paths (agents/<user_uid>/audio/...) and returned as URLs to avoid message size limits.
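The MCP tool calls above follow JSON-RPC 2.0, so a request can be sketched as a plain dict. This is a minimal sketch: the `tools/call` method and `name`/`arguments` params come from the MCP protocol, but the argument names (`text`, `segments`) are assumptions here, not the service's documented schema — the real schema should be discovered via `tools/list`.

```python
import json

def build_mcp_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request envelope for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: ask the agents service to segment a story into 4 parts.
# The argument keys ("text", "segments") are assumed, not documented.
request = build_mcp_call("segment_text", {"text": "Once upon a time...", "segments": 4})
print(json.dumps(request, indent=2))
```

For `generate_image` the response would carry an S3 URL rather than inline image bytes, per the asset-handling rule above.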
Files: 1 file(s)

Generation parameters
Type: educational
Segments: 4
Audio: podcast
Fact check: no

Great Stories Processing Flows

"The Great Stories" system has two main processing flows.

Asynchronous pipeline (Kafka + workers): For full job processing with webhooks, polling, and API-key quotas. The worker processes jobs, segments text, generates images and audio, and stores assets in S3.

Synchronous agents (gRPC + MCP): A separate agents service exposes segmentation, audio, and image generation over gRPC and MCP (Model Context Protocol) with API key auth. External systems can call these directly: gRPC provides all agents (segmentation, audio narration + TTS, image prompt + generation) with protobuf contracts.
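The worker's per-job pipeline can be sketched as follows. The stage functions here are stand-ins (the real worker calls AI services for segmentation, image generation, and TTS, then uploads assets to S3); the job and asset field names are assumptions for illustration.

```python
def segment(text: str, n: int) -> list[str]:
    """Placeholder segmenter: split text into up to n roughly equal chunks."""
    words = text.split()
    size = max(1, len(words) // n)
    chunks = [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    return chunks[:n]

def process_job(job: dict) -> dict:
    """Run one job through segmentation, then attach per-segment asset records."""
    segments = segment(job["text"], job["segments"])
    assets = []
    for i, seg in enumerate(segments):
        assets.append({
            "segment": i,
            "text": seg,
            # In the real pipeline these would be S3 URLs produced by the
            # image-generation and TTS stages, not placeholder strings.
            "image": f"s3://assets/job-{job['id']}/img-{i}.png",
            "audio": f"s3://assets/job-{job['id']}/audio-{i}.mp3",
        })
    return {"job_id": job["id"], "status": "completed", "assets": assets}

result = process_job({"id": "j1",
                      "text": "one two three four five six seven eight",
                      "segments": 4})
```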

MCP API with S3 Asset Handling

MCP (JSON-RPC 2.0 over HTTP) exposes segment_text, generate_image_prompt, and generate_image as tools with schema discovery. Large assets (audio, images) are automatically uploaded to S3 with user-scoped paths (agents/<user_uid>/audio/...) and returned as URLs to avoid message size limits.
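The user-scoped key construction can be sketched as below. The `agents/<user_uid>/audio/...` pattern is stated above; extending the same shape to other asset kinds (e.g. `images`) is an assumption.

```python
def asset_key(user_uid: str, kind: str, filename: str) -> str:
    """Build a user-scoped S3 key like agents/<user_uid>/audio/<filename>.

    Only the 'audio' path is documented; other kinds are assumed to
    follow the same agents/<user_uid>/<kind>/ layout.
    """
    if "/" in user_uid or "/" in filename:
        raise ValueError("uid and filename must not contain path separators")
    return f"agents/{user_uid}/{kind}/{filename}"

print(asset_key("u123", "audio", "narration-0.mp3"))
# -> agents/u123/audio/narration-0.mp3
```

Scoping every key under the caller's uid keeps one user's assets from colliding with (or being enumerable by) another's.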

Great Stories System Architecture


To understand how "The Great Stories System" functions, we look at the flow of data through three main internal layers: the API Layer, the Processing Layer, and the Data Layer, all interacting with external users and AI services.

1. Job Ingestion and Validation

The process begins when an external User or Client sends a POST request to the /v1/jobs endpoint. The API Service acts as the gatekeeper.
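A request to /v1/jobs might look like the sketch below. The field names are assumptions (the API's real schema is not shown here), but the values mirror the generation parameters listed at the top of this document.

```python
import json

# Hypothetical POST /v1/jobs body; field names are assumed, values
# mirror the generation parameters above (educational, 4 segments,
# podcast audio, no fact check).
job_request = {
    "type": "educational",
    "segments": 4,
    "audio": "podcast",
    "fact_check": False,
    "text": "...",  # the input story text goes here
}

body = json.dumps(job_request)
# e.g. with the requests library (not executed here):
# resp = requests.post("https://api.example.com/v1/jobs",
#                      json=job_request,
#                      headers={"Authorization": "Bearer <api-key>"})
```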

Asynchronous Job Processing and Notification

2. Asynchronous Processing

The heavy lifting happens in the Processing Layer. The Worker Service consumes the new message from the queue.
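The consume step can be sketched as below. The message shape (`{"job_id": ..., "text": ...}`) is an assumption, and the commented loop shows how it would plug into a Kafka client such as kafka-python without actually requiring a broker here.

```python
import json

def handle_job_message(raw: bytes) -> dict:
    """Decode one queued job message and mark it accepted for processing.

    The message shape ({"job_id": ..., "text": ...}) is assumed for
    illustration; the real topic schema is not documented here.
    """
    job = json.loads(raw)
    return {"job_id": job["job_id"], "status": "processing"}

status = handle_job_message(b'{"job_id": "j1", "text": "..."}')

# In the real worker this sits inside a consume loop, e.g. with
# kafka-python (not executed here):
# from kafka import KafkaConsumer
# consumer = KafkaConsumer("jobs", bootstrap_servers="localhost:9092")
# for msg in consumer:
#     handle_job_message(msg.value)
```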

3. Completion and Notification

Once the processing is finished, the system handles delivery via the Webhook Dispatcher.
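A dispatcher of this kind typically retries failed deliveries with backoff. This is a sketch under that assumption (the document doesn't specify a retry policy); the transport is injected so the logic can be exercised without a network.

```python
def backoff_schedule(attempts: int, base: float = 1.0) -> list[float]:
    """Exponential backoff delays (seconds) between delivery attempts."""
    return [base * (2 ** i) for i in range(attempts)]

def dispatch(url: str, payload: dict, send, attempts: int = 3) -> bool:
    """Try to deliver the completion payload; retry on failure.

    `send(url, payload) -> bool` is injected so the dispatcher can be
    tested without a real HTTP call. Retry-with-backoff is an assumed
    policy, not one stated in the document.
    """
    for delay in backoff_schedule(attempts):
        if send(url, payload):
            return True
        # in production: time.sleep(delay) before the next attempt
    return False

# Simulated transport that fails twice, then succeeds on the third try.
calls = []
def flaky_send(url, payload):
    calls.append(url)
    return len(calls) >= 3

ok = dispatch("https://client.example.com/webhook",
              {"job_id": "j1", "status": "completed"}, flaky_send)
```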