# Architecture
`llm-connector` provides a unified, type-safe interface to various LLM providers while handling the differences in their underlying protocols.
## Core Design Principles
- Unified Interface: A single `LlmClient` struct that can be instantiated with any backend provider (a sketch follows this list).
- Protocol Abstraction: Separation between `Provider` (high-level logic) and `Protocol` (request/response formatting).
- Unified Output: All providers flow into a unified `StreamingResponse` format, simplifying downstream consumption.
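As a rough sketch of what the unified-interface principle buys you, the toy example below shows a single client type dispatching to different backends behind one method. The names here (`Backend`, `new`, `chat`) are illustrative stand-ins, not `llm-connector`'s actual API.

```rust
// Illustrative stand-ins only: these names are assumptions,
// not llm-connector's actual API.
enum Backend {
    OpenAi,
    Anthropic,
}

struct LlmClient {
    backend: Backend,
}

impl LlmClient {
    fn new(backend: Backend) -> Self {
        Self { backend }
    }

    // Callers use one method regardless of which provider is behind it.
    fn chat(&self, prompt: &str) -> String {
        match self.backend {
            Backend::OpenAi => format!("[openai] {prompt}"),
            Backend::Anthropic => format!("[anthropic] {prompt}"),
        }
    }
}

fn main() {
    // Swapping providers is a one-line change at construction time.
    let client = LlmClient::new(Backend::Anthropic);
    println!("{}", client.chat("Hello"));
}
```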
## Unified Output Format
Regardless of the provider (OpenAI, Anthropic, Tencent, etc.), all responses are normalized.
```rust
pub struct StreamingResponse {
    pub content: Option<String>,
    pub reasoning_content: Option<String>, // For reasoning models (o1, R1)
    pub tool_calls: Option<Vec<ToolCall>>,
    pub usage: Option<Usage>,
    pub finish_reason: Option<FinishReason>,
    // ...
}
```

This ensures that switching providers does not require changing your response handling logic.
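For example, a consumer can handle streamed chunks the same way no matter which provider produced them. The sketch below uses a simplified `StreamingResponse` (only the two text fields) so it compiles on its own; in real code the type comes from the crate.

```rust
// Simplified stand-in so the sketch is self-contained; the real
// struct (with tool_calls, usage, etc.) comes from llm-connector.
struct StreamingResponse {
    content: Option<String>,
    reasoning_content: Option<String>,
}

// Handles a chunk identically regardless of the provider behind it.
fn handle_chunk(chunk: &StreamingResponse) {
    if let Some(reasoning) = &chunk.reasoning_content {
        eprintln!("[reasoning] {reasoning}");
    }
    if let Some(text) = &chunk.content {
        print!("{text}");
    }
}

fn main() {
    let chunks = vec![
        StreamingResponse { content: None, reasoning_content: Some("thinking...".into()) },
        StreamingResponse { content: Some("Hello!".into()), reasoning_content: None },
    ];
    for chunk in &chunks {
        handle_chunk(chunk);
    }
}
```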
## Multi-Modal Support
The library supports multi-modal inputs (text + image) natively via the `MessageBlock` enum.
```rust
pub enum MessageBlock {
    Text { text: String },
    Image { source: ImageSource },    // Base64
    ImageUrl { image_url: ImageUrl }, // URL
}
```

- OpenAI: Converted to `content: [{type: "text"}, {type: "image_url"}]`
- Anthropic: Converted to `content: [{type: "text"}, {type: "image"}]`
- Text-Only Providers: Images are automatically stripped or result in an error, depending on the implementation.
## Streaming Architecture
Streaming is handled via `reqwest` and `eventsource-stream`.
- Parsing: Each provider has a `parse_stream_response` method.
- Normalization: Provider-specific SSE events are mapped to the unified `StreamingResponse` (see the sketch after this list).
- Reasoning: For reasoning models (DeepSeek R1, Volcengine), `reasoning_content` is extracted from specific fields (e.g., `delta.reasoning_content`) and exposed separately.
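The normalization step can be pictured as mapping one raw SSE delta into the unified struct. This sketch assumes a `serde_json` dependency and a simplified `StreamingResponse`; the JSON field names follow the OpenAI-style `delta.reasoning_content` convention mentioned above.

```rust
use serde_json::Value;

// Simplified stand-in for the crate's unified output type.
#[derive(Debug, Default)]
struct StreamingResponse {
    content: Option<String>,
    reasoning_content: Option<String>,
}

// Maps one provider-specific SSE delta into the unified format.
fn normalize_delta(event_data: &str) -> serde_json::Result<StreamingResponse> {
    let v: Value = serde_json::from_str(event_data)?;
    let delta = &v["choices"][0]["delta"];
    Ok(StreamingResponse {
        content: delta["content"].as_str().map(str::to_owned),
        // Reasoning models emit their chain-of-thought in a separate field,
        // which is surfaced as reasoning_content rather than mixed into content.
        reasoning_content: delta["reasoning_content"].as_str().map(str::to_owned),
    })
}

fn main() -> serde_json::Result<()> {
    let sse = r#"{"choices":[{"delta":{"reasoning_content":"Let me think..."}}]}"#;
    println!("{:?}", normalize_delta(sse)?);
    Ok(())
}
```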
## Configuration System
The `ConfigurableProtocol` allows any OpenAI-compatible service to be added as a new provider purely through configuration, without writing new code.
```rust
ProtocolConfig {
    endpoints: EndpointConfig {
        chat_template: "{base_url}/v1/chat/completions".to_string(),
        // Remaining fields elided; assumes EndpointConfig implements Default.
        ..Default::default()
    },
    auth: AuthConfig::Bearer,
}
```
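To see how such a config might be consumed, here is a self-contained sketch with simplified stand-ins for `ProtocolConfig`, `EndpointConfig`, and `AuthConfig`; the base URL is a placeholder, and the real types in `llm-connector` have more fields.

```rust
// Simplified stand-ins mirroring the names above; the real types
// live in llm-connector and carry more fields.
struct EndpointConfig {
    chat_template: String,
}

enum AuthConfig {
    Bearer,
}

struct ProtocolConfig {
    endpoints: EndpointConfig,
    auth: AuthConfig,
}

fn main() {
    // Describe an OpenAI-compatible service purely in data.
    let config = ProtocolConfig {
        endpoints: EndpointConfig {
            chat_template: "{base_url}/v1/chat/completions".to_string(),
        },
        auth: AuthConfig::Bearer,
    };

    // The {base_url} placeholder is filled in at request time.
    let url = config
        .endpoints
        .chat_template
        .replace("{base_url}", "https://llm.example.com");
    println!("POST {url}");

    match config.auth {
        AuthConfig::Bearer => println!("Authorization: Bearer <api-key>"),
    }
}
```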