# AiRelay
LLM relay adapter: OpenAI-compatible call + response normalization (LiteLLM compatible, excludes Chorus) (Layer 1 Core).
## Status
| Key | Value |
|---|---|
| Layer | core |
| Tier | L1 |
| Status | released |
| Version | 1.0.0 |
| Price | Free |
| Category | AI / LLM |
## Overview
AiRelay is a Layer 1 Core plugin of multi-saas-kit. Following apis.how's External-First principle (`/api/v1/edge/...` mint → relay), it standardizes the OpenAI-compatible LLM call adapter.

LiteLLM serves as the LLM control plane, using virtual keys and model aliases. apis.how does not reimplement LLM serving; it focuses on the operational policy, token management, documentation, and extension layers.

Scope boundary: Chorus runs local or server-installed AI agent CLI processes as an orchestration layer, so it sits outside AiRelay. Chorus integration belongs to a separate AgentCliBridge/ChorusBridge plugin.
## Core Components
### TextContentExtractor (Pure)
Extracts text from an OpenAI-compatible chat completion payload. Supports both string and multi-part content shapes.
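A minimal sketch of that extraction rule, assuming plain PHP arrays as input; the function name and exact part filtering are illustrative, not the plugin's API:

```php
<?php

// Hypothetical sketch: extract text from an OpenAI-compatible message whose
// 'content' is either a string or a list of typed parts. Not the plugin's code.
function extractText(array $message): string
{
    $content = $message['content'] ?? '';

    // String shape: ['role' => 'assistant', 'content' => 'Hello']
    if (is_string($content)) {
        return $content;
    }

    // Multi-part shape: only 'text' parts contribute to the result.
    $texts = [];
    foreach ($content as $part) {
        if (($part['type'] ?? null) === 'text') {
            $texts[] = $part['text'] ?? '';
        }
    }

    return implode('', $texts);
}
```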
### RelayResponseNormalizer (Pure)
Standardizes LLM relay responses into apis.how's standard shape:

- success: `{ status: 'success', data: { text, model, usage, raw }, relay_meta }`
- error: `{ status: 'error', error: { code, message, http_status }, relay_meta }`

Auto-detects HTTP 4xx/5xx and extracts `error.code` / `error.message` from the error payload.
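A sketch of that contract, assuming the HTTP status and decoded JSON body are already in hand; the fallback error code and anything beyond the documented fields are assumptions:

```php
<?php

// Illustrative normalizer following the documented shapes; not the plugin's code.
function normalizeRelayResponse(int $httpStatus, array $body, array $relayMeta): array
{
    // 4xx/5xx → error shape, pulling code/message from the upstream error payload.
    if ($httpStatus >= 400) {
        return [
            'status' => 'error',
            'error'  => [
                'code'        => $body['error']['code'] ?? 'llm_relay_error',   // fallback code is an assumption
                'message'     => $body['error']['message'] ?? 'Unknown relay error',
                'http_status' => $httpStatus,
            ],
            'relay_meta' => $relayMeta,
        ];
    }

    // 2xx → success shape; text taken from the first choice's message content.
    return [
        'status' => 'success',
        'data'   => [
            'text'  => $body['choices'][0]['message']['content'] ?? '',
            'model' => $body['model'] ?? null,
            'usage' => $body['usage'] ?? [],
            'raw'   => $body,
        ],
        'relay_meta' => $relayMeta,
    ];
}
```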
### LlmRelayClient
OpenAI-compatible LLM relay client.
`complete($payload)` → `POST /v1/chat/completions` + normalized standard response.

- Auth: `Authorization: Bearer {api_key}` + `x-litellm-api-key` headers
- Missing `apiKey` → `AiRelayNotConfiguredException` (see the sketch after this list)
- Network exception → safe fallback with `error.code = 'llm_relay_exception'`
- Trailing slash in `base_url` is auto-normalized
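A hedged sketch of guarding the missing-key case; the namespace is assumed, and whether the exception surfaces at construction or on `complete()` is not specified above:

```php
<?php

use Plugins\AiRelay\LlmRelayClient;                  // namespace is an assumption
use Plugins\AiRelay\AiRelayNotConfiguredException;   // exception name from the docs

$payload = [
    'model'    => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'ping']],
];

try {
    $client = LlmRelayClient::fromConfig(config('ai-relay'));
    $result = $client->complete($payload);
} catch (AiRelayNotConfiguredException $e) {
    // Raised when the API key is missing; fail fast with a clear message.
    logger()->error('AiRelay not configured: '.$e->getMessage());
    abort(503, 'LLM relay unavailable');
}
```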
## Configuration
```php
return [
    'enabled' => env('PLG_AI_RELAY_ENABLED', true),
    'litellm' => [
        'base_url' => env('LITELLM_BASE_URL', 'https://llm.apis.how'),
        'api_key' => env('LITELLM_API_KEY'),
        'completion_path' => '/v1/chat/completions',
        'timeout_seconds' => 60,
    ],
    'response_mode' => 'server_relay_llm',
];
```
## Usage
```php
$client = LlmRelayClient::fromConfig(config('ai-relay'));
$result = $client->complete([
    'model' => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'Hello']],
]);
if ($result['status'] === 'success') {
    return $result['data']['text'];
}
```
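For completeness, a sketch of the error branch using only the documented fields; the retry comment reflects the `llm_relay_exception` fallback described earlier:

```php
if ($result['status'] === 'error') {
    logger()->warning('LLM relay failed', [
        'code'        => $result['error']['code'],
        'http_status' => $result['error']['http_status'],
    ]);

    // 'llm_relay_exception' marks a network-level failure rather than an
    // upstream 4xx/5xx, so a retry may be reasonable for that code.
}
```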
## Origin
Extracted from apis.how's `LlmApiRelayClient` (174 LOC). Multi-candidate env resolution stays in the project; only the pure call and response normalization moved into the plugin.
## apis.how External-First Principle
```
[All callers] → POST /api/v1/edge/{service}/sessions   (mint, long-term key)
              → JWT (Ed25519, 3 min)
              → POST relay.apis.how/v1/{service}/{endpoint}   (relay)
```
This plugin is the standard adapter for the relay part.
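A rough sketch of a caller walking the full flow with Laravel's HTTP client; the `llm` service name, the mint response field, and the config key are assumptions for illustration:

```php
<?php

use Illuminate\Support\Facades\Http;

$longTermKey = config('services.apis_how.key');   // hypothetical config key

// 1) Mint: exchange the long-term key for a short-lived JWT.
$mint = Http::withToken($longTermKey)
    ->post('https://apis.how/api/v1/edge/llm/sessions')
    ->json();

$jwt = $mint['jwt'];   // Ed25519-signed, ~3 minute TTL; field name is an assumption

// 2) Relay: call the relayed endpoint with the short-lived JWT.
$response = Http::withToken($jwt)
    ->post('https://relay.apis.how/v1/llm/chat/completions', [
        'model'    => 'gpt-4o-mini',
        'messages' => [['role' => 'user', 'content' => 'Hello']],
    ])
    ->json();
```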
## Plugin Relationships
- PromptTemplating (optional) → system prompt composition before the `messages` build
- Project-owned metering/pricing layer → consumes `data.usage` for cost calculation (apis.how, etc.; see the sketch after this list)
- AiTracking (Phase 2) → call history / cost / latency logging
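A sketch of how a project-owned metering layer might consume `data.usage`; the per-token rates and the `recordCost()` helper are hypothetical:

```php
// Hypothetical metering hook; rates and recordCost() are placeholders.
$usage = $result['data']['usage'];

$costUsd = (($usage['prompt_tokens'] ?? 0)     * 0.15    // example input rate, USD per 1M tokens
          + ($usage['completion_tokens'] ?? 0) * 0.60)   // example output rate, USD per 1M tokens
          / 1_000_000;

recordCost($result['data']['model'], $usage, $costUsd);
```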
## Roadmap (Phase 3+)
- Streaming support (SSE)
- Speech relay integration
- EdgeRelay JWT auto-mint
- Additional OpenAI-compatible gateway adapters (LiteLLM-compatible family)
- Agent CLI execution (including Chorus) remains out of scope; it belongs to a separate AgentCliBridge/ChorusBridge plugin
- AiTracking plugin auto-hook
## License
MIT
## Dependencies
- PromptTemplating (optional)
## Demos
- View plugin metadata in the Platform admin panel (login required)
- Usage example (PHP)
View on the Plugin Store: store.codebase.how/plugins/ai-relay