AiRelay

LLM relay adapter - OpenAI-compatible call + response normalization (LiteLLM compatible, excludes Chorus) (Layer 1 Core).

Status

Key         Value
Layer       core
Tier        L1
Status      released
Version     1.0.0
Price       Free (free)
Category    AI / LLM

Overview

AiRelay is a Layer 1 Core Plugin of multi-saas-kit. Following apis.how's External-First principle (/api/v1/edge/... mint → relay), it provides the standardized OpenAI-compatible LLM call adapter.

LiteLLM serves as the LLM control plane, using virtual keys and model aliases. apis.how does not reimplement the LLM layer; it focuses on operational policy, token management, documentation, and the extension layer.

Scope boundary: Chorus runs locally or server-installed AI agent CLI processes as an orchestration layer, so it is outside AiRelay's scope. Chorus integration belongs in a separate AgentCliBridge/ChorusBridge plugin.

Core Components

TextContentExtractor (Pure)

Extracts text from an OpenAI-compatible chat completion payload. Supports both string and multi-part content shapes.
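The extraction rule can be sketched as follows. This is a minimal illustration, not the plugin's actual source; the class and method names mirror the documentation but the implementation details are assumed.

```php
<?php
// Hypothetical sketch of TextContentExtractor (implementation assumed).
// OpenAI-compatible "content" is either a plain string or an array of
// parts like [['type' => 'text', 'text' => '...'], ...].
final class TextContentExtractor
{
    public static function extract(array $completion): string
    {
        $content = $completion['choices'][0]['message']['content'] ?? '';

        // String content: return as-is.
        if (is_string($content)) {
            return $content;
        }

        // Multi-part content: concatenate only the text parts.
        $texts = [];
        foreach ($content as $part) {
            if (($part['type'] ?? null) === 'text') {
                $texts[] = $part['text'] ?? '';
            }
        }
        return implode('', $texts);
    }
}
```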

RelayResponseNormalizer (Pure)

Standardizes LLM relay response to apis.how's standard shape:

success: { status: 'success', data: { text, model, usage, raw }, relay_meta }
error: { status: 'error', error: { code, message, http_status }, relay_meta }

Auto-detects HTTP 4xx/5xx responses and extracts error.code / error.message from the error payload.
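The two shapes above suggest a normalization rule like the following sketch. The function name and relay_meta handling are assumed for illustration; only the output shapes come from the documentation.

```php
<?php
// Hedged sketch of the normalization rule (function name assumed):
// 2xx bodies map to the success shape; 4xx/5xx map to the error shape,
// pulling code/message from the OpenAI-style error payload.
function normalizeRelayResponse(int $httpStatus, array $body, array $relayMeta = []): array
{
    if ($httpStatus >= 400) {
        return [
            'status' => 'error',
            'error' => [
                'code'        => $body['error']['code'] ?? 'llm_relay_error',
                'message'     => $body['error']['message'] ?? 'Unknown relay error',
                'http_status' => $httpStatus,
            ],
            'relay_meta' => $relayMeta,
        ];
    }

    return [
        'status' => 'success',
        'data' => [
            // Simplified: assumes plain string content here.
            'text'  => $body['choices'][0]['message']['content'] ?? '',
            'model' => $body['model'] ?? null,
            'usage' => $body['usage'] ?? null,
            'raw'   => $body,
        ],
        'relay_meta' => $relayMeta,
    ];
}
```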

LlmRelayClient

OpenAI-compatible LLM relay client.

  • complete($payload) - POST /v1/chat/completions, returns the normalized standard response
  • Auth: Authorization: Bearer {api_key} + x-litellm-api-key headers
  • Missing apiKey → AiRelayNotConfiguredException
  • Network exception → error.code = 'llm_relay_exception' safe fallback
  • Trailing slash in base_url is auto-normalized
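Two of the guard rails above (missing-key exception and trailing-slash normalization) can be sketched in isolation. The helper function name is hypothetical; only the exception class name and the behaviors come from the documentation.

```php
<?php
// Hedged sketch: the client's configuration guard rails.
// buildCompletionUrl() is a hypothetical helper for illustration.
final class AiRelayNotConfiguredException extends \RuntimeException {}

function buildCompletionUrl(string $baseUrl, string $path, ?string $apiKey): string
{
    // Missing apiKey → AiRelayNotConfiguredException, per the docs.
    if ($apiKey === null || $apiKey === '') {
        throw new AiRelayNotConfiguredException('LITELLM_API_KEY is not set');
    }

    // Trailing-slash normalization: base_url with or without a trailing
    // slash yields the same completion URL.
    return rtrim($baseUrl, '/') . $path;
}
```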

Configuration

return [
    'enabled' => env('PLG_AI_RELAY_ENABLED', true),
    'litellm' => [
        'base_url' => env('LITELLM_BASE_URL', 'https://llm.apis.how'),
        'api_key' => env('LITELLM_API_KEY'),
        'completion_path' => '/v1/chat/completions',
        'timeout_seconds' => 60,
    ],
    'response_mode' => 'server_relay_llm',
];
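The config above reads three environment variables. A matching .env fragment might look like this (the key value is a placeholder, not a real credential):

```shell
# Hypothetical .env entries matching the config keys above.
PLG_AI_RELAY_ENABLED=true
LITELLM_BASE_URL=https://llm.apis.how
LITELLM_API_KEY=sk-your-virtual-key
```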

Usage

$client = LlmRelayClient::fromConfig(config('ai-relay'));
$result = $client->complete([
    'model' => 'gpt-4o-mini',
    'messages' => [['role' => 'user', 'content' => 'Hello']],
]);

if ($result['status'] === 'success') {
    return $result['data']['text'];
}
```
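Callers also need to handle the error branch of the normalized shape. A minimal sketch, assuming the shapes documented above; the fallback policy and logging are illustrative, not part of the plugin:

```php
<?php
// Hedged sketch: branching on the normalized response shape.
// Real callers decide their own error policy (retry, surface, fallback).
function textOrFallback(array $result, string $fallback = ''): string
{
    if ($result['status'] === 'success') {
        return $result['data']['text'];
    }

    // error shape: { status: 'error', error: { code, message, http_status } }
    error_log(sprintf(
        'LLM relay failed: %s (HTTP %d)',
        $result['error']['code'],
        $result['error']['http_status']
    ));
    return $fallback;
}
```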

Origin

Extracted from apis.how's LlmApiRelayClient (174 LOC). The env multi-candidate resolution stays in the project; only the pure call and response normalization moved into the plugin.

apis.how External-First Principle

[All callers] → POST /api/v1/edge/{service}/sessions (mint, long-term key)
             → JWT (Ed25519, 3 min)
             → POST relay.apis.how/v1/{service}/{endpoint} (relay)

This plugin is the standard adapter for the relay part.

Plugin Relationships

  • PromptTemplating (optional) - system prompt composition before the messages build
  • Project-owned metering/pricing layer - consumes data.usage for cost calculation (apis.how, etc.)
  • AiTracking (Phase 2) - call history / cost / latency logging
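A metering layer consuming data.usage might look like the sketch below. The function, field handling, and prices are all hypothetical; real per-model pricing lives in the project-owned layer, not in AiRelay.

```php
<?php
// Hypothetical cost estimate from the usage object in the success shape.
// prompt_tokens / completion_tokens follow the OpenAI-compatible usage
// fields; the per-million-token rates are caller-supplied placeholders.
function estimateCostUsd(array $usage, float $inputPerMTok, float $outputPerMTok): float
{
    $in  = $usage['prompt_tokens']     ?? 0;
    $out = $usage['completion_tokens'] ?? 0;
    return ($in / 1_000_000) * $inputPerMTok
         + ($out / 1_000_000) * $outputPerMTok;
}
```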

Roadmap (Phase 3+)

  • Streaming support (SSE)
  • Speech relay integration
  • EdgeRelay JWT auto-mint
  • Additional OpenAI-compatible gateway adapters (LiteLLM-compatible family)
  • Agent CLI execution (including Chorus) stays out of scope; it belongs to a separate AgentCliBridge/ChorusBridge plugin
  • AiTracking plugin auto-hook

License

MIT

Dependencies

Demos


🛒 View on Plugin Store: store.codebase.how/plugins/ai-relay