
Announcing AIGNE Hub: A Unified API Gateway for LLMs

wangshijun
Aug 18, 2025 · edited

AIGNE Hub: The Next Evolution of AI Kit#

AIGNE Hub is the reimagined evolution of AI Kit—your all‑in‑one gateway for building, managing, and scaling applications powered by Large Language Models (LLMs). It delivers:

  • A single API to access any major model
  • A centralized dashboard to track costs, performance, and security
  • Built‑in observability and analytics tools
  • An integrated credit‑based billing system

With AIGNE Hub, you can scale AI features, monitor usage, and even monetize them—without juggling multiple systems.
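
To make the "single API for any model" idea concrete, here is a minimal TypeScript sketch of a provider‑agnostic request builder. The payload shape, field names, and model identifiers are illustrative assumptions, not AIGNE Hub's actual API.

```typescript
// Sketch of a provider-agnostic chat request. The payload shape, field
// names, and model identifiers below are illustrative assumptions,
// not AIGNE Hub's actual API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string; // e.g. "openai/gpt-4o", in a hypothetical "provider/model" form
  messages: ChatMessage[];
}

// With a unified gateway, switching providers means changing only the
// model string; the request shape stays the same.
function buildChatRequest(model: string, prompt: string): ChatRequest {
  return { model, messages: [{ role: "user", content: prompt }] };
}

const req = buildChatRequest("openai/gpt-4o", "Summarize this release.");
```

The point of the sketch is the design choice: one request shape for every provider, so swapping models never means swapping SDKs.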

The Central Hub of the AIGNE Ecosystem#

AIGNE Hub is the unified entry point for multiple LLM and AIGC service providers. Out of the box, it includes:

  • OAuth authorization
  • Credit‑based billing
  • Usage analytics

This allows teams to launch AI‑driven apps without managing separate keys, dashboards, or billing platforms.
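
As a sketch of what the built‑in usage analytics involves, here is how per‑call records might be rolled up into daily token totals. The record shape and field names are assumptions for illustration; AIGNE Hub's real analytics schema may differ.

```typescript
// Hypothetical call-log shape; AIGNE Hub's real schema may differ.
interface CallRecord {
  date: string; // ISO day, e.g. "2025-08-18"
  inputTokens: number;
  outputTokens: number;
}

// Roll per-call records up into daily token totals, the kind of
// aggregation behind a "daily usage trends" chart.
function dailyTokenTotals(records: CallRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    const prev = totals.get(r.date) ?? 0;
    totals.set(r.date, prev + r.inputTokens + r.outputTokens);
  }
  return totals;
}

const totals = dailyTokenTotals([
  { date: "2025-08-18", inputTokens: 500, outputTokens: 250 },
  { date: "2025-08-18", inputTokens: 100, outputTokens: 50 },
  { date: "2025-08-19", inputTokens: 200, outputTokens: 100 },
]);
// totals.get("2025-08-18") === 900, totals.get("2025-08-19") === 300
```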

AIGNE Architecture Diagram#


Key Upgrades#

Personal Mode vs Enterprise Mode#

Whether you’re an independent developer or managing a multi‑tenant AI platform, AIGNE Hub adapts:

  • Personal Mode (default) – Ideal for local development with direct provider access. Self‑host, use your own API keys, and skip billing setup.


  • Enterprise Mode – Enable multi‑user capabilities with credit‑based billing, where usage is metered per token. To set it up:
      1. Install the Payment Kit component.
      2. Enable Credit Billing in: AIGNE Hub → Preferences, then set the base credit price, profit margin, and initial credits for new users.
      3. Configure model rates in: AIGNE Hub → Config → AI Config → Model Rates. Assign per‑model pricing for each provider, and set distinct credit rates for input and output tokens.

    Users can monitor credit balance, call history, and daily usage trends directly from their profile, making consumption tracking effortless.

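As a concrete illustration of how per‑token credit metering could work under these settings, here is a short sketch. The formula, rate values, and names are assumptions for illustration, not AIGNE Hub's actual billing logic.

```typescript
// Hypothetical credit-billing math: all names, rates, and the pricing
// formula are illustrative assumptions, not AIGNE Hub's actual logic.
interface ModelRate {
  inputCreditsPer1kTokens: number;  // distinct rate for input tokens
  outputCreditsPer1kTokens: number; // distinct rate for output tokens
}

// The operator marks up a base credit price by a profit margin to set rates.
function rateWithMargin(baseCreditsPer1k: number, marginPct: number): number {
  return baseCreditsPer1k * (1 + marginPct / 100);
}

// Meter one call: input and output tokens are charged at their own rates.
function creditsForCall(rate: ModelRate, inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1000) * rate.inputCreditsPer1kTokens +
    (outputTokens / 1000) * rate.outputCreditsPer1kTokens
  );
}

const rate: ModelRate = {
  inputCreditsPer1kTokens: rateWithMargin(10, 20),  // 10 base + 20% margin = 12
  outputCreditsPer1kTokens: rateWithMargin(30, 20), // 30 base + 20% margin = 36
};
const cost = creditsForCall(rate, 500, 250); // 0.5 * 12 + 0.25 * 36 = 15 credits
```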

Deep Integration with AIGNE Framework#

  • Fully compatible with the AIGNE Framework and its provider adapters for unified model access, permissions, and usage tracking. Developers can use all Hub features via the @aigne/aigne-hub package.


Seamless Compatibility with Blocklet(s)#

  • During the Blocklet boot sequence, AIGNE Hub injects runtime LLM configuration as environment variables directly into Blocklet components.
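
To make the injection idea concrete, here is a sketch of a component resolving its LLM configuration from environment variables at boot. The variable names are invented for this sketch; the actual variables AIGNE Hub injects may differ.

```typescript
// Illustrative only: these environment variable names are invented for
// this sketch; the actual variables AIGNE Hub injects may differ.
const injectedEnv: Record<string, string | undefined> = {
  AIGNE_HUB_API_URL: "https://hub.example.com/api", // simulated injection
  AIGNE_HUB_DEFAULT_MODEL: "openai/gpt-4o",
};

interface LlmConfig {
  apiUrl: string;
  defaultModel: string;
}

// A component reads its LLM configuration from the environment at boot
// instead of hard-coding endpoints or keys.
function loadLlmConfig(env: Record<string, string | undefined>): LlmConfig {
  const apiUrl = env.AIGNE_HUB_API_URL;
  const defaultModel = env.AIGNE_HUB_DEFAULT_MODEL;
  if (!apiUrl || !defaultModel) {
    throw new Error("LLM configuration was not injected");
  }
  return { apiUrl, defaultModel };
}

const config = loadLlmConfig(injectedEnv);
```

Because configuration arrives from the runtime rather than the component itself, the same Blocklet can run against different models or deployments without a code change.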

Built‑in CLI Support#

  • Run any connected model directly from your terminal with the aigne run command—no separate API key setup required.
  • Works in both Personal and Enterprise modes—perfect for rapid testing, demos, or quick queries.

Supported AI Providers#

AIGNE Hub supports the following LLM providers:

Cloud Providers

  • Amazon Bedrock (AWS models)
  • Google Gemini (text + image)

Model Labs

  • OpenAI (GPT, DALL·E, Embedding)
  • Anthropic (Claude)
  • xAI (Grok series)
  • DeepSeek
  • Doubao

Multi-Model Platforms

  • OpenRouter
  • Poe

Local Models

  • Ollama

Upgrade from AI Kit’s Subscription Mode#

Coming from AI Kit’s pay‑as‑you‑go model? You can now switch to the credit‑based system:

  • No local AI Kit installation required
  • Remove the old setup and link your account via: Services → AIGNE Hub



Benefits for New and Existing Users#

To help you get started, all new accounts receive a one‑time credit boost—ideal for exploring AIGNE Hub's features right away.

  • Credits are valid for a limited time
  • Applicable to all supported providers
  • Full details provided upon activation

Resources#

Learn more and get started with AIGNE Hub using these resources:

With this release, ArcBlock’s AIGNE Hub delivers the flexibility, control, and scalability needed to streamline AI projects and unlock a seamless multi‑provider experience. Your next build starts here.