Universal AI Agent – Artifact Bundle
Contents:
- openapi/openapi.yaml: OpenAPI 3.0 spec
- postman/UniversalAI.postman_collection.json: Postman collection (variables: baseUrl, token)
- clients/node: Node client (ESM), SSE streaming, WebSocket sample, rate-limited fetch
- clients/python: Python client (requests), SSE streaming
- utils: Rate-limit-safe helpers (rate_limit.js, rate_limit.py)
- scripts/curl.sh: Handy cURL commands
Quickstart:
- Postman
  - Import postman/UniversalAI.postman_collection.json
  - Set the baseUrl and token variables
- OpenAPI
  - Load openapi/openapi.yaml into Swagger UI, Insomnia, or Stoplight
- Node client
cd clients/node
npm install
BASE_URL=https://your-domain.com TOKEN=YOUR_TOKEN npm start
# WebSocket sample
BASE_URL=https://your-domain.com TOKEN=YOUR_TOKEN npm run ws
- Python client (a minimal request sketch follows this list)
cd clients/python
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
BASE_URL=https://your-domain.com TOKEN=YOUR_TOKEN python examples.py
- cURL
cd scripts
chmod +x curl.sh
BASE_URL=https://your-domain.com TOKEN=YOUR_TOKEN ./curl.sh chat "Hello"
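The Node and Python clients follow the same request pattern: send a bearer token, then read the SSE stream line by line. The sketch below shows that pattern in Python with requests, roughly what clients/python/examples.py does; the /chat path and payload shape are assumptions here, so check openapi/openapi.yaml for the real endpoint. The Node client does the same over fetch.

import os
import requests

# Minimal sketch, assuming a POST /chat endpoint that streams SSE.
# The actual path, payload, and response schema are defined in openapi/openapi.yaml.
base_url = os.environ["BASE_URL"]
token = os.environ["TOKEN"]

resp = requests.post(
    f"{base_url}/chat",  # hypothetical path; confirm against the OpenAPI spec
    headers={"Authorization": f"Bearer {token}", "Accept": "text/event-stream"},
    json={"message": "Hello", "stream": True},
    stream=True,  # keep the connection open so the SSE stream can be iterated
)
resp.raise_for_status()

# SSE events arrive as "data: ..." lines separated by blank lines.
for line in resp.iter_lines(decode_unicode=True):
    if line.startswith("data: "):
        print(line[len("data: "):])  # payload format depends on the API (often JSON)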
Notes:
- Set the BASE_URL and TOKEN environment variables for all samples.
- Streaming responses use SSE (Server-Sent Events) and are parsed from data: ... lines.
- Retry/backoff on HTTP 429/5xx is implemented in the rate-limit helpers (sketched below).
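For reference, the retry/backoff behavior described in the last note looks roughly like this. It is a sketch of the idea, not a copy of utils/rate_limit.py or rate_limit.js, whose names and defaults may differ; the Node helper applies the same policy around fetch.

import random
import time
import requests

def request_with_backoff(method, url, max_retries=5, **kwargs):
    """Retry on HTTP 429/5xx with exponential backoff plus jitter."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code != 429 and resp.status_code < 500:
            return resp  # success or a non-retryable client error
        # Honor Retry-After when present (assumed to be in seconds here),
        # otherwise back off exponentially with a little jitter.
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    return resp  # last response after exhausting retries

# Usage (hypothetical endpoint):
# request_with_backoff("GET", f"{base_url}/health", headers={"Authorization": f"Bearer {token}"})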