API Integration Issues
Troubleshooting API integration issues, rate limiting, and connectivity problems
This guide helps you resolve issues related to API integrations, rate limiting, timeouts, and connectivity problems with BroxiAI and third-party services.
Rate Limiting Issues
Too Many Requests (429) Errors
Problem: "Too Many Requests" (429) errors when calling APIs
Symptoms:
HTTP 429 status codes
"Rate limit exceeded" messages
API calls being rejected
Temporary service unavailability
Solutions:
Implement Exponential Backoff
import time
import random
from typing import Callable, Any

# RateLimitError is a placeholder for whatever exception your API client
# raises on HTTP 429 responses.
def api_call_with_retry(func: Callable, max_retries: int = 3) -> Any:
    for attempt in range(max_retries):
        try:
            return func()
        except RateLimitError as e:
            if attempt == max_retries - 1:
                raise e
            # Exponential backoff with jitter
            delay = (2 ** attempt) + random.uniform(0, 1)
            print(f"Rate limited. Retrying in {delay:.2f} seconds...")
            time.sleep(delay)

# Usage
result = api_call_with_retry(lambda: make_api_call())
Check Rate Limits
# Check current usage
curl -H "Authorization: Bearer $TOKEN" \
  "https://api.broxi.ai/v1/usage" | jq

# Response includes rate limit info
{
  "requests_remaining": 450,
  "requests_limit": 500,
  "reset_time": "2024-01-01T12:00:00Z",
  "window": "hour"
}
Rate Limit Management
Monitor API usage regularly
Upgrade plan if hitting limits frequently
Distribute requests over time (see the pacing sketch after this list)
Use batch operations when available
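One way to combine these practices is to read the usage information the API returns and pace requests against it. The sketch below is a minimal example: it reuses the `/v1/usage` endpoint and the `requests_remaining`/`reset_time` fields from the sample response above, but treat those field names and the exact reset semantics as assumptions to verify against your plan.

import time
from datetime import datetime, timezone
import requests

USAGE_URL = "https://api.broxi.ai/v1/usage"  # endpoint from the example above

def wait_if_near_limit(token: str, buffer: int = 10) -> None:
    """Pause until the window resets when only a few requests remain.

    Assumes the usage endpoint returns `requests_remaining` and `reset_time`
    as shown in the sample response above; adjust if your plan differs.
    """
    resp = requests.get(
        USAGE_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    usage = resp.json()

    if usage.get("requests_remaining", buffer + 1) <= buffer:
        reset_at = datetime.fromisoformat(usage["reset_time"].replace("Z", "+00:00"))
        sleep_for = (reset_at - datetime.now(timezone.utc)).total_seconds()
        if sleep_for > 0:
            print(f"Close to the rate limit; sleeping {sleep_for:.0f}s until reset")
            time.sleep(sleep_for)

Calling a check like this before each burst of requests spreads traffic across the window instead of exhausting the quota at the start of it.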
Implement Request Queuing
import time
import asyncio
from asyncio import Queue
import aiohttp

class RateLimitedClient:
    def __init__(self, requests_per_second: float = 1.0):
        self.rate_limit = requests_per_second
        self.queue = Queue()  # reserved for queued requests; unused in this minimal example
        self.last_request_time = 0

    async def make_request(self, url: str, **kwargs):
        # Enforce a minimum interval between requests
        current_time = time.time()
        time_since_last = current_time - self.last_request_time
        min_interval = 1.0 / self.rate_limit

        if time_since_last < min_interval:
            await asyncio.sleep(min_interval - time_since_last)

        # Make the actual request
        async with aiohttp.ClientSession() as session:
            async with session.get(url, **kwargs) as response:
                self.last_request_time = time.time()
                return await response.json()
BroxiAI Rate Limits
Current Limits (as of latest update):
Free Plan: 100 requests/hour, 1,000 requests/day
Pro Plan: 1,000 requests/hour, 10,000 requests/day
Enterprise: Custom limits
Monitoring Usage:
# Get current usage stats
curl -H "Authorization: Bearer $TOKEN" \
"https://api.broxi.ai/v1/account/usage"
Connection and Timeout Issues
API Timeouts
Problem: API calls timing out before completion
Solutions:
Adjust Timeout Settings
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Configure session with custom timeouts
session = requests.Session()

# Retry strategy
retry_strategy = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[429, 500, 502, 503, 504],
)

adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("http://", adapter)
session.mount("https://", adapter)

# Make request with custom timeout
response = session.get(
    "https://api.broxi.ai/v1/flows",
    timeout=(10, 30),  # (connect_timeout, read_timeout)
    headers={"Authorization": f"Bearer {token}"}
)
Component-Specific Timeouts
{ "timeout_config": { "llm_components": 60000, "file_processing": 120000, "api_requests": 30000, "webhook_delivery": 15000 } }
Network Diagnostics
# Test connectivity
ping api.broxi.ai

# Check DNS resolution
nslookup api.broxi.ai

# Test SSL connection
openssl s_client -connect api.broxi.ai:443 -servername api.broxi.ai

# Trace route
traceroute api.broxi.ai
Connectivity Issues
Problem: Cannot connect to API endpoints
Solutions:
Network Configuration
Check firewall settings
Verify proxy configuration (see the proxy sketch after this list)
Ensure outbound HTTPS (443) is allowed
Check corporate network restrictions
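If traffic must leave through a corporate proxy, a minimal sketch with the `requests` library looks like the following; the proxy URL, CA bundle path, and environment variable name are placeholders for your environment.

import os
import requests

token = os.environ["BROXI_API_TOKEN"]  # placeholder environment variable

# Placeholder proxy settings; substitute your environment's values
proxies = {
    "http": "http://proxy.example.internal:3128",
    "https": "http://proxy.example.internal:3128",
}

response = requests.get(
    "https://api.broxi.ai/v1/flows",
    headers={"Authorization": f"Bearer {token}"},
    proxies=proxies,
    verify="/etc/ssl/certs/corporate-ca.pem",  # corporate CA bundle, if TLS is intercepted
    timeout=(10, 30),
)
print(response.status_code)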
DNS Issues
# Test DNS resolution
dig api.broxi.ai

# Try alternative DNS
dig @8.8.8.8 api.broxi.ai

# Check hosts file
cat /etc/hosts | grep broxi
SSL/TLS Issues
# Check SSL certificate
curl -vI https://api.broxi.ai

# Test with specific TLS version
curl --tlsv1.2 -I https://api.broxi.ai
Third-Party API Integration Issues
OpenAI API Issues
Problem: OpenAI integration failing
Solutions:
API Key Validation
# Test OpenAI API key
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  "https://api.openai.com/v1/models"
Common OpenAI Errors
Invalid API Key: Check key format and permissions
Quota Exceeded: Monitor usage and upgrade plan
Model Not Available: Use supported models
Content Filter: Review input for policy violations
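When you call OpenAI from custom code, these cases surface as typed exceptions that you can branch on. A minimal sketch using the `openai` Python package (v1+ interface) is shown below; the model name and prompt are illustrative only.

import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
        timeout=60,
    )
    print(completion.choices[0].message.content)
except openai.AuthenticationError:
    print("Invalid API key: check key format and permissions")
except openai.RateLimitError:
    print("Quota or rate limit exceeded: monitor usage or upgrade the plan")
except openai.NotFoundError:
    print("Model not available: switch to a supported model")
except openai.BadRequestError as e:
    # Includes requests rejected by the content filter
    print(f"Request rejected: {e}")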
Configuration Example
{ "openai_config": { "api_key": "sk-...", "organization": "org-...", "model": "gpt-4", "max_tokens": 1000, "temperature": 0.7, "timeout": 60 } }
Google Cloud API Issues
Problem: Google Cloud services integration failing
Solutions:
Service Account Setup
{ "google_credentials": { "type": "service_account", "project_id": "your-project-id", "private_key_id": "...", "private_key": "-----BEGIN PRIVATE KEY-----\n...", "client_email": "service@project.iam.gserviceaccount.com" } }
API Enable Check
# Check enabled APIs
gcloud services list --enabled --project=YOUR_PROJECT_ID

# Enable required APIs
gcloud services enable aiplatform.googleapis.com
Azure API Issues
Problem: Azure services integration failing
Solutions:
Authentication Setup
from azure.identity import DefaultAzureCredential
from azure.ai.textanalytics import TextAnalyticsClient

credential = DefaultAzureCredential()
client = TextAnalyticsClient(
    endpoint="https://your-resource.cognitiveservices.azure.com/",
    credential=credential
)
Common Azure Errors
401 Unauthorized: Check authentication credentials
403 Forbidden: Verify resource permissions
404 Not Found: Check endpoint URLs
429 Too Many Requests: Implement rate limiting
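In custom code these errors map to `azure-core` exception types, so you can branch on them explicitly. A minimal sketch, reusing the client from the authentication example above:

from azure.core.exceptions import (
    ClientAuthenticationError,
    HttpResponseError,
    ResourceNotFoundError,
)

try:
    # `client` is the TextAnalyticsClient created in the authentication example
    result = client.detect_language(documents=["Hello world"])
    print(result[0].primary_language.name)
except ClientAuthenticationError:
    print("401 Unauthorized: check the credential and tenant configuration")
except ResourceNotFoundError:
    print("404 Not Found: check the endpoint URL and resource name")
except HttpResponseError as e:
    # Covers 403 Forbidden, 429 Too Many Requests, and other service errors
    print(f"Service error {e.status_code}: {e.message}")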
Webhook Integration Issues
Webhook Delivery Failures
Problem: Webhooks not being delivered or received
Solutions:
Endpoint Validation
# Test webhook endpoint
curl -X POST "https://your-domain.com/webhook" \
  -H "Content-Type: application/json" \
  -d '{"test": "payload"}'
Webhook Configuration
{ "webhook_config": { "url": "https://your-domain.com/webhook", "secret": "your-secret-key", "events": ["workflow.completed", "workflow.failed"], "retry_count": 3, "timeout": 10000 } }
Security Validation
import hmac
import hashlib

def verify_webhook_signature(payload: str, signature: str, secret: str) -> bool:
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        payload.encode('utf-8'),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected_signature}", signature)
Webhook Debugging
Common Issues:
SSL Certificate Problems: Ensure valid SSL certificate
Response Timeouts: Return a 200 status quickly and defer heavy processing (see the sketch after this list)
Authentication Failures: Verify signature validation
Payload Size: Check for size limits
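To avoid timeout-related redelivery, acknowledge the webhook immediately and do the heavy work afterwards. Below is a minimal Flask sketch of that pattern; it reuses `verify_webhook_signature` from the security validation example, and the `X-Broxi-Signature` header name is an assumption to confirm against your webhook settings.

import threading
from flask import Flask, request, jsonify

app = Flask(__name__)
WEBHOOK_SECRET = "your-secret-key"

def process_event(event: dict) -> None:
    # Long-running work happens here, off the request thread
    print("Processing", event.get("event"))

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_data(as_text=True)
    signature = request.headers.get("X-Broxi-Signature", "")  # header name is an assumption

    if not verify_webhook_signature(payload, signature, WEBHOOK_SECRET):
        return jsonify({"error": "invalid signature"}), 401

    # Hand off the work and return 200 immediately so delivery is not retried
    event = request.get_json(silent=True) or {}
    threading.Thread(target=process_event, args=(event,)).start()
    return jsonify({"status": "accepted"}), 200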
Debug Tools:
# Test webhook locally with ngrok
ngrok http 3000
# Monitor webhook deliveries
tail -f /var/log/nginx/access.log | grep webhook
API Response Issues
Malformed Responses
Problem: API returning unexpected or malformed responses
Solutions:
Response Validation
import json
from jsonschema import validate, ValidationError

def validate_api_response(response_data, expected_schema):
    try:
        validate(instance=response_data, schema=expected_schema)
        return True
    except ValidationError as e:
        print(f"Response validation failed: {e.message}")
        return False

# Example schema
workflow_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "status": {"type": "string", "enum": ["running", "completed", "failed"]},
        "result": {"type": "object"}
    },
    "required": ["id", "status"]
}
Content-Type Handling
# Check response content type (may include a charset suffix, e.g. "application/json; charset=utf-8")
response = requests.get(url)
content_type = response.headers.get('content-type', '')

if content_type.startswith('application/json'):
    data = response.json()
else:
    print(f"Unexpected content type: {content_type}")
Empty or Missing Responses
Problem: API returning empty responses or missing data
Solutions:
Response Checking
import json
import requests

def safe_api_call(url, **kwargs):
    try:
        response = requests.get(url, **kwargs)
        response.raise_for_status()
        if not response.content:
            raise ValueError("Empty response received")
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")
        return None
    except json.JSONDecodeError as e:
        print(f"Invalid JSON response: {e}")
        return None
Performance Optimization
Caching Strategies
Implementation:
import redis
import json
from functools import wraps

# Redis cache
cache = redis.Redis(host='localhost', port=6379, db=0)

def cache_api_response(expiration=3600):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Create cache key
            cache_key = f"api_cache:{func.__name__}:{hash(str(args) + str(kwargs))}"

            # Try to get from cache
            cached_result = cache.get(cache_key)
            if cached_result:
                return json.loads(cached_result)

            # Call API and cache result
            result = func(*args, **kwargs)
            cache.setex(cache_key, expiration, json.dumps(result))
            return result
        return wrapper
    return decorator

@cache_api_response(expiration=1800)
def get_model_response(prompt):
    # Expensive API call
    return openai_client.chat.completions.create(...)
Batch Processing
Example:
import asyncio
import aiohttp

async def batch_process_requests(requests_batch):
    """Process multiple API requests in parallel"""
    async with aiohttp.ClientSession() as session:
        tasks = []
        for request in requests_batch:
            # make_api_request is your helper that issues a single request
            task = asyncio.create_task(
                make_api_request(session, request)
            )
            tasks.append(task)
        results = await asyncio.gather(*tasks, return_exceptions=True)
        return results

# Usage
batch_size = 10
requests = [{"prompt": f"Question {i}"} for i in range(100)]

for i in range(0, len(requests), batch_size):
    batch = requests[i:i + batch_size]
    results = await batch_process_requests(batch)
Monitoring and Alerting
API Health Monitoring
Setup Monitoring:
import time
import logging
from functools import wraps
from datadog.dogstatsd import DogStatsd

statsd = DogStatsd(host='localhost', port=8125)

def monitor_api_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.time()
        try:
            result = func(*args, **kwargs)
            statsd.increment('api.calls.success')
            return result
        except Exception as e:
            statsd.increment('api.calls.error')
            logging.error(f"API call failed: {e}")
            raise
        finally:
            duration = time.time() - start_time
            statsd.timing('api.calls.duration', duration * 1000)
    return wrapper
Error Rate Monitoring
Alert Thresholds:
Error rate > 5% for 5 minutes
Response time > 10 seconds
Rate limit approaching (>80% of limit)
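How these thresholds are evaluated depends on your monitoring stack. As a rough in-process illustration of the same rules, assuming you record one sample per call (for example from the decorator above), a check could look like this; the alerting hook is a placeholder.

import time
from collections import deque

# Sliding five-minute window of (timestamp, succeeded, duration_seconds) samples
WINDOW_SECONDS = 300
samples = deque()

def record_call(succeeded: bool, duration: float) -> None:
    now = time.time()
    samples.append((now, succeeded, duration))
    while samples and samples[0][0] < now - WINDOW_SECONDS:
        samples.popleft()

def check_thresholds(rate_limit_remaining: int, rate_limit_total: int) -> list:
    alerts = []
    if samples:
        error_rate = sum(1 for _, ok, _ in samples if not ok) / len(samples)
        slowest = max(d for _, _, d in samples)
        if error_rate > 0.05:
            alerts.append(f"Error rate {error_rate:.1%} over the last 5 minutes")
        if slowest > 10:
            alerts.append(f"Slowest response {slowest:.1f}s exceeds 10s")
    if rate_limit_total and rate_limit_remaining / rate_limit_total < 0.2:
        alerts.append("More than 80% of the rate limit consumed")
    return alerts  # forward these to your paging or alerting system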
Getting Help
For API integration issues:
Check API Status: Monitor third-party service status pages
Review Documentation: Check API documentation for updates
Test Endpoints: Use tools like Postman or curl to test directly
Contact Support: Include API logs and error messages
Support Information to Include:
API endpoint being called
Request/response examples
Error messages and codes
Integration configuration
Network environment details
Never share actual API keys or credentials when requesting support. Use example or redacted keys instead.