# Prompt Enhancement API Complete Reference
This section provides comprehensive examples for all Prompt Enhancement API endpoints available in Vision Studio, including prompt enhancement, background adaptation, and subject-based prompt generation.
## Prerequisites

```python
import requests
import json

# Your API endpoint and key
API_URL = "http://localhost:8527/api/v1/prompt"
API_KEY = "your-api-key"

headers = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}
```
## Prompt Enhancement

### 1. Enhance Prompt

Transform basic prompts into detailed, professional descriptions for better image generation results.

#### Basic Prompt Enhancement

```python
data = {
    "prompt": "A mountain landscape with a lake",
    "temperature": 0.7,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/enhance-prompt",
    headers=headers,
    json=data
)
```
#### Creative Enhancement with Higher Temperature

```python
data = {
    "prompt": "A cozy coffee shop interior",
    "temperature": 1.2,  # Higher temperature for more creative variations
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/enhance-prompt",
    headers=headers,
    json=data
)
```
#### Conservative Enhancement with Lower Temperature

```python
data = {
    "prompt": "Professional headshot of a business person",
    "temperature": 0.3,  # Lower temperature for more consistent results
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/enhance-prompt",
    headers=headers,
    json=data
)
```
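The examples above only issue the request. A minimal sketch of reading the result, assuming the response body follows the `{"success": ..., "data": {"enhanced_prompt": ...}}` shape shown in the concurrent example later in this section (`extract_enhanced_prompt` is a hypothetical helper name, not part of the API):

```python
def extract_enhanced_prompt(payload: dict) -> str:
    """Return the enhanced prompt from a response body, or raise on failure.

    Assumes the {"success": ..., "data": {"enhanced_prompt": ...}} shape
    used in the concurrent example below; adjust if your deployment differs.
    """
    if not payload.get("success"):
        raise RuntimeError(payload.get("message", "enhancement failed"))
    return payload["data"]["enhanced_prompt"]

# Typical usage after a successful POST:
# enhanced = extract_enhanced_prompt(response.json())
```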
## Background Prompt Generation

### 1. Adapt Background Prompt

Generate prompts to adapt existing backgrounds for specific audiences and locations.

#### Family-Friendly Background Adaptation

```python
data = {
    "audience": "families with young children",
    "location": "Paris, France",
    "temperature": 0.7,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/adapt-background-prompt",
    headers=headers,
    json=data
)
```
#### Professional Audience Background

```python
data = {
    "audience": "business professionals",
    "location": "Tokyo, Japan",
    "temperature": 0.5,
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/adapt-background-prompt",
    headers=headers,
    json=data
)
```
#### Youth-Oriented Background

```python
data = {
    "audience": "young adults and millennials",
    "location": "Morocco",
    "temperature": 0.9,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/adapt-background-prompt",
    headers=headers,
    json=data
)
```
### 2. Generate Background Prompt

Create prompts for generating backgrounds from scratch.

#### Scenic Location Background

```python
data = {
    "audience": "nature enthusiasts",
    "location": "Southern France",
    "temperature": 0.6,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-background-prompt",
    headers=headers,
    json=data
)
```
#### Cultural Background Creation

```python
data = {
    "audience": "cultural tourists",
    "location": "Japan",
    "temperature": 0.8,
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/generate-background-prompt",
    headers=headers,
    json=data
)
```
#### Senior-Friendly Background

```python
data = {
    "audience": "seniors and retirees",
    "location": "Italy",
    "temperature": 0.4,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-background-prompt",
    headers=headers,
    json=data
)
```
## Complete Image Generation

### Generate Image Prompt

Create comprehensive prompts for complete image generation.

#### Travel Photography Style

```python
data = {
    "audience": "travel enthusiasts",
    "location": "Morocco",
    "temperature": 0.7,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-image-prompt",
    headers=headers,
    json=data
)
```
#### Corporate Image Generation

```python
data = {
    "audience": "professionals and executives",
    "location": "Tokyo, Japan",
    "temperature": 0.5,
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/generate-image-prompt",
    headers=headers,
    json=data
)
```
#### Family-Oriented Image

```python
data = {
    "audience": "families with children",
    "location": "Paris, France",
    "temperature": 0.6,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-image-prompt",
    headers=headers,
    json=data
)
```
## Subject-Based Prompts

### 1. Adapt Subject Prompt

Generate prompts for custom models featuring specific subjects.

#### Pet Photography Prompt

```python
data = {
    "subject": "golden retriever dog",
    "audience": "pet lovers and families",
    "location": "New York Central Park",
    "temperature": 0.7,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/adapt-subject-prompt",
    headers=headers,
    json=data
)
```
#### Fashion Subject Prompt

```python
data = {
    "subject": "young woman",
    "audience": "fashion enthusiasts",
    "location": "Paris fashion district",
    "temperature": 0.8,
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/adapt-subject-prompt",
    headers=headers,
    json=data
)
```
#### Professional Portrait Subject

```python
data = {
    "subject": "business executive",
    "audience": "professionals and corporations",
    "location": "Tokyo business district",
    "temperature": 0.4,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/adapt-subject-prompt",
    headers=headers,
    json=data
)
```
### 2. Generate Subject Pose Prompt

Create prompts for subjects in specific poses against white backgrounds.

#### Animal Pose Prompt

```python
data = {
    "subject": "cat",
    "temperature": 0.7,
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-subject-pose-prompt",
    headers=headers,
    json=data
)
```
#### Human Subject Pose

```python
data = {
    "subject": "person dancing",
    "temperature": 0.9,  # Higher temperature for creative poses
    "model_provider": "gemini"
}

response = requests.post(
    f"{API_URL}/generate-subject-pose-prompt",
    headers=headers,
    json=data
)
```
#### Product Subject Pose

```python
data = {
    "subject": "luxury watch",
    "temperature": 0.3,  # Low temperature for consistent product shots
    "model_provider": "openai"
}

response = requests.post(
    f"{API_URL}/generate-subject-pose-prompt",
    headers=headers,
    json=data
)
```
## Advanced Example: Concurrent Prompt Enhancement

Here's an example of processing multiple prompts concurrently using asyncio and aiohttp. It reuses the `API_URL` and `API_KEY` defined in Prerequisites:
```python
import asyncio
import textwrap
import time
from typing import Any, Dict

import aiohttp


async def enhance_prompt_async(session: aiohttp.ClientSession, prompt: str,
                               temperature: float = 0.7, model_provider: str = "openai",
                               request_id: int = 0) -> Dict[str, Any]:
    """Enhance a single prompt asynchronously."""
    url = f"{API_URL}/enhance-prompt"
    data = {
        "prompt": prompt,
        "temperature": temperature,
        "model_provider": model_provider
    }
    headers_async = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json"
    }
    start_time = time.time()
    try:
        async with session.post(url, headers=headers_async, json=data) as response:
            result = await response.json()
            return {
                "request_id": request_id,
                "original_prompt": prompt,
                "temperature": temperature,
                "model_provider": model_provider,
                "status": response.status,
                "duration": time.time() - start_time,
                "result": result
            }
    except Exception as e:
        return {
            "request_id": request_id,
            "original_prompt": prompt,
            "temperature": temperature,
            "model_provider": model_provider,
            "status": -1,
            "duration": time.time() - start_time,
            "error": str(e)
        }


async def process_prompts_concurrently():
    """Process multiple prompts with different settings concurrently."""
    # Test prompts with different complexity levels
    test_prompts = [
        {"prompt": "A sunset over mountains", "temperature": 0.5, "model_provider": "openai"},
        {"prompt": "Modern office workspace", "temperature": 0.7, "model_provider": "gemini"},
        {"prompt": "Vintage car in city street", "temperature": 0.9, "model_provider": "openai"},
        {"prompt": "Abstract digital art", "temperature": 1.2, "model_provider": "gemini"},
        {"prompt": "Cozy living room interior", "temperature": 0.4, "model_provider": "openai"},
        {"prompt": "Street food market scene", "temperature": 0.8, "model_provider": "gemini"},
        {"prompt": "Professional headshot", "temperature": 0.3, "model_provider": "openai"},
        {"prompt": "Fantasy landscape with dragons", "temperature": 1.5, "model_provider": "gemini"},
    ]

    # Configure session for concurrent requests
    timeout = aiohttp.ClientTimeout(total=60)  # 1 minute timeout
    connector = aiohttp.TCPConnector(limit=10)

    async with aiohttp.ClientSession(connector=connector, timeout=timeout) as session:
        # Create tasks for all prompts
        tasks = [
            enhance_prompt_async(
                session=session,
                prompt=config["prompt"],
                temperature=config["temperature"],
                model_provider=config["model_provider"],
                request_id=i
            )
            for i, config in enumerate(test_prompts)
        ]

        print(f"Starting {len(tasks)} concurrent prompt enhancement requests...")
        start_time = time.time()

        # Execute all requests concurrently
        results = await asyncio.gather(*tasks)
        total_time = time.time() - start_time
        print(f"Completed all requests in {total_time:.3f} seconds")

        # Process and display results
        successful = 0
        failed = 0
        successful_durations = []
        results_by_provider = {"openai": [], "gemini": []}

        for result in results:
            if result["status"] == 200 and result["result"]["success"]:
                successful += 1
                successful_durations.append(result["duration"])
                provider = result["model_provider"]
                results_by_provider[provider].append({
                    "original": result["original_prompt"],
                    "enhanced": result["result"]["data"]["enhanced_prompt"],
                    "temperature": result["temperature"],
                    "duration": result["duration"]
                })
                print(f"✅ Request {result['request_id']}: {provider} "
                      f"(T={result['temperature']}) - {result['duration']:.3f}s")
            else:
                failed += 1
                # A transport failure carries "error"; an API failure carries "result"
                if "error" in result:
                    error_msg = result["error"]
                else:
                    error_msg = result["result"].get("message", f"HTTP {result['status']}")
                print(f"❌ Request {result['request_id']}: {result['model_provider']} - {error_msg}")

        print(f"\n📊 Summary: {successful} successful, {failed} failed")
        if successful > 0:
            avg_duration = sum(successful_durations) / successful
            print(f"Average processing time: {avg_duration:.3f}s")
            print(f"Requests per second: {len(results) / total_time:.2f}")

        # Display enhanced prompts organized by provider
        print("\n" + "=" * 80)
        print("ENHANCED PROMPTS BY PROVIDER")
        print("=" * 80)

        for provider, provider_results in results_by_provider.items():
            if not provider_results:
                continue
            print(f"\n{provider.upper()} RESULTS:")
            print("-" * 40)
            for i, res in enumerate(provider_results, 1):
                print(f"\n{i}. Original (T={res['temperature']}):")
                print(f"   {res['original']}")
                print(f"   Enhanced ({res['duration']:.3f}s):")
                # Wrap long enhanced prompts for better readability
                print(textwrap.fill(res["enhanced"], width=80,
                                    initial_indent="   ", subsequent_indent="   "))


# Run the concurrent prompt enhancement
if __name__ == "__main__":
    asyncio.run(process_prompts_concurrently())
```
This concurrent example demonstrates:

- **Multi-provider testing**: Compares OpenAI and Gemini results side by side
- **Temperature variations**: Tests different creativity levels (0.3 to 1.5)
- **Organized results**: Groups results by provider for easy comparison
- **Performance metrics**: Tracks processing times and success rates
- **Error handling**: Robust error handling for network issues and API errors
- **Text formatting**: Pretty-prints long enhanced prompts with proper wrapping
## Available Endpoints Summary

| Endpoint | Description | Parameters | Output |
|---|---|---|---|
| `/enhance-prompt` | Transform basic prompts into detailed descriptions | `prompt`, `temperature`, `model_provider` | Enhanced prompt |
| `/adapt-background-prompt` | Adapt backgrounds for audience/location | `audience`, `location`, `temperature`, `model_provider` | Background adaptation prompt |
| `/generate-background-prompt` | Create backgrounds from scratch | `audience`, `location`, `temperature`, `model_provider` | Background creation prompt |
| `/generate-image-prompt` | Generate complete image prompts | `audience`, `location`, `temperature`, `model_provider` | Full image generation prompt |
| `/adapt-subject-prompt` | Create subject-focused prompts | `subject`, `audience`, `location`, `temperature`, `model_provider` | Subject-specific prompt |
| `/generate-subject-pose-prompt` | Generate pose prompts for subjects | `subject`, `temperature`, `model_provider` | Pose-specific prompt |
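Since every endpoint in the table shares the same POST shape, the repeated request pattern can be collapsed into one helper. This is a sketch, not part of the API: `endpoint_url` and `call_prompt_endpoint` are hypothetical names, and the base URL and headers mirror the Prerequisites section.

```python
import requests

# Mirrors the Prerequisites section
API_URL = "http://localhost:8527/api/v1/prompt"
API_KEY = "your-api-key"
headers = {"X-API-Key": API_KEY, "Content-Type": "application/json"}

def endpoint_url(endpoint: str, base: str = API_URL) -> str:
    """Join the base URL with any endpoint path from the table above."""
    return f"{base.rstrip('/')}/{endpoint.lstrip('/')}"

def call_prompt_endpoint(endpoint: str, payload: dict) -> dict:
    """POST a payload to a Prompt Enhancement endpoint and return the JSON body."""
    response = requests.post(endpoint_url(endpoint), headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

# Example usage (requires a running server):
# result = call_prompt_endpoint("/enhance-prompt",
#                               {"prompt": "A mountain lake", "temperature": 0.7,
#                                "model_provider": "openai"})
```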
## Best Practices
- Temperature selection: Use 0.3-0.5 for consistent results, 0.7-0.9 for balanced creativity, 1.0-2.0 for maximum variation
- Model provider choice: OpenAI tends to be more detailed and photographic, Gemini more artistic and varied
- Audience specificity: Be specific about target demographics (e.g., "young professionals aged 25-35" vs "professionals")
- Location context: Include cultural and geographical details for better localized results
- Subject clarity: Use descriptive, specific subject names for better pose and styling suggestions
- Concurrent processing: Use async patterns for batch processing multiple prompts efficiently
- Error handling: Implement retry logic for network failures and validation errors
- Result validation: Always check the success flag and handle potential API errors gracefully
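The retry advice above can be sketched as a small wrapper with exponential backoff. This is illustrative only: `post_with_retries` and `backoff_delay` are hypothetical names, and the attempt count and delays are assumptions, not values mandated by the API.

```python
import time
import requests

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(base * (2 ** attempt), cap)

def post_with_retries(url: str, headers: dict, payload: dict,
                      max_attempts: int = 3) -> dict:
    """POST with retries on network errors and 5xx responses.

    4xx responses are returned immediately so the caller can inspect
    validation details instead of retrying a request that cannot succeed.
    """
    last_error: Exception = RuntimeError("no attempts made")
    for attempt in range(max_attempts):
        try:
            response = requests.post(url, headers=headers, json=payload, timeout=30)
            if response.status_code < 500:
                return response.json()
            last_error = RuntimeError(f"HTTP {response.status_code}")
        except requests.RequestException as exc:
            last_error = exc
        time.sleep(backoff_delay(attempt))
    raise last_error
```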