On-Premises / Private Deployment Options
The public demo at froggerapi.io is convenient, but many teams need to keep collections
inside their own environment. FroggerAPI is designed to run as a pair of containers
(a .NET API plus a Python converter sidecar) entirely inside your VPC or on-premises.
Deployment models
- AWS ECS (Fargate or EC2)
- Kubernetes (Helm chart or manifests)
- Docker Compose (see the sketch below)
- Bare metal / on-prem Docker runtime
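For the Docker Compose option, a minimal sketch of the two-container pair might look like the following. The image names, port numbers, and the CONVERTER_URL variable are illustrative assumptions rather than published artifacts; substitute whatever your private registry and configuration actually use.

```yaml
# docker-compose.yml (sketch). Image names, ports, and the CONVERTER_URL
# variable are placeholders, not published artifacts.
services:
  api:
    image: froggerapi/api:latest            # .NET 8 API exposing /api/convert
    ports:
      - "8080:8080"                         # publish only inside your network
    environment:
      CONVERTER_URL: "http://converter:5001"
    depends_on:
      - converter
    networks:
      - frogger-internal

  converter:
    image: froggerapi/converter:latest      # Python FastAPI conversion sidecar
    expose:
      - "5001"                              # reachable by the API service only
    networks:
      - frogger-internal

networks:
  frogger-internal:
    driver: bridge
```

With this layout, only the api service's port is published; the converter stays reachable solely on the internal Compose network.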
Reference architecture
At a high level:
- A .NET 8 API container exposes /api/convert to your internal clients.
- A Python FastAPI sidecar container performs the actual Postman → OpenAPI conversion.
- The .NET API calls the sidecar over an internal network (e.g., http://localhost:5001).
- Everything runs inside your network boundary; no external calls are made.
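On Kubernetes (via the Helm chart or raw manifests), this topology maps naturally onto two containers in a single Pod, so the API reaches the converter over localhost exactly as described above. The sketch below is illustrative only: the image names, the CONVERTER_URL variable, and the port numbers are assumptions to adapt to your own registry and configuration.

```yaml
# Illustrative Deployment sketch; image names, the CONVERTER_URL variable,
# and port numbers are assumptions to adapt to your own setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: froggerapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: froggerapi
  template:
    metadata:
      labels:
        app: froggerapi
    spec:
      containers:
        - name: api
          image: froggerapi/api:latest        # .NET 8 API serving /api/convert
          ports:
            - containerPort: 8080
          env:
            - name: CONVERTER_URL
              value: "http://localhost:5001"  # sidecar shares the Pod's network namespace
        - name: converter
          image: froggerapi/converter:latest  # Python FastAPI conversion sidecar
          ports:
            - containerPort: 5001
```

A ClusterIP Service or an internal-only ingress in front of the api container keeps the endpoint reachable from inside the cluster or VPC without any public exposure.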
Requirements
- Docker runtime or container orchestrator (ECS, Kubernetes, etc.).
- 1–2 vCPU and 512 MB–1 GB RAM to start (per environment, depending on load).
- Optional: internal load balancer or ingress controller to expose the API.
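As a rough illustration of that starting footprint in Kubernetes terms, requests and limits along these lines could sit under each container spec in a Deployment like the sketch above; the numbers are assumptions to tune against observed load.

```yaml
# Illustrative per-container sizing; the pair together stays within the
# suggested 1–2 vCPU / 512 MB–1 GB starting budget. Tune against real load.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
```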
Benefits of private deployment
- Collections and environment files never leave your network boundary.
- Logging, storage, and retention are under your control.
- Direct integration with your internal CI/CD pipelines and Tenable WAS deployment.
- Can be locked down to specific subnets, VPNs, or bastion access only.
Next steps
If you’re interested in running FroggerAPI privately, email
feedback@froggerapi.io with a short note about:
- Your environment (AWS, on-prem, hybrid).
- Rough scale (collections per day, file sizes).
- Whether you’re already using Tenable WAS.
We’ll follow up with more detailed deployment docs, licensing options, and a technical
conversation about your use case.