Reverse-proxy mode
Optional deployment where your reverse proxy (Nginx, Envoy, or custom gateway) calls the same RiskShield Assess API and enforces allow, challenge, or block at the edge before traffic reaches your origin.
What it is
In SDK/middleware mode, your application server calls the Assess API before handling sensitive requests. In reverse-proxy mode, a reverse proxy in front of your origin calls the same POST /api/v1/protect/assess for each request (or selected paths), then forwards to origin, redirects to a challenge, or returns 403. No traffic is proxied through RiskShield; your proxy runs in your environment.
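The proxy-side call can be sketched as below. This is a minimal sketch, not the documented client: the host in `ASSESS_URL`, the `Authorization` bearer scheme, and the payload fields (`method`, `url`) are assumptions for illustration; only the path `POST /api/v1/protect/assess` and the `X-Forwarded-For` header come from this doc.

```python
import json
import urllib.request

# Hypothetical endpoint host; substitute your real RiskShield API base URL.
ASSESS_URL = "https://api.riskshield.example/api/v1/protect/assess"

def build_assess_request(client_ip: str, method: str, url: str, api_key: str):
    """Build the (body, headers) pair the proxy sends to the Assess API.

    The payload fields here are illustrative assumptions, not the full
    documented contract.
    """
    body = json.dumps({"method": method, "url": url}).encode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "X-Forwarded-For": client_ip,          # real client IP, per the header contract
    }
    return body, headers

def assess(client_ip: str, method: str, url: str, api_key: str) -> dict:
    """POST the assessment request and return the decoded JSON decision."""
    body, headers = build_assess_request(client_ip, method, url, api_key)
    req = urllib.request.Request(ASSESS_URL, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)
```

Keeping the timeout short matters here: the assess call sits on the hot path of every protected request, so a hung API call would otherwise stall client traffic.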
When to use it
- You want to protect all traffic (or entire paths) at the edge before it reaches your app.
- You want one proxy to protect many origins without changing application code.
- You want to fit into existing proxy/gateway policies and enterprise traffic management.
Flow
- Client hits your proxy (e.g. https://yourapp.com/login).
- Proxy sends a normalized payload to POST /api/v1/protect/assess with X-Forwarded-For: <real_client_ip>.
- RiskShield returns decision (ALLOW, CHALLENGE_*, BLOCK), risk_score, and optionally challenge_url.
- Proxy enforces the decision: forward to origin, redirect to challenge_url, or return 403.
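The enforcement step above reduces to a small decision map. A minimal sketch, assuming the decision strings from this doc (ALLOW, CHALLENGE_* variants, BLOCK); the tuple return shape is an illustrative convention, not an API:

```python
def enforce(decision: str, challenge_url=None):
    """Map an Assess API decision to the proxy's action.

    Returns an (action, detail) tuple the proxy translates into its own
    primitives (proxy_pass, 302 redirect, or a 403 response).
    """
    if decision == "ALLOW":
        return ("forward", None)            # pass the request through to origin
    if decision.startswith("CHALLENGE_"):
        return ("redirect", challenge_url)  # send the client to challenge_url
    # BLOCK, and any decision we do not recognize, is refused.
    return ("block", 403)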
Header contract
When your proxy calls RiskShield, set X-Forwarded-For to the real client IP so scoring and rate limits apply to the end user. Optional headers for audit: X-RiskShield-Site-Id, X-RiskShield-Request-Id, X-RiskShield-Client-IP. Strip any client-supplied RiskShield headers before building the request to the API.
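The strip-then-set order is the important part of this contract: the proxy must drop anything the client sent in the RiskShield header namespace before adding its own trusted values, or a client could spoof them. A minimal sketch of that sanitization (the dict-based header model and the `request_id` parameter are illustrative, not part of the documented API):

```python
def sanitize_headers(incoming: dict, real_client_ip: str, request_id: str) -> dict:
    """Build trusted headers for the Assess API call.

    Drops every client-supplied X-RiskShield-* header, then sets
    X-Forwarded-For and the optional audit header from values the
    proxy itself determined.
    """
    clean = {
        k: v for k, v in incoming.items()
        if not k.lower().startswith("x-riskshield-")
    }
    clean["X-Forwarded-For"] = real_client_ip      # real client IP for scoring and rate limits
    clean["X-RiskShield-Request-Id"] = request_id  # optional audit correlation ID
    return clean
```

Matching on the lowercased name guards against case-variant spoofing (header names are case-insensitive per HTTP semantics).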
Full documentation
The repository includes a full guide with objectives, architecture diagram, request/response contract, security requirements, and example flows (Nginx, Envoy). See docs/REVERSE_PROXY_MODE.md in the RiskShield codebase.