Security & privacy

How we protect your community's data

Hortz is built for creators who take their community seriously. Here's exactly what we do to keep member data safe, private, and yours.
Infrastructure
Frontend: Static HTML/JS · Netlify Edge
Functions: Netlify serverless · Node.js
Database: Postgres · Supabase
Storage: Supabase Storage
Auth: Supabase Auth (email + magic link)
Realtime: Supabase Realtime (Postgres changes)
Secrets: Per-tenant config + platform env
AI: Multi-provider LLM abstraction
Security controls
01
Tenant isolation by discriminator
Every row in every table carries a tenant_id. All read and write paths filter by it before returning data; the platform code never executes a query without an authenticated tenant scope. Service-role access is gated to the functions layer — never exposed to client code.
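A minimal sketch of this pattern (identifiers are illustrative, not Hortz's actual code): a query helper that refuses to run without an authenticated tenant scope, so the discriminator can never be omitted by a caller.

```javascript
// Hypothetical sketch: all data access goes through a helper that injects
// the tenant_id filter server-side. A query with no tenant scope never runs.
function scopedQuery(table, tenantId, filters = {}) {
  if (!tenantId) throw new Error('refusing to query without a tenant scope');
  // The discriminator is merged in here; callers cannot drop it.
  return { table, where: { ...filters, tenant_id: tenantId } };
}
```

Because the filter is applied inside the helper rather than at each call site, forgetting the tenant scope is a hard failure, not a silent leak.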
02
Auth-verified data access
Every database read and write goes through a Netlify function that verifies the caller's Supabase session token, resolves their member record + tenant, and constrains queries to that scope. Members can only access their own data; admin endpoints additionally check role and tenant ownership.
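The per-request flow can be sketched as follows. This is a simplified illustration, with `verifyToken` and `findMember` standing in for Supabase session verification and the member lookup:

```javascript
// Hypothetical sketch of the per-request auth flow: verify the session
// token, resolve the member record + tenant, and return a scope that
// every subsequent query must use.
async function resolveScope(req, verifyToken, findMember) {
  const token = (req.headers.authorization || '').replace(/^Bearer /, '');
  const user = await verifyToken(token);    // rejects invalid sessions
  if (!user) return { status: 401 };
  const member = await findMember(user.id); // member record + tenant
  if (!member) return { status: 403 };
  return {
    status: 200,
    scope: { tenantId: member.tenant_id, memberUid: member.uid, role: member.role },
  };
}
```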
03
Tenant-owned LLM keys (BYOK)
Pegasus tenants bring their own LLM API key. The platform stores it server-side only — it's never returned to the browser, never written to logs. Tenants who set up BYOK can also rotate or revoke the key from the admin settings panel without contacting support.
04
Rate limiting
Every function that calls an external API is rate-limited per authenticated user. A compromised account cannot abuse the platform or run up provider costs.
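The shape of the control looks like this. A fixed-window counter keyed by user id is shown for illustration; the production algorithm and limits may differ:

```javascript
// Hypothetical per-user fixed-window rate limiter. `now` is injectable
// so the window logic can be tested with a fake clock.
function makeRateLimiter(maxPerWindow, windowMs, now = Date.now) {
  const windows = new Map(); // userId -> { start, count }
  return function allow(userId) {
    const t = now();
    const w = windows.get(userId);
    if (!w || t - w.start >= windowMs) {
      windows.set(userId, { start: t, count: 1 }); // new window
      return true;
    }
    w.count += 1;
    return w.count <= maxPerWindow;
  };
}
```

Keying by authenticated user (not IP) is what makes a single compromised account unable to run up provider costs.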
05
Prompt injection defense
All user input is sanitized before being injected into AI prompts. Every system prompt includes an injection-defense header. Member content cannot manipulate the AI advisor's behavior or extract the prompt.
06
Per-thread AI off (poison rule)
Any participant can flip AI off for a thread. Once flipped off, the thread is permanently locked from AI access — re-enabling would leak the muted member's words to AI. The lock is checked atomically on every AI call path.
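The invariant can be sketched in a few lines. In production this would be a conditional update in Postgres; the in-memory version below illustrates the one-way lock (names are hypothetical):

```javascript
// Hypothetical sketch of the poison rule: the flag only ever moves from
// enabled to disabled, and every AI call path re-checks it at call time.
function makeThreadAiState() {
  const disabled = new Set();
  return {
    // Any participant can flip AI off for a thread.
    disableAi(threadId) { disabled.add(threadId); },
    // The lock is one-way: there is deliberately no enable path.
    enableAi() { throw new Error('re-enabling AI is not supported'); },
    // Checked on every AI call, not cached at thread load.
    async callAi(threadId, llm) {
      if (disabled.has(threadId)) throw new Error('AI is off for this thread');
      return llm();
    },
  };
}
```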
07
Security headers
Responses include Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, and Referrer-Policy headers. Clickjacking, MIME sniffing, and referrer leakage are mitigated at the HTTP response layer.
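A representative header set, matching the controls described above; the exact CSP policy on the live site may differ:

```javascript
// Hypothetical security-header configuration; values are illustrative.
const securityHeaders = {
  'Content-Security-Policy': "default-src 'self'",       // script/resource origins
  'X-Frame-Options': 'DENY',                             // clickjacking
  'X-Content-Type-Options': 'nosniff',                   // MIME sniffing
  'Referrer-Policy': 'strict-origin-when-cross-origin',  // referrer leakage
};
```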
08
Audit logging
Sensitive operations — badge actions, admin changes, data deletions, API key rotations, AI-toggle flips — are written to an append-only audit log. Clients cannot modify or delete audit entries.
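The append-only property can be sketched like this. A real deployment would enforce it in the database (e.g. by withholding UPDATE/DELETE grants); the in-memory version below shows the interface shape (names are hypothetical):

```javascript
// Hypothetical append-only audit log: the only operations exposed are
// append and read. Entries are frozen on write; reads return copies.
function makeAuditLog(now = Date.now) {
  const entries = [];
  return {
    append(actor, action, detail) {
      entries.push(Object.freeze({ at: now(), actor, action, detail }));
    },
    read() { return entries.slice(); }, // callers can't mutate the log
  };
}
```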
09
GDPR right to erasure
Members can request deletion of all personal data at any time. Journal entries, goals, badges, and profile data are permanently deleted. Chat messages are anonymized, preserving thread continuity for other members without identifying the member who requested erasure. The request itself is logged in the audit trail.
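A simplified sketch of the erasure sweep (table names and the anonymized author label are illustrative, not the actual schema): personal tables are hard-deleted, while chat messages keep their place in the thread but lose every identifying field.

```javascript
// Hypothetical GDPR erasure sweep over an in-memory stand-in for the DB.
function eraseMember(db, memberUid) {
  // Hard-delete personal data.
  for (const table of ['journal_entries', 'goals', 'badges', 'profiles']) {
    db[table] = db[table].filter((row) => row.member_uid !== memberUid);
  }
  // Anonymize chat messages so other members' threads stay coherent.
  db.messages = db.messages.map((m) =>
    m.member_uid === memberUid
      ? { ...m, member_uid: null, author_name: 'Former member' }
      : m
  );
  return db;
}
```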
10
Multi-provider LLM abstraction
Tenants can choose any supported provider through the LLM abstraction layer. Switching providers is a config change — no code rewrite. HIPAA-eligible providers (AWS Bedrock, Azure OpenAI) can be enabled per-tenant when their compliance posture requires it.
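The "config change, not code change" claim can be illustrated with a provider registry (provider names and shapes below are illustrative, not the actual abstraction layer):

```javascript
// Hypothetical provider registry: a tenant's config row selects the
// provider; no code changes when a tenant switches vendors.
const providers = {
  openai:  (cfg) => ({ name: 'openai',  model: cfg.model }),
  bedrock: (cfg) => ({ name: 'bedrock', model: cfg.model }), // HIPAA-eligible
  azure:   (cfg) => ({ name: 'azure',   model: cfg.model }), // HIPAA-eligible
};

function llmForTenant(tenantConfig) {
  const make = providers[tenantConfig.provider];
  if (!make) throw new Error(`unsupported provider: ${tenantConfig.provider}`);
  return make(tenantConfig);
}
```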
11
No ads. Ever.
Hortz does not serve ads, sell member data, or share behavioral data with third parties. Member activity stays in the platform database and nowhere else.
12
Cross-tenant member-data scoping (share_scope)
What we learn about a person — identity anchors, named patterns, voice traits, durable expertise — belongs to that person, not the community where it was first observed. Each insight carries a share_scope chosen by the member: visible across all their communities, visible only in specific ones, or private to the community of origin. The visibility predicate is enforced on every cross-tenant read; private content cannot leak even when the same person is active in multiple Hortz tenants.
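The visibility predicate can be sketched as a single pure function. The scope values below ('all', 'selected', 'private') are illustrative names for the three cases described above:

```javascript
// Hypothetical share_scope visibility predicate, evaluated on every
// cross-tenant read. Unknown scopes fail closed.
function isVisible(insight, viewingTenantId) {
  switch (insight.share_scope) {
    case 'all':      return true; // every community the member belongs to
    case 'selected': return insight.shared_tenants.includes(viewingTenantId);
    case 'private':  return insight.origin_tenant === viewingTenantId;
    default:         return false; // invalid scope: deny
  }
}
```

Failing closed on an unrecognized scope value is the key design choice: a malformed row denies access rather than leaking it.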
13
Delimited prompt-injection defense
Every user-controlled field that flows into an AI prompt is wrapped in typed delimiter blocks (PERSON_BUILDING, PERSON_IDENTITY, SOURCE_BODY, etc.) and the system prompt explicitly instructs the model that delimited content is data, not instructions. Sanitization strips both standard injection tokens ([INST], <|im_start|>) and our own delimiter sequences from user input, so a member cannot forge new prompt blocks by including delimiter strings in their own profile data.
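A minimal sketch combining both defenses, assuming `<<<…>>>` is the delimiter convention (the real token list and delimiter syntax may differ):

```javascript
// Hypothetical sanitizer: strip common model-control tokens AND the
// platform's own delimiter sequences, so user input cannot forge blocks.
const BANNED = [/\[INST\]/gi, /<\|im_start\|>/gi, /<\|im_end\|>/gi, /<<<+/g, />>>+/g];

function sanitize(text) {
  return BANNED.reduce((t, re) => t.replace(re, ''), text);
}

// Wrap cleaned text in a typed block the system prompt treats as data.
function delimit(blockType, text) {
  return `<<<${blockType}>>>\n${sanitize(text)}\n<<<END_${blockType}>>>`;
}
```

Stripping the delimiter characters before wrapping is what guarantees the only `<<<…>>>` markers in the assembled prompt are the ones the platform itself emitted.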
14
Atomic concurrency on shared resources
Daily AI surface budgets and de-duplication checks run inside per-member advisory locks at the database level. Concurrent requests for the same member can't race past the budget cap or queue duplicate triggers. Translation-cache misses serialize per cache key so N parallel callers produce one model call, not N.
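The per-key serialization for cache misses can be sketched as a single-flight helper: the first caller for a key does the work, concurrent callers await the same promise. (The budget caps themselves use Postgres advisory locks, which need a live database to demonstrate; names below are illustrative.)

```javascript
// Hypothetical single-flight helper: N parallel misses on the same key
// produce exactly one call to `work`.
function makeSingleFlight() {
  const inFlight = new Map(); // key -> pending promise
  return async function once(key, work) {
    if (inFlight.has(key)) return inFlight.get(key); // join the existing call
    const p = Promise.resolve()
      .then(work)
      .finally(() => inFlight.delete(key)); // allow a fresh call next time
    inFlight.set(key, p);
    return p;
  };
}
```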
15
Ongoing adversarial testing
Before every deploy that touches the AI prompt layer, surface delivery, or member-data plumbing, a regression smoke suite probes nine attack categories: prompt-injection delimiter escape, cross-tenant share_scope leak, concurrent cache thrash, dedupe race, endpoint ownership bypass, proposal double-act, foreign-tenant access, GDPR sweep coverage, and limit-bound exploitation. Issues found are fixed and the suite re-run before ship.
Threat model
Threat: Tenant data leak
Mitigation: tenant_id discriminator enforced in every query path; service role gated to server functions

Threat: Cross-tenant member-data leak
Mitigation: person_knowings share_scope enforced in the JS visibility predicate; private content visible only at the origin tenant; archived rows filtered; a DB check constraint blocks invalid scope values

Threat: Credential exposure
Mitigation: LLM and provider keys held server-side; never returned to clients or written to logs

Threat: Prompt injection (model-token)
Mitigation: Sanitizer strips [INST], <|im_start|>, <s>, ### and similar tokens from every user-controlled field before prompt assembly

Threat: Prompt injection (delimiter forge)
Mitigation: Sanitizer also strips <<<+/>>>+ sequences from user input; the system prompt frames all delimited blocks as data, not instructions; output is capped at 8,000 chars regardless of model response

Threat: Concurrent budget abuse
Mitigation: Per-member pg_advisory_xact_lock around budget check + dedupe + insert; verified under 50-call concurrent stress

Threat: Translation cache poisoning
Mitigation: Cache key includes tenant_id, so cross-tenant collisions are structurally impossible; member context changes invalidate via context_signature; the member's cache is wiped on GDPR erasure

Threat: API abuse
Mitigation: Per-user rate limiting on all LLM and write endpoints

Threat: XSS
Mitigation: CSP headers plus HTML escaping on all user-generated content

Threat: CSRF
Mitigation: Supabase JWT verification on every function call

Threat: Calendar injection
Mitigation: All ICS fields sanitized; newlines, colons, and semicolons stripped from user-provided content

Threat: AI thread leak
Mitigation: Per-thread AI-off poison rule; once locked, re-enable is blocked atomically

Threat: Endpoint ownership bypass
Mitigation: Surface and proposal mutation endpoints filter the lookup by (tenant_id, member_uid) from the verified token; cross-member access returns 404, cross-tenant proposal access returns 404, and an already-decided proposal returns 409
Reporting a vulnerability

If you discover a security vulnerability, please report it responsibly. We take all reports seriously and will respond within 48 hours.

security@hapitalist.com

Need a security review for your organization?

Enterprise tenants receive a full security architecture brief and a dedicated review. Get in touch.

Learn about Pegasus →