VantageDash

Deployment Guide

Vercel, Coolify, Supabase config, DNS, troubleshooting

VantageDash is deployed across three services: Vercel (frontend), Coolify on Proxmox (backend, migrated from Railway 2026-03-21), and Supabase (database + auth).

Frontend — Vercel

URL: https://vantage-dash.vercel.app

Configuration

  • Root directory: frontend
  • Framework preset: Next.js
  • Build command: npm run build
  • Output directory: .next
  • Auto-deploys: Every push to main

Environment Variables

| Variable | Value |
| --- | --- |
| NEXT_PUBLIC_SUPABASE_URL | https://vucohdxuqzhujliyinly.supabase.co |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Supabase Dashboard → Settings → API |
| NEXT_PUBLIC_BACKEND_URL | https://api.vantagedash.io |

Build Verification

cd frontend
npm run build   # must pass before committing
npm test        # 651 vitest tests (all passing)

Backend — Coolify

URL: https://api.vantagedash.io (Coolify self-hosted; migrated from Railway 2026-03-21)

Configuration

  • Builder: Dockerfile (backend/Dockerfile)
  • Build context: Repository root (not backend/ — Dockerfile copies root scripts)
  • Start command: Defined in Dockerfile CMD, NOT in railway.toml
  • Healthcheck: GET /api/health (60s timeout, 5 retries)

Environment Variables

| Variable | Value |
| --- | --- |
| SUPABASE_URL | https://vucohdxuqzhujliyinly.supabase.co |
| SUPABASE_ANON_KEY | Supabase anon key |
| SUPABASE_SERVICE_ROLE_KEY | Supabase service role key (bypasses RLS) |
| CORS_ORIGINS | https://vantagedash.io |
| PORT | 8000 (must match domain config) |
| OPENAI_API_KEY | OpenAI API key (for AI matching + embeddings) |
| SHOPIFY_STORE_URL | Global Shopify store URL fallback |
| SHOPIFY_ACCESS_TOKEN | Global Shopify access token fallback |
| FRONTEND_URL | Frontend URL for billing redirects (default: https://vantage-dash.vercel.app) |

Dockerfile Details

FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PLAYWRIGHT_BROWSERS_PATH=/opt/pw-browsers

WORKDIR /app

# Build deps for native packages (rapidfuzz)
RUN apt-get update && apt-get install -y --no-install-recommends gcc g++ \
    && rm -rf /var/lib/apt/lists/*

COPY backend/requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Install Playwright + Chromium for JS-rendered scraping (~200MB)
# PLAYWRIGHT_BROWSERS_PATH ensures non-root user can access browsers
RUN playwright install --with-deps chromium

# Remove build tools (skip autoremove to preserve Playwright system deps)
RUN apt-get purge -y gcc g++ && rm -rf /var/lib/apt/lists/*

# Non-root user (NIST AC-6: Least Privilege)
RUN groupadd -r appuser && useradd -r -g appuser -d /app -s /sbin/nologin appuser
RUN chmod -R o+rx /opt/pw-browsers

# Copy root scripts + backend
COPY scraper.py ai_matcher.py product_matcher.py shopify_sync.py \
     price_utils.py config.py tenant.py industry_profile.py \
     industry_templates.py embedding_service.py ./
COPY backend/app ./app

RUN chown -R appuser:appuser /app
USER appuser
EXPOSE 8000
CMD ["sh", "-c", "exec uvicorn app.main:app --host 0.0.0.0 --port ${PORT:-8000} --log-level info"]

Critical Deployment Notes

  1. Do NOT add **startCommand** to **railway.toml** — it bypasses shell expansion, making $PORT a literal string instead of the env var
  2. PORT must be 8000 — Railway injects its own PORT that can differ; we pin PORT=8000 as a service variable
  3. Build context is repo root — the Dockerfile copies root-level Python scripts alongside backend/app/
  4. Startup diagnostics: main.py prints [STARTUP] lines — check the deploy logs (not build logs) to debug crashes

Resilient Startup

The backend starts even with missing env vars:

  1. config.py catches Settings() validation errors → stores in settings_error
  2. main.py only mounts health router if settings fail
  3. Health endpoint reports the specific config error
  4. Other routers mount only when all required settings are present
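The startup sequence above can be simulated in plain Python. This is a minimal sketch: `load_settings`, `mount_routers`, and `health_payload` are illustrative stand-ins for the real pydantic `Settings` and FastAPI wiring, not the actual config.py / main.py code.

```python
# Sketch of the resilient-startup pattern: validation errors are captured
# instead of crashing the process, and routers mount conditionally.
REQUIRED = ["SUPABASE_URL", "SUPABASE_ANON_KEY", "SUPABASE_SERVICE_ROLE_KEY"]

def load_settings(env: dict):
    """Return (settings, error). Never raises, mirroring config.py."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        return None, f"missing required settings: {', '.join(missing)}"
    return dict(env), None

def mount_routers(settings, settings_error):
    """Health always mounts; feature routers only when config is valid."""
    routers = ["health"]
    if settings_error is None:
        routers += ["competitors", "scrape", "billing"]
    return routers

def health_payload(settings_error):
    """Health endpoint reports the specific config error when degraded."""
    if settings_error is None:
        return {"status": "ok"}
    return {"status": "degraded", "config_error": settings_error}
```

With all variables set, every router mounts; with one missing, only the health router mounts and the health endpoint surfaces the error.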

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| 502 on deploy | Container crashing | Check deploy logs for [STARTUP] lines |
| Health returns degraded | Missing env vars | Check config_error field, set missing vars |
| CORS errors | Wrong CORS_ORIGINS | Format: JSON array, comma-separated, or single URL |
| $PORT literal in logs | startCommand in railway.toml | Remove startCommand, use Dockerfile CMD |
| Native dep build fails | Missing gcc/g++ | Verify Dockerfile installs build tools before pip |
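The CORS_ORIGINS formats listed above (JSON array, comma-separated, or single URL) can be illustrated with a small parser. This is a sketch; `parse_cors_origins` is a hypothetical helper, not necessarily how the backend parses the variable.

```python
import json

def parse_cors_origins(raw: str) -> list[str]:
    """Accept a JSON array, a comma-separated list, or a single URL.
    Illustrative only; the real backend parsing may differ."""
    raw = raw.strip()
    if raw.startswith("["):
        # JSON array form: '["https://a.io", "https://b.io"]'
        return [str(origin).strip() for origin in json.loads(raw)]
    # Comma-separated or single-URL form
    return [part.strip() for part in raw.split(",") if part.strip()]
```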

Stripe Billing Activation

Status: ACTIVATED as of 2026-03-17. Stripe Tax enabled (automatic collection, SaaS tax code, Maryland registered, NAICS 541512).

Stripe handles subscription billing for the 3-tier pricing model (Free / Pro / Enterprise).

Setup Steps

  1. Create Stripe products and prices by running the setup script:
STRIPE_SECRET_KEY=sk_test_... python stripe_setup.py

This creates the Pro ($49/mo) and Enterprise ($199/mo) products in Stripe and prints the price IDs to set in the backend environment.

  2. Set backend environment variables (5 required):

| Variable | Value |
| --- | --- |
| STRIPE_SECRET_KEY | Stripe secret key (sk_test_... or sk_live_...) |
| STRIPE_PUBLISHABLE_KEY | Stripe publishable key (pk_test_... or pk_live_...) |
| STRIPE_WEBHOOK_SECRET | Webhook signing secret (whsec_...) |
| STRIPE_PRICE_ID_PRO | Price ID for Pro plan (from stripe_setup.py output) |
| STRIPE_PRICE_ID_ENTERPRISE | Price ID for Enterprise plan (from stripe_setup.py output) |
  3. Configure Stripe webhook in the Stripe Dashboard:
    • Endpoint URL: https://api.vantagedash.io/api/billing/webhook
    • Events to send:
      • checkout.session.completed
      • customer.subscription.updated
      • customer.subscription.deleted
      • invoice.payment_failed
  4. Configure Stripe Customer Portal in the Stripe Dashboard:
    • Enable plan switching between Pro and Enterprise
    • Enable subscription cancellation
    • Set return URL to https://vantage-dash.vercel.app/settings

Database Tables

Two RLS-enforced tables support billing:

  • subscriptions — One row per tenant: tenant_id, stripe_customer_id, stripe_subscription_id, plan (free/pro/enterprise), status (active/past_due/canceled)
  • billing_events — Webhook event log: stripe_event_id, event_type, payload, processed_at

Plan Limits

| Feature | Free | Pro ($49/mo) | Enterprise ($199/mo) |
| --- | --- | --- | --- |
| Competitors | 2 | 10 | Unlimited |
| AI matching | No | Yes | Yes |
| Auto-scrape | No | 24h minimum | 1h minimum |
| Vector embeddings | No | No | Yes |
| Webhooks/Slack | No | Yes | Yes |

Notes

  • The webhook endpoint (/api/billing/webhook) bypasses rate limiting and JWT auth — it authenticates via Stripe signature verification
  • Free plan is the default for all new tenants (no Stripe interaction needed)
  • billing_service.py enforces plan limits (competitor count, feature access) at the API layer
  • stripe_setup.py is safe to re-run, but note that each run creates new Stripe price objects (it is not fully idempotent); update the STRIPE_PRICE_ID_* variables after a re-run
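A minimal sketch of how plan limits like these might be enforced at the API layer. The `PLAN_LIMITS` table and helper names are illustrative (the real checks live in billing_service.py); the limit values come from the Plan Limits table above.

```python
# Plan limits from the table above; None means unlimited / not applicable.
PLAN_LIMITS = {
    "free":       {"max_competitors": 2,    "ai_matching": False, "min_scrape_hours": None},
    "pro":        {"max_competitors": 10,   "ai_matching": True,  "min_scrape_hours": 24},
    "enterprise": {"max_competitors": None, "ai_matching": True,  "min_scrape_hours": 1},
}

def can_add_competitor(plan: str, current_count: int) -> bool:
    """Allow adding a competitor only while under the plan's cap."""
    limit = PLAN_LIMITS[plan]["max_competitors"]
    return limit is None or current_count < limit

def can_use_ai_matching(plan: str) -> bool:
    """Feature-gate AI matching by plan."""
    return PLAN_LIMITS[plan]["ai_matching"]
```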

Supabase

Project: vucohdxuqzhujliyinly
Organization: kjmxyjahhqnhzjzvabae
Region: us-east-1

Key Configuration

  • pgvector extension: Enabled (for embedding similarity search)
  • RLS: Enforced on all tables
  • Auth trigger: handle_new_user() auto-provisions tenants on signup
  • Schema: See Database Schema & RLS page

Keys

All keys available at: Supabase Dashboard → Settings → API → Project API keys

  • Anon key: Used by frontend and backend per-request clients (respects RLS)
  • Service role key: Used by backend background tasks (bypasses RLS)

Configuration Files

| File | Purpose |
| --- | --- |
| railway.toml | Railway build + deploy config (healthcheck, retries) |
| render.yaml | Render alternative deploy config |
| backend/Dockerfile | Container image definition |
| backend/requirements.txt | Production Python dependencies |
| backend/requirements-dev.txt | Dev dependencies (includes test packages) |
| .dockerignore | Excludes frontend, tests, node_modules, .git |

CI/CD Pipeline

GitHub Actions Workflows (DISABLED — manual-only since 2026-03-19)

All 4 workflows set to workflow_dispatch only (manual trigger). Auto-triggers disabled because GitHub Actions free minutes were exhausted. To re-enable: restore on: push/pull_request triggers in the YAML files.

  • CI (ci.yml): npm run build + vitest + bundle size check
  • Backend CI: pytest + hypothesis property-based testing + coverage thresholds (80%/72%)
  • Playwright: E2E tests (~170 tests)
  • CodeQL: Static analysis (JS/TS + Python)
  • Security: Gitleaks + Trivy + SBOM + license compliance + OWASP ZAP
  • Dependabot: Automated dependency PRs (pip + npm + GitHub Actions)

Pre-commit Hooks

  • Husky + lint-staged: Runs gitleaks secrets scan before every commit
  • Prevents accidental secret commits (API keys, tokens)

Deployment Flow

git push origin main
    │
    ├──▶ Vercel auto-deploys frontend (2-3 min)
    └──▶ Coolify auto-deploys backend via Dockerfile (3-5 min)

Local Development

Frontend

cd frontend
npm install
npm run dev          # http://localhost:3000

Requires .env.local:

NEXT_PUBLIC_SUPABASE_URL=https://vucohdxuqzhujliyinly.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon_key>
NEXT_PUBLIC_BACKEND_URL=http://localhost:8000

Backend

cd backend
pip install -r requirements-dev.txt
uvicorn app.main:app --reload  # http://localhost:8000

Requires environment variables (or .env file in backend/):

SUPABASE_URL=https://vucohdxuqzhujliyinly.supabase.co
SUPABASE_ANON_KEY=<anon_key>
SUPABASE_SERVICE_ROLE_KEY=<service_role_key>
OPENAI_API_KEY=<optional>

Running Tests

# Frontend
cd frontend && npm test                    # vitest (651 tests)
cd frontend && npx playwright test         # e2e (~170 tests)

# Backend
cd backend && pytest tests/ -v             # pytest (~1,417 tests)
cd backend && pytest tests/ -v --cov=app   # with coverage

Monitoring Env Vars (ACTIVE since Session 40)

Vercel (Frontend)

| Variable | Required | Purpose |
| --- | --- | --- |
| NEXT_PUBLIC_SENTRY_DSN | Optional | Sentry error tracking DSN |
| SENTRY_AUTH_TOKEN | Optional | Source map upload (build-time) |
| SENTRY_ORG | Optional | Sentry org slug |
| SENTRY_PROJECT | Optional | Sentry project slug |
| NEXT_PUBLIC_POSTHOG_KEY | Optional | PostHog project API key |
| NEXT_PUBLIC_POSTHOG_HOST | Optional | PostHog host (default: https://us.i.posthog.com) |

Coolify (Backend)

| Variable | Required | Purpose |
| --- | --- | --- |
| SENTRY_DSN | Optional | Sentry error tracking DSN |
| ENVIRONMENT | Optional | Sentry environment tag (default: production) |

Both Sentry and PostHog are no-ops when their env vars are not set — the app runs normally without them.

Umami Analytics (Session 52)

Self-hosted, privacy-friendly web analytics running on Proxmox CT 153.

| Setting | Value |
| --- | --- |
| Instance | https://umami.ahrlink.me |
| Version | 3.0.3 |
| Website ID | cf9a4916-5d5a-42b2-8f26-ddbc1fccd3fd |
| Domain | vantagedash.io |
| Container | CT 153 (10.0.10.14:3000, VLAN 10) |
| DB | PostgreSQL umami_db on same container |

Frontend Integration

  • next/script in frontend/src/app/layout.tsx with strategy="afterInteractive"
  • CSP script-src + connect-src allow https://umami.ahrlink.me in proxy.ts
  • No env vars needed — website ID is hardcoded (self-hosted, not sensitive)
  • Tracks all pageviews automatically, no cookies, GDPR-compliant

Analytics Stack

| Tool | Purpose | Hosting |
| --- | --- | --- |
| PostHog | Product analytics, session replay, custom events | Cloud (us.i.posthog.com) |
| Sentry | Error tracking, performance monitoring | Cloud (sentry.io) |
| Umami | Privacy-friendly pageview analytics | Self-hosted (CT 153) |

All three are no-ops/graceful when unavailable.

Scraper Fallback Chain (Session 40)

The scraper tries multiple strategies in order for each competitor store:

Step 1: Shopify /products.json          (Shopify stores)
Step 2: Uline dedicated scraper         (uline.com only)
Step 3: WooCommerce Store API           (WC 8+ stores)
Step 4: Generic Sitemap + JSON-LD       (any platform with sitemaps + Product schema)
Step 4b: Microdata fallback             (Magento pages with schema.org microdata)
Step 5: HTML Listing + BeautifulSoup    (microdata, CSS patterns, follow links)
Step 6: Ecwid public REST API           (Ecwid-powered stores)
Step 7: Playwright headless browser     (JS-rendered: Wix, Squarespace, Ecwid, React SPAs)
Step 8: Playwright + OpenAI             (last resort, ~$0.003/site)

All steps are free except Step 8, which uses the OpenAI API for AI-powered extraction (requires OPENAI_API_KEY). Firecrawl has been eliminated (Session 47).
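The chain can be sketched as a first-success loop over strategy functions. This is a simplified shape only; the real step implementations live in scraper.py, and the stub strategy names below are illustrative.

```python
def scrape_with_fallbacks(url: str, strategies) -> tuple[str, list]:
    """Try each (name, fn) strategy in order; first non-empty result wins.
    A failed or empty step falls through to the next one."""
    for name, strategy in strategies:
        try:
            products = strategy(url)
        except Exception:
            products = []  # treat a crashed step like an empty result
        if products:
            return name, products
    return "none", []
```

Each step function takes the store URL and returns a list of product dicts; only Step 8 costs money, so ordering cheap strategies first keeps most scrapes free.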

The generic sitemap scraper (Step 4) works with WooCommerce, Magento, BigCommerce, and any platform that provides XML sitemaps and JSON-LD Product markup. Step 4b adds a BeautifulSoup-based microdata fallback for Magento pages. URL filtering uses exclusion-based logic (_is_likely_product_url()) instead of requiring /product/ in the path.

Step 7 (Playwright) uses headless Chromium to render JS-heavy pages and extract product data. This handles platforms like Squarespace+Ecwid and React SPAs that don't serve product data in static HTML. Requires playwright>=1.40.0 in requirements.txt and playwright install --with-deps chromium in the Dockerfile (adds ~200MB to image).

Wix stores are now supported via dedicated DOM selectors (data-hook, ProductItem) and Wix Stores API interception in the Playwright XHR handler (Step 7).

Database: super_admins Table (Session 37)

The super_admins table grants platform-wide admin access:

  • Schema: user_id UUID PK REFERENCES auth.users(id)
  • RLS: Enabled with NO policies (only accessible via service_role key)
  • To add a super admin: INSERT INTO super_admins (user_id) SELECT id FROM auth.users WHERE email = 'admin@example.com';
  • Also set app_metadata: UPDATE auth.users SET raw_app_meta_data = raw_app_meta_data || '{"is_super_admin": true}'::jsonb WHERE email = 'admin@example.com';
  • The user must re-login after metadata update for the JWT to include the flag.
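A minimal sketch of how the backend might check the flag from a decoded JWT, assuming Supabase's usual behaviour of copying raw_app_meta_data into the token's app_metadata claim. The helper name is illustrative, not the actual backend function.

```python
def is_super_admin(jwt_claims: dict) -> bool:
    """Check the is_super_admin flag on a decoded Supabase JWT.
    The flag lives under app_metadata, which is why the user must
    re-login after the UPDATE above for the new token to carry it."""
    return bool(jwt_claims.get("app_metadata", {}).get("is_super_admin"))
```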

Developer Workflow (Session 39)

Pre-commit Checks

The .husky/pre-commit hook runs automatically before every commit:

  1. gitleaks — Scans staged files for secrets (API keys, tokens)
  2. ruff check — Lints staged Python files for errors and style issues
  3. ruff format --check — Verifies Python formatting
  4. lint-staged — Runs ESLint on staged JS/TS files

Running Tests Before Push

# Backend (1,389 tests, ~13 min)
cd backend && pytest tests/ -x -q

# Frontend (651 tests, ~10 sec)
cd frontend && npm test

# Python linting
ruff check . && ruff format --check .

# TypeScript type check
cd frontend && npx tsc --noEmit

New Contributor Setup

See CONTRIBUTING.md at the repository root for:

  • Development environment setup (Node 20+, Python 3.12, Supabase)
  • Frontend and backend dev server instructions
  • Required environment variables
  • Code style guidelines (TypeScript strict, ruff for Python)
  • Branch naming conventions and PR checklist
  • How to add new backend endpoints (with testing pattern)

Recent Updates (Sessions 38–48)

Last updated: 2026-03-21 (Session 48)

CI/CD Changes (Session 41)

GitHub Actions are now manual-only. All 4 workflows have been changed to workflow_dispatch triggers only (no on: push or on: pull_request). This was done because GitHub Actions free minutes were exhausted, causing failure notification emails on every push.

To trigger a workflow manually: GitHub repo → Actions tab → select workflow → "Run workflow" button.

To re-enable automatic CI: restore on: push and on: pull_request triggers in the YAML files under .github/workflows/.

Stripe Tax Configuration (Session 41)

Stripe Tax has been enabled with the following settings:

  • Tax collection mode: Automatic
  • Tax code: txcd_10000000 (SaaS — Software as a Service, business use)
  • NAICS code: 541512 (Computer Systems Design Services)
  • Registered state: Maryland
  • Checkout portal: Configured with plan switching and cancellation enabled

No additional env vars needed — tax settings are configured in the Stripe Dashboard.

Monitoring — Now Active (Session 40)

Sentry and PostHog are active in production as of Session 40. All env vars listed in the Monitoring section above are set and confirmed working.

  • Sentry: Capturing frontend errors (Next.js client/server/edge), backend exceptions (FastAPI), and performance traces
  • PostHog: Capturing page views, custom events (scrape_started, match_started, competitor_added), and session replays

Both SDKs are no-ops when env vars are unset, so local development works without them.

Pre-commit Hooks Update (Session 39)

The .husky/pre-commit hook now runs 4 checks (up from 2):

  1. gitleaks — Secrets scanning on staged files
  2. ruff check — Python linting (errors + style)
  3. ruff format --check — Python formatting verification
  4. lint-staged — ESLint on staged JS/TS files

Ruff is configured in pyproject.toml at the repo root. Install with pip install ruff>=0.8.0 (included in backend/requirements-dev.txt).

Scraper Resilience (Session 42)

The scraper now has per-competitor retry logic with exponential backoff:

  • Each competitor scrape retries up to 3 times on failure
  • Session status is always updated in a finally block (no more orphaned "running" sessions)
  • Single-competitor scrapes (POST /api/scrape/{competitor_id}) now create their own tracked session

Updated Test Counts

# Frontend
cd frontend && npm test         # 651 vitest tests (all passing)

# Backend
cd backend && pytest tests/ -v  # 1,417+ pytest tests

# E2E
cd frontend && npx playwright test  # ~170 tests

# Total: ~2,240+ tests (vitest + pytest), ~170 Playwright e2e

Deployment Fixes (Session 45)

Dockerfile Playwright permissions fixed. The Playwright browser install was writing to /root/.cache/ms-playwright/, inaccessible by the non-root appuser. Fixed by setting PLAYWRIGHT_BROWSERS_PATH=/opt/pw-browsers (shared path) and chmod -R o+rx /opt/pw-browsers. Also removed apt-get autoremove to prevent accidental removal of Playwright system dependencies.

Test failures fixed (22 total → 0):

  • CompetitorAvatar crash on empty name — added null guard
  • javascript: URL XSS on competitors page — added safeHref() sanitizer
  • Comparison test mock missing competitors table query
  • Scrape status endpoint handling list vs dict from .single()

Updated test counts: 651 vitest + 1,389 pytest = 2,040 tests, zero failures

Deployment Updates (Sessions 38–42)

CI/CD Changes (Session 41)

GitHub Actions are manual-only. All 4 workflows (ci.yml, security.yml, mutation-testing.yml, test-backend.yml) are set to workflow_dispatch only — no push or pull_request triggers. This prevents failure notification emails during active development.

To run CI manually:

  1. Go to Actions tab on GitHub
  2. Select the workflow
  3. Click "Run workflow"

Pre-commit Hooks (Session 39)

The .husky/pre-commit hook runs automatically:

  1. gitleaks — Scans staged files for secrets
  2. ruff check — Lints staged Python files
  3. ruff format --check — Verifies Python formatting
  4. lint-staged — Runs ESLint on staged JS/TS files

Monitoring (Session 40)

| Service | Platform | Env Var | Status |
| --- | --- | --- | --- |
| Sentry (frontend) | Vercel | NEXT_PUBLIC_SENTRY_DSN, SENTRY_AUTH_TOKEN | Active |
| Sentry (backend) | Railway | SENTRY_DSN | Active |
| PostHog (analytics) | Vercel | NEXT_PUBLIC_POSTHOG_KEY | Active |

Stripe Billing (Session 41)

  • Stripe Tax: Enabled with automatic tax collection
  • Tax code: SaaS (txcd_10000000)
  • Checkout portal: Configured for subscription management
  • Webhook endpoint: /api/billing/webhook (bypasses rate limiting + JWT auth, uses Stripe signature verification)

Route Changes (Session 42)

  • Landing page: / is now a public marketing page (no auth required)
  • Dashboard: Moved from / to /overview
  • Middleware: Updated to allow / without authentication
  • All sidebar, mobile nav, and auth redirect links updated from / to /overview

Playwright Dependency (Session 44)

backend/requirements.txt now includes playwright>=1.40.0 for JS-rendered site scraping. The Dockerfile runs playwright install --with-deps chromium after pip install, which adds ~200MB to the container image (Chromium browser + system dependencies).

This enables Step 6 in the scraper fallback chain: headless browser rendering for sites that require JavaScript execution to display product data (e.g., Squarespace+Ecwid stores).

Competitor Platform Corrections (Session 44)

  • Design & Customize was previously assumed to be WooCommerce but is actually Squarespace+Ecwid. It requires the Playwright scraper (Step 6) to extract prices, since Ecwid loads product data via JavaScript.

Custom Domain Setup (Session 46)

Added: 2026-03-20

VantageDash now uses custom domains instead of default platform subdomains:

| Service | Custom Domain | Fallback |
| --- | --- | --- |
| Frontend (Vercel) | vantagedash.io | vantage-dash.vercel.app |
| Backend (Railway) | api.vantagedash.io | vantagedash-production.up.railway.app |

Domain registrar: Porkbun

DNS Records (configured in Porkbun):

| Type | Host | Value |
| --- | --- | --- |
| ALIAS | (root) | cname.vercel-dns.com |
| CNAME | www | cname.vercel-dns.com |
| CNAME | api | vantagedash-production.up.railway.app |

Configuration changes made:

  1. Railway **CORS_ORIGINS**: Updated to include https://vantagedash.io and https://www.vantagedash.io
  2. Vercel **NEXT_PUBLIC_BACKEND_URL**: Changed to https://api.vantagedash.io
  3. Supabase Site URL: Changed to https://vantagedash.io
  4. Supabase Redirect URLs: Added https://vantagedash.io/** and https://www.vantagedash.io/**
  5. Stripe Customer Portal return URL: Should be updated to https://vantagedash.io/settings

Playwright verified working on Railway — Chromium launches in Docker container, completes scrapes without errors. Design & Customize (Squarespace+Ecwid) returns 0 products due to Ecwid widget DOM selectors, not infrastructure.

Supabase anon key rotated — old key (ending cEk) no longer valid. New key available in Supabase Dashboard → Settings → API.

SEO Configuration (Session 46)

Added: 2026-03-20

VantageDash now has full SEO support:

| Feature | Implementation |
| --- | --- |
| Meta tags | metadataBase in root layout, per-page title/description |
| Open Graph | Title, description, image on all pages |
| Twitter Cards | Summary large image format |
| JSON-LD | Organization + SoftwareApplication structured data |
| OG Image | Dynamic opengraph-image.tsx (1200x630 branded) |
| Sitemap | sitemap.ts — auto-generates from pages + blog posts |
| Robots | robots.ts — allows all crawlers, points to sitemap |

All SEO files are in frontend/src/app/ (sitemap.ts, robots.ts, opengraph-image.tsx).

Blog (Session 46)

Added: 2026-03-20

Markdown-based blog at /blog under the (marketing) route group:

  • Content: frontend/content/blog/*.md with YAML frontmatter (title, date, tags, excerpt)
  • Rendering: lib/blog/index.ts loads markdown, parses frontmatter, calculates reading time
  • Styling: @tailwindcss/typography for rendered post content
  • Pages: Blog index (/blog) + post detail (/blog/[slug])
  • To add a new post: Create a .md file in frontend/content/blog/ with proper frontmatter, deploy

Ecwid Scraper (Session 46)

Added: 2026-03-20

New scraper strategy for Ecwid-powered stores:

  • Function: scrape_ecwid_store() in scraper.py
  • Detection: Auto-detects Ecwid store ID from page HTML
  • API: Uses Ecwid public REST API (app.ecwid.com/api/v3/) — no authentication needed
  • Fallback: If API fails, falls back to Playwright DOM scraping
  • Position: Step 6.5 in fallback chain (after Playwright JS, before Firecrawl+AI)

Updated fallback chain: Shopify → Uline → WooCommerce API → WooCommerce Sitemap → HTML Listing → Playwright (JS) → Ecwid API → Firecrawl+AI

Session 47 Changes

Updated: 2026-03-20

Firecrawl Removed

  • firecrawl-py removed from backend/requirements.txt.
  • Replaced with beautifulsoup4>=4.12.0.
  • No more Firecrawl API key needed (FIRECRAWL_API_KEY env var no longer required).
  • Scraper now uses Playwright + OpenAI as last resort (only needs OPENAI_API_KEY).

Email Drip System (Supabase)

New infrastructure deployed to Supabase project:

  • Extensions enabled: pg_cron (job scheduling), pg_net (HTTP from SQL).
  • Table created: email_drip_log — tracks which onboarding emails have been sent to each user.
  • Edge Function: send-drip-email — queries users by signup age, checks activity, sends emails via Resend.
  • Cron job: drip-email-daily — runs at 2:07pm UTC daily, calls the Edge Function via pg_net.

Email delivery is ACTIVE as of 2026-03-20. RESEND_API_KEY is set in Supabase Edge Function secrets. Emails are sent from onboarding@vantagedash.io via Resend SMTP. The 4-email drip sequence (welcome, add competitor, features, upgrade) runs daily at 2:07pm UTC via pg_cron.

Blog Content

7 new SEO-targeted blog posts added (10 total). No code changes â€" just markdown files in frontend/content/blog/. Auto-discovered by existing infrastructure, included in sitemap.

Updated Test Counts

| Suite | Count |
| --- | --- |
| pytest (backend) | 1,417 |
| vitest (frontend) | 651 |
| playwright (e2e) | ~170 |
| Total | ~2,240 |

Session 48 — RLS Bug Fix, Blog Public, Platform Scraper Expansion (2026-03-20)

Critical: Single-Competitor Scrape RLS Bypass Fixed

POST /api/scrape/{competitor_id} was saving 0 products because the single-competitor code path did not inject the auth-scoped DB client into scrape_and_save_store(). It fell through to the anon key, and RLS silently blocked all product_tracking inserts. Fixed by injecting the service-role DB client in run_scrape_single.
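A toy model of why this failure was silent: under RLS, an anon-key insert that fails the policy is filtered rather than raised, so the scrape appears to succeed with zero rows. `FakeClient` and `save_products` below are illustrative stand-ins for the supabase-py clients, not the real code.

```python
class FakeClient:
    """Toy DB client: RLS filters non-matching inserts instead of raising."""
    def __init__(self, bypasses_rls: bool):
        self.bypasses_rls = bypasses_rls
        self.rows = []

    def insert(self, row: dict):
        # service-role bypasses RLS; anon key only passes rows the
        # policy allows -- disallowed rows vanish without an error.
        if self.bypasses_rls or row.get("tenant_id") == "jwt-tenant":
            self.rows.append(row)

def save_products(products, db=None, anon=None):
    """Post-fix shape: callers inject an explicit client. Falling back to
    the anon client reproduces the old silent 0-products behaviour."""
    client = db or anon
    for product in products:
        client.insert(product)
    return len(client.rows)
```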

Blog Routes Made Public

/blog and /blog/* routes were behind auth middleware, preventing search engine crawling. Added to the public paths list in frontend/src/middleware.ts. Blog content is now crawlable by Google and other search engines.

Scraper Fallback Chain Updated

The fallback chain has been expanded with platform-specific support:

  • Magento: Microdata extraction via BeautifulSoup (schema.org itemprop fallback when JSON-LD is absent)
  • Wix: Dedicated DOM selectors (data-hook, ProductItem) + Wix Stores API XHR interception
  • Ecwid: Expanded shop paths (/shop-all, /all-products, /all)
  • RSS/WP Feed: Accept all URLs from WordPress product feeds (not just /product/ paths)
  • DRY helpers: _xhr_items_to_products(), _extract_price_from_xhr_item()

Single-competitor scrapes now also write scrape_logs entries for debugging.

Email Template SVG

Stacked windows SVG hosted at /email/stacked-windows.svg for drip email templates (CSS positioning is stripped by email clients, so static SVG is required).

RLS Policy Hardening

supabase_enable_rls.sql migration script updated: tautology USING true policies replaced with real get_user_tenant_id() enforcement matching the live Supabase database state.

Updated Test Counts

| Suite | Count |
| --- | --- |
| pytest (backend) | 1,417 |
| vitest (frontend) | 651 |
| Playwright (e2e) | ~170 |
| Total | ~2,240 |

Completed Items (Session 49 — 2026-03-20)

  • Resend API key activated: RESEND_API_KEY set in Supabase Edge Function secrets. Email drip system is live — 4-email onboarding sequence runs daily via pg_cron.
  • 3 WooCommerce competitors re-scraped: Mylar Legends (21 products), ClearBags (13 products), Design & Customize (4 products) — all completed successfully using enhanced Playwright + BeautifulSoup scrapers. Previously failed due to exhausted Firecrawl credits.
  • Total product count: 7,648+ products across 17 competitors.

Session 52 Updates (2026-03-21)

Railway → Coolify Doc Cleanup Complete

All Railway references removed from codebase (10 files updated):

  • CLAUDE.md — Deployment section rewritten for Coolify
  • stripe_setup.py — Webhook URL updated to api.vantagedash.io
  • proxy.ts — CSP connect-src updated: *.up.railway.app → api.vantagedash.io
  • Dockerfile — Removed Railway-specific port comments
  • README.md, backend/README.md, CONTRIBUTING.md — Updated deployment target
  • docs/nist-800-53-mapping.md — Updated infrastructure provider references
  • .env.example — Updated example backend URL

SKS Bottle Scraping Fix

SKS Bottle & Packaging (sks-bottle.com) is a custom PHP platform with 6K+ products. Previously only scraped 4 products because:

  • Sitemap has ~750 category page URLs (not product detail pages)
  • Category pages have no JSON-LD/microdata → sitemap scraper discarded all URLs

Fix — two new scraper capabilities:

  1. BS4 Strategy C: Product link + nearby price heuristic — finds <a> tags linking to /product/{id} paths and extracts prices from sibling elements. Handles case quantity parsing (e.g., "400/cs | $128.00").
  2. Sitemap listing page fallback: When sitemap URLs fail JSON-LD validation, scrapes them as category/listing pages via BS4 instead of discarding. Prioritises shop-all URLs, caps at 500 products, deduplicates.

13 new tests (200 scraper total). Updated fallback chain: Shopify → Uline → WooCommerce API → Sitemap+JSON-LD (with listing fallback) → HTML+BS4 → Ecwid API → Playwright (stealth) → Playwright+OpenAI.
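The case-quantity parsing mentioned in Strategy C can be illustrated with a regex sketch. `parse_listing_price` is a hypothetical helper, and the `(price, case_qty)` return shape is an assumption for illustration, not the exact scraper.py behaviour.

```python
import re

# Matches an optional case-quantity prefix like "400/cs | " before a
# dollar price, e.g. "400/cs | $128.00" -> qty 400, price 128.00.
PRICE_RE = re.compile(r"(?:(\d+)\s*/\s*cs\s*\|\s*)?\$([\d,]+(?:\.\d{1,2})?)")

def parse_listing_price(text: str):
    """Return (price, case_qty) from nearby-price text, or None.
    case_qty defaults to 1 when no "/cs" prefix is present."""
    match = PRICE_RE.search(text)
    if not match:
        return None
    qty = int(match.group(1)) if match.group(1) else 1
    return float(match.group(2).replace(",", "")), qty
```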

Updated Test Counts

# Frontend
651 vitest tests (53 files)

# Backend
~1,452 pytest tests (50 files)

# E2E
~170 Playwright e2e tests

# Total: ~2,278+ tests, zero failures

Backend Migration: Railway → Coolify (2026-03-21)

The backend API has been migrated from Railway to a self-hosted Coolify instance on the Proxmox homelab.

New Infrastructure

| Setting | Value |
| --- | --- |
| Platform | Coolify v4.0.0-beta.468 on CT 154 |
| IP | 10.0.10.15 (VLAN 10) |
| Domain | api.vantagedash.io |
| Routing | Cloudflare Tunnel → Coolify Caddy proxy (port 80) |
| Build | Dockerfile at backend/Dockerfile |
| Health | GET /api/health |
| Monitoring | Uptime Kuma monitor #45 |

What Changed

  • DNS for api.vantagedash.io switched from Railway CNAME to Cloudflare Tunnel route
  • Same Dockerfile, same env vars, same API — just different hosting
  • Cloudflare handles TLS termination (domain uses http:// in Coolify, NOT https://)

Known Issue

  • Docker image lacked curl/wget (FIXED in commit 2470a6d): curl is now installed in the Dockerfile, so the Coolify healthcheck works.

See also: Coolify (CT 154) — Self-Hosted PaaS in Server Documentation
