HTTP → S3 ingest
New: schema-aware Parquet views
EdgeMQ is a managed HTTP → S3 edge ingest layer. It accepts events from services, devices, and partners on the public internet and lands them durably in your S3 bucket as compressed segments, raw Parquet, or schema-aware Parquet views, ready for Snowflake, Databricks, ClickHouse, DuckDB, and your feature pipelines.
Start free - 10 GiB/month included on the Starter plan.
Stop babysitting brittle data feeds.
Start assuming S3 is always fresh.
p95 ingest latency (request → WAL): ≤ 10 ms
WAL → S3 commit (median): < 1 min
Regional availability: ≥ 99.95% per region
Starter is free up to 10 GiB/month, then pay per GiB ingested. See pricing
Query with the tools you already use
Your tools read Parquet (raw or views) or compressed segments from S3. EdgeMQ just keeps those datasets fresh.
No connectors or plugins required.
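For example, DuckDB can aggregate over the Parquet views straight from S3. This is a minimal sketch: the bucket, prefix layout, and event_type column are placeholders for whatever your endpoint and view definitions actually produce.

import duckdb

con = duckdb.connect()
# httpfs gives DuckDB s3:// support; the aws extension lets the secret below
# reuse your local AWS credential chain.
con.execute("INSTALL httpfs; LOAD httpfs; INSTALL aws; LOAD aws;")
con.execute("CREATE SECRET (TYPE S3, PROVIDER CREDENTIAL_CHAIN);")

# Assumed layout: schema-aware view output partitioned by tenant and date.
print(con.execute("""
    SELECT event_type, count(*) AS events
    FROM read_parquet(
        's3://my-lake/edgemq/views/events/tenant=acme/date=2025-06-01/*.parquet',
        hive_partitioning = true)
    GROUP BY event_type
    ORDER BY events DESC
""").df())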
The ML bottleneck
Getting data into the lake
As an ML engineer, MLOps engineer, or AI platform owner, you're held back by one thing over and over:
Data doesn't show up in S3 reliably.
Instead, you deal with:
- ▸Training pipelines that depend on homegrown data collectors that break quietly.
- ▸Constant questions like: "Is this dataset actually up to date?" "Did we drop any events during that incident?"
- ▸Painful back-and-forth with product / data engineering teams just to get a new event stream wired up.
You want to focus on models, features, and evaluation, not HTTP retries and S3 multipart uploads.
Solution
EdgeMQ: a managed edge ingest layer for ML and lakehouse stacks
EdgeMQ is a managed ingest layer for modern data and ML stacks.
- Apps/devices send NDJSON to /ingest over HTTPS
- EdgeMQ seals segments and materializes them in S3 as compressed segments and/or Parquet (raw or views), depending on your endpoint configuration
- Commit markers list the artifacts for each segment and tell your jobs what's fully written and safe to read
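For downstream jobs, the commit markers are the contract: read only what a marker lists. Here's a minimal consumer sketch; the bucket, marker prefix, and marker JSON shape are assumptions chosen for illustration, not EdgeMQ's documented layout.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-lake"                              # assumed bucket
MARKER_PREFIX = "edgemq/commits/tenant=acme/"   # assumed commit-marker prefix

def committed_artifacts():
    """Yield artifact keys listed by commit markers, i.e. objects that are safe to read."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=MARKER_PREFIX):
        for obj in page.get("Contents", []):
            marker = json.loads(s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read())
            # Assumed marker shape: {"segment": "...", "artifacts": ["<s3 key>", ...]}
            yield from marker.get("artifacts", [])

for key in committed_artifacts():
    print("safe to read:", key)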
Simple integration
ML-friendly ingest in one call
Choose your S3 artifacts:

- Segments: compressed WAL segments plus commit markers. Ideal for long-term retention, raw replays, and custom pipelines that expand back to NDJSON when needed.
- Raw Parquet: partitioned by tenant and date, designed for direct reads by lakehouse and warehouse engines from S3 without changing how producers send NDJSON. No upfront schema: the full-fidelity payload is preserved in a payload column, and you filter efficiently by tenant/time via partitions and metadata columns.
- Parquet views: schema-aware, typed Parquet generated from view definitions. Great for feature tables and analytics without a separate "expand + parse" job.

Your upstream teams can send training and feature data with a simple call:
curl -X POST "https://<region>.edge.mq/ingest" \
-H "Authorization: Bearer $EDGEMQ_TOKEN" \
-H "Content-Type: application/x-ndjson" \
--data-binary @events.ndjson

EdgeMQ guarantees:
- ▸WAL ensures events hit disk before acknowledging.
- ▸503 + Retry-After backpressure prevents silent drops during overload.
- ▸S3 + commit markers tell your jobs which segments are safe to read.
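If a team does write its own producer, honoring that backpressure takes only a few lines. This is a sketch using Python and requests; the endpoint placeholder (as in the curl example above) and the retry budget are illustrative assumptions.

import os
import time
import requests

INGEST_URL = "https://<region>.edge.mq/ingest"   # substitute your EdgeMQ region
TOKEN = os.environ["EDGEMQ_TOKEN"]

def send_ndjson(path, max_attempts=5):
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/x-ndjson",
    }
    for attempt in range(1, max_attempts + 1):
        with open(path, "rb") as f:
            resp = requests.post(INGEST_URL, headers=headers, data=f, timeout=30)
        if resp.status_code == 503:
            # Backpressure: wait the server-suggested interval (assumed to be in
            # seconds), falling back to exponential backoff, then retry.
            time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
            continue
        resp.raise_for_status()   # surface any other error
        return                    # acknowledged: the batch is on the WAL
    raise RuntimeError(f"gave up after {max_attempts} attempts (still overloaded)")

send_ndjson("events.ndjson")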
You don't build or own any of this ingest plumbing. You just depend on it.
Want to see your data in S3 in under 10 minutes?
Use cases
Built for ML workflows
EdgeMQ powers the data foundation for training, evaluation, features, and integrations.
Training & evaluation datasets
Keep training and eval datasets current without rebuilding pipelines. EdgeMQ streams new events into S3; your jobs load from EdgeMQ-managed prefixes.
Feature pipelines & replay
Rebuild feature tables from historical data when you change logic, without complex pipelines. EdgeMQ streams events into S3; transform segments into features as needed. The same raw data feeds both offline training and online feature stores.
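As a replay sketch, assuming an illustrative bucket/prefix layout and that each raw Parquet row keeps the original event JSON in its payload column:

import json
import pyarrow.dataset as ds   # to_pandas() below also needs pandas installed

# Assumed raw-Parquet prefix; hive partitioning exposes date=... as a column.
raw = ds.dataset(
    "s3://my-lake/edgemq/parquet/tenant=acme/",
    format="parquet",
    partitioning="hive",
)

# Partition pruning keeps a one-month replay cheap.
window = raw.to_table(
    filter=(ds.field("date") >= "2025-05-01") & (ds.field("date") <= "2025-05-31")
).to_pandas()

# Assumed: the payload column holds the original event JSON, and producers
# send a user_id field.
payloads = window["payload"].map(json.loads)
window["user_id"] = payloads.map(lambda p: p.get("user_id"))

# New feature definition: daily event counts per user.
features = (
    window.groupby(["user_id", "date"])
          .size()
          .rename("daily_events")
          .reset_index()
)
print(features.head())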
Tool integrations
Query Parquet output with Snowflake, Databricks, ClickHouse, DuckDB, and Postgres (or expand segments when you need raw replay). EdgeMQ doesn't ask you to switch engines; it just keeps your S3 tables fed.
Built for teams that live on S3
You don't need to own ingest infrastructure
Most ML teams don't want to:
- ▸Run Kafka or Kinesis just for ingest.
- ▸Run and debug critical HTTP collectors, S3 uploads, and edge-case retries.
- ▸Explain to security why there are random access keys in source trees.
EdgeMQ takes this off your plate:
Managed edge infrastructure
Per-tenant microVMs, WAL on NVMe, S3 shippers, and health checks are operated for you.
Predictable overload behavior
If things get hot, producers see 503 + Retry-After. You don't get silent gaps in datasets.
Security that fits your platform
S3 writes via short-lived IAM roles and scoped prefixes; data teams and platform teams can govern it using the tools they already know.
You get a dependable data hose; platform/infrastructure stays in control; ML teams move faster.
Collaborate cleanly with data and platform teams
EdgeMQ is a shared primitive you can rally around. It's the common lakehouse ingest layer that data engineers, ML teams, and platform engineers all depend on, with S3 as the shared source of truth.
Each team sees that shared layer slightly differently:
Platform / infra
- ▸Set up S3 buckets, prefixes, and IAM roles.
- ▸Provision EdgeMQ endpoints as a "paved road" for ingest.
Data engineers
- ▸Define schemas, prefixes, and downstream load jobs.
- ▸Use EdgeMQ as the standard way data enters the lake.
- ▸Point warehouses and lakehouse engines at EdgeMQ's Parquet prefixes, or expand segments to NDJSON when you need raw control.
ML teams
- ▸Consume from the same S3 lake for training, evaluation, and features.
- ▸Ask for "one more prefix + schema" instead of "a new ingest system."
Everyone aligns on a single, well-understood lakehouse ingest layer.
Make S3 the live heart of your ML platform
- ▸Treat S3 as your live raw layer for ML data.
- ▸Build training, evaluation, and feature datasets from one source of truth.
- ▸Add new signals by pointing them at the same HTTP → S3 endpoint.
Related pages
- For Data Engineers - how your S3 Bronze layer is built and maintained on top of EdgeMQ.
- For Platform / Infra - how EdgeMQ is operated as a standardized ingest primitive.
Ready to feed your models with live data instead of brittle pipelines?
Stop babysitting brittle data feeds. Start assuming S3 is always fresh.