Environment List

All accessible accounts and environments

Your Accounts

Loading accounts...
API Quick Reference

Authenticate and load a store to see API endpoints and field reference here.

Schema

Loading schema...
Query Preview
q=*&rows=100

No sorting applied

No dimensions selected

No metrics selected (count is always available)

Max 100,000 rows
Ask in plain English
Query Parameters (JSON)

Ready to Query

Build your query above and click "Run Query" to see results

Facets

Simulated Workloads

MinusOneDB $99/day
Snowflake
Databricks
BigQuery
Snowflake projected monthly
MinusOneDB daily ($99/day)
Monthly savings

Account

Loading details...

Environment Detail

Use Environment List to choose an environment.

Loading environments...

Data Management

Schema, stores, and data publishing

Schema Properties

Select an environment to view schema
Select an environment to view stores

Publishing writes documents to the data lake where they are indexed and queryable by all stores. Use this for quick inserts — paste JSON or upload a file. For bulk loading with schema detection, transforms, and error handling, use AELTL instead.
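For the quick-insert path described above, the paste box takes JSON documents; a minimal example might look like the following (the field names are illustrative, not a required schema):

```json
[
  { "id": "cust-001", "state": "CA", "channel": "email" },
  { "id": "cust-002", "state": "NY", "channel": "sms" }
]
```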

1. Enter Data

2. Preview & Publish

Select a store to browse data

API Products

Create clean-room products, policies, templates, keys, and generated API contracts.

Products

Loading products...

Select a Product

Choose or create a product to start.

Product Details

Policy

Templates

No templates yet.

Keys

No keys yet.

Try Query

{ "note": "Run a query to preview runtime response." }

Generated OpenAPI

{ "note": "Click Fetch OpenAPI to preview spec." }

Audience Builder

Build, estimate, save, and export high-value audiences in seconds.

Definition

Build with AI

Filters

Live Estimate

Ready
Estimated Audience -
Match Rate -
Population -

Updated: -

Audience Quality

-

Age

Income

Top States

Channel

Filter Impact

Attribute Explorer

Select a dataset to browse attributes.

Saved Audiences

Name | Dataset | Estimated Size | Updated | Actions
No audiences saved yet.

Users

Manage users and access control

Loading users...

Environment Health

Live runtime health for this environment

Environment

-
-
-

Readiness

-
-
-
-
-

Checks

Checking health...

Settings

Account and application settings

Account

-
••••••••

Ops Account

Create an additional ops account namespace.

CORS

Add one full origin at a time (scheme + host + optional port).
Loading CORS origins...
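A full origin is the scheme plus host plus an optional port, with no path. For example (hypothetical origins):

```
https://app.example.com
http://localhost:3000
```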

Appearance

Dark mode is the only way

API

https://ops.minusonedb.com
docs.minusonedb.com

Session

Ends your current session

AELTL

Archive, Extract, Load, Transform (& Fix), then Load your data.

Coherence Rules
Target Store
Resolving environment store routing…

Browse existing schema or add new properties.

Add a property to the schema.

Upload files to detect columns not yet in schema
Column | Detected Type | M1DB Type | Confidence | Sample | Issues
Why use Denorm?

Denorm combines related datasets into query-ready documents. By denormalizing at ingest with minusonedb, you avoid painful query-time joins.

Historically, joining whole datasets has been too computationally expensive, but m1db's architecture turns join workloads into cheap, virtually instant index lookups. Denorm also lets you validate join quality before publishing, so you can catch missing keys, fanout explosions, and ambiguous matches early.

Typical flow: pick base + dimension files, generate spec, run preflight, then materialize sample docs.
Denorm Preflight
Generate or paste a DenormSpec v0 JSON to profile join coverage, fanout, and as-of ambiguity using your loaded sample rows.
Quick Builder (recommended): pick files + keys, then generate a valid DenormSpec.
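A DenormSpec v0 generated by the Quick Builder might resemble the sketch below; the exact field names here are assumptions for illustration only, since the Quick Builder is what emits the valid form:

```json
{
  "version": 0,
  "base": { "file": "orders.csv", "key": "customer_id" },
  "dimensions": [
    { "file": "customers.csv", "key": "customer_id", "prefix": "cust_" }
  ]
}
```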

Persistent rules that apply to all future loads automatically.

No transform rules defined yet.

Rules are created from the Work Queue when fixing errors, or you can add them manually below.

Drag files here or

CSV, JSON, JSONL, TSV, Parquet — including .gz compressed

Errors from past loads that need attention.

Open in Query Studio →

Move curated rows from this environment into downstream destinations.

Store → Warehouse
No destinations added (the current form is used as the only destination).

Warehouse reload targets currently support Databricks SQL. Add multiple destinations to fan out one source query to many tables. Schedules run on a long-running local server and use server env vars for credentials.

Store → S3
No destinations added (the current form is used as the only destination).
Custom Credentials (optional)

Exports queried rows to one or more S3 objects using optional transform rules. Key templates support {{store}} and {{timestamp}}.
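A key template combining both supported placeholders might look like this (the prefix and file name are illustrative):

```
exports/{{store}}/{{timestamp}}/part-0001.jsonl.gz
```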