Quickstart
Three steps to your first feature flag. No proxy servers. No sidecars. Just a config file in your cloud and an SDK in your code.
How it works

[Diagram: the FlagDrop control plane pushes flag configuration to storage in your cloud (S3 / GCS / Azure); your app reads it locally through the SDK, e.g. flags.getBool().]
Three steps to feature flags
Create a project
Sign up and create your first project in the FlagDrop dashboard. Choose your environments and cloud provider.
Set up your cloud
Run our Terraform module to create a storage bucket and IAM role in your cloud account.
module "flagdrop" {
source = "flagdrop/bucket/aws"
version = "1.0.0"
project = "web-app"
environment = "production"
region = "us-east-1"
}
# terraform apply
# Apply complete! Resources: 3 added.
# bucket_name = "flagdrop-web-app-prod"

Install the SDK
Install the SDK for your language. Point it at your bucket. Start reading flags.
$ npm install @flagdrop/sdk

import { FlagClient } from '@flagdrop/sdk'
const client = new FlagClient({
bucket: 'flagdrop-web-app-prod',
environment: 'production',
provider: 'aws',
region: 'us-east-1'
})
// Type-safe flag evaluation
const showCheckout = client.getBool('new-checkout', false)
// String flag with context
const theme = client.getString(
'app-theme', 'light',
{ userId: 'user-123' }
)
Ready for production?
You're all set. Free tier included, no credit card required.
AWS Cross-Account Role
The recommended way to connect your AWS account. FlagDrop assumes a role in your account to write flag configuration files to your S3 bucket.
Step 1: Create an IAM Role
In your AWS account, create a new IAM role with a trust relationship that allows FlagDrop to assume it.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::FLAGDROP_ACCOUNT_ID:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "YOUR_ORG_EXTERNAL_ID"
}
}
}
]
}

Replace FLAGDROP_ACCOUNT_ID and YOUR_ORG_EXTERNAL_ID with the values shown in your FlagDrop dashboard under Environment settings.
Step 2: Attach S3 Permissions
Attach the following inline policy to the role. This grants FlagDrop the minimum permissions needed to write configuration files.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::YOUR_BUCKET_NAME",
"arn:aws:s3:::YOUR_BUCKET_NAME/*"
]
}
]
}

Step 3: Enter the Role ARN
Copy the Role ARN from the AWS console and paste it into the FlagDrop dashboard under your environment's cloud connection settings.
Step 4: Test the Connection
Click Test Connection in the dashboard. FlagDrop will attempt to assume the role and write a test object to your bucket. A green checkmark confirms everything is configured correctly.
AWS Bucket Policy
An alternative to cross-account roles. Add a bucket policy that grants FlagDrop direct write access. Best for teams that prefer bucket-level access control over IAM roles.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "FlagDropWrite",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::FLAGDROP_ACCOUNT_ID:root"
},
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
},
{
"Sid": "FlagDropList",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::FLAGDROP_ACCOUNT_ID:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
}
]
}

Cross-Account Role vs Bucket Policy
Cross-account role (recommended): Centralizes access in IAM. Easier to audit, revoke, and rotate. Supports the ExternalId condition, which protects against confused-deputy attacks.
Bucket policy: Simpler to set up if you only need one bucket. No IAM role creation required. However, access is tied to the bucket itself and harder to track across your organization.
GCP Service Account Key
The simplest way to connect Google Cloud Storage. A service account JSON key lets FlagDrop authenticate directly with GCS using a dedicated identity.
Step 1: Create a Service Account
In the Google Cloud Console, navigate to IAM & Admin > Service Accounts and create a new service account. Grant it the Storage Object Admin role on your target bucket.
$ gcloud iam service-accounts create flagdrop-writer \
--display-name="FlagDrop Config Writer"
$ gcloud storage buckets add-iam-policy-binding \
gs://YOUR_BUCKET_NAME \
--member="serviceAccount:flagdrop-writer@YOUR_PROJECT.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"

Step 2: Generate a JSON Key
Go to IAM & Admin > Service Accounts, click the service account, then Keys > Add Key > Create new key > JSON.
Step 3: Download the Key File
The JSON key file will download automatically. Keep it secure — this grants write access to your bucket.
Step 4: Paste into FlagDrop
In your environment's cloud connection settings, select GCP (Service Account Key) as the auth method. Paste the full JSON contents of the key file into the auth config field.
Step 5: Save and Test
Click Test Connection to verify FlagDrop can write to your bucket. Once confirmed, save the configuration.
GCP Workload Identity Federation
The recommended approach for production GCP environments. Eliminates long-lived credentials by federating trust from FlagDrop's AWS identity to a GCP service account.
Step 1: Create a Workload Identity Pool
$ gcloud iam workload-identity-pools create flagdrop-pool \
--location="global" \
--display-name="FlagDrop Pool"

Step 2: Add an AWS Provider
$ gcloud iam workload-identity-pools providers create-aws flagdrop-aws \
--location="global" \
--workload-identity-pool="flagdrop-pool" \
--account-id="FLAGDROP_AWS_ACCOUNT_ID"

Step 3: Create and Bind a Service Account
$ gcloud iam service-accounts create flagdrop-sa \
--display-name="FlagDrop Service Account"
$ gcloud iam service-accounts add-iam-policy-binding \
flagdrop-sa@YOUR_PROJECT.iam.gserviceaccount.com \
--role="roles/iam.workloadIdentityUser" \
--member="principalSet://iam.googleapis.com/projects/YOUR_PROJECT_NUMBER/locations/global/workloadIdentityPools/flagdrop-pool/attribute.aws_role/arn:aws:sts::FLAGDROP_ACCOUNT_ID:assumed-role/flagdrop-role"
$ gcloud storage buckets add-iam-policy-binding \
gs://YOUR_BUCKET_NAME \
--member="serviceAccount:flagdrop-sa@YOUR_PROJECT.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"

Step 4: Enter in FlagDrop
Select GCP (Workload Identity) as the auth method. Enter the Workload Identity Provider resource name and the service account email. FlagDrop will exchange its AWS credentials for short-lived GCP tokens automatically.
Azure SAS Token
The quickest way to connect Azure Blob Storage. Generate a Shared Access Signature with write permissions for your container.
Step 1: Generate a SAS Token
In the Azure Portal, navigate to your Storage Account > Shared access signature. Grant the signature Read, Write, List, and Delete permissions on the Blob service, scoped to your container.
Step 2: Enter in FlagDrop
Select Azure (SAS Token) as the auth method. Enter your Storage Account name, container name, and the SAS token. Click Test Connection to verify.
Azure Workload Identity Federation
The recommended approach for production Azure environments. Federates trust from FlagDrop's AWS identity to an Azure AD application, eliminating long-lived secrets.
Step 1: Register an Azure AD Application
In Azure Portal, go to Azure Active Directory > App registrations > New registration. Name it something like flagdrop-federation.
Step 2: Add a Federated Credential
Under Certificates & secrets > Federated credentials, add a new credential:
- Issuer: https://sts.amazonaws.com
- Subject identifier: FLAGDROP_AWS_ACCOUNT_ID:flagdrop-role
- Audience: api://AzureADTokenExchange

Step 3: Grant Storage Blob Data Contributor
$ az role assignment create \
--assignee "APP_CLIENT_ID" \
--role "Storage Blob Data Contributor" \
--scope "/subscriptions/SUB_ID/resourceGroups/RG/providers/Microsoft.Storage/storageAccounts/ACCOUNT"

Enter the Application (client) ID and Tenant ID in FlagDrop. Select Azure (Workload Identity) as the auth method.
Multi-Region Delivery
Push flag configurations to multiple cloud regions simultaneously. Each environment in FlagDrop can have one or more region destinations.
Adding Region Destinations
In your environment's settings, click Add Region to configure additional destinations. Each region gets its own cloud connection (bucket, credentials, and region). When you push config, FlagDrop writes to all destinations in parallel.
Use Cases
Latency Reduction
Deploy flag configs to the same region as your application. SDKs read from the nearest bucket, keeping evaluation under 1ms.
Compliance & Data Residency
Meet data residency requirements by pushing configs to region-specific buckets. EU data stays in EU regions, APAC in APAC.
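The region-destination idea can be sketched from the app's side: pick the config URL that matches the region the app runs in, falling back to a default. The bucket names and URL layout below are illustrative assumptions, not FlagDrop conventions.

```typescript
// Hypothetical helper: choose the flag-config URL for the app's region.
// Bucket names and URL layout are made up for illustration.
const REGION_CONFIG_URLS: Record<string, string> = {
  'us-east-1': 'https://flagdrop-web-app-prod-use1.s3.us-east-1.amazonaws.com/production.json',
  'eu-west-1': 'https://flagdrop-web-app-prod-euw1.s3.eu-west-1.amazonaws.com/production.json',
}

function configUrlFor(region: string, fallback = 'us-east-1'): string {
  // Fall back to a default destination when the app runs in a region
  // that has no configured destination of its own.
  return REGION_CONFIG_URLS[region] ?? REGION_CONFIG_URLS[fallback]
}
```

Keeping the lookup table in app config (rather than hardcoded) makes adding a region a config change instead of a deploy.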
API Reference
The FlagDrop API uses REST conventions. All endpoints require authentication via a Bearer token (Clerk session) or an API key passed in the x-api-key header.
| Resource | Method | Endpoint |
|---|---|---|
| Projects | GET POST | /api/v1/projects |
| Flags | GET POST PUT DELETE | /api/v1/projects/{slug}/flags |
| Toggle | POST | /api/v1/projects/{slug}/flags/{key}/toggle |
| Rollout | POST | /api/v1/projects/{slug}/flags/{key}/rollout |
| Rules | POST | /api/v1/projects/{slug}/flags/{key}/rules |
| Environments | GET POST PUT | /api/v1/projects/{slug}/environments |
| Config Push | POST | /api/v1/projects/{slug}/push/{env} |
| Webhooks | GET POST | /api/v1/projects/{slug}/webhooks |
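The route table above translates into URL paths like the following. This sketch builds the paths and calls the API with the x-api-key header mentioned earlier; the response handling and error shape are assumptions, not documented behavior.

```typescript
// Path builders for two routes from the table above.
function flagsPath(slug: string): string {
  return `/api/v1/projects/${encodeURIComponent(slug)}/flags`
}

function togglePath(slug: string, key: string): string {
  return `/api/v1/projects/${encodeURIComponent(slug)}/flags/${encodeURIComponent(key)}/toggle`
}

// Hypothetical wrapper using API-key auth (the alternative to Bearer tokens).
async function listFlags(baseUrl: string, slug: string, apiKey: string): Promise<unknown> {
  const res = await fetch(baseUrl + flagsPath(slug), {
    headers: { 'x-api-key': apiKey },
  })
  if (!res.ok) throw new Error(`FlagDrop API error: ${res.status}`)
  return res.json()
}
```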
Common Operations
List all flags in a project:
$ curl -s https://api.flagdrop.io/api/v1/projects/web-app/flags \
-H "Authorization: Bearer YOUR_TOKEN"

Create a new boolean flag:
$ curl -s -X POST https://api.flagdrop.io/api/v1/projects/web-app/flags \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"key": "new_checkout_flow",
"type": "boolean",
"description": "Enable the redesigned checkout",
"defaultValue": false
}'

Toggle a flag on or off:
$ curl -s -X POST https://api.flagdrop.io/api/v1/projects/web-app/flags/new_checkout_flow/toggle \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{ "environment": "staging", "enabled": true }'

Push config to an environment:
$ curl -s -X POST https://api.flagdrop.io/api/v1/projects/web-app/push/production \
-H "Authorization: Bearer YOUR_TOKEN"

Set a percentage rollout:
$ curl -s -X POST https://api.flagdrop.io/api/v1/projects/web-app/flags/new_checkout_flow/rollout \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{ "environment": "production", "percentage": 25 }'

Best Practices
Flag Naming Conventions
Use snake_case for all flag keys. Keep them descriptive and scoped to the feature they control.
Good:
- new_checkout_flow
- enable_dark_mode
- search_v2_algorithm
- promo_banner_holiday_2026

Avoid:
- flag1
- test
- newCheckoutFlow
- ENABLE_FEATURE
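A convention like this is easy to enforce in CI. The check below is a sketch, not part of the FlagDrop SDK, and it validates format only; whether a key is descriptive still needs human review.

```typescript
// Accepts lowercase snake_case keys: segments of letters/digits separated
// by single underscores, starting with a letter (e.g. promo_banner_holiday_2026).
function isValidFlagKey(key: string): boolean {
  return /^[a-z][a-z0-9]*(_[a-z0-9]+)*$/.test(key)
}
```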
Flag Lifecycle Management
Every flag should progress through a defined lifecycle. Stale flags add complexity and technical debt.
Once a flag reaches full rollout and the feature is stable, remove the flag from your code and delete it from FlagDrop. FlagDrop marks flags as stale if they haven't been modified in 30 days while still enabled.
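The 30-day staleness rule can be expressed as a small check, useful if you audit flags via the API. The flag shape below is an assumed illustration, not the FlagDrop data model.

```typescript
// Assumed flag shape for illustration only.
interface FlagState {
  enabled: boolean
  lastModified: Date
}

const STALE_AFTER_MS = 30 * 24 * 60 * 60 * 1000 // 30 days

// Mirrors the rule above: stale = still enabled but unmodified for 30 days.
function isStale(flag: FlagState, now: Date = new Date()): boolean {
  return flag.enabled && now.getTime() - flag.lastModified.getTime() > STALE_AFTER_MS
}
```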
Rollout Strategies
Percentage-Based
Gradually roll out to a percentage of users. Start at 5%, monitor metrics, then ramp to 25%, 50%, and 100%.
Segment-Based
Target specific user segments using rules. Roll out to internal users first, then beta testers, then everyone.
Canary
Deploy to a single region or instance first. Validate zero regressions before expanding to the full fleet.
Testing Flags in Staging vs Production
FlagDrop environments are independent. Each environment has its own flag states, rollout percentages, and cloud connections.
1. Enable in development first. Test the flag in your local or dev environment with the SDK pointed at the dev bucket.
2. Promote to staging. Enable the flag in staging and run your integration test suite against it.
3. Ramp in production. Start with a low percentage rollout and monitor error rates, latency, and business metrics before going to 100%.
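One way to wire this promotion flow in code is a per-environment client configuration selected at startup. The bucket names and the APP_ENV convention below are illustrative assumptions.

```typescript
// Illustrative per-environment SDK configuration; bucket names are assumptions.
type Env = 'development' | 'staging' | 'production'

const ENV_CONFIGS: Record<Env, { bucket: string; environment: Env }> = {
  development: { bucket: 'flagdrop-web-app-dev', environment: 'development' },
  staging: { bucket: 'flagdrop-web-app-staging', environment: 'staging' },
  production: { bucket: 'flagdrop-web-app-prod', environment: 'production' },
}

// Select the config from an environment name, defaulting to development
// so a misconfigured deploy never silently reads production flags.
function configForEnv(name: string | undefined): { bucket: string; environment: Env } {
  const env: Env = name === 'staging' || name === 'production' ? name : 'development'
  return ENV_CONFIGS[env]
}
```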
Frontend Relay Pattern
For client-side feature flags, use the relay pattern: FlagDrop pushes a JSON configuration file to your cloud storage. Serve it through a CDN and evaluate flags client-side for zero-latency decisions.
// Fetch the flag config from your CDN-backed bucket
const res = await fetch('https://cdn.example.com/flags/production.json')
const config = await res.json()
// Evaluate locally — no network call per flag
const showBanner = config.flags['promo_banner']?.enabled ?? false
const rollout = config.flags['new_checkout_flow']?.rolloutPercentage ?? 0
// Hash the user ID for deterministic percentage rollout
const userBucket = hashUserId(userId) % 100
const inRollout = userBucket < rollout

This pattern keeps flag evaluation under 1ms. The JSON file is typically under 10KB and caches well at the CDN edge. Set a short TTL (30-60s) to balance freshness with performance.
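The `hashUserId` helper in the snippet above is left undefined. Any stable, well-distributed hash works, as long as every client computes the same bucket for the same user ID; here is one possible implementation using FNV-1a, a fast non-cryptographic hash:

```typescript
// One way to implement the hashUserId helper: 32-bit FNV-1a.
// Deterministic across processes, so rollout buckets stay stable.
function hashUserId(userId: string): number {
  let hash = 0x811c9dc5 // FNV offset basis (32-bit)
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) // multiply by FNV prime, 32-bit wrap
  }
  return hash >>> 0 // force unsigned 32-bit
}

// Deterministic bucket in [0, 100)
const userBucket = hashUserId('user-123') % 100
```

Avoid Math.random() or anything time-based here: the bucket must be identical on every evaluation or users will flicker in and out of the rollout.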
Connect FlagDrop to your workflow
FlagDrop fires webhooks for every flag and config event. Point them at Slack, Jira, Datadog, PagerDuty, Zapier, or any HTTP endpoint to keep your team in the loop automatically.
Slack
Setup Steps
- In Slack, go to Your Apps → Incoming Webhooks and create a new webhook for the channel you want notifications in.
- Copy the webhook URL (e.g. https://hooks.slack.com/services/T.../B.../xxx).
- In FlagDrop, navigate to Settings → Webhooks → Add Webhook.
- Paste the Slack webhook URL as the endpoint.
- Select the events you want to forward: flag.toggled, flag.created, config.pushed.
- Save. FlagDrop will send a test ping to verify the connection.
{
"attachments": [{
"color": "#6C5CE7",
"pretext": "FlagDrop: flag toggled",
"title": "new-checkout",
"fields": [
{ "title": "Environment", "value": "production", "short": true },
{ "title": "State", "value": "enabled", "short": true }
],
"footer": "FlagDrop",
"ts": 1711545600
}]
}

Webhook Payload → Slack Attachment Mapping
| FlagDrop Field | Slack Attachment Field |
|---|---|
| event.type | pretext |
| flag.key | title |
| environment | fields[0].value |
| flag.enabled | fields[1].value |
| timestamp | ts |
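The mapping table can be sketched as a transform function, e.g. for a relay service that reshapes FlagDrop webhooks before forwarding them to Slack. The field names follow the table above; the full webhook payload schema is otherwise an assumption.

```typescript
// Assumed webhook payload shape, based on the field names in the table above.
interface FlagDropEvent {
  event: { type: string }
  flag: { key: string; enabled: boolean }
  environment: string
  timestamp: number
}

// Build a Slack attachment per the mapping table.
function toSlackAttachment(e: FlagDropEvent) {
  return {
    color: '#6C5CE7',
    pretext: `FlagDrop: ${e.event.type}`,
    title: e.flag.key,
    fields: [
      { title: 'Environment', value: e.environment, short: true },
      { title: 'State', value: e.flag.enabled ? 'enabled' : 'disabled', short: true },
    ],
    footer: 'FlagDrop',
    ts: e.timestamp,
  }
}
```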
Jira
Setup Steps
- In Jira, go to Project Settings → Automation and create a new rule.
- Choose Incoming webhook as the trigger. Copy the generated URL.
- In FlagDrop, create a webhook pointing to the Jira automation URL.
- Select the events you care about (e.g. flag.lifecycle_changed, flag.toggled).
- In Jira Automation, add actions that use the webhook payload data.
Example: Auto-Create Ticket on Flag Cleanup
When a flag enters the cleanup lifecycle stage, automatically create a Jira ticket to remove the flag from code.
Condition: event.type == "flag.lifecycle_changed" and lifecycle == "cleanup"
Action: create a ticket titled "Remove flag {{webhookData.flag.key}}", assigned to the on-call engineer.

Example: Comment on Toggle
When a flag is toggled, add a comment to the linked Jira ticket with the new state and who toggled it.
Condition: event.type == "flag.toggled"
Action: add the comment "Flag {{webhookData.flag.key}} set to {{webhookData.flag.enabled}} in {{webhookData.environment}} by {{webhookData.actor}}"

Monitoring (Datadog / PagerDuty)
Forward config.pushed and flag.toggled events to your monitoring stack so every config change appears as an annotation or event alongside your metrics.
Datadog Events API
- In Datadog, grab your API key from Organization Settings → API Keys.
- In FlagDrop, create a webhook with the endpoint https://api.datadoghq.com/api/v1/events.
- Add a header: DD-API-KEY: <your-key>.
- Select events: config.pushed, flag.toggled.
{
"title": "FlagDrop: config.pushed",
"text": "Config pushed to production (us-east-1)",
"tags": [
"source:flagdrop",
"env:production",
"region:us-east-1"
],
"alert_type": "info",
"source_type_name": "flagdrop"
}

PagerDuty Events API (for push failures)
- In PagerDuty, create an Events API v2 integration on the service you want alerted.
- Copy the integration key (routing key).
- In FlagDrop, create a webhook with the endpoint https://events.pagerduty.com/v2/enqueue.
- Select the event: config.push_failed.
{
"routing_key": "<your-integration-key>",
"event_action": "trigger",
"payload": {
"summary": "FlagDrop config push failed (production)",
"severity": "critical",
"source": "flagdrop",
"component": "config-push",
"group": "us-east-1"
}
}

Zapier / Make
Use a generic Catch Webhook trigger in Zapier or Make to receive FlagDrop events, then route them to any of 5,000+ connected apps.
Zapier Setup
- In Zapier, create a new Zap with Webhooks by Zapier → Catch Hook as the trigger.
- Copy the generated webhook URL.
- In FlagDrop, create a webhook pointing to the Zapier URL and select the events you want.
- Test the trigger in Zapier — it will receive a sample payload from FlagDrop.
- Add any action step: Google Sheets, Gmail, Slack, Notion, etc.
Example Flows
Flag Change → Google Sheets Log
Every flag.toggled event appends a row with the flag key, environment, new state, actor, and timestamp. Build an audit trail with zero code.
Flag Toggle → Email Notification
Send an email via Gmail or SendGrid whenever a production flag is toggled. Filter in Zapier on environment == "production".