By default, Postgres stores FDW credentials inside pg_catalog.pg_foreign_server in plain text. Anyone with access to this table will be able to view these credentials. Wrappers is designed to work with Vault, which provides an additional level of security for storing credentials. We recommend using Vault to store your credentials.
-- Save your AWS credentials in Vault and retrieve the created
-- `aws_access_key_id` and `aws_secret_access_key`
select vault.create_secret(
  '<access key id>', -- secret to be encrypted
  'aws_access_key_id', -- secret name
  'AWS access key for Wrappers' -- secret description
);
select vault.create_secret(
  '<secret access key>',
  'aws_secret_access_key',
  'AWS secret access key for Wrappers'
);
Connecting to Iceberg
We need to provide Postgres with the credentials to connect to Iceberg. We can do this using the create server command.
For any server option that needs to be stored in Vault, add the prefix vault_ to its name and use the secret ID returned by the select vault.create_secret() statement as the option value.
Connecting to AWS S3 Tables
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The key id saved in Vault from above
    vault_aws_access_key_id '<key_ID>',
    -- The secret id saved in Vault from above
    vault_aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- AWS S3 table bucket ARN
    aws_s3table_bucket_arn 'arn:aws:s3tables:us-east-1:204203087419:bucket/my-table-bucket'
  );
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The AWS access key ID
    aws_access_key_id '<key_ID>',
    -- The AWS secret access key
    aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- AWS S3 table bucket ARN
    aws_s3table_bucket_arn 'arn:aws:s3tables:us-east-1:204203087419:bucket/my-table-bucket'
  );
Connecting to Iceberg REST Catalog

create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The key id saved in Vault from above
    vault_aws_access_key_id '<key_ID>',
    -- The secret id saved in Vault from above
    vault_aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- Iceberg REST Catalog URI
    catalog_uri 'https://rest-catalog/ws',
    -- Warehouse name
    warehouse 'warehouse',
    -- AWS S3 endpoint URL, optional
    "s3.endpoint" 'https://alternative-s3-storage:8000'
  );
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The AWS access key ID
    aws_access_key_id '<key_ID>',
    -- The AWS secret access key
    aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- Iceberg REST Catalog URI
    catalog_uri 'https://rest-catalog/ws',
    -- Warehouse name
    warehouse 'warehouse',
    -- AWS S3 endpoint URL, optional
    "s3.endpoint" 'https://alternative-s3-storage:8000'
  );
Additional Server Options
batch_size - Controls the batch size of records read from Iceberg (default: 4096)
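For example, to read larger batches, set the option when creating the server. A sketch based on the S3 Tables example above; the values are placeholders:

create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    vault_aws_access_key_id '<key_ID>',
    vault_aws_secret_access_key '<secret_key>',
    region_name 'us-east-1',
    aws_s3table_bucket_arn 'arn:aws:s3tables:us-east-1:204203087419:bucket/my-table-bucket',
    -- read 8192 records per batch instead of the default 4096
    batch_size '8192'
  );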
Create a schema
We recommend creating a schema to hold all the foreign tables:
create schema if not exists iceberg;
Options
The full list of foreign table options is below:
table - Fully qualified source table name with all namespaces in Iceberg, required.
rowid_column - The column to use as the row identifier for INSERT operations, required for data insertion.
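For example, a minimal foreign table definition using both options (the table and column names are illustrative; a fuller example appears later in this page):

create foreign table if not exists iceberg.guides (
  id bigint,
  title text
)
  server iceberg_server
  options (
    -- fully qualified Iceberg table name: <namespace>.<table>
    table 'docs_example.guides',
    -- column used as the row identifier for inserts
    rowid_column 'id'
  );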
Entities
We can use the import foreign schema statement to import foreign table definitions from Iceberg.
For example, the SQL below automatically creates foreign tables in the iceberg schema.
-- create all the foreign tables from Iceberg "docs_example" namespace
import foreign schema "docs_example"
  from server iceberg_server into iceberg;

-- or, only create "readme" and "guides" foreign tables
import foreign schema "docs_example"
  limit to ("readme", "guides")
  from server iceberg_server into iceberg;

-- or, create all foreign tables except "readme"
import foreign schema "docs_example"
  except ("readme")
  from server iceberg_server into iceberg;
Note
By default, the import foreign schema statement will silently skip all the incompatible columns. Use the option strict to prevent this behavior. For example,
import foreign schema "docs_example"
  from server iceberg_server into iceberg
  options (
    -- this will fail the 'import foreign schema' statement when an Iceberg
    -- table column cannot be mapped to Postgres
    strict 'true'
  );
Query Pushdown Support

This FDW supports where clause pushdown with the operators below.
| Operator | Note |
| --- | --- |
| =, >, >=, <, <=, <>, != | |
| is null, is not null | |
| x, not x, x is true, x is not true | column x data type is boolean |
| x between a and b | column x data type can be datetime or numeric types |
| like 'abc%', not like 'abc%' | only supports "starts with" patterns |
| in (x, y, z), not in (x, y, z) | |
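For example, only a prefix ("starts with") like pattern is pushed down; other patterns still return correct results but are filtered locally by Postgres:

-- pushed down to Iceberg: prefix pattern
select * from iceberg.guides where title like 'Supabase%';

-- not pushed down: non-prefix pattern, filtered by Postgres after the scan
select * from iceberg.guides where title like '%Supabase';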
Note
For multiple filters, only logical AND is supported. For example,
-- this can be pushed down
select * from table where x = a and y = b;

-- this cannot be pushed down
select * from table where x = a or y = b;
Supported Data Types
| Postgres Type | Iceberg Type |
| --- | --- |
| boolean | boolean |
| real | float |
| integer | int |
| double precision | double |
| bigint | long |
| numeric | decimal |
| text | string |
| date | date |
| time | time |
| timestamp | timestamp, timestamp_ns |
| timestamptz | timestamptz, timestamptz_ns |
| jsonb | struct, list, map |
| bytea | binary |
| uuid | uuid |
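For example, an Iceberg struct, list, or map column maps to jsonb and can be queried with Postgres JSON operators. A sketch, assuming a hypothetical events table with a struct column named payload:

create foreign table iceberg.events (
  id bigint,
  payload jsonb -- an Iceberg struct/list/map column, assumed for illustration
)
  server iceberg_server
  options (
    table 'docs_example.events',
    rowid_column 'id'
  );

-- extract a field from the struct with the jsonb ->> operator
select id, payload->>'user_id' as user_id
from iceberg.events;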
Data Insertion
The Iceberg FDW supports inserting data into Iceberg tables using standard SQL INSERT statements.
Basic Insert
-- insert a single row
insert into iceberg.guides (id, title, content, created_at)
values (1, 'Getting Started', 'Welcome to our guides', now());

-- insert multiple rows
insert into iceberg.guides (id, title, content, created_at)
values
  (2, 'Advanced Guide', 'Advanced topics', now()),
  (3, 'Best Practices', 'Tips and tricks', now());
Insert from Select
-- insert data from another table
insert into iceberg.guides (id, title, content, created_at)
select id, title, content, created_at
from some_other_table
where condition = true;
Partition Considerations
When inserting data into partitioned Iceberg tables, the FDW automatically handles partitioning based on the table's partition spec. Data will be written to the appropriate partition directories.
-- for a table partitioned by sale_date, data is automatically partitioned
insert into iceberg.sales (product_id, amount, sale_date)
values (123, 99.99, '2025-01-15');
Performance Tips
Batch Inserts: Use multi-row inserts for better performance
Partition Awareness: When possible, insert data in partition order to optimize file organization
Transaction Size: Consider breaking very large inserts into smaller transactions
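A minimal sketch of the transaction-size tip, assuming the source rows can be split on an id range (the chunk boundaries are illustrative):

-- first chunk
begin;
insert into iceberg.guides (id, title, content, created_at)
select id, title, content, created_at
from some_other_table
where id >= 0 and id < 10000;
commit;

-- next chunk
begin;
insert into iceberg.guides (id, title, content, created_at)
select id, title, content, created_at
from some_other_table
where id >= 10000 and id < 20000;
commit;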
Limitations for Inserts
Schema evolution during insert is not supported
Only append operations are supported (no upserts)
Complex data types (nested structs, arrays, maps) have limited support
Limitations
This section describes important limitations and considerations when using this FDW:
Only supports specific data type mappings between Postgres and Iceberg
UPDATE, DELETE, and TRUNCATE operations are not supported
Basic Example

-- Run the SQL below to import all tables under the 'docs_example' namespace
import foreign schema "docs_example"
  from server iceberg_server into iceberg;

-- or, create the foreign table manually
create foreign table if not exists iceberg.guides (
  id bigint,
  title text,
  content text,
  created_at timestamp
)
  server iceberg_server
  options (
    table 'docs_example.guides',
    rowid_column 'id'
  );
Then query the foreign table:
select * from iceberg.guides;
Read Cloudflare R2 Data Catalog
First, follow the steps in the Getting Started Guide to create an R2 Data Catalog on Cloudflare. Once that is complete, create a server like below:
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    aws_access_key_id '<R2_access_key_ID>',
    aws_secret_access_key '<R2_secret_access_key>',
    token '<R2 API token>',
    warehouse 'xxx_r2-data-catalog-tutorial',
    "s3.endpoint" 'https://xxx.r2.cloudflarestorage.com',
    catalog_uri 'https://catalog.cloudflarestorage.com/xxx/r2-data-catalog-tutorial'
  );
Then, import all the tables in the default namespace and query them:
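A minimal sketch of that import, assuming the catalog namespace is named default and that a guides table exists in it:

-- import all tables under the 'default' namespace
import foreign schema "default"
  from server iceberg_server into iceberg;

-- query an imported table
select * from iceberg.guides;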
Query Pushdown Examples

-- the filter 'id = 42' will be pushed down to Iceberg
select * from iceberg.guides where id = 42;

-- the pushdown filter can also be on the partition column 'created_at',
-- this can greatly reduce query cost
select * from iceberg.guides
where created_at >= timestamp '2025-05-16 12:34:56';

-- multiple filters must use logical 'AND'
select * from iceberg.guides
where id > 42 and title like 'Supabase%';
Data Insertion Examples
-- insert a single record
insert into iceberg.guides (id, title, content, created_at)
values (100, 'New Guide', 'This is a new guide', now());

-- insert multiple records at once
insert into iceberg.guides (id, title, content, created_at)
values
  (101, 'Guide A', 'Content for Guide A', now()),
  (102, 'Guide B', 'Content for Guide B', now()),
  (103, 'Guide C', 'Content for Guide C', now());

-- insert data from a SELECT query
insert into iceberg.guides (id, title, content, created_at)
select id + 1000, 'Migrated: ' || title, content, created_at
from other_guides
where id < 10;

-- verify the inserted data
select count(*) from iceberg.guides;
select * from iceberg.guides where id >= 100 order by id;