Trino 475+
High-performance distributed SQL query engine with advanced DML, time travel, and native Iceberg optimization for interactive analytics
Key Features
Multi-Catalog Support
hive_metastore, glue, jdbc, rest, nessie, or snowflake catalog types; each exposes the same Iceberg tables once configured in a catalog properties file
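A catalog is wired up with a properties file in Trino's `etc/catalog` directory. A minimal sketch for a REST catalog follows; the file name and URI are illustrative:

```properties
# etc/catalog/iceberg.properties (file name and URI are illustrative)
connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=https://catalog.example.com
```

Swapping `iceberg.catalog.type` to `glue`, `nessie`, etc. (with that type's connection properties) exposes the same tables under the new catalog name.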
Advanced SQL Analytics
Ad-hoc SQL reads with filter, projection, and partition pruning; writes via INSERT, CREATE TABLE AS, CREATE OR REPLACE TABLE, INSERT OVERWRITE
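The write paths above can be sketched in Trino SQL; catalog, schema, and column names are illustrative. The `partitioning` property accepts Iceberg's hidden-partition transforms:

```sql
-- CTAS into a day-partitioned Iceberg table (names are illustrative)
CREATE TABLE iceberg.analytics.daily_orders
WITH (
  partitioning = ARRAY['day(order_ts)'],
  format = 'PARQUET'
)
AS
SELECT order_id, customer_id, order_ts, total
FROM iceberg.raw.orders;

-- Atomic full rewrite of an existing table
CREATE OR REPLACE TABLE iceberg.analytics.daily_orders AS
SELECT order_id, customer_id, order_ts, total
FROM iceberg.raw.orders
WHERE order_ts >= DATE '2025-01-01';
```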
Complete DML Operations
UPDATE, DELETE, and MERGE INTO are supported, emitting position/equality delete files instead of rewriting entire partitions where possible
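A typical upsert-plus-delete pattern with MERGE INTO looks like the sketch below; table, column, and flag names are illustrative:

```sql
-- Apply staged changes; matched rows flagged 'D' are deleted,
-- other matches are updated, and new keys are inserted.
MERGE INTO iceberg.analytics.customers t
USING iceberg.staging.customer_updates s
  ON t.customer_id = s.customer_id
WHEN MATCHED AND s.op = 'D' THEN DELETE
WHEN MATCHED THEN UPDATE SET email = s.email, updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, updated_at)
  VALUES (s.customer_id, s.email, s.updated_at);
```

Because row-level DML defaults to merge-on-read, this commit writes delete files rather than rewriting every affected data file.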
Intelligent Storage Strategy
Defaults to merge-on-read for row-level DML (compact delete files are written and merged on the fly at read time); CTAS and INSERT OVERWRITE follow copy-on-write semantics
No Streaming Support
Trino is batch/interactive only; it reads Iceberg tables updated by streaming engines but does not run continuous ingestion jobs
Format Version Support
Supports table spec v1/v2 only; spec v3 (deletion vectors, row lineage) is planned but not yet GA
Advanced Time Travel
Hidden partition pruning is automatic; time travel via FOR VERSION AS OF and FOR TIMESTAMP AS OF, including branches and tags
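Time travel is plain SQL; the snapshot ID, timestamp, and tag name below are illustrative:

```sql
-- Read a specific snapshot by ID
SELECT * FROM iceberg.analytics.orders
FOR VERSION AS OF 8954597067493422955;

-- Read the table as of a point in time
SELECT * FROM iceberg.analytics.orders
FOR TIMESTAMP AS OF TIMESTAMP '2025-01-01 00:00:00 UTC';

-- Read a named branch or tag by passing a string version
SELECT * FROM iceberg.analytics.orders
FOR VERSION AS OF 'audit-2025-q1';
```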
Schema Evolution & Metadata
ALTER TABLE add/drop/rename columns; metadata tables ($history, $snapshots, $files) queryable; system.table_changes() for row-level change streams
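These capabilities can be sketched as follows; table, column, and snapshot-ID values are illustrative:

```sql
-- Schema evolution
ALTER TABLE iceberg.analytics.orders ADD COLUMN discount_pct DOUBLE;
ALTER TABLE iceberg.analytics.orders RENAME COLUMN total TO order_total;

-- Inspect commit history via the $snapshots metadata table
SELECT snapshot_id, committed_at, operation
FROM iceberg.analytics."orders$snapshots";

-- Row-level changes between two snapshots
SELECT *
FROM TABLE(system.table_changes(
  schema_name => 'analytics',
  table_name => 'orders',
  start_snapshot_id => 8954597067493422955,
  end_snapshot_id => 9123456789012345678));
```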
Enterprise Security
Delegates ACLs to the underlying catalog (Apache Ranger for Hive, AWS IAM, Nessie policies); supports snapshot isolation; commit metadata visible for audit
Advanced Maintenance
Built-in maintenance procedures via ALTER TABLE ... EXECUTE (optimize, expire_snapshots, remove_orphan_files), plus metadata caching, bucket-aware execution, and fault-tolerant execution
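The maintenance procedures run through ALTER TABLE ... EXECUTE; the table name and threshold values below are illustrative:

```sql
-- Compact small data files below the size threshold
ALTER TABLE iceberg.analytics.orders
  EXECUTE optimize(file_size_threshold => '128MB');

-- Drop snapshots older than the retention threshold
ALTER TABLE iceberg.analytics.orders
  EXECUTE expire_snapshots(retention_threshold => '7d');

-- Delete files no longer referenced by any snapshot
ALTER TABLE iceberg.analytics.orders
  EXECUTE remove_orphan_files(retention_threshold => '7d');
```

Running optimize regularly mitigates the small-file proliferation noted under Known Limitations.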
Trino Iceberg Feature Matrix
Comprehensive breakdown of Iceberg capabilities in Trino 475+
| Dimension | Support Level | Implementation Details | Min Version |
|---|---|---|---|
| Catalog Types | Full (Multi-Catalog) | hive_metastore, glue, jdbc, rest, nessie, snowflake; unified access via catalog properties | 414+ |
| SQL Analytics | Full (Interactive) | Ad-hoc SQL with pushdown optimizations; INSERT, CREATE TABLE AS, CREATE OR REPLACE TABLE | 414+ |
| DML Operations | Full (Row-Level) | UPDATE, DELETE, MERGE INTO with position/equality delete files for efficiency | 414+ |
| Storage Strategy | Full (Adaptive) | Default MoR for DML (delete files); CoW for CTAS/INSERT OVERWRITE | 414+ |
| Streaming Support | None (Batch/Interactive) | No streaming capabilities; reads tables updated by streaming engines | N/A |
| Format Support | Limited (v1/v2 Only) | Spec v1/v2 support; v3 (deletion vectors, row lineage) not yet GA | 414+ |
| Time Travel | Full (SQL Native) | FOR VERSION AS OF and FOR TIMESTAMP AS OF; branch/tag navigation | 414+ |
| Schema Evolution | Full (Complete DDL) | ALTER TABLE add/drop/rename; metadata tables; system.table_changes() streams | 414+ |
| Security & Governance | Full (Delegated) | Delegates to catalog ACLs (Ranger, IAM, Nessie); snapshot isolation | 414+ |
| Maintenance Procedures | Full (Built-in) | optimize, expire_snapshots, remove_orphan_files via ALTER TABLE EXECUTE | 414+ |
| Performance Features | Full (Advanced) | Metadata caching, bucket-aware execution, fault-tolerant execution | 414+ |
| Known Limitations | Minor (Manageable) | Small file proliferation impacts performance; static catalog configuration | 414+ |
Use Cases
Interactive Data Analytics
High-performance ad-hoc queries and data exploration
- Business intelligence and reporting dashboards
- Data science and ML feature engineering
- Interactive data exploration and analysis
- Complex analytical queries across large datasets
Multi-Catalog Data Federation
Unified access to data across heterogeneous systems
- Cross-cloud data lake analytics
- Legacy system integration with modern catalogs
- Multi-vendor data platform consolidation
- Federated queries across different storage systems
Lambda Architecture Query Layer
Batch processing and serving layer for real-time architectures
- Analytical queries on streaming-updated tables
- Historical analysis complementing real-time views
- Batch aggregation and reporting workflows
- Data quality validation and reconciliation
Enterprise Data Warehouse
Modern cloud-native data warehouse with ACID compliance
- Traditional data warehouse modernization
- Time travel for data auditing and compliance
- Row-level data corrections and updates
- Schema evolution for changing business needs
Resources & Documentation
- Official Documentation: complete API reference and guides
- Getting Started Guide: quick start tutorials and examples
- Iceberg Connector Documentation
- Trino Performance Tuning
- Catalog Configuration Guide
- Table Maintenance Procedures
- Security Configuration
- Metadata Tables Reference
- Time Travel Syntax