
Trino 475+

High-performance distributed SQL query engine with advanced DML, time travel, and native Iceberg optimization for interactive analytics

Key Features

Multi-Catalog Support (Universal Access)

hive_metastore, glue, jdbc, rest, nessie, or snowflake catalogs; each exposes the same Iceberg tables once configured in catalog properties
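As a sketch, a REST-backed Iceberg catalog could be registered with a catalog properties file like the following (the catalog name, endpoint, and warehouse path are placeholders, not values from this page):

```properties
# etc/catalog/lakehouse.properties -- registers an Iceberg catalog named "lakehouse"
connector.name=iceberg
iceberg.catalog.type=rest
# placeholder endpoint and warehouse location; substitute your own
iceberg.rest-catalog.uri=http://rest-catalog:8181
iceberg.rest-catalog.warehouse=s3://example-bucket/warehouse
```

Swapping `iceberg.catalog.type` to `hive_metastore`, `glue`, `jdbc`, `nessie`, or `snowflake` (with the matching type-specific properties) changes the metadata backend while the SQL surface stays the same.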

Advanced SQL Analytics (Interactive Queries)

Ad-hoc SQL reads with filter, projection, and partition pruning; writes via INSERT, CREATE TABLE AS, CREATE OR REPLACE TABLE, and INSERT OVERWRITE
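The write paths above might look like this in practice (catalog, schema, and table names are illustrative):

```sql
-- Create a table from a query (CTAS)
CREATE TABLE lakehouse.analytics.daily_orders AS
SELECT order_date, count(*) AS order_count
FROM lakehouse.raw.orders
GROUP BY order_date;

-- Append rows
INSERT INTO lakehouse.analytics.daily_orders
VALUES (DATE '2024-01-01', 42);

-- Atomically replace the table and its contents
CREATE OR REPLACE TABLE lakehouse.analytics.daily_orders AS
SELECT order_date, count(*) AS order_count
FROM lakehouse.raw.orders
GROUP BY order_date;
```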

Complete DML Operations (Row-Level Efficiency)

UPDATE, DELETE, and MERGE INTO are supported, emitting position/equality delete files instead of rewriting entire partitions where possible
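For instance, row-level changes and an upsert from a staging table could be written as follows (all object names are hypothetical):

```sql
-- Targeted row changes write delete files rather than rewriting partitions
UPDATE lakehouse.analytics.orders
SET status = 'shipped'
WHERE order_id = 1001;

DELETE FROM lakehouse.analytics.orders
WHERE status = 'cancelled';

-- Upsert from a staging table
MERGE INTO lakehouse.analytics.orders AS t
USING lakehouse.staging.order_updates AS s
  ON t.order_id = s.order_id
WHEN MATCHED THEN
  UPDATE SET status = s.status
WHEN NOT MATCHED THEN
  INSERT (order_id, status) VALUES (s.order_id, s.status);
```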

Intelligent Storage Strategy (MoR + CoW)

Row-level DML defaults to merge-on-read: compact delete files are written and merged on the fly at read time. CTAS and INSERT OVERWRITE follow copy-on-write semantics

No Streaming Support (Batch/Interactive Only)

Trino is batch/interactive only; it happily reads Iceberg tables updated by streaming engines but does not run continuous ingestion jobs

Format Version Support (v1/v2 Only)

Spec v3 is not yet GA; Trino currently supports only spec v1/v2. Deletion vectors and row lineage are planned but not yet available

Advanced Time Travel (SQL Native)

Automatic hidden-partition pruning; time travel via FOR VERSION AS OF and FOR TIMESTAMP AS OF (including reads of branches and tags)
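The time-travel clauses above can be sketched like this (the table name, snapshot ID, timestamp, and tag name are placeholders):

```sql
-- Read a specific snapshot by ID
SELECT * FROM lakehouse.analytics.orders
FOR VERSION AS OF 8954597067493422955;

-- Read the table as of a point in time
SELECT * FROM lakehouse.analytics.orders
FOR TIMESTAMP AS OF TIMESTAMP '2024-01-01 00:00:00 UTC';

-- Read a named branch or tag (string argument to FOR VERSION AS OF)
SELECT * FROM lakehouse.analytics.orders
FOR VERSION AS OF 'audit-2024-01';
```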

Schema Evolution & Metadata (Advanced Management)

ALTER TABLE supports add/drop/rename of columns; metadata tables ($history, $snapshots, $files) are queryable; system.table_changes() exposes row-level change streams
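A sketch of these capabilities, assuming an `orders` table and two known snapshot IDs (all names and IDs are illustrative):

```sql
-- Evolve the schema in place
ALTER TABLE lakehouse.analytics.orders ADD COLUMN discount double;
ALTER TABLE lakehouse.analytics.orders RENAME COLUMN discount TO discount_pct;

-- Inspect snapshot history via the $snapshots metadata table
SELECT committed_at, snapshot_id, operation
FROM lakehouse.analytics."orders$snapshots";

-- Row-level changes between two known snapshot IDs
SELECT *
FROM TABLE(
  system.table_changes(
    schema_name => 'analytics',
    table_name => 'orders',
    start_snapshot_id => 1234567890,
    end_snapshot_id => 2345678901));
```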

Enterprise Security (Delegated ACLs)

Delegates access control to the underlying catalog (Hive Ranger, AWS IAM, Nessie policies); supports snapshot isolation; commit metadata is visible for auditing

Advanced Maintenance (Built-in Procedures)

Built-in maintenance procedures (optimize, expire_snapshots, remove_orphan_files), plus metadata caching, bucket-aware execution, and fault-tolerant execution
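These procedures run through ALTER TABLE EXECUTE; a sketch with an illustrative table name and thresholds:

```sql
-- Compact small files into larger ones (threshold is illustrative)
ALTER TABLE lakehouse.analytics.orders
EXECUTE optimize(file_size_threshold => '128MB');

-- Drop snapshots older than the retention window
ALTER TABLE lakehouse.analytics.orders
EXECUTE expire_snapshots(retention_threshold => '7d');

-- Remove files no longer referenced by any snapshot
ALTER TABLE lakehouse.analytics.orders
EXECUTE remove_orphan_files(retention_threshold => '7d');
```

Running optimize regularly mitigates the small-file proliferation noted under Known Limitations below.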


Trino Iceberg Feature Matrix

Comprehensive breakdown of Iceberg capabilities in Trino 475+

| Dimension | Support Level | Implementation Details | Min Version |
|---|---|---|---|
| Catalog Types | Full (Multi-Catalog) | hive_metastore, glue, jdbc, rest, nessie, snowflake; unified access via catalog properties | 414+ |
| SQL Analytics | Full (Interactive) | Ad-hoc SQL with pushdown optimizations; INSERT, CREATE TABLE AS, CREATE OR REPLACE TABLE | 414+ |
| DML Operations | Full (Row-Level) | UPDATE, DELETE, MERGE INTO with position/equality delete files for efficiency | 414+ |
| Storage Strategy | Full (Adaptive) | Default MoR for DML (delete files), CoW for CTAS/INSERT OVERWRITE | 414+ |
| Streaming Support | None (Batch/Interactive) | No streaming capabilities; reads tables updated by streaming engines | N/A |
| Format Support | Limited (v1/v2 only) | Spec v1/v2 support; v3 (deletion vectors, row lineage) not yet GA | 414+ |
| Time Travel | Full (SQL Native) | FOR VERSION AS OF and FOR TIMESTAMP AS OF; branch/tag navigation | 414+ |
| Schema Evolution | Full (Complete DDL) | ALTER TABLE add/drop/rename; metadata tables; system.table_changes() streams | 414+ |
| Security & Governance | Full (Delegated) | Delegates to catalog ACLs (Ranger, IAM, Nessie); snapshot isolation | 414+ |
| Maintenance Procedures | Full (Built-in) | optimize, expire_snapshots, remove_orphan_files via ALTER TABLE EXECUTE | 414+ |
| Performance Features | Full (Advanced) | Metadata caching, bucket-aware execution, fault-tolerant execution | 414+ |
| Known Limitations | Minor (Manageable) | Small-file proliferation impacts performance; static catalog configuration | 414+ |


Use Cases

Interactive Data Analytics

High-performance ad-hoc queries and data exploration

  • Business intelligence and reporting dashboards
  • Data science and ML feature engineering
  • Interactive data exploration and analysis
  • Complex analytical queries across large datasets

Multi-Catalog Data Federation

Unified access to data across heterogeneous systems

  • Cross-cloud data lake analytics
  • Legacy system integration with modern catalogs
  • Multi-vendor data platform consolidation
  • Federated queries across different storage systems

Lambda Architecture Query Layer

Batch processing and serving layer for real-time architectures

  • Analytical queries on streaming-updated tables
  • Historical analysis complementing real-time views
  • Batch aggregation and reporting workflows
  • Data quality validation and reconciliation

Enterprise Data Warehouse

Modern cloud-native data warehouse with ACID compliance

  • Traditional data warehouse modernization
  • Time travel for data auditing and compliance
  • Row-level data corrections and updates
  • Schema evolution for changing business needs

Need Assistance?

If you have any questions or uncertainties about setting up OLake, contributing to the project, or troubleshooting any issues, we’re here to help. You can:

  • Email Support: Reach out to our team at hello@olake.io for prompt assistance.
  • Join our Slack Community: we discuss future roadmaps, report bugs, help folks debug issues they are facing, and more.
  • Schedule a Call: If you prefer a one-on-one conversation, schedule a call with our CTO and team.

Your success with OLake is our priority. Don’t hesitate to contact us if you need any help or further clarification!