From Teradata to Databricks using Microsoft OneLake
Configure a LakeXpress data pipeline with the following components
LakeXpress Orchestrator
- Manages end-to-end pipelines
- Orchestrates FastBCP extracts with retries & logging
- Handles incremental sync and schema-aware metadata
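The "extracts with retries & logging" behavior can be pictured with a small sketch. The function names and retry policy below are illustrative assumptions, not LakeXpress internals:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def run_with_retries(task, attempts=3, backoff_s=1.0):
    """Run a task, retrying on failure with exponential backoff and logging."""
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))

# Example: a flaky extract that succeeds on the third try.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient connection error")
    return "extract complete"

print(run_with_retries(flaky_extract, backoff_s=0.01))  # extract complete
```

A real orchestrator would also persist per-task state so a failed pipeline can resume instead of restarting from scratch.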

./LakeXpress config create \
-a data/ds_credentials.json \
--log_db_auth_id log_db_ms \
--source_db_auth_id datasource_teradata_01 \
--source_db_name sales \
--source_schema_name "sales,dim" \
--fastbcp_dir_path ./FastBCP_linux-x64/latest/ \
--fastbcp_p 2 \
--n_jobs 4 \
--target_storage_id onelake_lake_prd \
--generate_metadata \
--sub_path /ingest/bronze \
--incremental_table "sales.orders:o_orderdate:date" \
--incremental_table "sales.lineitem:l_shipdate:date" \
--publish_method internal \
--publish_target databricks_tgt
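Each --incremental_table value above follows a table:column:type pattern. The sketch below shows how such a spec can drive a watermark-based incremental extract; the parsing and the generated predicate are assumptions about the general technique, not LakeXpress internals:

```python
def parse_incremental_spec(spec):
    """Split a 'schema.table:column:type' spec into its parts."""
    table, column, col_type = spec.split(":")
    return {"table": table, "column": column, "type": col_type}

def incremental_where(spec, last_watermark):
    """Build a WHERE clause selecting only rows newer than the last sync."""
    parsed = parse_incremental_spec(spec)
    if last_watermark is None:
        return ""  # first run: full load, no predicate
    return f"WHERE {parsed['column']} > {last_watermark!r}"

spec = "sales.orders:o_orderdate:date"
print(incremental_where(spec, None))          # full load: empty predicate
print(incremental_where(spec, "2024-01-31"))  # WHERE o_orderdate > '2024-01-31'
```

After each successful sync, the highest watermark value seen is stored so the next run only pulls newer rows.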
# First sync - full load
./LakeXpress sync
# Subsequent syncs - incremental updates (much faster!)
./LakeXpress sync

Source - Teradata
Teradata is an enterprise-class data warehousing and analytics platform. LakeXpress uses FastBCP with optimized Teradata connectors for efficient data extraction to Parquet format.
Features:
- Native Teradata .NET driver via FastBCP
- Full support for Teradata-specific types
- Parallel extraction to Parquet files
- Optimized for large-scale data warehouses
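Parallel extraction typically splits a table into contiguous key ranges that workers pull concurrently (mirroring flags like --fastbcp_p 2 and --n_jobs 4 above). A minimal sketch of such range partitioning, with names of my own choosing:

```python
def partition_ranges(min_id, max_id, workers):
    """Split [min_id, max_id] into contiguous ranges, one per parallel worker."""
    total = max_id - min_id + 1
    base, extra = divmod(total, workers)
    ranges, start = [], min_id
    for w in range(workers):
        size = base + (1 if w < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# 1..1000 across 4 workers -> [(1, 250), (251, 500), (501, 750), (751, 1000)]
print(partition_ranges(1, 1000, 4))
```

Each range then becomes one `SELECT ... WHERE id BETWEEN lo AND hi` extract, so workers never overlap and the full table is covered exactly once.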
Format - Apache Parquet
Parquet is the industry-standard columnar file format for analytics. LakeXpress uses FastBCP to extract data from source databases and convert it to Parquet format, ensuring optimal compression, query performance, and compatibility with all modern data platforms.
Advantages:
- Columnar format optimized for analytics
- Efficient compression (typically 3-10x)
- Schema evolution support
- Predicate pushdown for fast queries
- Universal compatibility with cloud platforms
- Preserves data types and precision
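Why a columnar layout compresses well can be seen with a toy example using plain zlib on text; this is an illustration of the layout effect, not actual Parquet encoding:

```python
import zlib

# A toy table: one high-cardinality column, two low-cardinality ones.
rows = [(str(i), "US" if i % 2 else "FR", "electronics") for i in range(10_000)]

# Row-oriented layout: values of different columns interleaved.
row_major = "\n".join(",".join(r) for r in rows).encode()

# Column-oriented layout: each column's values stored contiguously,
# so runs of identical values sit next to each other.
col_major = "\n".join(",".join(col) for col in zip(*rows)).encode()

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
print(f"row-major compressed:    {row_size} bytes")
print(f"column-major compressed: {col_size} bytes")
```

Parquet goes much further than this sketch (dictionary and run-length encoding per column, page statistics for predicate pushdown), but the grouping principle is the same.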
Cloud Stage - Microsoft OneLake
OneLake is Microsoft Fabric's unified data lake. LakeXpress stages data directly in OneLake, providing native integration with all Fabric workloads and eliminating data duplication.
Features:
- Native Fabric integration
- Single copy of data across all workloads
- Automatic Delta Lake format
- Built-in governance and security
- Seamless data sharing across Fabric
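OneLake exposes an ADLS Gen2-compatible endpoint, so staged files are addressable with abfss URIs. A sketch of building such a path (the workspace and lakehouse names are placeholders, and the helper is my own, not a LakeXpress function):

```python
def onelake_path(workspace, lakehouse, sub_path):
    """Build an ABFS URI for the Files area of a Fabric lakehouse."""
    return (f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
            f"{lakehouse}.Lakehouse/Files{sub_path}")

# Matches the --sub_path /ingest/bronze used in the config above.
print(onelake_path("my_workspace", "my_lakehouse", "/ingest/bronze"))
```

Any ADLS-capable client (Spark, azure-storage libraries, AzCopy) can read the staged Parquet files through this endpoint.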
Destination - Databricks
Databricks provides a unified analytics platform built on Apache Spark. LakeXpress publishes Parquet files as Delta Lake tables for optimal lakehouse performance.
Publishing method:
Delta Lake MERGE or COPY INTO
Features:
- Native Parquet and Delta Lake support
- ACID transactions
- Time travel and versioning
- Optimized for analytical queries
- Unity Catalog integration
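MERGE is what gives the incremental publish its upsert semantics: rows that match on the key are updated, the rest are inserted. A minimal sketch of those semantics with plain Python dicts (not Spark or Databricks code):

```python
def merge_upsert(target, updates, key):
    """Upsert rows into target: matched keys are updated, new keys inserted."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update if key exists, insert otherwise
    return list(merged.values())

existing = [{"order_id": 1, "status": "open"}, {"order_id": 2, "status": "open"}]
incoming = [{"order_id": 2, "status": "shipped"}, {"order_id": 3, "status": "open"}]

# order 1 kept, order 2 updated to 'shipped', order 3 inserted
print(merge_upsert(existing, incoming, "order_id"))
```

COPY INTO, by contrast, is append-only with file-level idempotency, which is why it suits the initial full load while MERGE suits the incremental runs.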

