Catapult’s Ron L’Esteve, Managing Consultant for our Data & AI practice, was recognized this month as the 2019 Rookie of the Year. This is the sixth year that published authors have qualified for the recognition: established authors need at least 10 published SQL Server tips, and new authors need at least five. Winners are determined by community, peer, and leadership voting. Congratulations on your contributions to the SQL Server community, Ron!


Here are three of Ron’s latest articles:

Using COPY INTO command to load Azure Synapse Analytics from Azure Data Lake Storage Gen2

I currently have numerous snappy-compressed parquet files in Azure Data Lake Storage Gen2, generated from an on-premises SQL Server using the process in my previous article, Azure Data Factory Pipeline to fully Load all SQL Server Objects to ADLS Gen2. Now I would like to fully load these snappy parquet files from ADLS Gen2 into an Azure Synapse Analytics (SQL DW) table. I am familiar with the PolyBase and BULK INSERT options, but would like to explore Azure Synapse’s new COPY INTO command, which is currently in preview. How can I get started with COPY INTO and load my ADLS Gen2 files into an Azure Synapse table? Read more here.
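To give a flavor of what the article walks through, a minimal COPY INTO statement might look like the sketch below. The table name, storage account, container, path, and credential type are placeholders, not values from the article:

```sql
-- Hypothetical example: load snappy parquet files from ADLS Gen2
-- into an existing Azure Synapse Analytics (SQL DW) table.
COPY INTO dbo.etl_Orders
FROM 'https://myadlsaccount.dfs.core.windows.net/raw/sql/Orders/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    -- Authenticate with the workspace's managed identity; a storage
    -- account key or SAS token could be used here instead.
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```

For parquet sources, COPY INTO infers the file's column layout, so no external table or file format objects are required, which is its main convenience over PolyBase.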

Create Azure Data Lake Database, Schema, Table, View, Function and Stored Procedure

In my previous article, Using Azure Data Lake Analytics and U-SQL Queries, I demonstrated how to write U-SQL in Azure Data Lake Analytics (ADLA). I now want to understand how I can create a database in Azure Data Lake and perform routines similar to those in a traditional SQL Server database, such as creating schemas, tables, views, table-valued functions, and stored procedures… Read more here.
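As a rough illustration of the kind of U-SQL DDL the article covers, a database, schema, and table can be created along the following lines (object names are illustrative only; note that a U-SQL managed table requires a clustered index and a distribution scheme):

```sql
// Hypothetical U-SQL sketch: create a database, schema, and table in ADLA.
CREATE DATABASE IF NOT EXISTS SampleDB;
USE DATABASE SampleDB;

CREATE SCHEMA IF NOT EXISTS Sales;

CREATE TABLE IF NOT EXISTS Sales.Orders
(
    OrderId int,
    CustomerName string,
    OrderDate DateTime,
    // U-SQL tables must declare a clustered index and distribution.
    INDEX idx_OrderId CLUSTERED (OrderId ASC) DISTRIBUTED BY HASH (OrderId)
);
```

Views, table-valued functions, and procedures follow the same pattern with CREATE VIEW, CREATE FUNCTION, and CREATE PROCEDURE statements scoped to the database.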

Logging Azure Data Factory Pipeline Audit Data

In my last article, Azure Data Factory Pipeline to fully Load all SQL Server Objects to ADLS Gen2, I discussed how to create a pipeline parameter table in Azure SQL Database and use it to drive the creation of snappy-compressed parquet files from on-premises SQL Server tables in Azure Data Lake Storage Gen2. Now that I have a process for generating files in the lake, I would also like to implement a process to track and persist log activity for my pipeline runs. What options do I have for creating and storing this log data? Read more here.
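One common pattern (a sketch under assumed names, not necessarily the article’s exact design) is a log table in Azure SQL Database plus a stored procedure that a Data Factory Stored Procedure activity calls at the end of each pipeline run, passing ADF system variables such as @pipeline().Pipeline and @pipeline().RunId as parameters:

```sql
-- Hypothetical audit table; all names and columns are illustrative.
CREATE TABLE dbo.PipelineLog
(
    LogId        INT IDENTITY(1,1) PRIMARY KEY,
    PipelineName VARCHAR(200),
    RunId        VARCHAR(100),
    Status       VARCHAR(50),
    RowsCopied   BIGINT,
    LogDateTime  DATETIME2 DEFAULT SYSUTCDATETIME()
);
GO

-- Called from an ADF Stored Procedure activity after the copy step.
CREATE PROCEDURE dbo.usp_InsertPipelineLog
    @PipelineName VARCHAR(200),
    @RunId        VARCHAR(100),
    @Status       VARCHAR(50),
    @RowsCopied   BIGINT
AS
BEGIN
    INSERT INTO dbo.PipelineLog (PipelineName, RunId, Status, RowsCopied)
    VALUES (@PipelineName, @RunId, @Status, @RowsCopied);
END;
```

Alternatives the article may weigh include writing activity output to files in the lake or relying on Azure Monitor / Log Analytics for pipeline run history.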


To read more of Ron L’Esteve’s articles, click here. The site, founded in 2006, is dedicated to bringing readers quality SQL Server tips, techniques, and tutorials.

All contenders and winners can be found here.