SQLpipe is a free, open-source tool that can transfer the result of a query from one database to another. You can use SQLpipe as:
- A command-line tool
- A long-lived server with an API and UI
The command-line tool is excellent for running ad-hoc transfers between two databases, while the server is intended to be part of a recurring ETL process. The command-line tool is the easiest way to get started, but the server grants additional features like an API, a UI, user authentication, and horizontal scalability.
This tutorial will show you how to use the command-line tool to transfer data from Redshift to Snowflake. If you’d like to learn more about running a SQLpipe server, check out our SQLpipe server usage guide, or consider one of our training and consulting packages.
Step 1 – Install SQLpipe
SQLpipe is available as an executable program and has no dependencies. You can get it from the SQLpipe downloads page.
If you would like step-by-step installation instructions, check out our installing SQLpipe page.
The gist of the installation guide is:
- Download the pre-built binary from our downloads page with your browser or command-line tool like curl.
- Grant execution permission to run with:
sudo chmod +x sqlpipe
- Run the version command to check installation success:
./sqlpipe version
If installation is successful, the version command will print something like this:
Git hash: d3b390c
Human version: 1.0.0
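Before moving on, it can be handy to confirm that the binary is actually on your PATH. The helper function below is our own sketch, not part of SQLpipe; it just lets later scripts fail early with a clear message:

```shell
# Hypothetical helper (not part of SQLpipe): report whether the
# sqlpipe binary is available so scripts can fail early.
check_sqlpipe() {
  if command -v sqlpipe >/dev/null 2>&1; then
    echo "installed"
  else
    echo "missing"
  fi
}

check_sqlpipe
```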
Step 2 – Gather connection information
The next step is gathering the following information for both Redshift and Snowflake:
- Database name
- Schema name
- Hostname (Redshift only)
- Port (Redshift only)
- Snowflake account ID (Snowflake only)
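One convenient way to keep track of this information is to export it as environment variables before running the transfer. The variable names below are just a local convention of ours, not something SQLpipe reads, and every value is a placeholder:

```shell
# Placeholder connection details for the source (Redshift) and
# target (Snowflake). The variable names are our own convention;
# SQLpipe itself only sees the values you pass via flags.
export REDSHIFT_HOSTNAME="your-redshift-hostname"
export REDSHIFT_PORT="5439"
export REDSHIFT_DB="your-redshift-db-name"
export REDSHIFT_SCHEMA="public"
export SNOWFLAKE_ACCOUNT_ID="your-snowflake-account-id"
export SNOWFLAKE_DB="your-snowflake-db-name"
export SNOWFLAKE_SCHEMA="your-snowflake-schema-name"
```

You can then reference these variables (for example, `--source-port "$REDSHIFT_PORT"`) instead of repeating literal values in each command.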
Step 3 – Run a transfer
You run a transfer with the “sqlpipe transfer” command, passing connection details and options via flags.
Let’s say we want to transfer a table called “users”, in the schema “public”, from Redshift to Snowflake. We would run the following command:
sqlpipe transfer \
  --source-ds-type "redshift" \
  --source-hostname "your-redshift-hostname" \
  --source-port 5439 \
  --source-db-name "your-redshift-db-name" \
  --source-username "your-redshift-username" \
  --source-password "your-redshift-password" \
  --target-ds-type "snowflake" \
  --target-account-id "your-snowflake-account-id" \
  --target-db-name "your-snowflake-db-name" \
  --target-username "your-snowflake-username" \
  --target-password "your-snowflake-password" \
  --target-table "name-of-table-to-insert-into" \
  --target-schema "your-snowflake-schema-name" \
  --overwrite \
  --query "select * from public.users"
A few notes on this command:
- The “--source-ds-type” and “--target-ds-type” flags denote the database type and must be one of SQLpipe’s supported systems.
- The “--overwrite” flag indicates that SQLpipe should drop the table specified by the “--target-table” flag, then create a new table in the target database with the query result’s column names and types.
- The “--query” flag denotes the query to be run on the source database.
If the command is successful, the program will print a completion message. If it is unsuccessful, you will get an error message.
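Because the tool reports its outcome, a wrapper script can branch on the transfer’s exit status. Here is a minimal sketch of that pattern; we elide most flags for brevity (placeholders, per the full command above), and the exit-status handling is standard shell behavior, not a documented SQLpipe feature:

```shell
#!/bin/bash
# Sketch: run a transfer and branch on its exit status.
# Flag values are placeholders; see the full flag list above.
cmd=(sqlpipe transfer
  --source-ds-type "redshift"
  --target-ds-type "snowflake"
  --query "select * from public.users")

# Only attempt the transfer if the binary is actually installed.
if command -v sqlpipe >/dev/null 2>&1; then
  if "${cmd[@]}"; then
    echo "transfer completed"
  else
    echo "transfer failed" >&2
  fi
else
  echo "sqlpipe not found on PATH; skipping transfer" >&2
fi
```

Building the command as a bash array keeps the flags easy to audit and avoids quoting problems when values contain spaces.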
Complete API documentation for the transfer command is available on the SQLpipe transfer documentation page.
Yes, it really is that easy!
If you would like help setting up SQLpipe, we offer training and consulting packages to help you understand how to run SQLpipe in production. And, of course, we also offer ongoing support!
Lastly, if you’d like to ask a question publicly or let us know about a bug, we recommend posting on SQLpipe’s GitHub page.