Hello Folks,

Do you know how to import a CSV file into SQL Server? Or how to load a comma-delimited file into SQL Server? It’s done with the help of SQL Server BULK INSERT.

Well, I have noted down some key points about this:

  • SQL Server BULK INSERT can be used within any T-SQL script or stored procedure to import data into SQL Server.
  • It is used to import data into a database table or view in a user-specified format.
  • The parameters of the command specify the table receiving the data, the location of the source file, and the options.
  • BULK INSERT is used to efficiently transfer data between SQL Server and heterogeneous data sources.
  • CSV stands for Comma Separated Values, sometimes also called Comma Delimited Values.
  • BULK INSERT performs a one-to-one mapping, in which every column from the source file is inserted directly into the corresponding column of the destination table.
  • It goes like this: the first column from the source file is loaded into the first column of the destination table, the second into the second, and so on. If the destination table has too many columns, BULK INSERT will still work; the extra data is placed into the bit bucket and simply discarded.
  • The syntax for BULK INSERT is as follows:
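In its simplest form, the command looks like this (abridged; only the commonly used options are shown here):

```sql
BULK INSERT [ database_name . [ schema_name ] . | schema_name . ] table_or_view_name
    FROM 'data_file'
    [ WITH
      (
        [ FIELDTERMINATOR = 'field_terminator' ]
        [ , ROWTERMINATOR  = 'row_terminator' ]
        [ , FIRSTROW       = first_row ]
        [ , LASTROW        = last_row ]
        [ , MAXERRORS      = max_errors ]
        [ , ROWS_PER_BATCH = rows_per_batch ]
        [ , TABLOCK ]
      )
    ]
```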

Well, there are a lot of parameters here; we will discuss the significant ones later on.

  • The BULK INSERT command won’t accept a string concatenation or a variable in the FROM clause. If you need to do this, it is possible only with the help of a dynamic SQL statement.
  • Since BULK INSERT depends on the column positions of both the source file and the destination table, it is a best practice to use a view as an abstraction layer between the BULK INSERT command and the table.
  • If the structure of the source file or the destination table is altered, modifying the view can keep the BULK INSERT running without having to change the other object’s structure.
  • You can also BULK INSERT the data into a staging table, check the data, and then perform the rest of the transformations as you merge the data into the permanent tables.
  • It will become easier if we see this example:
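A small sketch along these lines (the column definitions and the file path are assumptions for illustration; point them at your own CSV file):

```sql
-- Create a simple destination table (columns are assumed for illustration)
CREATE TABLE sample_addtype
(
    AddTypeID INT,
    AddType   VARCHAR(50)
);

-- Load a comma-delimited file into the table
-- (the file path is a placeholder)
BULK INSERT sample_addtype
FROM 'C:\Data\sample_addtype.csv'
WITH
(
    FIELDTERMINATOR = ',',   -- columns are separated by commas
    ROWTERMINATOR   = '\n',  -- each row ends with a newline
    FIRSTROW        = 2      -- skip the header row
);

-- Check what was loaded
SELECT * FROM sample_addtype;
```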

Here we have created the table ‘sample_addtype’ and used BULK INSERT to load the data into it.

The result can be seen in the screenshot below:

[Screenshot: 1_SQL_Server_Bulk_Insert]

Now, it’s time to discuss some significant parameters of BULK INSERT:

  • FIELDTERMINATOR: It specifies the character used to delimit or separate columns in the source file, such as ‘,’ (comma) or ‘|’ (pipe).
  • ROWTERMINATOR: It specifies the character that ends a row in the source file. ‘\n’ means end of row and is the typical setting. Files from mainframes often don’t use a clean end of line, so we have to use a hex editor to view the actual end-of-line characters and specify the row terminator in hex; a hex value of 0A is quoted as ‘0x0A’.
  • FIRSTROW: It specifies the first row to load, which is useful when the incoming file has column headers; FIRSTROW = 2 skips a single header row.
  • TABLOCK: It places an exclusive lock on the entire table and saves SQL Server the trouble of having to lock the table’s data pages as they are created.
  • ROWS_PER_BATCH: It hints to SQL Server the approximate number of rows in the file, which helps it plan the import; to commit the load in batches of n rows rather than as one transaction, use the related BATCHSIZE option.
  • MAXERRORS: It specifies how many rows can fail before the bulk insert itself fails.
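Putting these parameters together, a fuller BULK INSERT might look like this (the table name, file path, and the values themselves are illustrative, not prescriptive):

```sql
BULK INSERT sample_addtype
FROM 'C:\Data\sample_addtype.csv'
WITH
(
    FIELDTERMINATOR = ',',     -- comma-delimited columns
    ROWTERMINATOR   = '0x0A',  -- line feed in hex, for files without a clean \r\n
    FIRSTROW        = 2,       -- skip the column-header row
    TABLOCK,                   -- take one table-level lock for the whole load
    ROWS_PER_BATCH  = 10000,   -- hint at the approximate row count
    MAXERRORS       = 10       -- give up only after 10 bad rows
);
```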

Well, this was all about BULK INSERT.

Do try it out, and also post your comments on this!!

 

Regards

Piyush Bajaj
