Have you ever tried to load CSV data into Teradata tables? For huge files with more than a million records we surely have to go with FastLoad, MultiLoad, etc. But what if we have a file with just a few records and we don't want to write FastLoad routines? We do have the option of Teradata SQL Assistant's import feature, but, as many of those who have gone down this road will confirm, it's not what we'd call fast!

That's why I've decided to write a few lines of C# code to speed up the loading of "not so big" files.

You can find the source code here and you’ll need Visual Studio 2017 Community Edition or higher to compile an executable version.

You can also have a look at the content of the main ".cs" file just below.

  • I'm not claiming this is an alternative to Teradata's fast loading tools, especially for big data files.
  • I don't mean that Teradata SQL Assistant won't work, just that LoadData has proven to be a bit faster (take a look at the screenshot below for an idea: 1,224,916 records in about 6 minutes).
  • In my experience it's worth using LoadData for files of up to about one million records.
  • This is an alpha, though fully functioning, release. There are a lot of tests the code should still undergo, and suggestions, bug reports and enhancements are welcome!
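For readers who don't want to open the repository, the core idea behind a tool like LoadData can be sketched in a few lines: read the CSV line by line and run a parameterized INSERT for each record, committing once at the end instead of once per row. This is only a sketch, assuming the Teradata .NET Data Provider (Teradata.Client.Provider); the connection string, table name, column layout and delimiter are made-up examples, not the actual LoadData implementation.

```csharp
// Sketch only, not the real LoadData source. Assumes the Teradata .NET
// Data Provider (Teradata.Client.Provider NuGet package). The connection
// string, "mydb.mytable", its two columns and the ';' delimiter are
// illustrative assumptions.
using System;
using System.IO;
using Teradata.Client.Provider;

class CsvLoaderSketch
{
    static void Main()
    {
        using (var conn = new TdConnection("Data Source=mytdserver;User Id=me;Password=secret;"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = "INSERT INTO mydb.mytable (col1, col2) VALUES (?, ?)";
                var p1 = cmd.Parameters.Add("p1", TdType.VarChar);
                var p2 = cmd.Parameters.Add("p2", TdType.VarChar);

                foreach (var line in File.ReadLines("data.csv"))
                {
                    // Naive split: does not handle quoted fields containing ';'
                    var fields = line.Split(';');
                    p1.Value = fields[0];
                    p2.Value = fields[1];
                    cmd.ExecuteNonQuery();
                }

                // A single commit for the whole file is what makes this
                // noticeably faster than row-by-row autocommit imports.
                tx.Commit();
            }
        }
    }
}
```

The single-transaction commit is the main reason this approach beats the SQL Assistant import for small-to-medium files, while still being far slower than FastLoad for truly large ones.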


Over the years I've repeatedly run into dead ends trying to manage business changes involving the archive of Italian municipalities. My advice has always been the same: take advantage of the well-engineered and up-to-date dataset published by ISTAT (the Italian national statistics institute). You can download Excel as well as CSV versions, and there are plenty of descriptions for each field/entity.

My hopefully appreciated two cents: I've made available a slightly revised version that avoids some unnecessary data redundancy. You'll be able to load everything onto your database just by running the script.

Find below the revised data model. Entities prefixed with "GpE" are the modified/added ones. One note: the model is a logical one, which means you won't find strong referential integrity in the physical schema you get after running the script.

Updates will mirror those on the original ISTAT site.


Feb 13th, 2019: added the "comuni_cap" table. It contains postal codes for every municipality. The record structure is shown below:

CodiceComuneAlfanumerico, CAP_DA, CAP_A

(Municipality ID, Postal code from, Postal code to)
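The record layout above maps naturally to a small C# type. A minimal sketch follows; the semicolon delimiter and the Parse helper are my own assumptions for illustration, not part of the published dataset or scripts.

```csharp
// Sketch: mapping a "comuni_cap" row to a C# type. Field names come from
// the record structure above; the ';' delimiter is an assumption.
using System;

class ComuneCap
{
    public string CodiceComuneAlfanumerico; // Municipality ID
    public string CapDa;                    // CAP_DA: first postal code of the range
    public string CapA;                     // CAP_A: last postal code of the range

    public static ComuneCap Parse(string csvLine)
    {
        var f = csvLine.Split(';');
        return new ComuneCap
        {
            CodiceComuneAlfanumerico = f[0],
            CapDa = f[1],
            CapA = f[2]
        };
    }
}
```

A range (CAP_DA, CAP_A) rather than a single CAP field is needed because large municipalities span multiple postal codes.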

Jan 13th, 2019: the GpEElenco table structure was modified due to ISTAT's changes and to some "personal" ones (added LATITUDE and LONGITUDE float fields with geocoded coordinates, obtained via Google's API; deleted some "unused" fields). The ERD schema was modified accordingly.


May 16th, 2018: here's a demo with a few lines of code showing dataset interaction.

May 26th 2018: added Teradata DDLs

June 10th, 2018: added SQL Server DDLs and data

June 9th, 2018: the VARIAZIONIAMMINISTRATIVETERRITORIALIDAL01011991 table structure was altered.



LAST UPDATE: February 13th, 2019

ISTAT data updated on Feb 1st, 2019.

January 13th, 2019

ISTAT data updated on Jan 1st, 2019.

June 9th, 2018


September 9th, 2018

03/06/2018: MONRUPINO gets "REPENTABOR" as its second-language name.

03/06/2018: SGONICO gets "ZGONIK" as its second-language name.



Oracle
DDL: Y
Tablespace used: USERS (must be edited manually, if needed).
Run the scripts following the 'nn' ordinal in the 'STEPnn' file name prefix.
The scripts must be launched using the destination schema.

MySQL
Server version: 5.6.38-log
The script must be launched using the destination user's credentials.

PostgreSQL
The script creates structures and data in the PUBLIC schema.

SQLite
Just copy the file ... wherever you want and use it 🙂

Microsoft Access
Microsoft Access (Office 365)
Unzip the file wherever you need it!

Teradata
DDL: yes
DATA: no. You can load the CSV data with LoadData or whatever your favourite utility is.

SQL Server