I'm running PostgreSQL 9.2.6 on OS X 10.6.8. I would like to import data from a CSV file with column headers into a database. I can do this with the COPY statement, but only if I first manually create a table with a column for each column in the CSV file. Is there any way to automatically create this table based on the headers in the CSV file?

Per this question, I have tried:

COPY test FROM '/path/to/test.csv' CSV HEADER;

But I just get this error:

ERROR: relation "test" does not exist

And if I first create a table with no columns:

CREATE TABLE test ();
COPY test FROM '/path/to/test.csv' CSV HEADER;

I get:

ERROR: extra data after last expected column

I can't find anything in the PostgreSQL COPY documentation about automatically creating a table. Is there some other way to automatically create a table from a CSV file with headers?


You can't find anything in the COPY documentation because COPY cannot create a table for you.
You need to create the table before you can COPY into it.
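That two-step workflow is easy to script outside of psql, though. A minimal Python sketch (names here are illustrative) that builds the CREATE TABLE statement from the CSV header row; typing every column as text is a deliberate simplification you can tighten afterwards with ALTER TABLE:

```python
import csv
import io

def create_table_sql(table_name, csv_text):
    """Build a CREATE TABLE statement from a CSV header row.

    Every column is typed as text, which is what COPY will accept
    unconditionally; types can be tightened later with ALTER TABLE.
    """
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)  # first row = column names
    columns = ", ".join('"%s" text' % h.strip() for h in headers)
    return 'CREATE TABLE "%s" (%s);' % (table_name, columns)

# Example: a small CSV with a header row
sample = "id,name,price\n1,apple,0.50\n2,pear,0.75\n"
print(create_table_sql("test", sample))
# CREATE TABLE "test" ("id" text, "name" text, "price" text);
```

Run the generated statement first, then the COPY from the question succeeds.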


There is a very good tool that imports tables into Postgres from a CSV file: a command-line tool called pgfutter (with binaries for Windows, Linux, etc.). One of its big advantages is that it recognizes the attribute/column names as well.

Using the tool is simple. For example, to import myCSVfile.csv:

pgfutter --db "myDatabase" --port "5432" --user "postgres" --pw "mySecretPassword" csv myCSVfile.csv

This will create a table (called myCSVfile) with the column names taken from the CSV file's header. Additionally, the data types will be inferred from the existing data.

A few notes:

  • The name of the pgfutter command varies depending on the binary you use; e.g., it could be pgfutter_windows_amd64.exe (rename it if you intend to use the command frequently).
  • The above command has to be executed in a command-line window (e.g., on Windows run cmd and ensure pgfutter is on the path).
  • If you'd like a different table name, add --table "myTable"; to select a particular database schema, use --schema "mySchema"; if you are accessing an external database, use --host "myHostDomain".

A more elaborate example of pgfutter to import myFile into myTable is this one:

pgfutter --host "localhost" --port "5432" --db "myDB" --schema "public" --table "myTable" --user "postgres" --pw "myPwd" csv myFile.csv

Most likely you will change a few data types (from text to numeric) after the import:

alter table myTable
  alter column myColumn type numeric
    using (trim(myColumn)::numeric);
  • It requires the destination table and columns to be created if I specify a schema name. Frustrating. – Muhammad Gelbana Jul 28 '17 at 15:28
  • This tool is presently failing with an "index out of range" exception when you try to import a CSV file. – Luís de Sousa Jul 16 '18 at 12:34
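The retyping step above can also be scripted. A hedged sketch (a rough heuristic, not what pgfutter itself does) that scans the CSV data and reports which columns look numeric, i.e. which ones are worth an ALTER COLUMN ... TYPE numeric:

```python
import csv
import io

def numeric_columns(csv_text):
    """Return header names whose every non-empty value parses as a number.

    A rough heuristic for deciding which imported text columns
    could be altered to numeric afterwards.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    headers, data = rows[0], rows[1:]
    numeric = []
    for i, name in enumerate(headers):
        values = [r[i].strip() for r in data if r[i].strip()]
        try:
            for v in values:
                float(v)
            numeric.append(name)
        except ValueError:
            pass
    return numeric

sample = "id,name,price\n1,apple,0.50\n2,pear,0.75\n"
print(numeric_columns(sample))  # ['id', 'price']
```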

There is a second approach, which I found here (from mmatt). Basically you call a function within Postgres (last argument specifies the number of columns).

select load_csv_file('myTable','C:/MyPath/MyFile.csv',24)
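Counting the columns by hand for that last argument is error-prone when header values contain quoted commas. A small hedged sketch that counts the header fields the same way a CSV parser would:

```python
import csv
import io

def header_column_count(csv_text):
    """Count the fields in the CSV header row, i.e. the value to pass
    as the col_count argument of load_csv_file."""
    reader = csv.reader(io.StringIO(csv_text))
    return len(next(reader))

sample = 'a,b,"c, with comma",d\n1,2,3,4\n'
print(header_column_count(sample))  # 4 - the quoted comma is not a separator
```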

Here is mmatt's function code, which I had to modify slightly because I am working in the public schema. (Copy and paste it into the pgAdmin SQL editor and run it to create the function.)

CREATE OR REPLACE FUNCTION load_csv_file(
    target_table text,
    csv_path text,
    col_count integer)
  RETURNS void AS
$BODY$

declare

iter integer; -- dummy integer to iterate columns with
col text; -- variable to keep the column name at each iteration
col_first text; -- first column name, e.g., top left corner on a csv file or spreadsheet

begin
    set schema 'public';

    create table temp_table ();

    -- add just enough number of columns
    for iter in 1..col_count
    loop
        execute format('alter table temp_table add column col_%s text;', iter);
    end loop;

    -- copy the data from csv file
    execute format('copy temp_table from %L with delimiter '','' quote ''"'' csv ', csv_path);

    iter := 1;
    col_first := (select col_1 from temp_table limit 1);

    -- update the column names based on the first row which has the column names
    for col in execute format('select unnest(string_to_array(trim(temp_table::text, ''()''), '','')) from temp_table where col_1 = %L', col_first)
    loop
        execute format('alter table temp_table rename column col_%s to %s', iter, col);
        iter := iter + 1;
    end loop;

    -- delete the columns row
    execute format('delete from temp_table where %s = %L', col_first, col_first);

    -- change the temp table name to the name given as parameter, if not blank
    if length(target_table) > 0 then
        execute format('alter table temp_table rename to %I', target_table);
    end if;

end;

$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION load_csv_file(text, text, integer)
  OWNER TO postgres;

Note: There is a common issue with importing text files related to encoding. The CSV file should be in UTF-8 format. However, sometimes the programs that write the file don't quite achieve this. I have overcome the issue by opening the file in Notepad++ and converting it to ANSI and back to UTF-8.
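The Notepad++ round-trip can be done programmatically as well. A hedged sketch: the cp1252 source encoding is an assumption (files saved as "ANSI" on a Western-locale Windows machine usually are); adjust it to match whatever produced your CSV:

```python
import tempfile
import os

def to_utf8(path_in, path_out, source_encoding="cp1252"):
    """Re-encode a file to UTF-8.

    source_encoding is an assumption: "ANSI" on a Western-locale
    Windows machine is usually cp1252.
    """
    with open(path_in, "r", encoding=source_encoding) as f:
        text = f.read()
    with open(path_out, "w", encoding="utf-8") as f:
        f.write(text)

# Demonstration on a throwaway file containing a non-ASCII character
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "in.csv")
dst = os.path.join(workdir, "out.csv")
with open(src, "w", encoding="cp1252") as f:
    f.write("café,1\n")
to_utf8(src, dst)
with open(dst, "rb") as f:
    converted = f.read()
print(converted)  # b'caf\xc3\xa9,1\n'
```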

  • This is a tidy solution. Take care to ensure that none of the columns has a reserved keyword as its name - e.g., if the CSV file has a column called order (for order numbers, say), change it to order_num. Also, remember to change column types as required. – GT. Sep 15 '16 at 23:58
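Screening the header for such collisions can be automated. A hedged sketch; the keyword set below is a small illustrative sample, not PostgreSQL's full reserved-word list, and the _col suffix is an arbitrary choice:

```python
# Illustrative subset of SQL reserved words - extend as needed.
RESERVED = {"order", "group", "user", "table", "select", "where"}

def safe_headers(headers):
    """Append a suffix to any header that collides with a reserved word."""
    return [h + "_col" if h.lower() in RESERVED else h for h in headers]

print(safe_headers(["id", "order", "amount"]))  # ['id', 'order_col', 'amount']
```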

For a single table, I did it very simply, quickly, and online through one of the many good converters that can be found on the web. Just google "convert csv to sql online" and choose one.
