For loading huge amounts of data into MySQL, LOAD DATA INFILE is by far the fastest option. Unfortunately, while it can be used in a way that works like INSERT IGNORE or REPLACE, ON DUPLICATE KEY UPDATE is not supported.
However, ON DUPLICATE KEY UPDATE has advantages over REPLACE. The latter performs a DELETE followed by an INSERT when a duplicate exists, which brings overhead in key management. Also, auto-increment ids do not stay the same after a REPLACE.
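The difference can be demonstrated with SQLite as a stand-in for MySQL (a minimal sketch: SQLite's INSERT OR REPLACE mirrors MySQL's REPLACE delete-and-reinsert behavior, and its ON CONFLICT ... DO UPDATE upsert, available in SQLite 3.24+, mirrors ON DUPLICATE KEY UPDATE; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT,"
             " k TEXT UNIQUE, v TEXT)")
conn.execute("INSERT INTO t (k, v) VALUES ('a', 'one')")  # row gets id 1

# REPLACE semantics: the conflicting row is deleted and a fresh one
# inserted, so the auto-increment id changes from 1 to 2.
conn.execute("INSERT OR REPLACE INTO t (k, v) VALUES ('a', 'two')")
replaced_id = conn.execute("SELECT id FROM t WHERE k = 'a'").fetchone()[0]

# Upsert semantics: the existing row is updated in place, so its id is kept.
conn.execute("INSERT INTO t (k, v) VALUES ('a', 'three')"
             " ON CONFLICT(k) DO UPDATE SET v = excluded.v")
upserted_id = conn.execute("SELECT id FROM t WHERE k = 'a'").fetchone()[0]

print(replaced_id, upserted_id)  # 2 2
```

This is exactly why ON DUPLICATE KEY UPDATE is preferable when other tables reference the auto-increment id.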
How can ON DUPLICATE KEY UPDATE be emulated when using LOAD DATA INFILE?
These steps can be used to emulate the functionality:
1) Create a new temporary table.

CREATE TEMPORARY TABLE temporary_table LIKE target_table;
2) Optionally, drop all indices from the temporary table to speed things up.

SHOW INDEX FROM temporary_table;
DROP INDEX `PRIMARY` ON temporary_table;
DROP INDEX `some_other_index` ON temporary_table;
3) Load the CSV into the temporary table.

LOAD DATA INFILE 'your_file.csv'
INTO TABLE temporary_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(field1, field2);
4) Copy the data over using ON DUPLICATE KEY UPDATE.

SHOW COLUMNS FROM target_table;
INSERT INTO target_table
SELECT * FROM temporary_table
ON DUPLICATE KEY UPDATE field1 = VALUES(field1), field2 = VALUES(field2);
5) Remove the temporary table.
DROP TEMPORARY TABLE temporary_table;
Using SHOW INDEX FROM and SHOW COLUMNS FROM, this process can be automated for any given table.