How to delete many rows from a large table in MySQL
If you want to delete many rows from a large table, you may exceed the lock table size for an InnoDB table. To avoid this problem, or simply to minimize the time that the table remains locked, the following strategy (which does not use DELETE at all) might be helpful:
- Select the rows not to be deleted into an empty table that has the same structure as the original table:
CREATE TABLE t_copy LIKE t;
INSERT INTO t_copy SELECT * FROM t WHERE ... ;
See also:
How to Copy Table in MySQL
- Use RENAME TABLE to atomically move the original table out of the way and rename the copy to the original name:
RENAME TABLE t TO t_old, t_copy TO t;
- Drop the original table:
DROP TABLE t_old;
Note that foreign keys that point to the renamed table are not automatically updated. In such cases, you must drop and re-create the foreign keys for them to function properly.
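For example, a child table's foreign key that referenced t typically ends up pointing at the renamed t_old after the swap rather than at the new t. A minimal sketch of re-pointing it, where the child table, column, and constraint names are hypothetical:
-- drop the constraint that still references the old table, then re-create it against the new t
ALTER TABLE child DROP FOREIGN KEY fk_child_t;
ALTER TABLE child ADD CONSTRAINT fk_child_t FOREIGN KEY (t_id) REFERENCES t (id);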
If you need a smaller table for development and want to keep the foreign keys:
-- copy the structure and a small sample of rows
create table contacts_new like contacts;
insert into contacts_new select * from contacts limit 88;
-- empty the original table and reload only the sample
truncate table contacts;
insert into contacts select * from contacts_new;
drop table contacts_new;
Sample procedure to reduce table size (a sketch of these steps in SQL follows the list):
- Back up the original table to a new table called original_backup.
- Create a new table original_new with the same structure.
- Copy either 1M or 100M records from the original table to original_new, depending on how much data you want to keep, and record the maximum id copied.
- Disable foreign key checking for the session.
- Copy again from the original table to original_new for those ids larger than the recorded maximum id.
- Truncate the original table.
- Copy the data from original_new back to the original table.
- Re-enable foreign key checking.
- Drop the original_new table.
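Here is a minimal sketch of those steps, assuming the table is called original and has an auto-increment primary key id; the table names, the 1M row count, and the @max_id variable are illustrative, not part of the procedure above:
-- back up the original table
CREATE TABLE original_backup LIKE original;
INSERT INTO original_backup SELECT * FROM original;
-- create an empty copy and keep, for example, the newest 1M rows
CREATE TABLE original_new LIKE original;
INSERT INTO original_new SELECT * FROM original ORDER BY id DESC LIMIT 1000000;
SELECT MAX(id) INTO @max_id FROM original_new;  -- record the maximum id copied
-- pick up rows inserted while the copy was running, then swap the data back
SET foreign_key_checks = 0;
INSERT INTO original_new SELECT * FROM original WHERE id > @max_id;
TRUNCATE TABLE original;
INSERT INTO original SELECT * FROM original_new;
SET foreign_key_checks = 1;
DROP TABLE original_new;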
No other sessions can access the tables involved while RENAME TABLE executes, so the rename operation is not subject to concurrency problems.
Another way to remove many rows from a large table is to delete in batches. If you do not need to join tables, you can use the LIMIT row_count option of DELETE to tell the server the maximum number of rows to delete before control is returned to the client. This ensures that a given DELETE statement does not take too much time. You can simply repeat the DELETE statement until the number of affected rows is less than the LIMIT value.
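A minimal sketch of that batched loop, wrapped in a stored procedure; big_table and some_condition stand for your own table and filter, and the 10000 batch size is arbitrary:
DELIMITER $$
CREATE PROCEDURE delete_in_batches()
BEGIN
  REPEAT
    -- delete at most 10000 matching rows per round
    DELETE FROM big_table WHERE some_condition LIMIT 10000;
  UNTIL ROW_COUNT() < 10000 END REPEAT;
END$$
DELIMITER ;
CALL delete_in_batches();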
If you need to join tables, you cannot use the LIMIT row_count option with DELETE, because ORDER BY and LIMIT are not allowed in a multiple-table DELETE. In that case you can turn to a temporary table that collects the ids to delete in each batch:
DELIMITER $$
CREATE PROCEDURE `new_procedure` ()
BEGIN
  CREATE TEMPORARY TABLE IF NOT EXISTS tmp (id INT, INDEX USING BTREE (id)) ENGINE = MEMORY;
  TRUNCATE TABLE tmp;
  REPEAT
    -- collect a batch of ids to delete
    INSERT INTO tmp SELECT id FROM big_table WHERE some_condition LIMIT 8888;
    -- delete the matching rows from big_table and empty tmp in the same statement
    DELETE LOW_PRIORITY s, tmp FROM big_table s JOIN tmp ON tmp.id = s.id;
  UNTIL ROW_COUNT() = 0 END REPEAT;
END$$
DELIMITER ;
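To run it once it has been created (the CALL and the optional cleanup below are just an illustration):
CALL new_procedure();
DROP PROCEDURE new_procedure;  -- optional: remove the procedure after the purge is done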
Another way is to delete one row at a time, without using a temporary table. As in the previous example, the REPEAT loop below has to run inside a stored procedure:
repeat
select id into @id from big_table where some_condition limit 1;
delete b from big_table b where b.id=@id;
until row_count()=0
end repeat;
You can use a subquery to do this too. In general, you cannot modify a table and select from the same table in a subquery, but you can if the subquery for the modified table appears in the FROM clause, because it is then materialized as a derived table:
delete t from transit.tmp t join ( select contactid from transit.tmp limit 8 ) t2 on t.contactid = t2.contactid;
You can compare these approaches and choose the one that best suits your case.