Saturday, November 13, 2010

mysql deadlocks with concurrent inserts

It is possible to cause deadlocks in MySQL (InnoDB) on concurrent insert statements, without there being any transactions in progress. Deadlocks are possible even when the inserts don't collide on any key.

The deadlocks occur due to gap locking done by MySQL. There are several reasons for gap locking, and in this particular case it has to do with preserving a unique key constraint on an index. The situation presents itself this way: there is a unique key constraint on a column and we are doing an insert. MySQL has to make sure that the lock it takes is sufficient to prevent another concurrent insert from adding a record with the same key, which would break the unique key constraint.

The MySQL InnoDB engine performs row-level locking on inserts. If column A has a unique key constraint, and we are inserting the value "bbb" for column A, MySQL needs to lock the gap in the index between the two existing records where "bbb" will be inserted.
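As a rough sketch of that lookup (plain Python with the standard `bisect` module — an illustration of the idea, not InnoDB internals), finding the gap a new key falls into is just a sorted-index search:

```python
import bisect

# Existing entries in the unique 'name' index, in index order.
index = ["ggg", "jjj"]

def gap_for(key):
    """Return the (lower, upper) index entries bounding the gap
    where `key` would be inserted; None marks an open end."""
    i = bisect.bisect_left(index, key)
    lower = index[i - 1] if i > 0 else None
    upper = index[i] if i < len(index) else None
    return (lower, upper)

print(gap_for("bbb"))  # (None, 'ggg') -- the gap below the first entry
print(gap_for("iii"))  # ('ggg', 'jjj')
```

It is this gap, not just the new row, that the insert must protect against concurrent writers.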

To illustrate the deadlock, let us start with a table schema:

CREATE TABLE vegetable (
   id bigint(10) NOT NULL auto_increment,
   name varchar(255) NOT NULL,
   PRIMARY KEY (id),
   UNIQUE KEY uk_name (name)
) ENGINE=InnoDB;

Let us assume the existence of these records in the table, and look at them in 'name' index order:

id name
10 ggg
05 jjj

Now, imagine two concurrent connections executing the following inserts in the following order:

Connection 1:

insert ignore into vegetable values(null, "ppp");

For this insert to proceed, connection 1 will lock the gap between "jjj" and "ppp" in the name index.

Connection 2:

insert ignore into vegetable values (null,"iii");

This will require locking the gap after "ggg", up to "iii". Since the lock held by connection 1 does not span this gap, the insert succeeds.

insert ignore into vegetable values (null, "mmm");

This needs to lock the gap after "jjj", up to "mmm". Since connection 1 already holds a lock between "jjj" and "ppp", which spans the lock connection 2 is attempting to take, this insert blocks.

Connection 1:

insert ignore into vegetable values (null, "hhh");

This requires the gap lock between "ggg" and "hhh". It blocks because it falls within the lock ["ggg" to "iii"] held by connection 2.

Thus we have both connections blocked on each other. This is the deadlock.
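The four steps can be replayed with a toy lock table (a Python sketch of the idea, not InnoDB's actual lock manager — `try_insert`, `held`, and `waiting` are all invented for illustration): each insert requests a lock on the gap it lands in, a request blocks if another connection holds an overlapping gap, and the deadlock is each connection waiting on the other.

```python
import bisect

# Existing entries in the unique 'name' index, in index order.
INDEX = ["ggg", "jjj"]

def requested_gap(key):
    """The gap this insert must lock: (preceding index entry, key)."""
    i = bisect.bisect_left(INDEX, key)
    lower = INDEX[i - 1] if i > 0 else ""
    return (lower, key)

def overlaps(a, b):
    # Open intervals overlap if each starts before the other ends.
    return a[0] < b[1] and b[0] < a[1]

held = {}     # connection -> gap locks it holds
waiting = {}  # connection -> connection it is blocked on

def try_insert(conn, key):
    gap = requested_gap(key)
    for other, locks in held.items():
        if other != conn and any(overlaps(gap, g) for g in locks):
            waiting[conn] = other   # blocked on the lock holder
            return False
    held.setdefault(conn, []).append(gap)
    return True

assert try_insert("c1", "ppp")      # 1) locks ("jjj", "ppp")
assert try_insert("c2", "iii")      # 2) locks ("ggg", "iii") -- no overlap
assert not try_insert("c2", "mmm")  # 3) blocks on c1's ("jjj", "ppp")
assert not try_insert("c1", "hhh")  # 4) blocks on c2's ("ggg", "iii")
print(waiting)  # {'c2': 'c1', 'c1': 'c2'} -- a wait cycle: deadlock
```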

Here is a diagram. Statements on the left are issued by connection 2, statements on the right by connection 1. The order of execution is denoted by the numbers 1) through 4).

Connection 2                        Connection 1

--------------------------- ggg
 |                           |
 | gap                       | gap
 | 2) iii  (lock granted)    | 4) hhh --> blocks (deadlock)
 |                           |
--------------------------- jjj
 |                           |
 | gap                       | gap
 | 3) mmm --> blocks         | 1) ppp  (lock granted)

You can avoid this if you can order the inserts on each connection in the same direction. The deadlock happens because connection 2 inserts in ascending order of the index while connection 1 inserts in descending order.
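For example, if each connection sorts its batch on the unique key before inserting, every connection walks the index in the same ascending direction. A minimal sketch (`insert_batch` and the `execute` callable are invented for illustration; in practice `execute` would be your driver's statement call):

```python
def insert_batch(execute, names):
    """Insert `names` in ascending key order.

    Sorting on the unique key means every connection takes its gap
    locks low-to-high through the index, so two connections can no
    longer grab overlapping gaps in opposite directions.
    """
    for name in sorted(names):
        execute("insert ignore into vegetable values (null, %s)", (name,))

# Example: both batches from the article now proceed low-to-high.
issued = []
insert_batch(lambda sql, args: issued.append(args[0]), ["ppp", "hhh"])
insert_batch(lambda sql, args: issued.append(args[0]), ["iii", "mmm"])
print(issued)  # ['hhh', 'ppp', 'iii', 'mmm']
```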

If you can't do this for practical reasons, you can retry the operation. Unless there is a high level of concurrency under heavy database load, where each retried transaction takes a significant hit, a simple retry should work.
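A minimal retry wrapper might be sketched like this (`DeadlockError` and `with_retry` are invented names; your driver raises its own exception for MySQL error 1213, "Deadlock found when trying to get lock"):

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver's deadlock exception (MySQL error 1213)."""

def with_retry(operation, attempts=3, base_delay=0.05):
    """Run `operation`, retrying on deadlock with a small randomized
    backoff; re-raise if it still deadlocks after `attempts` tries."""
    for attempt in range(attempts):
        try:
            return operation()
        except DeadlockError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Example: an operation that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "ok"

print(with_retry(flaky_insert))  # ok
```

The randomized backoff matters: if both connections retry immediately on the same schedule, they can simply deadlock again.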


Anonymous said...

Nice explanation. This mysql design choice has some serious consequences though ...

Anonymous said...

Good explanation! Helped me understand how two insert operations could cause the issue.

Anonymous said...

Could you explain how to order the inserts at the connection level, and is there any way to do this? I have a serious issue in my application with this same pattern.

thushara said...

You could order the inserts by sorting them on the key first.

Anonymous said...

In MySQL 5.6, things seem to act differently; the reference manual says:
Prior to inserting a row, a type of gap lock called an insert intention gap lock is set. Multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose there are index records with values 4 and 7; separate transactions inserting 5 and 6 do not block each other, as the rows are non-conflicting.

brainodude said...

Can it also happen if the column doesn't have a unique index?

thushara said...

Without a unique index, I'm not sure how a deadlock can happen on an insert.

Anonymous said...

If each connection were to commit after an insert statement, would that avoid the deadlock? In other words, wouldn't the gap lock be released after the commit?

thushara said...

If each connection commits after each insert, this won't happen if there are just two connections. But if there are more than two connections, one can imagine several insert statements executing together (before proceeding to the corresponding commit), and then deadlocking in a similar way.

Besides, it is of course not realistic to commit after each insert on a busy server; it would be too slow.

Anonymous said...

If I have a primary key and a unique key, I effectively have two unique keys, right?

I cannot order my inserts by both, the primary key and the unique key.

Does that mean I am screwed? I cannot even think of a practical situation where this would not be an issue.

ashish said...

Very well explained, Thanks!

Anonymous said...

I'm running MySQL 5.7.17 and I cannot reproduce this with a normal insert. The problem only occurs if INSERT ... ON DUPLICATE KEY UPDATE is used.

Anonymous said...

Hi, very good explanation.
I have a table with a composite unique key (uniqueness on a combination of 3 columns).
table structure:
CREATE TABLE `userreview` (
`cid` bigint(12) unsigned NOT NULL,
`conid` bigint(12) unsigned NOT NULL,
`userid` bigint(12) unsigned NOT NULL,
`flag` int(12) unsigned NOT NULL DEFAULT '0',
UNIQUE KEY `idx_userreview_cid_conid_userid` (`cid`,`conid`,`userid`)
) ;
Below is the stored procedure:
CREATE PROCEDURE `testdeadlock`(IN pconid BIGINT(12), IN pcid BIGINT(12), IN puserid BIGINT(12), IN pflag INT(1))
BEGIN
  IF pflag = 1 THEN
    INSERT IGNORE INTO userreview(cid,conid,userid,flag) VALUES (pcid, pconid, puserid, pflag);
  ELSEIF pflag = 0 THEN
    INSERT IGNORE INTO userreview(cid,conid,userid) VALUES (pcid, pconid, puserid);
  ELSEIF pflag = 2 THEN
    INSERT IGNORE INTO userreview(cid,conid,userid) VALUES (pcid, pconid, puserid);
  END IF;
END

When this procedure is executed from the client side (using node.js), it sometimes deadlocks. Even though there is a unique index, I am still facing the deadlock problem.
Is there any suggestion to avoid the deadlock?

Anonymous said...

I am getting this type of deadlock, but I don't know the reason, so please help:

javax.persistence.PersistenceException: ERROR executing DML bindLog[] error[Deadlock found when trying to get lock; try restarting transaction]
at com.avaje.ebeaninternal.server.persist.dml.DmlBeanPersister.execute(
at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5$$anonfun$apply$6.apply(Action.scala:112)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5.apply(Action.scala:111)
at play.api.mvc.Action$$anonfun$apply$2$$anonfun$apply$5.apply(Action.scala:110)
at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:110)
at play.api.mvc.Action$$anonfun$apply$2.apply(Action.scala:103)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
at java.lang.reflect.Constructor.newInstance(
at com.mysql.jdbc.Util.handleNewInstance(
at com.mysql.jdbc.Util.getInstance(
at com.mysql.jdbc.SQLError.createSQLException(
at com.mysql.jdbc.MysqlIO.checkErrorPacket(
at com.mysql.jdbc.MysqlIO.checkErrorPacket(
at com.mysql.jdbc.MysqlIO.sendCommand(
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(
at com.mysql.jdbc.ConnectionImpl.execSQL(
at com.mysql.jdbc.PreparedStatement.executeInternal(
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(
at com.mysql.jdbc.PreparedStatement.executeLargeUpdate(
at com.mysql.jdbc.PreparedStatement.executeUpdate(
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(

... 59 common frames omitted