However, if contention for the lock is heavy, or the thread holding it spends a long time executing the critical section, a spin lock is a poor fit: a waiting thread occupies the CPU doing useless work the whole time it spins. With many threads competing for one lock, acquisition takes so long that spinning costs more than blocking and suspending the thread would, while other threads that actually need the CPU cannot get it, so CPU time is simply wasted. In that situation we should disable spinning.
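As an illustration of this trade-off, here is a minimal sketch (not from the original text) of the usual "spin-then-yield" compromise: spin a bounded number of times on the fast path, then start giving up the CPU instead of burning it. `hybrid_spinlock` and `SPIN_LIMIT` are hypothetical names chosen for this example; a production implementation would typically park the thread (e.g. on a futex or condition variable) rather than merely yielding.

```cpp
#include <atomic>
#include <thread>

/* Hypothetical illustration: a test-and-set lock that stops pure spinning
 * once contention looks heavy. SPIN_LIMIT is an arbitrary bound. */
class hybrid_spinlock {
public:
    void lock() {
        int spins = 0;
        /* exchange() returns the previous value: false means we got the lock. */
        while (flag.exchange(true, std::memory_order_acquire)) {
            if (++spins >= SPIN_LIMIT) {
                /* Contention is heavy: hand the CPU to another thread
                 * instead of doing useless work. */
                std::this_thread::yield();
            }
        }
    }

    void unlock() {
        flag.store(false, std::memory_order_release);
    }

private:
    static constexpr int SPIN_LIMIT = 100;
    std::atomic<bool> flag{false};
};
```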
```cpp
#include <atomic>
#include <iostream>

class mcs_lock {
public:
    mcs_lock() : tail(nullptr) {}

    void lock() {
        std::cout << "lock\n";
        /* Initialise our queue node, then atomically place ourselves at the
         * end of the queue: */
        qnode.locked.store(true, std::memory_order_relaxed);
        qnode.next.store(nullptr, std::memory_order_relaxed);
        const auto predecessor = tail.exchange(&qnode, std::memory_order_acq_rel);
        /*
         * If tail was nullptr, predecessor is nullptr, thus nobody has been
         * waiting, and we've acquired the lock.
         * Otherwise, we need to place ourselves in the queue and spin:
         */
        if (predecessor != nullptr) {
            /*
             * The lock is taken. There are two cases:
             * 1. Nobody else is waiting, and tail points at our own qnode, or
             * 2. One or more CPUs are waiting, and tail is the tail of the queue.
             * Either way, we link ourselves to the tail of the queue:
             */
            predecessor->next.store(&qnode, std::memory_order_release);
            /* Now we can spin on a variable local to our own node: */
            while (qnode.locked.load(std::memory_order_acquire)) { }
        }
    }

    void unlock() {
        /* We are holding the lock, therefore qnode.next is our successor: */
        auto *successor = qnode.next.load(std::memory_order_acquire);
        if (successor == nullptr) {
            auto *expected = &qnode;
            /* No CPUs were waiting for the lock: reset tail to nullptr. */
            if (tail.compare_exchange_strong(expected, nullptr,
                                             std::memory_order_release,
                                             std::memory_order_relaxed)) {
                return;
            }
            /*
             * We could not reset tail, therefore qnode.next is out of sync with
             * tail: another CPU is in the middle of lock(), after the exchange
             * but before linking itself into the queue. We wait for that link.
             *
             * Re-reading qnode.next inside the loop is crucial: at the moment of
             * the first read the successor may still have been nullptr while a
             * new node was just arriving, and without the re-read we would spin
             * here forever.
             */
            while (successor == nullptr) {
                successor = qnode.next.load(std::memory_order_acquire);
            }
        }
        /* The other CPU has linked itself; all we need to do is wake it up as
         * the next-in-line: */
        successor->locked.store(false, std::memory_order_release);
    }

private:
    struct mcs_node {
        std::atomic<bool> locked{false};
        std::atomic<mcs_node *> next{nullptr};
    };

    std::atomic<mcs_node *> tail;
    /* One queue node per thread: each waiter spins only on its own node.
     * Note this also means a thread must not hold two mcs_locks at once. */
    inline static thread_local mcs_node qnode;  /* C++17 */
};
```
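To make the queueing behaviour concrete, here is a small usage sketch under the assumption that the class above compiles as shown (`main` and the iteration counts are mine, not the original author's). Several threads increment a shared counter under the `mcs_lock`; each waiter spins only on its own `qnode`, so the contended cache line holding `tail` is touched once per acquisition rather than on every spin iteration. Note that the debug trace in `lock()` prints on every acquisition.

```cpp
#include <thread>
#include <vector>
/* ... mcs_lock definition from above ... */

int main() {
    mcs_lock lock;
    long counter = 0;

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&] {
            for (int j = 0; j < 1000; ++j) {
                lock.lock();
                ++counter;          /* critical section */
                lock.unlock();
            }
        });
    }
    for (auto &t : workers) {
        t.join();
    }
    /* If the lock is correct, counter is exactly 4 * 1000 = 4000. */
    return counter == 4000 ? 0 : 1;
}
```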