Redis Internals: How Redis Implements an LRU Cache
By zxszcaijin
Originally published at blog.chinaunix.net
When Redis is used as a cache, the eviction policy determines how efficiently Redis uses its memory. In most scenarios we pick LRU (Least Recently Used) as the eviction policy. This article walks through how Redis implements its LRU policy, starting from the basics.
First, a quick primer on what LRU is (the following is from Wikipedia):
Discards the least recently used items first. This algorithm requires keeping track of what was used when, which is expensive if one wants to make sure the algorithm always discards the least recently used item. General implementations of this technique require keeping "age bits" for cache-lines and track the "Least Recently Used" cache-line based on age-bits. In such an implementation, every time a cache-line is used, the age of all other cache-lines changes.
In short, the item that has been used least recently is discarded first. Typical implementations attach "age bits" to each cached element to record how long it has been since the element was last accessed, and whenever eviction is needed, the elements that have gone unaccessed the longest are discarded.
To make the rest of the article easier to follow, let's first build a simple LRU cache ourselves. (This is the classic LeetCode exercise, re-implemented here in Python.) The cache must satisfy two requirements:
1. get(key) - If the element exists (values are always positive), move it to the head of the LRU list and return its value; otherwise return -1.
2. set(key, value) - If the key already exists, update its value and move it to the head of the LRU list; otherwise insert the key with the given value. If inserting the key would exceed the cache capacity, first delete the least recently used key according to the LRU policy.
Analysis
We store the elements (key-value pairs) in a doubly linked list and keep a hash table that maps each key to its node. This gives O(1) access to any key while keeping node insertion and removal cheap in the doubly linked list, so both get and set run in O(1).
Implementation (Python)
class Node:
    """Node of the doubly linked list, holding one key-value pair."""
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.pre = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}      # key -> Node, for O(1) lookup
        self.head = None   # most recently used
        self.end = None    # least recently used

    def get(self, key):
        if key in self.map:
            node = self.map[key]
            self.remove(node)
            self.setHead(node)
            return node.value
        else:
            return -1

    def getAllKeys(self):
        # Print the keys from most recently used to least recently used.
        tmpNode = self.head
        while tmpNode:
            print(tmpNode.key, tmpNode.value)
            tmpNode = tmpNode.next

    def remove(self, n):
        # Unlink node n from the doubly linked list.
        if n.pre:
            n.pre.next = n.next
        else:
            self.head = n.next
        if n.next:
            n.next.pre = n.pre
        else:
            self.end = n.pre

    def setHead(self, n):
        # Insert node n at the head (most recently used position).
        n.next = self.head
        n.pre = None
        if self.head:
            self.head.pre = n
        self.head = n
        if not self.end:
            self.end = self.head

    def set(self, key, value):
        if key in self.map:
            oldNode = self.map[key]
            oldNode.value = value
            self.remove(oldNode)
            self.setHead(oldNode)
        else:
            node = Node(key, value)
            if len(self.map) >= self.capacity:
                # Cache is full: evict the least recently used key (the tail).
                self.map.pop(self.end.key)
                self.remove(self.end)
            self.setHead(node)
            self.map[key] = node

def main():
    cache = LRUCache(100)
    cache.set('a', '1')
    cache.set('b', '2')
    cache.set('c', 3)
    cache.set('d', 4)
    # Walk the LRU list: d -> c -> b -> a
    cache.getAllKeys()
    # Update ('a', '1') to ('a', 5); the node moves from the tail to the head.
    cache.set('a', 5)
    # The LRU list becomes a -> d -> c -> b
    cache.getAllKeys()
    # Access the node with key 'c'; it moves to the head of the LRU list.
    cache.get('c')
    # The LRU list becomes c -> a -> d -> b
    cache.getAllKeys()

if __name__ == '__main__':
    main()
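Running main() as written, each getAllKeys() call prints one key and its value per line, from the most to the least recently used: first d, c, b, a; after set('a', 5) the order becomes a, d, c, b; and after get('c') it becomes c, a, d, b.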
With this simple introduction and implementation in hand, we now know what LRU is. Next, let's look at how the LRU algorithm is implemented inside Redis, and in which situations it can cause problems. Internally, Redis keeps the state built at startup in the global struct redisServer, for example:
struct redisServer {
    pid_t pid;                  /* Main process pid. */
    char *configfile;           /* Absolute config file path, or NULL */
    ...
    unsigned lruclock:LRU_BITS; /* Clock for LRU eviction */
    ...
};
The field lruclock:LRU_BITS holds the server's global LRU clock, maintained since startup. The clock is refreshed once every 100 ms by the serverCron timer task (the frequency is controlled by hz; with the default hz=10 the task runs every 1000ms/10 = 100ms).
Let's look at how the LRU clock is implemented. serverCron refreshes it with:
server.lruclock = getLRUClock();
The getLRUClock function is defined as follows:
#define LRU_CLOCK_RESOLUTION 1000 /* LRU clock resolution in ms */
#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1) /* Max value of obj->lru */
/* Return the LRU clock, based on the clock resolution. This is a time
* in a reduced-bits format that can be used to set and check the
* object->lru field of redisObject structures. */
unsigned int getLRUClock(void) {
    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}
Since LRU_CLOCK_RESOLUTION is 1000 ms, one tick of lruclock is one second, so lruclock can cover at most (2**24-1)/3600/24 ≈ 194 days; once that span is exceeded, lruclock wraps around and starts again from zero.
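A quick sanity check of that figure (Python, illustrative only):
# With LRU_CLOCK_RESOLUTION = 1000 ms, one LRU clock tick is 1 second,
# so the 24-bit counter wraps after (2**24 - 1) seconds.
LRU_BITS = 24
seconds_until_wrap = (1 << LRU_BITS) - 1
print(seconds_until_wrap / 3600 / 24)  # -> about 194.2 days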
For the Redis server, server.lruclock is the global LRU clock; in addition, every redisObject carries its own lru field. By comparing an object's lru with the global server.lruclock, Redis can estimate how long the object has been idle and decide whether it is a good eviction candidate.
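Conceptually the comparison works like the Python sketch below. It is modelled on estimateObjectIdleTime in Redis's evict.c, but it is only an illustration, not the actual C implementation:
LRU_BITS = 24
LRU_CLOCK_MAX = (1 << LRU_BITS) - 1
LRU_CLOCK_RESOLUTION = 1000  # milliseconds per LRU clock tick

def estimate_idle_time_ms(obj_lru, server_lruclock):
    """Approximate how long an object has been idle, in milliseconds.

    If the global clock has wrapped around since the object was last
    touched, account for the wrap instead of producing a negative value.
    """
    if server_lruclock >= obj_lru:
        ticks = server_lruclock - obj_lru
    else:
        ticks = server_lruclock + (LRU_CLOCK_MAX - obj_lru)
    return ticks * LRU_CLOCK_RESOLUTION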
The object that stores the value of a Redis key:
typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS; /* LRU time (relative to server.lruclock) or
                            * LFU data (least significant 8 bits frequency
                            * and most significant 16 bits decreas time). */
    int refcount;
    void *ptr;
} robj;
So when is the lru field updated? Every time the key is accessed its lru is refreshed, which in effect moves the key to the front of the LRU ordering and keeps it from being evicted. Here is the relevant code:
/* Low level key lookup API, not actually called directly from commands
* implementations that should instead rely on lookupKeyRead(),
* lookupKeyWrite() and lookupKeyReadWithFlags(). */
robj *lookupKey(redisDb *db, robj *key, int flags) {
    dictEntry *de = dictFind(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);

        /* Update the access time for the ageing algorithm.
         * Don't do it if we have a saving child, as this will trigger
         * a copy on write madness. */
        if (server.rdb_child_pid == -1 &&
            server.aof_child_pid == -1 &&
            !(flags & LOOKUP_NOTOUCH))
        {
            if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
                unsigned long ldt = val->lru >> 8;
                unsigned long counter = LFULogIncr(val->lru & 255);
                val->lru = (ldt << 8) | counter;
            } else {
                val->lru = LRU_CLOCK();
            }
        }
        return val;
    } else {
        return NULL;
    }
}
Next, let's analyze how key eviction is implemented and which policies are available:
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
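For reference, running Redis as a pure LRU cache usually only takes a couple of redis.conf directives; the values below are purely illustrative, not recommendations:
maxmemory 100mb
maxmemory-policy allkeys-lru
# Number of keys sampled per eviction round (see the analysis below); default is 5.
maxmemory-samples 5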
Here we will not discuss LFU, the TTL-based policy, or the noeviction case; we only look at how eviction is implemented in the LRU scenarios. (LFU and TTL will be analyzed in detail in the next article.)
LRU eviction happens in the following scenarios:
1. Active eviction
1.1 The periodic task serverCron regularly cleans up expired keys.
2. Passive eviction
2.1 When a write finds that memory is insufficient, freeMemoryIfNeeded is called to free part of the memory.
2.2 Whenever a key is accessed and found to be expired, the memory held by that key is freed on the spot.
Let's first analyze the active eviction scenario:
Every 1000/hz ms, serverCron calls databasesCron to detect and evict expired keys.
void databasesCron(void) {
    /* Expire keys by random sampling. Not required for slaves
     * as master will synthesize DELs for us. */
    if (server.active_expire_enabled && server.masterhost == NULL)
        activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
    ...
}
Active eviction is implemented by activeExpireCycle; the logic is roughly:
1. Iterate over at most 16 databases (defined by the macro CRON_DBS_PER_CALL, default 16).
2. Randomly sample 20 keys that have an expire time set (defined by ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP, default 20).
3. If a key has expired, free the memory it holds, or hand it to the lazy-free queue.
4. If the elapsed time exceeds the allowed limit, at most 25 ms (timelimit = 1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/server.hz/100, with ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC=25 and server.hz defaulting to 10), this eviction round ends and returns; otherwise go to step 5.
5. If more than 5 of the sampled keys in this database (ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP/4 = 5) were actually expired, go back to step 2 for the same database; otherwise move on to the next database and go to step 2.
6. Once all databases have been visited, the cycle ends.
The original post includes a flow chart of this process. Note: where the chart says that at least 5% of the sampled keys were actually expired, it should say at least 25%; iteration++ is incremented once for each of the 20 sampled keys.
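Put together, the control flow corresponds roughly to the following Python sketch. It is a deliberate simplification for illustration; the real activeExpireCycle in the C source handles many more details (fast versus slow cycles, the iteration counter, per-database cursors, and so on):
import random
import time

CRON_DBS_PER_CALL = 16
ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP = 20
ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25

def active_expire_cycle(expires_per_db, hz=10):
    """expires_per_db: one dict per database, mapping key -> absolute expire time (s)."""
    # Time budget for the whole cycle, converted from microseconds to seconds
    # (25 ms with the default hz=10, 250 ms in the worst case hz=1).
    timelimit = (1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC // hz // 100) / 1000000
    start = time.monotonic()
    for expires in expires_per_db[:CRON_DBS_PER_CALL]:
        while expires:
            sample = random.sample(list(expires),
                                   min(ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP, len(expires)))
            now = time.time()
            expired = 0
            for key in sample:
                if expires[key] <= now:
                    del expires[key]        # model of freeing the expired key
                    expired += 1
            if time.monotonic() - start > timelimit:
                return                      # time budget exhausted, stop the cycle
            # Keep scanning this database only if more than 25% of the sample
            # (more than 5 of the 20 keys) was actually expired.
            if expired <= ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP / 4:
                break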
Passive eviction - memory is insufficient, so freeMemoryIfNeeded is called to free memory
This path is implemented as follows.
The memory-eviction logic in processCommand:
/* Handle the maxmemory directive.
 *
 * First we try to free some memory if possible (if there are volatile
 * keys in the dataset). If there are not the only thing we can do
 * is returning an error. */
if (server.maxmemory) {
    int retval = freeMemoryIfNeeded();
    /* freeMemoryIfNeeded may flush slave output buffers. This may result
     * into a slave, that may be the active client, to be freed. */
    if (server.current_client == NULL) return C_ERR;

    /* It was impossible to free enough memory, and the command the client
     * is trying to execute is denied during OOM conditions? Error. */
    if ((c->cmd->flags & CMD_DENYOOM) && retval == C_ERR) {
        flagTransaction(c);
        addReply(c, shared.oomerr);
        return C_OK;
    }
}
Before executing each command, freeMemoryIfNeeded is called to check memory usage and free memory if needed; if not enough memory can be freed, an OOM error is returned directly to the requesting client.
The concrete steps are:
1. Get the amount of memory Redis has currently used, mem_reported.
2. If mem_reported < server.maxmemory, return OK; otherwise set mem_used = mem_reported and go to step 3.
3. Iterate over all slaves of this Redis instance and subtract each slave's client output buffer from mem_used.
4. If AOF is enabled, subtract the space taken by the AOF buffers from mem_used: sdslen(server.aof_buf) + aofRewriteBufferSize().
5. If mem_used < server.maxmemory, return OK; otherwise go to step 6.
6. If the memory policy is noeviction, return an error; otherwise go to step 7.
7. For the LRU policies: with volatile-lru, each round randomly samples maxmemory_samples (default 5) keys from the keys that have an expire set and evicts the one with the largest idle time; with allkeys-lru, the sampling is done over the whole keyspace, again taking maxmemory_samples (default 5) keys per round and evicting the one with the largest idle time.
8. If memory usage still exceeds server.maxmemory after freeing, keep evicting until the remaining usage is below server.maxmemory.
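The sampling in step 7 can be pictured with this simplified Python sketch. It is illustrative only; the real freeMemoryIfNeeded in evict.c additionally maintains a small pool of the best eviction candidates found across rounds:
import random

def evict_until_under_limit(data, idle_time_ms, mem_used, maxmemory, samples=5):
    """data: the candidate keyspace (all keys for allkeys-lru, only keys with
    a TTL for volatile-lru), as a dict of key -> value.
    idle_time_ms: function returning the approximated idle time of a key.
    Evicts keys until the (modelled) memory usage drops below maxmemory."""
    while mem_used > maxmemory and data:
        sample = random.sample(list(data), min(samples, len(data)))
        # Evict the sampled key that has been idle the longest.
        victim = max(sample, key=idle_time_ms)
        mem_used -= len(str(data[victim]))   # crude stand-in for the bytes freed
        del data[victim]
    return mem_used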
Passive eviction - whenever a key is accessed and found to be expired, the memory held by that key is freed immediately:
Every key access calls expireIfNeeded to check whether the key has expired; if it has, the key is freed and null is returned, otherwise the key's value is returned.
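In outline, the lazy check behaves like this sketch (again an illustration, not the actual expireIfNeeded):
import time

def lookup_with_lazy_expire(data, expires, key):
    """data: key -> value; expires: key -> absolute expire time in seconds."""
    if key in expires and expires[key] <= time.time():
        # The key has expired: free it now and report it as missing.
        del data[key]
        del expires[key]
        return None
    return data.get(key)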
Summary
1. When Redis is used as a cache, LRU is commonly chosen as the eviction policy, so if too many keys expire at the same time, the active expiration cycle that Redis kicks off can run for a long time (up to 250 ms), which can in turn add up to 250 ms of latency to application requests.
timelimit = 1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC / server.hz / 100 (in microseconds)
ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25
server.hz >= 1 (default 10)
therefore timelimit <= 250 ms
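Plugging in the default and the worst-case values of hz (a quick check in Python):
ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25
for hz in (10, 1):
    timelimit_us = 1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC // hz // 100
    print(hz, timelimit_us / 1000, "ms")   # hz=10 -> 25.0 ms, hz=1 -> 250.0 ms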
2. When memory usage is too high and Redis runs out of memory, the passive eviction path kicks in, which can also make application requests time out.
3. Tuning the hz parameter appropriately controls how often active eviction runs (and how long each run is allowed to take), which effectively mitigates the timeout problem described above when too many keys expire at once.