Redis + phpredis losing keys — memory overflow?

New to Redis; I'm testing it with PHP on a small box with only 512 MB of RAM, using the phpredis client.

I'm inserting 3M integer values into a set, but the set's sCard() method only reports a count of about 270k.

Is this a memory limit I'm hitting? How can I check for errors when inserting? (A rough error-checking sketch follows the function below.)

The application: two binary files store sequences of four-byte unsigned integers, and I want to load them into Redis for a fast in-memory diff. This is my insertion method (error-checking lines skipped):

function loadToRedis( $id, $filename){
    $length = filesize( $filename) / 4; // how many ids are there? Each is 4 bytes.
    $divisor = 100; // how many ids to insert in a single batch
    printf( "Length of %s: %d 4-byte numbers\n", $filename, $length);
    $FP = fopen( $filename, 'rb'); // binary-safe read mode
    for( $b=0; $b<=floor( $length/ $divisor); $b++){
        $set = array( $id); // first element is the key name passed to sAdd
        for( $i=$b*$divisor; $i < min(( $b+1)*$divisor, $length); $i++) {
            $bytes = unpack( "L", fread( $FP, 4)); // unsigned 32-bit integer, machine byte order
            array_push( $set, array_shift( $bytes));
        }
        if( count( $set) > 1){ // guard against an empty final batch (sAdd needs at least one member)
            call_user_func_array( array( $this->redis, 'sAdd'), $set);
        }
    }
    fclose( $FP);
    printf( "%d items in the list named %s\n", $this->redis->sCard( $id), $id);
}
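
On the error-checking question: phpredis normally returns false when the server rejects a command (for example with an OOM error once maxmemory is reached), and the server's message can then be read back with getLastError(). A minimal sketch along those lines, using the same $this->redis handle; the member values are only illustrative:

// Detect a rejected sAdd and report the server-side error.
// phpredis typically returns false on a command error (e.g. the
// "OOM command not allowed..." reply once maxmemory is exceeded).
$added = $this->redis->sAdd($id, 12345, 67890);
if ($added === false) {
    printf("sAdd failed: %s\n", $this->redis->getLastError());
}

// It also helps to confirm what memory ceiling the server enforces:
print_r($this->redis->config('GET', 'maxmemory'));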

So, after reading the first of the two 3M-value files, the first set ends up with only about 270k members, and the second file seems to miss Redis entirely:

Length of /var/www/.../dat/OLD_26750264: 3123758 4-byte numbers
270457 items in the list named OLD_26750264
Length of /var/www/.../dat/NEW_26750264: 3125000 4-byte numbers
0 items in the list named NEW_26750264

Redis INFO output immediately after this:

redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:8416
uptime_in_seconds:1471232
uptime_in_days:17
lru_clock:1618016
used_cpu_sys:387.21
used_cpu_user:414.13
used_cpu_sys_children:0.03
used_cpu_user_children:0.32
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:19997864
used_memory_human:19.07M
used_memory_rss:22544384
used_memory_peak:27022288
used_memory_peak_human:25.77M
mem_fragmentation_ratio:1.13
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1379328354
bgrewriteaof_in_progress:0
total_connections_received:153
total_commands_processed:16073
expired_keys:0
evicted_keys:0
keyspace_hits:99
keyspace_misses:83
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:835
vm_enabled:0
role:master
db0:keys=2,expires=0
I figured it out: maxmemory kicked in much sooner than I expected. In further testing, with maxmemory = 40mb only 1,048,600 integer values would fit into one set. That averages out to 44.62 bytes per integer. Not very efficient.
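
Part of that per-integer cost is likely the set encoding: Redis keeps small integer-only sets in the compact intset encoding, but once a set grows past set-max-intset-entries (512 by default) it is converted to a hashtable, where each member carries tens of bytes of overhead. A rough way to inspect and tune this, assuming a phpredis connection in $redis (the key name is taken from the run above):

// Check how Redis is actually encoding the set:
// "intset" = compact integer array, "hashtable" = large per-member overhead.
echo $redis->object('encoding', 'OLD_26750264'), "\n";

// Raising the threshold keeps bigger integer-only sets in the compact encoding.
// Trade-off: slower membership checks, and it only applies to sets built after
// the change; an already-converted set stays a hashtable.
$redis->config('SET', 'set-max-intset-entries', '5000000');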