Redis Mass Data Test
Author: 曲文庆  Date: 2011-10-09 16:00
Outline:
- Redis mass data test
- Environment
- Test script
- Redis configuration
- Run time
- Results
- AOF export
- Restarting Redis
100000000 keys in Redis 2.2.12
Write 100,000,000 entries into Redis, using the integers 1 through 100000000 as keys and random UUIDs as values.
key:value format:
100000000:a47d8af2-09d3-4195-afd3-c2d8a094a614
Environment
CPU:Intel(R) Xeon(R) CPU E5620 @ 2.40GHz X 2
MEM:32G
DISK:300G SAS
Test script
#!/bin/bash
# (shebang changed from /bin/sh: the script uses bash-only constructs $[ ] and ((k++)))
log=~/redis_run.log
slog=~/redis_status.log
redis=/usr/local/redis/bin/redis-cli
cat /dev/null > $log
cat /dev/null > $slog
echo `date` >> $log
k=1
while [ $k -le 100000000 ]
do
    v=`cat /proc/sys/kernel/random/uuid`
    $redis set $k $v >> $log
    echo "$k:$v" >> $log
    y=$[ $k % 1000 ]
    if [ $y -eq 0 ] ; then
        echo -e "\n\n\n`date`" >> $slog
        $redis dbsize >> $slog
        $redis info >> $slog
    fi
    ((k++))
done
echo `date` >> $log
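The loop above forks one redis-cli process per key, which dominates the run time. A sketch of a much faster alternative (gen_resp is a hypothetical helper name; the `redis-cli --pipe` mass-insertion mode does not exist in 2.2.12 and requires a newer redis-cli): emit the SET commands as raw RESP protocol and feed them all to a single client.

```shell
#!/bin/bash
# Emit the first N SET commands as raw RESP protocol on stdout.
# ${#k}/${#v} are the byte lengths RESP requires before each argument.
gen_resp() {
    n=$1
    k=1
    while [ $k -le $n ]; do
        v=$(cat /proc/sys/kernel/random/uuid)
        printf '*3\r\n$3\r\nSET\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n' \
            ${#k} "$k" ${#v} "$v"
        k=$((k + 1))
    done
}

# Usage against a live server (newer redis-cli only):
# gen_resp 100000000 | redis-cli --pipe
```

This keeps one TCP connection open instead of paying a fork/exec and connection handshake per key.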
Redis configuration
daemonize yes
pidfile /var/run/redis.pid
port 6379
timeout 30
loglevel verbose
logfile /home/redis/logs/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /home/redis/rdbs
slave-serve-stale-data yes
maxmemory 30G
maxmemory-policy volatile-lru
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite no
slowlog-log-slower-than 10000
slowlog-max-len 1024
vm-enabled no
vm-swap-file /home/redis/redis.swap
vm-max-memory 30G
vm-page-size 32
vm-pages 134217728
vm-max-threads 16
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
activerehashing yes
The configuration enables both RDB and AOF persistence at the same time.
Run time
From Tue Sep 13 18:24:29 CST 2011 to Fri Sep 16 10:43:35 CST 2011, about 64 hours.
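A quick sanity check on throughput (my own arithmetic, not from the post): about 100 million writes in roughly 64 hours works out to only a few hundred SETs per second, which is consistent with forking a fresh redis-cli per key rather than with Redis itself being the bottleneck.

```shell
#!/bin/bash
# Sustained write rate over the ~64-hour run (integer arithmetic).
total_keys=100000000
run_secs=$(( 64 * 3600 ))                 # 230400 seconds
echo "$(( total_keys / run_secs )) SETs/s"   # roughly 434 writes/s
```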
Results
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:13954
uptime_in_seconds:231564
uptime_in_days:2
lru_clock:1590677
used_cpu_sys:6818.43
used_cpu_user:10344.37
used_cpu_sys_children:79348.24
used_cpu_user_children:9447.04
connected_clients:1
connected_slaves:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:15394548832
used_memory_human:14.34G
used_memory_rss:20111581184
mem_fragmentation_ratio:1.31
use_tcmalloc:0
loading:0
aof_enabled:1
changes_since_last_save:73933
bgsave_in_progress:1
last_save_time:1316140933
bgrewriteaof_in_progress:0
total_connections_received:100200005
total_commands_processed:100200002
expired_keys:0
evicted_keys:0
keyspace_hits:1
keyspace_misses:100000000
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=100000000,expires=0
appendonly.aof:  6.6G
dump.rdb:        4.1G
Memory used:     14.34G
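Two derived figures worth pulling out of the INFO dump above (my arithmetic, not from the post): the per-key memory cost of these small string key/value pairs, and the RSS overhead behind the reported fragmentation ratio.

```shell
#!/bin/bash
# Per-key cost and fragmentation, from the used_memory / used_memory_rss
# values reported by INFO above.
used_memory=15394548832
used_rss=20111581184
keys=100000000
echo "$(( used_memory / keys )) bytes per key/value pair"       # ~153 bytes each
echo "fragmentation x100: $(( used_rss * 100 / used_memory ))"  # ~130, i.e. ratio ~1.31
```

Roughly 153 bytes to store a ~9-byte key and a 36-byte UUID shows how much per-object overhead a top-level Redis string carries at this scale.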
AOF export
An AOF export of the dataset above ran from Fri Sep 16 14:09:11 CST 2011 to Fri Sep 16 14:10:10 CST 2011 — just under 1 minute.
Restarting Redis
[13954] 16 Sep 14:27:04 # User requested shutdown...
[13954] 16 Sep 14:27:04 * Calling fsync() on the AOF file.
[13954] 16 Sep 14:27:04 * Saving the final RDB snapshot before exiting.
[13954] 16 Sep 14:28:31 * DB saved on disk
[13954] 16 Sep 14:28:31 * Removing the pid file.
[13954] 16 Sep 14:28:31 # Redis is now ready to exit, bye bye...
[8985] 16 Sep 14:28:33 * Server started, Redis version 2.2.12
[8985] 16 Sep 14:31:15 * DB loaded from append only file: 162 seconds
[8985] 16 Sep 14:31:15 * The server is now ready to accept connections on port 6379
[8985] 16 Sep 14:31:16 - DB 0: 100000002 keys (0 volatile) in 134217728 slots HT.
[8985] 16 Sep 14:31:16 - 0 clients connected (0 slaves), 15394532696 bytes in use
The full restart (final RDB save plus AOF reload) took about 249 seconds.
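From the log above, the AOF replay itself accounts for 162 of those seconds. A rough rate check (my arithmetic, not from the post):

```shell
#!/bin/bash
# AOF replay rate on restart, from the "162 seconds" line in the log.
aof_keys=100000002   # keys reported after the restart
aof_secs=162
echo "$(( aof_keys / aof_secs )) keys/s replayed from the AOF"   # ~617k keys/s
```

Replay is much faster than the original load because the commands are read sequentially from local disk instead of arriving over fresh client connections.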
During the restart, write attempts returned:
(error) LOADING Redis is loading the dataset in memory
The slave logged:
[22382] 16 Sep 14:31:10 * Connecting to MASTER...
[22382] 16 Sep 14:31:10 * MASTER <-> SLAVE sync started: SYNC sent
[22382] 16 Sep 14:31:10 # MASTER aborted replication with an error: LOADING Redis is loading the dataset in memory
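Clients hitting the LOADING error can simply wait for the dataset to finish loading. A minimal sketch (still_loading is a hypothetical helper, not from the original post): poll the `loading` flag that INFO reports and resume work once it clears.

```shell
#!/bin/bash
# Succeeds (exit 0) while INFO output on stdin reports loading:1,
# fails once the dataset has finished loading (loading:0).
still_loading() {
    grep -q '^loading:1'
}

# Usage against a live server:
# while redis-cli info | still_loading; do sleep 1; done
# ...resume writes here...
```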