A Brief Look at Linux Dirty Data Configuration

2021-11-09

Combining the official documentation on www.kernel.org with a reading of the CentOS 7 kernel source, let's first give an overview of the dirty-data related kernel parameters:

1. vm.dirty_background_ratio

Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data. The total available memory is not equal to total system memory.

When a process doing write IO finds that dirty pages in the file system cache exceed vm.dirty_background_ratio% of the currently available memory, it wakes up the kernel background flusher to write back dirty pages. Having kicked off the writeback, the process returns immediately without waiting for it to complete; the actual writeback is carried out by the kernel per-bdi flusher threads (each disk partition is associated with one struct bdi).

The per-bdi flusher thread, which also runs periodically every vm.dirty_writeback_centisecs, likewise checks whether dirty pages exceed vm.dirty_background_ratio% of the currently available memory and decides whether to write them back. This ratio is a percentage of available memory (free pages plus reclaimable file pages), not of total system memory. Its valid range is [0, 100].
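
As a quick, practical complement to the explanation above, the following minimal user-space sketch (hypothetical, not taken from the kernel source) prints the Dirty and Writeback counters from /proc/meminfo; these are the amounts that the ratios discussed here ultimately bound:

#include <stdio.h>
#include <string.h>

/* Minimal sketch (hypothetical): print the current amount of dirty and
 * in-flight writeback page cache reported by /proc/meminfo. */
int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                /* Lines look like "Dirty:            1234 kB". */
                if (!strncmp(line, "Dirty:", 6) ||
                    !strncmp(line, "Writeback:", 10))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}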

2. vm.dirty_ratio

Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which a process which is generating disk writes will itself start writing out dirty data. The total available memory is not equal to total system memory.

When a process writing to a file finds that dirty pages in the file system cache exceed half of the sum of the two limits computed from the currently available memory, vm.dirty_ratio and vm.dirty_background_ratio, i.e. (dirty_thresh + background_thresh) / 2, it not only wakes the kernel per-bdi flusher thread to trigger writeback of dirty pages, but also goes to sleep itself for a while to slow down the rate at which data is written; the pause is in the range [10ms, 200ms]. See https://lwn.net/Articles/405076/ and commit 143dfe8611a63030ce0c79419dc362f7838be557.

vm.dirty_ratio is likewise a percentage of available memory (unused pages plus reclaimable file pages), not of total system memory.

In other words, once a process doing write IO sees the dirty page count grow past half of (vm_dirty_ratio * available_memory + dirty_background_ratio * available_memory) / 100, it is put into TASK_KILLABLE state and sleeps for a while.
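
The midpoint (dirty_thresh + background_thresh) / 2 is what the kernel calls the "freerun" ceiling. A minimal user-space sketch of that calculation, modelled on dirty_freerun_ceiling() in mm/page-writeback.c (the numbers in main() are made up for illustration):

#include <stdio.h>

/* Sketch of the "freerun" ceiling used by the IO-less balance_dirty_pages():
 * writers are not throttled until dirty pages pass the midpoint of the
 * background and dirty thresholds. */
static unsigned long dirty_freerun_ceiling(unsigned long thresh,
                                           unsigned long bg_thresh)
{
        return (thresh + bg_thresh) / 2;
}

int main(void)
{
        /* Illustrative values only, in pages. */
        unsigned long background_thresh = 100000;
        unsigned long dirty_thresh = 300000;

        printf("throttling starts above %lu dirty pages\n",
               dirty_freerun_ceiling(dirty_thresh, background_thresh));
        return 0;
}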

3. vm.dirty_writeback_centisecs

The kernel flusher threads will periodically wake up and write `old' data out to disk. This tunable expresses the interval between those wakeups, in 100'ths of a second. Setting this to zero disables periodic writeback altogether.

This is the interval at which the flusher threads wake up (in units of 1/100 s). As long as there are dirty inodes in memory, the kernel per-bdi flusher thread runs with a period of dirty_writeback_centisecs/100 seconds.

/*
 * Retrieve work items and do the writeback they describe
 */
static long wb_do_writeback(struct bdi_writeback *wb)
{
        struct backing_dev_info *bdi = wb->bdi;
        struct wb_writeback_work *work;
        long wrote = 0;

        set_bit(BDI_writeback_running, &wb->bdi->state);
        while ((work = get_next_work_item(bdi)) != NULL) {

                trace_writeback_exec(bdi, work);

                wrote += wb_writeback(wb, work);

                /*
                 * Notify the caller of completion if this is a synchronous
                 * work item, otherwise just free it.
                 */
                if (work->done)
                        complete(work->done);
                else
                        kfree(work);
        }

        /*
         * Check for periodic writeback, kupdated() style
         */
        wrote += wb_check_old_data_flush(wb);   /* periodic writeback */
        wrote += wb_check_background_flush(wb); /* write back if dirty pages exceed the vm.dirty_background_ratio threshold */
        clear_bit(BDI_writeback_running, &wb->bdi->state);

        return wrote;
}


static long wb_check_old_data_flush(struct bdi_writeback *wb)
{
        unsigned long expired;
        long nr_pages;

        /*
         * When set to zero, disable periodic writeback
         */
        if (!dirty_writeback_interval)
                return 0;

        expired = wb->last_old_flush +
                        msecs_to_jiffies(dirty_writeback_interval * 10);
        if (time_before(jiffies, expired))
                return 0;

        wb->last_old_flush = jiffies;
        nr_pages = get_nr_dirty_pages();

        if (nr_pages) {
                struct wb_writeback_work work = {
                        .nr_pages       = nr_pages,
                        .sync_mode      = WB_SYNC_NONE,
                        .for_kupdate    = 1,
                        .range_cyclic   = 1,
                        .reason         = WB_REASON_PERIODIC,
                };

                return wb_writeback(wb, &work);
        }

        return 0;
}

static long wb_check_background_flush(struct bdi_writeback *wb)
{
        if (over_bground_thresh(wb->bdi)) {

                struct wb_writeback_work work = {
                        .nr_pages       = LONG_MAX,
                        .sync_mode      = WB_SYNC_NONE,
                        .for_background = 1,
                        .range_cyclic   = 1,
                        .reason         = WB_REASON_BACKGROUND,
                };

                return wb_writeback(wb, &work);
        }

        return 0;
}
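
wb_check_background_flush() relies on over_bground_thresh() to decide whether the dirty page count has crossed background_thresh. That check can be approximated from user space with the counters /proc/vmstat exports on these kernels (nr_dirty and nr_dirty_background_threshold); the sketch below is a rough, hypothetical approximation of only the global part of the check, not the per-bdi part:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough user-space approximation (hypothetical) of the global check done by
 * over_bground_thresh(): compare nr_dirty with nr_dirty_background_threshold,
 * both taken from /proc/vmstat. */
static long vmstat_value(const char *key)
{
        char line[256];
        size_t klen = strlen(key);
        long val = -1;
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                if (!strncmp(line, key, klen) && line[klen] == ' ') {
                        val = atol(line + klen + 1);
                        break;
                }
        }
        fclose(f);
        return val;
}

int main(void)
{
        long dirty = vmstat_value("nr_dirty");
        long bg_thresh = vmstat_value("nr_dirty_background_threshold");

        printf("nr_dirty=%ld nr_dirty_background_threshold=%ld -> %s\n",
               dirty, bg_thresh,
               dirty > bg_thresh ? "background writeback expected"
                                 : "below the background threshold");
        return 0;
}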

4. vm.dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible for writeout by the kernel flusher threads. It is expressed in 100'ths of a second. Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up.

This is the age at which dirty data becomes eligible for writeback (in units of 1/100 s). When the periodic per-bdi flusher thread runs, it checks whether more than vm.dirty_expire_centisecs/100 seconds have passed since the IO data was written into the page cache, and if so writes that dirty page back.

The writeback path is bdi_writeback_workfn -> wb_do_writeback; there, wb_check_old_data_flush checks whether the dirty data has been sitting in memory for longer than dirty_expire_centisecs/100 seconds, and only dirty data older than that is actually written back:

static long wb_writeback(struct bdi_writeback *wb,
                         struct wb_writeback_work *work)
{
    ...
    for (;;) {
        ...
        /* Periodic writeback takes this branch; here
         * wb_writeback_work.reason is WB_REASON_PERIODIC. */
        if (work->for_kupdate) {
                oldest_jif = jiffies -
                        msecs_to_jiffies(dirty_expire_interval * 10);
        /* Background writeback (dirty pages above the available-memory
         * threshold) takes this branch; here the reason is
         * WB_REASON_BACKGROUND. */
        } else if (work->for_background)
                oldest_jif = jiffies;
        ...
    }
}

/*
 * why some writeback work was initiated
 */
enum wb_reason {
        WB_REASON_BACKGROUND,
        WB_REASON_VMSCAN,
        WB_REASON_SYNC,
        WB_REASON_PERIODIC,
        WB_REASON_LAPTOP_TIMER,
        WB_REASON_FREE_MORE_MEM,
        WB_REASON_FS_FREE_SPACE,
        /*
         * There is no bdi forker thread any more and works are done
         * by emergency worker, however, this is TPs userland visible
         * and we'll be exposing exactly the same information,
         * so it has a mismatch name.
         */
        WB_REASON_FORKER_THREAD,

        WB_REASON_MAX,
};

Now let's look at how these settings take effect, based on the CentOS 7.x kernel 3.10.0-1062.18.1.el7.

default_bdi_init() allocates the workqueue bdi_wq:

static int __init default_bdi_init(void)
{
        ...
        bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_FREEZABLE |
                                              WQ_UNBOUND | WQ_SYSFS, 0);
        ...
        return err;
}

The work item's function is bdi_writeback_workfn, which is called when the background flusher writes back dirty pages:

static void bdi_wb_init(struct bdi_writeback *wb, struct backing_dev_info *bdi)
{
      ...
      INIT_DELAYED_WORK(&wb->dwork, bdi_writeback_workfn);
      
}

ftrace shows that the process calling bdi_writeback_workfn to write back dirty pages is named kworker/u*:

#cat /sys/kernel/debug/tracing/current_tracer
#cat /sys/kernel/debug/tracing/set_ftrace_filter
#echo 1 >/sys/kernel/debug/tracing/tracing_on
#cat trace_pipe >/root/trace.txt
Open another terminal and generate dirty pages with dd:
#dd if=/dev/zero of=1Gb.file bs=4096 count=262144

#echo 0 >tracing_on
# cat /root/trace.txt
 kworker/u8:1-26776 [000] .... 351682.608545: bdi_writeback_workfn <-process_one_work
 kworker/u8:1-26776 [000] .... 351685.330749: bdi_writeback_workfn <-process_one_work

Because the bdi_wq workqueue was allocated with WQ_UNBOUND, let's look at the relevant logic in alloc_workqueue:

1) A workqueue is declared; if it is not declared WQ_UNBOUND, its pool is simply linked to the system's existing per-CPU pools;

2) If the workqueue is declared WQ_UNBOUND, there is more to do: a dedicated pool is allocated for it, and a dedicated thread named kworker/u* is created to serve it;

The relevant call path is:

alloc_workqueue->__alloc_workqueue_key->alloc_and_link_pwqs->apply_workqueue_attrs->apply_workqueue_attrs_locked->apply_wqattrs_prepare->alloc_unbound_pwq->get_unbound_pool->create_worker->kthread_create_on_node
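
To make the WQ_UNBOUND / WQ_MEM_RECLAIM behaviour easier to see, here is a toy kernel module (hypothetical, not part of the writeback code) that allocates a workqueue with the same two flags bdi_wq uses and queues one work item on it; the work then runs in a kworker/u* thread, and a rescuer thread named after the queue is created because of WQ_MEM_RECLAIM:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/sched.h>

/* Toy example: an unbound, memory-reclaim-safe workqueue like bdi_wq. */
static struct workqueue_struct *demo_wq;

static void demo_work_fn(struct work_struct *work)
{
        pr_info("demo work executed by %s\n", current->comm);
}

static DECLARE_WORK(demo_work, demo_work_fn);

static int __init demo_init(void)
{
        /* WQ_UNBOUND: served by kworker/u* threads from an unbound pool.
         * WQ_MEM_RECLAIM: a dedicated rescuer thread named "demo_writeback"
         * is created to guarantee forward progress under memory pressure. */
        demo_wq = alloc_workqueue("demo_writeback",
                                  WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
        if (!demo_wq)
                return -ENOMEM;

        queue_work(demo_wq, &demo_work);
        return 0;
}

static void __exit demo_exit(void)
{
        destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");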

ps also shows a kernel thread called writeback. What is that one for?

# ps aux | grep "writeback" | grep -v grep

root 31 0.0 0.0 0 0 ? S< May16 0:00 [writeback]

When alloc_workqueue is called with WQ_MEM_RECLAIM, a kernel rescuer thread is created whose name comes from the first argument, here "writeback". It is only brought in when the kworker threads cannot make progress on the queued work. See https://www.binss.me/blog/analysis-of-linux-workqueue/ for details.

struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
                                               unsigned int flags,
                                               int max_active,
                                               struct lock_class_key *key,
                                               const char *lock_name, ...)
{
  ....
        /*
         * Workqueues which may be used during memory reclaim should
         * have a rescuer to guarantee forward progress.
         */
        if (flags & WQ_MEM_RECLAIM) {
                struct worker *rescuer;

                rescuer = alloc_worker();
                if (!rescuer)
                        goto err_destroy;

                rescuer->rescue_wq = wq;
                rescuer->task = kthread_create(rescuer_thread, rescuer, "%s",
                                               wq->name);
                if (IS_ERR(rescuer->task)) {
                        kfree(rescuer);
                        goto err_destroy;
                }

                wq->rescuer = rescuer;
                rescuer->task->flags |= PF_NO_SETAFFINITY;
                wake_up_process(rescuer->task);
        }
  ....
}

bdi_writeback_workfn() is what the kworker/u* thread actually runs to process a dirty page writeback work item:

void bdi_writeback_workfn(struct work_struct *work)
{
    ...
    /* more work items pending on the work list: re-arm the worker immediately */
    if (!list_empty(&bdi->work_list))
        mod_delayed_work(bdi_wq, &wb->dwork, 0);
    /* still dirty data around: re-arm the per-bdi writeback after
       dirty_writeback_interval * 10 ms, i.e. keep flushing with that period
       until there is no dirty data left in memory */
    else if (wb_has_dirty_io(wb) && dirty_writeback_interval)
        bdi_wakeup_thread_delayed(bdi); /* the other caller of this delayed wakeup is
                                           __mark_inode_dirty(): once IO lands in the
                                           page cache it marks the page dirty and
                                           wakes the writeback thread */
    ...    
}    

void bdi_wakeup_thread_delayed(struct backing_dev_info *bdi)
{
    unsigned long timeout;

    timeout = msecs_to_jiffies(dirty_writeback_interval * 10); /* dirty_writeback_interval corresponds to vm.dirty_writeback_centisecs */
    spin_lock_bh(&bdi->wb_lock);
    if (test_bit(BDI_registered, &bdi->state))
        queue_delayed_work(bdi_wq, &bdi->wb.dwork, timeout);
    spin_unlock_bh(&bdi->wb_lock);
}    


static long wb_do_writeback(struct bdi_writeback *wb)
{
    ...
    /*
     * Check for periodic writeback, kupdated() style
     */
    wrote += wb_check_old_data_flush(wb);   /* only writes back dirty data older than dirty_expire_centisecs * 10 ms */
    wrote += wb_check_background_flush(wb); /* only writes back when the dirty page count exceeds background_thresh */
    clear_bit(BDI_writeback_running, &wb->bdi->state);
      
    return wrote;
}

The vm.dirty_background_ratio and vm.dirty_ratio parameters are used in balance_dirty_pages(), which is reached when user space writes a file with write():

static void balance_dirty_pages(struct address_space *mapping,
                unsigned long pages_dirtied)
{
    ...
    for (;;) {
        ...

        /*
         * Unstable writes are a feature of certain networked
         * filesystems (i.e. NFS) in which data may have been
         * written to the server's write cache, but has not yet
         * been flushed to permanent storage.
         */
        nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
                    global_page_state(NR_UNSTABLE_NFS);
        nr_dirty = nr_reclaimable + global_page_state(NR_WRITEBACK); /* current number of dirty pages */
        /* background_thresh is computed from dirty_background_ratio, dirty_thresh from dirty_ratio */
        global_dirty_limits(&background_thresh, &dirty_thresh);
        ....
        if (unlikely(!writeback_in_progress(bdi)))
                        bdi_start_background_writeback(bdi);//wakeup background writeout
        ....
        __set_current_state(TASK_KILLABLE);
        io_schedule_timeout(pause); /* the writing process sleeps here for `pause` to throttle its write rate */
    }
    ....
    if (writeback_in_progress(bdi))
                return;

        /*
         * In laptop mode, we wait until hitting the higher threshold before
         * starting background writeout, and then write out all the way down
         * to the lower threshold.  So slow writers cause minimal disk activity.
         *
         * In normal mode, we start background writeout at the lower
         * background_thresh, to keep the amount of dirty memory low.
         */
        if (laptop_mode)
                return;

        if (nr_reclaimable > background_thresh)
                bdi_start_background_writeback(bdi);//wakeup background writeout

}        

void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty)
{
    unsigned long background;
    unsigned long dirty;
    unsigned long uninitialized_var(available_memory);
    struct task_struct *tsk;

    if (!vm_dirty_bytes || !dirty_background_bytes) /* vm_dirty_bytes and dirty_background_bytes are 0 by default */
        available_memory = global_dirtyable_memory(); /* number of currently dirtyable pages; global_dirtyable_memory() is covered in detail below */

    if (vm_dirty_bytes) /* 0 by default */
        dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE);
    else
        dirty = (vm_dirty_ratio * available_memory) / 100; /* dirty page threshold from vm.dirty_ratio */

    if (dirty_background_bytes)
        background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE);
    else
        background = (dirty_background_ratio * available_memory) / 100; /* background threshold from vm.dirty_background_ratio */

    if (background >= dirty) /* the ratio derived from dirty_background_ratio must stay below vm_dirty_ratio, otherwise many writing processes can end up stuck in D state */
        background = dirty / 2; /* if dirty_background_ratio >= vm_dirty_ratio, the effect is the same as dirty_background_ratio = vm_dirty_ratio / 2 */
    tsk = current;
    if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
        background += background / 4;
        dirty += dirty / 4;
    }
    *pbackground = background; /* when a write() reaches balance_dirty_pages() and the dirty page count exceeds this value, a writeback work item is queued and the per-bdi flusher thread running bdi_writeback_workfn is woken */
    *pdirty = dirty; /* when the dirty page count exceeds this value, the writing process is suspended in D state for between 10ms and 200ms to slow down writes; see https://lwn.net/Articles/405076/ */
    trace_global_dirty_state(background, dirty);
}


static unsigned long global_dirtyable_memory(void)
{
    unsigned long x;

    x = global_page_state(NR_FREE_PAGES); /* current free pages */
    x -= min(x, dirty_balance_reserve); /* subtract the dirty_balance_reserve pages, computed dynamically from total memory by calculate_totalreserve_pages() */

    x += global_page_state(NR_INACTIVE_FILE); /* inactive file pages */
    x += global_page_state(NR_ACTIVE_FILE); /* active file pages */

    if (!vm_highmem_is_dirtyable)
        x -= highmem_dirtyable_memory(x); /* 0 on 64-bit systems; see /proc/buddyinfo */

    /* Subtract min_free_kbytes */
    x -= min_t(unsigned long, x, min_free_kbytes >> (PAGE_SHIFT - 10)); /* subtract the reserved minimum free memory, vm.min_free_kbytes */

    return x + 1;    /* Ensure that we never return 0 */
}

Assuming vm.dirty_background_bytes and vm.dirty_bytes are left unset, the effect of dirty_background_ratio and dirty_ratio boils down to:

available_memory = NR_FREE_PAGES - dirty_balance_reserve + NR_INACTIVE_FILE + NR_ACTIVE_FILE - (min_free_kbytes / 4)

background_thresh=(dirty_background_ratio * available_memory) / 100=(vm.dirty_background_ratio*available_memory)/100

dirty_thresh = (vm_dirty_ratio * available_memory) / 100 =(vm.dirty_ratio*available_memory)/100

dirty_background_ratio must be smaller than dirty_ratio. If dirty_background_ratio is set greater than or equal to dirty_ratio, the value that actually takes effect is dirty_background_ratio = dirty_ratio / 2. The reason dirty_ratio has to be larger than dirty_background_ratio is to avoid the situation where the dirty page count is still below background_thresh, so the background flusher is never woken to write back dirty data, yet already above dirty_thresh, so application processes block on IO waiting for dirty data to be written back.
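
To make the two formulas concrete, here is a small user-space sketch that reproduces the arithmetic of global_dirty_limits(); the available-memory figure and the two ratios are made-up example values (on a real system they would come from /proc/vmstat and /proc/sys/vm):

#include <stdio.h>

int main(void)
{
        /* Example values only: ~16 GiB of dirtyable memory in 4 KiB pages,
         * and the two sysctls as percentages. */
        unsigned long available_memory = 16UL * 1024 * 1024 / 4;
        unsigned long dirty_background_ratio = 10;
        unsigned long vm_dirty_ratio = 30;

        unsigned long background = dirty_background_ratio * available_memory / 100;
        unsigned long dirty = vm_dirty_ratio * available_memory / 100;

        /* Same clamp as global_dirty_limits(). */
        if (background >= dirty)
                background = dirty / 2;

        printf("background_thresh = %lu pages (%lu MiB)\n",
               background, background * 4 / 1024);
        printf("dirty_thresh      = %lu pages (%lu MiB)\n",
               dirty, dirty * 4 / 1024);
        return 0;
}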

Based on the analysis above, we can summarize how to tune these parameters for different scenarios:

vm.dirty_background_ratio

vm.dirty_ratio

vm.dirty_expire_centisecs

vm.dirty_writeback_centisecs

1. For scenarios where data safety comes first, lower these four parameters somewhat so that dirty data is flushed to disk sooner;

2. For scenarios that chase performance and can tolerate the risk of losing data, raise them to cache more in memory and reduce IO;

3. For workloads with occasional IO bursts, lower dirty_background_ratio and raise dirty_ratio (see the sketch after this list).
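
As an illustration of applying such a profile programmatically (equivalent to `sysctl -w` or echoing into /proc/sys), here is a hypothetical helper; the two values written at the end only mirror the bursty-IO suggestion above and are not a recommendation:

#include <stdio.h>

/* Hypothetical helper: write a value into one of the vm.* tunables under
 * /proc/sys/vm (requires root). */
static int set_vm_tunable(const char *name, const char *value)
{
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/sys/vm/%s", name);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%s\n", value);
        return fclose(f);
}

int main(void)
{
        /* Example profile for bursty IO: flush earlier in the background,
         * but allow writers to dirty more before they are throttled. */
        set_vm_tunable("dirty_background_ratio", "5");
        set_vm_tunable("dirty_ratio", "40");
        return 0;
}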

Suppose total memory is 250 GB and IO bandwidth is 100 MB/s. In theory, if we want to be reasonably sure that all dirty data can reach disk within 120 s (the default hung_task_timeout_secs), how large should dirty_background_ratio be?

Using the 100 MB/s of bandwidth observed with iostat:

120 s x 100 MB/s = 12000 MB ≈ 12 GB

12 GB / 250 GB ≈ 4.8%

Rounding 4.8% down, dirty_background_ratio should be set to 4. With the same bandwidth and total memory, to make sure all dirty data can land on disk within 60 s, set dirty_background_ratio to 2.
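
The same estimate can be written as a small calculation; the numbers below just mirror the worked example above (250 GB of memory, 100 MB/s of bandwidth, a 120 s target):

#include <stdio.h>

int main(void)
{
        /* Values from the example above. */
        double total_mem_gb = 250.0;   /* total system memory */
        double bandwidth_mb_s = 100.0; /* sustained writeback bandwidth from iostat */
        double target_secs = 120.0;    /* hung_task_timeout_secs */

        /* Treat 1000 MB as roughly 1 GB, as in the estimate above. */
        double max_dirty_gb = bandwidth_mb_s * target_secs / 1000.0; /* ~12 GB */
        double ratio = max_dirty_gb / total_mem_gb * 100.0;          /* ~4.8% */

        printf("suggested vm.dirty_background_ratio <= %d\n", (int)ratio);
        return 0;
}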
