org.apache.hadoop.mapred.YarnChild: GC overhead limit exceeded

2018-10-24 15:21:34

Notes on an error I ran into:

Environment: CDH 5.10, JDK 8

While running a Hive query, the job failed with:

org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.hadoop.io.Text.setCapacity(Text.java:268)
    at org.apache.hadoop.io.Text.set(Text.java:224)
    at org.apache.hadoop.io.Text.set(Text.java:214)

This usually comes down to one of the following:

1. The map/reduce task memory (mapreduce.map/reduce.memory.mb) is too small.
2. The client heap is too small, i.e. the Hive client heap (usually triggered by a map join; see the sketch after the examples below).
3. The task JVM opts (mapreduce.map/reduce.java.opts) are too small; note that the heap size must stay below the corresponding memory.mb value.

Increasing the relevant parameter resolves the problem, e.g. via the CDH-specific properties (heap size in MB):

<property>
    <name>mapreduce.map.java.opts.max.heap</name>
    <value>983</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts.max.heap</name>
    <value>983</value>
</property>

or, per Hive session:

set mapreduce.map.java.opts=-Xmx2g;
set mapreduce.reduce.java.opts=-Xmx2g;
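
For cause 2 (the Hive client heap, typically exhausted while building the small-table hash table for a map join), one hedged workaround is to stop Hive from converting the join into a map-side join for the offending query; hive.auto.convert.join is a standard Hive setting, and disabling it trades the OOM for a slower shuffle join:

set hive.auto.convert.join=false;   -- fall back to a shuffle join instead of a map-side join

Raising the client heap itself (for example via HADOOP_CLIENT_OPTS or the Hive heap settings in Cloudera Manager) is the alternative if you want to keep map joins enabled.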

Notes:

1. mapreduce.map/reduce.memory.mb defaults to 0. That does not mean memory is unlimited; per the Hadoop documentation, it is "the amount of memory to request from the scheduler for each map task. If this is not specified or is non-positive, it is inferred from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024." (A sizing sketch follows these notes.)
2. CDH's mapreduce.map/reduce.java.opts.max.heap also defaults to 0, which again does not mean unlimited; by default the heap size is derived from mapreduce.map/reduce.memory.mb.
3. mapreduce.map.java.opts.max.heap is a CDH-specific parameter. Apache Hadoop does not have it; it only has mapreduce.map.java.opts, and when mapreduce.map.java.opts is set it overrides mapreduce.map.java.opts.max.heap.

4. You may also come across mapred.map/reduce.child.java.opts; these are the old, deprecated names that map to mapreduce.map/reduce.java.opts in current Hadoop.
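
Putting notes 1-3 together, the practical rule is to keep the -Xmx in java.opts below memory.mb, typically around the ratio mapreduce.job.heap.memory-mb.ratio (0.8 by default in recent Hadoop releases). A per-session sketch under that assumption; the 4096/3276 figures are illustrative only:

set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3276m;      -- roughly 0.8 * memory.mb
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3276m;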
