- Language: node v14.7.0
On Ubuntu with 64 GB of memory, I ran node with its maximum memory consumption raised to 48 GB via the `--max-old-space-size` option. Whether the 48 GB actually got used depended on what the program did: in one case the full 48 GB was used up, and in the other a memory error occurred after only about 20 GB had been consumed.
The cause was a difference in how memory was allocated depending on the program's behavior: in the program that could not use up the memory, the mmap system call returned ENOMEM because the maximum number of mappings had been exceeded.
Raising the system-wide limit on the number of mappings with `sudo sysctl -w vm.max_map_count=655300` allowed that program to use up the memory it had been given.
The following one-liner, run from bash, prints node's memory consumption (RSS) in GB.
```bash
$ node -e 'console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
0.02828216552734375
```
The following shows the memory consumption after creating an array of 10,000,000 zeros.
```bash
$ node -e 'var a = Array.from({length: 10000000}, v => 0); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
0.10385513305664062
```
Memory consumption rose slightly, from 0.028 GB to 0.103 GB: an array of 10 million integers added roughly 75 MB. At this rate the memory limit set by `--max-old-space-size` is still a long way off, so what happens if each element is an array of 100 zeros instead of a single 0?
```bash
$ node -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 100}, v => 0)); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
```
This machine has about 64 GB of memory, so physically there should be plenty. However, once node has consumed about 5 GB, the following error appears.
```bash
$ node -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 100}, v => 0)); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
<--- Last few GCs --->
[27037:0x648f650] 45191 ms: Scavenge 4093.1 (4100.7) -> 4093.0 (4101.5) MB, 5.7 / 0.0 ms (average mu = 0.125, current mu = 0.081) allocation failure
[27037:0x648f650] 45204 ms: Scavenge (reduce) 4094.0 (4105.5) -> 4094.0 (4106.2) MB, 5.6 / 0.0 ms (average mu = 0.125, current mu = 0.081) allocation failure
[27037:0x648f650] 45218 ms: Scavenge (reduce) 4095.1 (4100.5) -> 4095.0 (4103.7) MB, 6.2 / 0.0 ms (average mu = 0.125, current mu = 0.081) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x9fd5f0 node::Abort() [node]
2: 0x94a45d node::FatalError(char const*, char const*) [node]
3: 0xb7099e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
4: 0xb70d17 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
5: 0xd1a905 [node]
6: 0xd1b48f [node]
7: 0xd294fb v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
8: 0xd2d0bc v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
9: 0xcfb7bb v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x1040c4f v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
11: 0x13cc8f9 [node]
Aborted (core dumped)
```
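The heap sizes in the GC log top out at around 4 GB, which looks like the default heap limit in this environment. One way to check the limit directly (using the built-in `v8` module; the exact figure depends on the node version and available RAM) is:

```bash
# Print V8's heap size limit for this node invocation, in GB
$ node -e 'console.log(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024 / 1024);'
```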
Googling `Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory` turns up plenty of articles explaining that the upper limit on the memory node can consume can be raised with `--max-old-space-size`, for example:
- Avoiding the phenomenon where Webpack builds occasionally crash
This environment has 64GB of memory, so let's try to consume up to 48GB.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 100}, v => 0)); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
8.26059341430664
```
After consuming 8.2GB of memory, it ended normally.
So what happens with an "array of 1,000 empty arrays" per element instead of an "array of 100 zeros"? For a start the memory consumption should be ten times higher, which alone should exhaust memory; and on top of that the innermost elements are now empty arrays rather than 0s.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 1000}, v => [])); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
```
The expectation is that it will exhaust the 48 GB and crash with an out-of-memory error, or that garbage collection will grind it to a crawl before the 48 GB is used up.
The actual result: it ended with the following error after consuming only 16.8 GB. Given the results in the previous section, `--max-old-space-size` is clearly taking effect, so that is not what stops it.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 1000}, v => [])); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
<--- Last few GCs --->
[10261:0x648f730] 77051 ms: Scavenge 16379.4 (16414.4) -> 16377.4 (16428.1) MB, 32.2 / 0.0 ms (average mu = 0.794, current mu = 0.795) allocation failure
[10261:0x648f730] 77103 ms: Scavenge 16393.4 (16428.1) -> 16395.3 (16430.4) MB, 27.8 / 0.0 ms (average mu = 0.794, current mu = 0.795) allocation failure
[10261:0x648f730] 77189 ms: Scavenge 16395.3 (16430.4) -> 16393.6 (16441.6) MB, 86.3 / 0.0 ms (average mu = 0.794, current mu = 0.795) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Scavenger: semi-space copy Allocation failed - JavaScript heap out of memory
Segmentation fault (core dumped)
```
Also, when run several times, the error message appears to vary at random; sometimes it is the following.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'var a = Array.from({length: 10000000}, v => Array.from({length: 1000}, v => [])); console.log(process.memoryUsage().rss / 1024 / 1024 / 1024);'
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
```
Why does it die after using only 16.8 GB even though 48 GB was specified with `--max-old-space-size`? How can the full 48 GB be used?
The following code prints the memory consumption more often as it allocates.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'Array.from({length: 10000}, v => { console.log(process.memoryUsage().rss / 1024 / 1024 / 1024); return Array.from({length: 1000000}, v => []); });'
```
When executed, it printed the memory consumption as it went and terminated abnormally at about 20 GB.
```text
<Omitted>
20.046581268310547
20.084598541259766
20.122615814208984
20.160381317138672
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
```
The following code generates an "array of arrays of 1s" instead of an "array of arrays of empty arrays".
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'Array.from({length: 10000}, v => { console.log(process.memoryUsage().rss / 1024 / 1024 / 1024); return Array.from({length: 1000000}, v => 1); });'
```
This code does not stop at 20 GB; it keeps going until it exhausts the specified 48 GB of memory! Moreover, once consumption passed 46 GB, garbage collection kept kicking in and the memory consumption crept up only slowly ~~Achilles and the tortoise~~.
```text
<Omitted>
48.053977966308594
48.06153106689453
48.06904602050781
48.076324462890625
48.08384323120117
<--- Last few GCs --->
<Omitted>
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x9fd5f0 node::Abort() [node]
<Omitted>
17: 0x1018255 v8::internal::Runtime_NewArray(int, unsigned long*, v8::internal::Isolate*) [node]
18: 0x13cc8f9 [node]
Aborted (core dumped)
```
The only thing that changed is whether the deepest part of the data structure is an empty array or the number 1. In either case there is not enough physical memory in the first place, so the fact that it eventually dies is the same.
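As a rough illustration of why the two cases diverge so sharply (a sketch, not part of the original experiment; exact figures depend on the V8 version): a small integer like 1 can be stored directly in the array's backing store, while every `[]` is a separate heap-allocated object, so the empty-array variant needs far more memory and far more allocations per element.

```bash
# Approximate RSS added per element: plain numbers vs. empty arrays
$ node -e 'const n = 10000000; const base = process.memoryUsage().rss; const a = Array.from({length: n}, () => 1); console.log("number:", (process.memoryUsage().rss - base) / n, "bytes/element");'
$ node -e 'const n = 10000000; const base = process.memoryUsage().rss; const a = Array.from({length: n}, () => []); console.log("empty array:", (process.memoryUsage().rss - base) / n, "bytes/element");'
```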
`std::bad_alloc` appears to be the error raised when `new` fails in C++; a Google search for `std::bad_alloc` confirms that it comes from C++'s `new`:
- Detecting memory allocation failure in new (std::bad_alloc)
node is written in C++, which means there are places where `new` is called.
`std::bad_alloc` can be caught with a C++ try statement, but since `new` is used all over the place, it can be thrown both in locations where it is caught and in locations where it is not. I suspect that is why the error message changes at random.
strace is a command that can log the system calls a process makes.
- How to use the strace command
For a start, I simply put it in front of node and ran the following.
```bash
$ strace node --max-old-space-size=$((1024 * 48)) -e 'Array.from({length: 10000}, v => { console.log(process.memoryUsage().rss / 1024 / 1024 / 1024); return Array.from({length: 1000000}, v => 1); });'
```
A huge amount of log output comes out, but the tail end looks like this.
```text
<Omitted>
mprotect(0x29960bbc0000, 262144, PROT_READ|PROT_WRITE) = 0
brk(0x1bf9a000) = 0x1bf9a000
mmap(0x2ebbbafc0000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x2ebbbafc0000
munmap(0x2ebbbb000000, 258048) = 0
mprotect(0x2ebbbafc0000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x3ba59c200000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x3ba59c200000
munmap(0x3ba59c240000, 258048) = 0
mprotect(0x3ba59c200000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x3f3009d40000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x3f3009d40000
munmap(0x3f3009d80000, 258048) = 0
mprotect(0x3f3009d40000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x330b57380000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x330b57380000
munmap(0x330b573c0000, 258048) = 0
mprotect(0x330b57380000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x207b9d440000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x207b9d440000
munmap(0x207b9d480000, 258048) = 0
mprotect(0x207b9d440000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x300db2380000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x300db2380000
munmap(0x300db23c0000, 258048) = 0
mprotect(0x300db2380000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x8e44e340000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x8e44e340000
munmap(0x8e44e380000, 258048) = 0
mprotect(0x8e44e340000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x1a79a5c00000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x1a79a5c00000
munmap(0x1a79a5c40000, 258048) = 0
mprotect(0x1a79a5c00000, 262144, PROT_READ|PROT_WRITE) = 0
mmap(0x9abb4d00000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x9abb4d00000, 520192, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mprotect(0xafbb8382000, 86016, PROT_READ|PROT_WRITE) = 0
mprotect(0xafbb83c2000, 249856, PROT_READ|PROT_WRITE) = 0
mprotect(0xafbb8402000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0xafbb8442000, 4096, PROT_READ|PROT_WRITE) = 0
mprotect(0xafbb8482000, 4096, PROT_READ|PROT_WRITE) = 0
mmap(NULL, 1040384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
brk(0x1c0a2000) = 0x1bf9a000
mmap(NULL, 1175552, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(0x7efbe4000000, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 1040384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
futex(0x7efbfebab1a0, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(2, "terminate called after throwing "..., 48terminate called after throwing an instance of ') = 48
write(2, "std::bad_alloc", 14std::bad_alloc) = 14
write(2, "'\n", 2'
) = 2
write(2, " what(): ", 11 what(): ) = 11
write(2, "std::bad_alloc", 14std::bad_alloc) = 14
write(2, "\n", 1
) = 1
rt_sigprocmask(SIG_UNBLOCK, [ABRT], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1], [], 8) = 0
getpid() = 15400
gettid() = 15400
tgkill(15400, 15400, SIGABRT) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGABRT {si_signo=SIGABRT, si_code=SI_TKILL, si_pid=15400, si_uid=1003} ---
+++ killed by SIGABRT (core dumped) +++
Aborted (core dumped)
```
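Incidentally, the flood of output could have been narrowed down by restricting strace to memory-management system calls; a possible variant (not what was run above) using strace's `-e trace=memory` syscall class, which selects mmap, munmap, mprotect, brk and related calls:

```bash
# Trace only memory-mapping related syscalls, following threads as well
$ strace -f -e trace=memory node --max-old-space-size=$((1024 * 48)) -e 'Array.from({length: 10000}, v => { console.log(process.memoryUsage().rss / 1024 / 1024 / 1024); return Array.from({length: 1000000}, v => 1); });'
```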
Here, the following line shows that the `ENOMEM` error comes from mmap. Why is it `Cannot allocate memory` when there is plenty of memory to spare?
```text
mmap(NULL, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
```
Sure enough, the mmap man page contains the important detail:
ENOMEM: No memory is available, or the process's maximum number of mappings would have been exceeded.
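For reference, the passage can be pulled straight out of the man page (assuming the man-pages package is installed; the grep context width is arbitrary):

```bash
# Show the ENOMEM entries in the mmap(2) man page with a little context
$ man 2 mmap | grep -A 3 ENOMEM
```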
So if the number of mappings exceeds the maximum, mmap returns ENOMEM even when there is plenty of free memory. Since it is the same generic ENOMEM either way, the calling program presumably cannot tell it apart from genuinely running out of memory, which explains why the error message still reads `JavaScript heap out of memory`.
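This also suggests a way to watch the limit being approached: the number of mappings a process currently holds equals the number of lines in its /proc/<pid>/maps, so it can be compared against vm.max_map_count while the script runs (a rough check; the `pgrep -n node` below is just one way to grab the pid and assumes the newest node process is the right one):

```bash
# Count the running node process's current memory mappings...
$ wc -l /proc/$(pgrep -n node)/maps
# ...and compare with the per-process mapping limit
$ sysctl vm.max_map_count
```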
A way to increase the maximum number of mappings with `sudo sysctl -w vm.max_map_count` was found here:
- 6.1. How can I avoid the "mmap Cannot allocate memory" error? - What you should be careful about before introducing Elasticsearch!
The current value can be checked with `sysctl vm.max_map_count`, and a new value can be set with `sudo sysctl -w vm.max_map_count=65536`.
The change affects the entire system and is reverted by a reboot.
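If the new value needed to survive a reboot, it could also be written to a sysctl configuration file (a sketch, not something done in this experiment; the file name under /etc/sysctl.d/ is arbitrary):

```bash
# Persist the setting across reboots and apply all sysctl config files now
$ echo 'vm.max_map_count=655300' | sudo tee /etc/sysctl.d/99-max-map-count.conf
$ sudo sysctl --system
```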
Here I tried setting `vm.max_map_count` to ten times its current value.
```bash
$ sysctl vm.max_map_count
vm.max_map_count = 65530
$ sudo sysctl -w vm.max_map_count=655300
vm.max_map_count = 655300
```
With that countermeasure in place, I ran the code that generates the arrays of empty arrays again.
```bash
$ node --max-old-space-size=$((1024 * 48)) -e 'Array.from({length: 10000}, v => { console.log(process.memoryUsage().rss / 1024 / 1024 / 1024); return Array.from({length: 1000000}, v => []); });'
```
This time, as hoped, it used up the full ~48 GB it had been given before finally terminating abnormally.
```text
<Omitted>
48.66949462890625
48.756752014160156
48.79401397705078
48.831199645996094
48.86867141723633
<--- Last few GCs --->
<Omitted>
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x9fd5f0 node::Abort() [node]
<Omitted>
11: 0x13cc8f9 [node]
Aborted (core dumped)
```
In the end it was not a node-specific problem, nor even a C++ problem. Because it initially stopped at 16.8 GB, I had been guessing that the heap size was capped separately somewhere, that GC just makes memory usage behave that way, or that `new`-ing some 2.2 billion objects in C++ exceeds what a 32-bit integer can count and makes `new` fail.