Re: Memory issue
* Balog Pal:
"Alf P. Steinbach"
I want to use the std::vector::push_back function to keep data in
RAM. What will happen if no more memory is available?
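(For reference, the language-level answer is that push_back reports failure
by throwing std::bad_alloc. A minimal sketch of handling that, assuming
nothing in particular about the platform:

    #include <iostream>
    #include <new>        // std::bad_alloc
    #include <vector>

    int main()
    {
        std::vector<int> v;
        try
        {
            for( ;; )
            {
                v.push_back( 42 );   // throws std::bad_alloc if the vector
                                     // cannot obtain more memory
            }
        }
        catch( std::bad_alloc const& )
        {
            std::cout << "push_back failed after " << v.size() << " elements.\n";
        }
    }

Whether you ever actually get to see that exception is pretty much what the
rest of this exchange is about: in practice the machine may crawl for a long
time first, and on some systems -- see the overcommit discussion further
down -- the process may simply be killed instead.)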
In practice, what happens on a modern system as free memory
becomes exhausted and/or very fragmented is that the
system slows to a crawl, so you're unlikely to actually reach
that limit with a set of small allocations.
On modern systems, it's not rare for the actual memory to be the
same size as the virtual memory, which means that in practice,
you'll never page.
No, that theory isn't practice. I think what you meant to write, had you
thought it through, would have been "which means that, provided there are no
other processes using much memory, an ideal OS won't page". And some systems
may work like that, and when you're lucky you don't have any other processes
competing for actual memory. :-)
You set the swap file size to 0 and there is no swap, ever.
First, very few would use a program that modified the system page file settings
just to simplify its own internal memory handling.
Second, there's no way to do it in standard C++.
Third, the context wasn't about how to turn page file usage off; it was about
what happens on a system with enough memory but with page file usage enabled.
On a machine
that has 2G or 4G of RAM, it is a reasonable setting. (I definitely use such a
configuration with Windows, and only add swap after there is a clear
reason...)
Adding so much swap just to get your crawl effect is practical for what, exactly?
I'm sorry but that's a meaningless question, incorporating three false assumptions.
First, not all systems have enough physical RAM to turn off use of page file,
even if a program could recommend to the user that he/she do that, so this
assumption is false.
Second, right now we're at a special moment in time where on most PCs the
physical RAM size matches the address space size available to and used by most
programs. But as you note below, "bloatware is like gas, fills every cubic bit
of space". It's also known as Wirth's law. And it means that in a few years
we'll be back at the usual usual where the processes running in total use far,
far more virtual memory than there is physical RAM. So the crawl effect is what
is observed in general, and it does not belong to any particular person, so
also this assumption is false.
Third, the assumption that most or all PCs are configured with use of page file
turned off, so that one would actively have to turn it on, is also false.
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.
That is another hard limit; very common OSes use a memory layout model that
only gives out 2G or 3G of address space, and then that is it. (When WIN32 was
introduced with this property back in the 90s we were talking about the new
'640k shall be enough for everyone' problem, though the usual memory in PCs of
those days was 8-16M, and filling a gig looked insane -- but bloatware
is like a gas, it fills every cubic bit of space. ;)
Yeah.
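To put a concrete sketch (mine, not from the thread) on Andy's point: in a
32-bit process on a machine with plenty of RAM, grabbing address space in
large untouched chunks typically ends in std::bad_alloc somewhere around the
2 or 3 GB mark, without the machine ever starting to thrash. In a 64-bit
process the same loop can of course run much, much longer before anything
fails.

    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <vector>

    int main()
    {
        std::size_t const chunkSize = 64*1024*1024;   // 64 MiB per chunk
        std::vector<char*> chunks;                    // keep the pointers alive
        try
        {
            for( ;; )
            {
                // Each chunk consumes address space, but its pages are never
                // touched, so very little physical memory is actually used.
                chunks.push_back( new char[chunkSize] );
            }
        }
        catch( std::bad_alloc const& )
        {
            std::cout
                << "std::bad_alloc after about " << chunks.size()*64
                << " MiB of address space.\n";
        }
        for( std::size_t i = 0; i < chunks.size(); ++i ) { delete[] chunks[i]; }
    }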
[snip]
However, a very large allocation might fail.
Or not. Some OS's don't tell you when there's not enough
virtual memory, in which case, the allocation works, but you get
a core dump when you use the memory.
Example?
Linux. Google for memory overcommit (or there is a good description in
Exceptional C++ Style). It is exactly as nasty as it sounds -- the OS just
gives you address space, and when it runs out of pages you get shot on ANY
access. There is no way to write a conforming C++ implementation for such an
environment. :(
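For the record, here's roughly what that looks like from the program's point
of view (a sketch of mine; the 8 GiB figure is arbitrary and assumes a 64-bit
build):

    #include <cstddef>
    #include <cstring>
    #include <iostream>
    #include <new>

    int main()
    {
        // Arbitrary "too large" size; assumes a 64-bit build. Adjust as needed.
        std::size_t const nBytes = std::size_t( 8 )*1024*1024*1024;

        char* p = 0;
        try
        {
            p = new char[nBytes];   // with overcommit this often "succeeds"
                                    // even when the memory isn't really there
        }
        catch( std::bad_alloc const& )
        {
            std::cout << "Allocation refused up front -- the well-behaved outcome.\n";
            return 0;
        }

        std::cout << "Allocation apparently succeeded; now touching the pages...\n";
        std::memset( p, 1, nBytes );    // here the OOM killer may terminate this
                                        // process (or some other process!), with
                                        // no exception and nothing to catch
        std::cout << "Survived, so the memory really was available.\n";
        delete[] p;
    }

(The behaviour can be tuned system-wide via /proc/sys/vm/overcommit_memory --
setting it to 2 gives strict accounting, at which point the allocation itself
fails and C++-level detection works again -- but that's an administrator
decision, not something a portable program can rely on.)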
In summary, standard C++ memory exhaustion detection is unreliable on Linux and
in general unreliable on Windows. ;-)
Cheers,
- Alf