Bernd Schmidt bernds_cb1 at t-online.de
Mon Jun 9 14:56:27 UTC 2008

Replying to multiple messages in one.

>> Well, wasting memory at run-time is inherent in the design of busybox.
> Only on machines which cannot share text segment.

Sorry, but this whole discussion exists only because the data/bss is 
bigger than it needs to be for any single applet.

Denys Vlasenko wrote:
> On Monday 09 June 2008 12:55, Bernd Schmidt wrote:
>>> IOW: crypt() which uses static buffers is as likely
>>> to die/lock up on OOMing machine as malloc() based one.
>> Incorrect on some systems, notably nommu which is quite a likely setup 
>> for uClibc.  Also, an order-5 malloc on nommu is much more likely to 
>> fail than taking an order-0 page fault on mmu.  A 70k allocation _will_ 
>> fail occasionally, even on systems that have plenty of memory free.
> In this situation, execv'ing of an application with additional 70k
> of static buffers will fail even more readily, right? Because
> you do not require two smaller contiguous areas
> (N kb for app and 70k for buffer) but one contiguous area of N+70 kb.

Sigh.  Yes, and that's the desired behaviour.  From the execve manpage:

        ENOMEM Insufficient kernel memory was available.

 From the crypt manpage:

        ENOSYS The crypt() function was not implemented, probably
               because of U.S.A. export restrictions.

It's really not a hard concept, this whole documented behaviour thing.
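To make the point concrete: both failure modes above are part of the
documented interface, so a caller can actually test for them.  A minimal
sketch (the spawn() wrapper is hypothetical, not busybox code; it only
illustrates reacting to a documented errno):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: execve() only returns on failure, and the
 * manpage documents which errno values (ENOMEM among them) can come
 * back.  The caller gets a well-defined point at which to react. */
int spawn(const char *path, char *const argv[], char *const envp[])
{
    execve(path, argv, envp);
    /* Only reached on failure; errno is set per the documented list. */
    fprintf(stderr, "execve %s: %s\n", path, strerror(errno));
    return -1;
}
```

Checking the return value of crypt() against NULL works the same way:
the failure is visible at one documented call site, not somewhere deep
inside the applet.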

>> and infinitely better than crashing in the middle of a program and
>> leaving things in an inconsistent state.  Even when you get OOM,
>> there are different modes of failure,
> On the microscopic level, yes, they are different.
> In a bigger picture, no. The machine is unusable one way or another.

But what are the consequences of failure?  Okay, so I agree that a 
production machine should be big enough not to go OOM.  Reality is more 
difficult: maybe there's a memory leak somewhere; in any case, OOM does 
happen occasionally.  The question then becomes: where and how do we 
fail, and what are the consequences?  Do we fail at a point where the 
possibility of a memory allocation failure (or other hard error) is 
documented and can be dealt with, or do we fail in a random place?  In 
one of these cases it's possible to engineer a certain measure of 
robustness; in the other it's impossible.  The machine may be 
unusable in both cases, but in one case the damage can be contained more 
easily.
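To illustrate the distinction, a rough sketch: allocate the work buffer
explicitly and report failure through the documented NULL/ENOMEM
channel, so the caller fails at a known point instead of crashing
mid-operation (get_work_buffer is a made-up name, not an actual busybox
API):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical applet helper: grab the large buffer up front.  On
 * failure the caller sees NULL with errno set, a documented point at
 * which it can bail out cleanly, print a diagnostic, or retry. */
static void *get_work_buffer(size_t size)
{
    void *buf = malloc(size);
    if (!buf)
        errno = ENOMEM;  /* malloc already sets this on most libcs */
    return buf;
}
```

A caller would check the return value once, before starting work, e.g.
`if (!(buf = get_work_buffer(70 * 1024))) bb_error_msg_and_die(...)` --
compare that with a static 70k buffer, where the same shortage surfaces
as an execve() failure or a fault in the middle of the applet.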

There are computers which run software that is more complex than 
busybox, where it's not just a question of the user experience of 
printing "out of memory" on a shell prompt.

With that, I'll give up on convincing you; unless someone else objects 
soon I'll take Bernhard's and Daniel's messages as consensus to start 
fixing things.

This footer brought to you by insane German lawmakers.
Analog Devices GmbH      Wilhelm-Wagenfeld-Str. 6      80807 Muenchen
Sitz der Gesellschaft Muenchen, Registergericht Muenchen HRB 40368
Geschaeftsfuehrer Thomas Wessel, William A. Martin, Margaret Seif

More information about the uClibc mailing list