bugs in malloc

Freeman Wang xwang at ubicom.com
Mon Nov 23 19:55:26 UTC 2009


We are using uClibc on uClinux, which extends the heap using MMAP
instead of SBRK. We found the following issues with uClibc version
0.9.30.1.
 
1. We found that, with one particular application, the process would
get stuck at line 162 of malloc.c; the cause was that mem->next
pointed back to itself.
 
It turns out, we believe, to be because new_mmb is allocated after the
mmb list has been iterated through to find the insertion point.
Allocating new_mmb from the mmb heap can itself trigger an mmap and
insert a record for the new block into the same list; so when the mmb
heap runs out just as the regular heap has been extended, the saved
insertion-point pointers go stale and the mmb list gets corrupted. We
moved the new_mmb allocation up, before the list walk, and the problem
seems to have been fixed.
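 
In effect, the fixed path allocates before it walks. A simplified
sketch of the reordered code (names as in malloc.c; locking and error
handling omitted):
 
    /* Allocate the descriptor FIRST.  If the mmb heap must be
       extended, that nested allocation mmaps a fresh block and
       inserts its own record into __malloc_mmapped_blocks; doing it
       now means the walk below sees a stable list.  */
    new_mmb = malloc_from_heap (sizeof *new_mmb,
                                &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
 
    /* Only now find the insertion point, so prev_mmb/mmb cannot go
       stale underneath us.  */
    for (prev_mmb = 0, mmb = __malloc_mmapped_blocks;
         mmb;
         prev_mmb = mmb, mmb = mmb->next)
      if (block < mmb->mem)
        break;
 
    new_mmb->next = mmb;
    new_mmb->mem = block;
    new_mmb->size = block_size;
    if (prev_mmb)
      prev_mmb->next = new_mmb;
    else
      __malloc_mmapped_blocks = new_mmb;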
 
2. While trying to fix the above issue, we read the code and found a
multi-threading issue in this mmb list handling. The list is only
partially protected by locking in free.c and not protected by any lock
at all in malloc.c. Is that intentional?
 
3. In an embedded world without an MMU, it is not guaranteed that the
mmap syscall will always return a valid block, which is presumably why
the return value, block, is checked immediately after the syscall. But
it seems the return value of new_mmb, which is allocated from the mmb
heap, is never checked. Is that a potential issue?
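 
A minimal sketch of the kind of check we mean (hypothetical; the exact
unwinding, and whether heap_lock is still held at that point, depends
on the surrounding code):
 
    new_mmb = malloc_from_heap (sizeof *new_mmb,
                                &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
    if (! new_mmb)
      {
        /* Hypothetical unwinding: we could not record the mapping,
           so undo it and fail the allocation.  */
        munmap (block, block_size);
        __heap_unlock (heap_lock);
        return 0;
      }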
 
Thanks
Freeman
 
 
This is a proposed fix for the first two problems.
For the second issue, the mmb_heap_lock is made recursive and the
scope of the regular heap_lock is extended to cover the mmb heap
handling. This seems safe as long as the two locks are always taken in
the same order.
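 
As a standalone illustration of the recursive part (a hypothetical
demo program, not uClibc code; PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP
is a GNU extension, so build with -D_GNU_SOURCE): once heap_lock
covers the mmb handling, free_to_heap can take the mmb lock while it
is already held, and a plain mutex would deadlock there where a
recursive one does not.
 
    /* Hypothetical demo of nested locking on a recursive mutex.
       Build: gcc -D_GNU_SOURCE demo.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
 
    static pthread_mutex_t lock = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
 
    static void nested (void)
    {
      /* With a plain mutex this second lock from the same thread
         would deadlock.  */
      pthread_mutex_lock (&lock);
      puts ("nested lock acquired");
      pthread_mutex_unlock (&lock);
    }
 
    int main (void)
    {
      pthread_mutex_lock (&lock);
      nested ();                    /* re-locks the same mutex */
      pthread_mutex_unlock (&lock);
      return 0;
    }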
 
diff --git a/libc/stdlib/malloc/free.c b/libc/stdlib/malloc/free.c
index 9155291..b3b8eb6 100644
--- a/libc/stdlib/malloc/free.c
+++ b/libc/stdlib/malloc/free.c
@@ -179,14 +179,14 @@ __free_to_heap (void *mem, struct heap_free_area **heap
              /* Start searching again from the end of this block.  */
              start = mmb_end;
 
+             /* Release the descriptor block we used.  */
+             free_to_heap (mmb, &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
+
              /* We have to unlock the heap before we recurse to free the mmb
                 descriptor, because we might be unmapping from the mmb
                 heap.  */
              __heap_unlock (heap_lock);
 
-             /* Release the descriptor block we used.  */
-             free_to_heap (mmb, &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
-
              /* Do the actual munmap.  */
              munmap ((void *)mmb_start, mmb_end - mmb_start);
 
diff --git a/libc/stdlib/malloc/malloc.c b/libc/stdlib/malloc/malloc.c
index e449625..6aff8e3 100644
--- a/libc/stdlib/malloc/malloc.c
+++ b/libc/stdlib/malloc/malloc.c
@@ -48,7 +48,7 @@ struct malloc_mmb *__malloc_mmapped_blocks = 0;
 HEAP_DECLARE_STATIC_FREE_AREA (initial_mmb_fa, 48); /* enough for 3 mmbs */
 struct heap_free_area *__malloc_mmb_heap = HEAP_INIT_WITH_FA (initial_mmb_fa);
 #ifdef HEAP_USE_LOCKING
-malloc_mutex_t __malloc_mmb_heap_lock = PTHREAD_MUTEX_INITIALIZER;
+malloc_mutex_t __malloc_mmb_heap_lock = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
 #endif
 #endif /* __UCLIBC_UCLINUX_BROKEN_MUNMAP__ */
 
@@ -151,19 +151,18 @@ __malloc_from_heap (size_t size, struct heap_free_area **heap
          /* Try again to allocate.  */
          mem = __heap_alloc (heap, &size);
 
-         __heap_unlock (heap_lock);
-
 #if !defined(MALLOC_USE_SBRK) && defined(__UCLIBC_UCLINUX_BROKEN_MUNMAP__)
          /* Insert a record of BLOCK in sorted order into the
             __malloc_mmapped_blocks list.  */
 
+         new_mmb = malloc_from_heap (sizeof *new_mmb, &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
+
          for (prev_mmb = 0, mmb = __malloc_mmapped_blocks;
               mmb;
               prev_mmb = mmb, mmb = mmb->next)
            if (block < mmb->mem)
              break;
 
-         new_mmb = malloc_from_heap (sizeof *new_mmb, &__malloc_mmb_heap, &__malloc_mmb_heap_lock);
          new_mmb->next = mmb;
          new_mmb->mem = block;
          new_mmb->size = block_size;
@@ -177,6 +176,7 @@ __malloc_from_heap (size_t size, struct heap_free_area **heap
                            (unsigned)new_mmb,
                            (unsigned)new_mmb->mem, block_size);
 #endif /* !MALLOC_USE_SBRK && __UCLIBC_UCLINUX_BROKEN_MUNMAP__ */
+         __heap_unlock (heap_lock);
        }
     }
 

