svn commit: branches/uClibc-nptl: libc/sysdeps/linux/common libpthread/lin etc...

Will Newton will.newton at gmail.com
Fri Mar 13 11:26:31 UTC 2009


On Thu, Mar 12, 2009 at 8:57 PM, Denys Vlasenko
<vda.linux at googlemail.com> wrote:
>> > --- metag-uClibc2/libpthread/linuxthreads/descr.h       5 Feb 2008 14:49:33 -0000       1.2
>> > +++ metag-uClibc2/libpthread/linuxthreads/descr.h       23 Apr 2008 13:27:00 -0000
>> > @@ -123,9 +123,9 @@
>> >       union dtv *dtvp;
>> >       pthread_descr self;      /* Pointer to this structure */
>> >       int multiple_threads;
>> > -# ifdef NEED_DL_SYSINFO
>> >       uintptr_t sysinfo;
>> > -# endif
>> > +      uintptr_t stack_guard;
>> > +      uintptr_t pointer_guard;
>> > ...
>> > +  /* Copy the stack guard canary.  */
>> > +#ifdef THREAD_COPY_STACK_GUARD
>> > +  THREAD_COPY_STACK_GUARD (new_thread);
>> > +#endif
>> > +
>> > +  /* Copy the pointer guard value.  */
>> > +#ifdef THREAD_COPY_POINTER_GUARD
>> > +  THREAD_COPY_POINTER_GUARD (new_thread);
>> > +#endif
>> >
>> > these members appear to be only written to.
>> > Why do you need write-only members?
>> > THREAD_COPY_STACK_GUARD and THREAD_COPY_POINTER_GUARD
>> > are not defined anywhere.
>>
>> They are defined in the TLS code from glibc. They have no particular
>> effect if they are not defined, but they reduce the diff from
>> upstream LinuxThreads.
>
> I take it that means we can omit them and have smaller code.

Code size is not the only measure of code quality, even on a
resource-constrained embedded system.
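
For reference, these hooks live in glibc's per-architecture TLS headers;
here is a rough sketch of what a LinuxThreads port typically provides
(the member path follows the descr.h hunk quoted above and varies per
architecture, so treat this as illustrative rather than a copy of any
particular tls.h):

  /* Illustrative only -- modelled on glibc's per-arch tls.h.  */
  #define THREAD_COPY_STACK_GUARD(descr) \
    ((descr)->p_header.data.stack_guard \
       = THREAD_GETMEM (THREAD_SELF, p_header.data.stack_guard))

  #define THREAD_COPY_POINTER_GUARD(descr) \
    ((descr)->p_header.data.pointer_guard \
       = THREAD_GETMEM (THREAD_SELF, p_header.data.pointer_guard))

When a port defines them, thread creation copies the parent's stack
canary and pointer-mangling value into the new thread's descriptor; when
it does not, the #ifdef'd calls compile away to nothing.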

>> > -     If no threads have been created yet, clear it just in the
>> > -     current thread.  */
>> > +     If no threads have been created yet, or if we are exiting, clear
>> > +     it just in the current thread.  */
>> >
>> >   struct pthread_key_delete_helper_args args;
>> >   args.idx1st = key / PTHREAD_KEY_2NDLEVEL_SIZE;
>> >   args.idx2nd = key % PTHREAD_KEY_2NDLEVEL_SIZE;
>> > -  if (__pthread_manager_request != -1)
>> > +  if (__pthread_manager_request != -1
>> > +      && !(__builtin_expect (__pthread_exit_requested, 0)))
>> >
>> > what does this fix? (I'd also remove __builtin_expect; it uglifies code
>> > much more than it helps wrt performance).
>>
>> It's a trade-off against reducing the diff from upstream LinuxThreads.
>> It's easy to break code while cleaning it up, e.g. by losing a !.
>
> So what does it fix?

I have not seen the bug myself, nor a direct report of it. However, it
appears there is a possible deadlock on exit: a thread notifies the
manager of a key deletion while the manager is already shutting down.
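
Roughly, the extra test keeps an exiting process from waiting on a
manager thread that will never answer. A sketch of the surrounding
logic (paraphrased, not verbatim source; ask_manager_to_clear_key
stands in for the real request/suspend plumbing):

  if (__pthread_manager_request != -1
      && !__builtin_expect (__pthread_exit_requested, 0))
    {
      /* Normal multithreaded case: ask the manager to clear the key's
         slot in every thread, then wait for its reply.  If the process
         is exiting the manager is shutting down, the reply never
         arrives, and without the extra test the caller blocks forever.  */
      ask_manager_to_clear_key (&args);   /* hypothetical helper */
    }
  else
    {
      /* No threads created yet, or we are exiting: clear the slot in
         the current thread only.  */
      if (self->p_specific[args.idx1st] != NULL)
        self->p_specific[args.idx1st][args.idx2nd] = NULL;
    }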

>> >  #if defined HAS_COMPARE_AND_SWAP
>> >        wait_node_dequeue(pp_head, pp_max_prio, p_max_prio);
>> >  #endif
>> > +
>> > +      /* Release the spinlock before restarting.  */
>> > +#if defined TEST_FOR_COMPARE_AND_SWAP
>> > +      if (!__pthread_has_cas)
>> > +#endif
>> > +#if !defined HAS_COMPARE_AND_SWAP || defined TEST_FOR_COMPARE_AND_SWAP
>> > +       {
>> > +         __pthread_release(&lock->__spinlock);
>> > +       }
>> > +#endif
>> > +
>> >       restart(p_max_prio->thr);
>> > -      break;
>> > +
>> > +      return;
>> >     }
>> >   }
>> >
>> > what does this fix?
>>
>> The spinlock unlock issue. That was the main issue I needed to fix
>> with this patch set; most of the rest is just syncing up with
>> LinuxThreads HEAD so we can more easily compare and merge in the
>> future.
>
> I spent 20 minutes working out that the reason here is:
>
>        Release the spinlock *before* restarting
>
> with the emphasis on "before", because the code already releases the
> lock via the "break" statement - it falls out of the while() loop
> and calls __pthread_release(&lock->__spinlock). But that happens
> after restart(p_max_prio->thr).
>
> What bad things currently happen if we do it the other way around?

The waiting thread is restarted while the lock is still held, so it
spins on the lock waiting for it to be relinquished. Eventually the
lock-holding thread is rescheduled and completes the unlock, but not
before a lot of time has been wasted spinning.
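
Schematically the change is just a reordering (simplified from the hunk
above; the HAS_COMPARE_AND_SWAP/TEST_FOR_COMPARE_AND_SWAP conditionals
are left out):

  /* Before: wake the highest-priority waiter first; the spinlock is
     only released after the break, once restart() has returned.  */
  restart (p_max_prio->thr);
  break;

  /* After: hand the spinlock back first, then wake the waiter, so it
     can take the lock immediately instead of spinning.  */
  __pthread_release (&lock->__spinlock);
  restart (p_max_prio->thr);
  return;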

