Forgot to CC tech-talk
On 7/17/20 9:27 AM, Michael Davidsaver wrote:
> On 7/17/20 9:03 AM, Simon Reiter wrote:
>> Hi Bruno and Michael,
>>
>> @Bruno:
>> first of all thanks for mentioning the pyDevSup package. I was not aware of it. I will have a look into this later.
>>
>> Since you mentioned that pyDevSup ends up with a real IOC with real records, what is the disadvantage of pcaspy? I know that its PVs do not follow the existing record style with all the sub-elements of a record, but so far I have not seen any disadvantage in this.
>>
>>
>> @Michael:
>> I re-ran the dummy code and ended up with the back traces of all 3 processes (see below).
>
> To be clear, I was asking for a dump of all threads from the one process with the
> apparent deadlock.
>
> If there really is only one thread in process #3, then this can't be a simple
> deadlock. The only possibility which comes to my mind is a race with fork()
> such that, at the moment process #3 was clone()'d from process #2, another
> thread in process #2 was holding the mutex.
>
> The python subprocess module shouldn't let this happen though since it does
> fork() and then execv(), and the situation which I describe is only possible
> if fork() is not followed by execv().
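The fork()-while-locked hazard described above can be reproduced with plain Python, independent of CA. Below is a minimal sketch (the helper function and timing values are illustrative, not from the original report): if another thread holds a lock at the moment of fork(), the child inherits the lock in its locked state even though the holding thread does not exist in the child, so any blocking acquire() there would hang forever.

```python
import os
import threading
import time


def child_sees_lock_held():
    """Fork while another thread holds a lock; report what the child sees."""
    lock = threading.Lock()
    ready = threading.Event()

    def holder():
        with lock:
            ready.set()          # signal that the lock is now held
            time.sleep(0.5)      # keep holding it across the fork

    t = threading.Thread(target=holder)
    t.start()
    ready.wait()                 # don't fork until holder() owns the lock

    pid = os.fork()
    if pid == 0:
        # Child: holder() does not run here, yet the lock was copied in
        # its locked state.  A blocking acquire() would deadlock, so
        # probe non-blockingly instead and report via the exit status.
        got_it = lock.acquire(blocking=False)
        os._exit(1 if got_it else 0)

    _, status = os.waitpid(pid, 0)
    t.join()
    return os.WEXITSTATUS(status) == 0   # True: child found the lock held


if __name__ == "__main__":
    print("child saw the lock held:", child_sees_lock_held())
```

This mirrors the point about subprocess: following fork() immediately with execv() discards the inherited lock state along with the rest of the parent's address space, so the race cannot bite there.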
>
>
>
> ...
>> process 3:
>>>
>>> Thread 1 (Thread 0x7f18d1de5740 (LWP 59765)):
>>> #0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
>>> #1 0x00007f18d1611eb6 in _L_lock_941 () from /lib64/libpthread.so.0
> ...
>>> #99 0x00007f18d193819f in Py_Main (argc=<optimized out>, argv=<optimized out>) at /usr/src/debug/Python-2.7.5/Modules/main.c:640
>>> #100 0x00007f18d0b53555 in __libc_start_main (main=0x400660 <main>, argc=2, argv=0x7ffdcc3bd358, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffdcc3bd348) at ../csu/libc-start.c:266
>>> #101 0x000000000040068e in _start ()
>>
>>> On Jul 16, 2020, at 19:29, Michael Davidsaver <mdavidsaver at gmail.com> wrote:
>>>
>>> On 7/16/20 4:03 AM, Simon Reiter via Tech-talk wrote:
>>>> But we ran into some issues when using CAProcess. I already reported this issue on github/pyepics (https://github.com/pyepics/pyepics/issues/210), but it was (highly ;) ) recommended to raise the topic with a wider audience.
>>>>
>>>> It seems that creating a CAProcess is not always successful; please see the detailed information on GitHub. It somehow gets stuck at "libca.ca_context_create(ctx)". The example shown there was only used to disclose and pin down the problem.
>>>
>>> Which version(s) of Base have you tested with?
>>>
>>> If this is some kind of deadlock, it would be useful to know what
>>> any other threads are doing. With GDB run "thread apply all backtrace"
>>> and attach the (likely verbose) output.
>>>
>>> Are there any special generalTime providers installed (either
>>> current time, or event time)?
>>>
>>> FYI. Base >= 7.0.3 has an optimization to skip the locking in
>>> epicsTimeGetCurrent() when no special current time provider
>>> is registered (the common case).
>>
>
- Replies:
- Re: Pure Python IOC (CAProcess issue) Michael Davidsaver via Tech-talk
- References:
- Pure Python IOC (CAProcess issue) Simon Reiter via Tech-talk
- Re: Pure Python IOC (CAProcess issue) Michael Davidsaver via Tech-talk