What is your image size, datatype, and frame rate? How many FFTs do you need to do per second?
Mark
________________________________
From: Jörn Dreyer <j.dreyer at hzdr.de>
Sent: Friday, September 25, 2020 1:44 AM
To: tech-talk at aps.anl.gov; Mark Rivers
Subject: Re: EPICS QT question
Hi Mark,
Thanks for the tips. I will give that a try. The first step is to do an FFT on the image data.
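As a reference point, a 2-D FFT on a camera frame is a one-liner with NumPy; the sketch below assumes a 1024x1024 Float32 frame (the sizes mentioned later in this thread), not the actual camera parameters, which were not stated:

```python
import numpy as np

def power_spectrum(frame: np.ndarray) -> np.ndarray:
    """Return the centered log power spectrum of a 2-D image."""
    fft = np.fft.fftshift(np.fft.fft2(frame))  # zero frequency moved to center
    return np.log1p(np.abs(fft) ** 2)          # log scale for display

# Example on a synthetic 1024x1024 Float32 frame
frame = np.random.rand(1024, 1024).astype(np.float32)
spectrum = power_spectrum(frame)
```

Whether an Odroid-class board can keep up with this at the camera's frame rate is exactly the sizing question Mark asks above.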
Am Donnerstag, 24. September 2020, 18:29:07 CEST schrieb Mark Rivers:
> Hi Jörn,
>
> > One solution would be to implement all the magic math we use as plugins
> > for areaDetector and combine them into the necessary chain. But the
> > machine we use to read the camera is not powerful enough (an Odroid XU1)
> > to do that. What would be the performance if we ran the plugins on a
> > separate machine?
> What you propose is quite reasonable. The camera IOC would need to run the
> NDPluginPva, which is quite efficient. Here is a screen shot where it is
> processing 784 frames/s of 1024x1024 Float32 images = 3.2 GB/s.
>
> [screen shot attached]
>
> You can then run an IOC on a more powerful machine with the pvaDriver. Here
> is a screen shot when it is processing 303 frames/s of 1024x1024 Float32
> images = 1.2 GB/s.
>
> [screen shot attached]
>
> Of course you need to make sure the network link between the machines can
> handle the required bandwidth.
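The data rates quoted above follow directly from frame size x pixel depth x frame rate; a quick sanity check of that arithmetic (pure Python, no EPICS involved):

```python
def data_rate_gbs(width, height, bytes_per_pixel, frames_per_s):
    """Raw image data rate in GB/s (decimal gigabytes)."""
    return width * height * bytes_per_pixel * frames_per_s / 1e9

# 1024x1024 Float32 images = 4 bytes/pixel
print(data_rate_gbs(1024, 1024, 4, 784))  # NDPluginPva example above
print(data_rate_gbs(1024, 1024, 4, 303))  # pvaDriver example above
```

These come out at about 3.3 GB/s and 1.3 GB/s, matching the 3.2 GB/s and 1.2 GB/s figures above to within truncation, and both well beyond a single 10 GbE link's practical throughput only in the first case.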
> > And how do we make sure that they all stay in sync?
>
> Can you explain what you mean by that?
I would need to make sure that all the post-processed images are based on the live image that is displayed. I guess that if one of the processing steps takes longer than the camera's trigger period, EPICS Qt will update the live image before the post-processing is finished and its result displayed. The reason is that we want to store all the data in a common file at the end.
Jörn
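One common way to get this consistency (sketched here in plain Python; the product names "raw"/"fft"/"peaks" are hypothetical, but the uniqueId field is a real part of areaDetector's NDArray/NTNDArray and is propagated by all plugins) is to key every processed result by the uniqueId of the raw frame it came from, and only write a record to the common file once every expected product for that id has arrived:

```python
from collections import defaultdict

EXPECTED = {"raw", "fft", "peaks"}  # hypothetical set of products per frame

class FrameAssembler:
    """Collect per-frame products keyed by NTNDArray uniqueId and emit a
    complete record only when every expected product has arrived."""

    def __init__(self, on_complete):
        self.pending = defaultdict(dict)
        self.on_complete = on_complete  # called with (unique_id, products)

    def add(self, unique_id, kind, data):
        self.pending[unique_id][kind] = data
        if EXPECTED <= self.pending[unique_id].keys():
            # All products for this frame present: hand off and forget it
            self.on_complete(unique_id, self.pending.pop(unique_id))

# Usage: each PVA monitor callback forwards its result with the source uniqueId
records = []
asm = FrameAssembler(lambda uid, prod: records.append((uid, prod)))
asm.add(42, "raw", b"image-bytes")
asm.add(42, "fft", b"fft-bytes")
asm.add(42, "peaks", [(10, 20)])
```

With this scheme the live display can run ahead freely; the file only ever contains matched sets, regardless of how long any one processing step takes.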
>
> Mark
>
> ________________________________
> From: Tech-talk <tech-talk-bounces at aps.anl.gov> on behalf of Jörn Dreyer
> via Tech-talk <tech-talk at aps.anl.gov>
> Sent: Thursday, September 24, 2020 8:53 AM
> To: tech-talk at aps.anl.gov
> Subject: EPICS QT question
>
>
> Hi,
>
>
> I'm currently developing an application based on EPICS Qt. This toolkit
> makes things so much easier than PyQwt, which I used for the first version
> of the app. The app reads an image from a camera, displays the original,
> and does some processing of the image such as Fourier transforms, peak
> finding, fitting, etc.
>
>
> To make sure the analyzed pictures and the camera image stay in sync, I
> wanted to read the image data from a PvaClient, display it in a QEImage,
> then do all the math and display the results in other QEImage widgets.
>
> But unfortunately, the set(Pva)Image function is private.
>
>
> One solution would be to implement all the magic math we use as plugins for
> areaDetector and combine them into the necessary chain. But the machine we
> use to read the camera is not powerful enough (an Odroid XU1) to do that.
> What would be the performance if we ran the plugins on a separate machine?
> And how do we make sure that they all stay in sync?
>
>
> Does anyone have experience with such a scenario?
>
>
> Regards
>
>
> Jörn Dreyer