Hi Abdalla,
I don’t recall seeing any problems with the asyn device support when moving from 3.14 to 3.15 or 7.0. So I don’t think there are general issues you need to worry about.
The first thing is to diagnose the problem better.
- When the CPU goes to 100%, run “top” and see what process is using the CPU.
- In “top”, press “H” and you will see what threads are using the most CPU.
If there is a memory leak you can run the IOC application using “valgrind”. That is good at telling you where there are definite and probable memory leaks.
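For example, something along these lines (the IOC binary and startup script names here are just placeholders for your own):

    valgrind --leak-check=full --log-file=ioc-vg.log ./bin/linux-x86_64/myioc st.cmd

The IOC will run much more slowly under valgrind, and the leak summary is written when the IOC exits.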
If you are interested in eventually moving the device support to asyn, here is the page for asynPortDriver.
https://epics.anl.gov/modules/soft/asyn/R4-35/asynPortDriver.html
It contains this link to an introductory talk on writing a driver, starting with a simple version and getting progressively more complex:
https://subversion.xray.aps.anl.gov/synApps/measComp/trunk/documentation/measCompTutorial.pdf
With asyn you typically don’t need to write any device support, because it comes with standard device support for most EPICS records. You only need to write a driver.
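For example, for an analog input the standard asyn device support lets a record bind straight to a driver parameter in the database (the port and parameter names below are made up):

    record(ai, "$(P)Temperature")
    {
        field(DTYP, "asynFloat64")
        field(INP,  "@asyn(MYPORT,0,1.0)TEMPERATURE")
        field(SCAN, "1 second")
    }

The driver then only has to implement readFloat64 (or fill in the parameter library of an asynPortDriver subclass); no record-specific device support code is needed.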
If you describe your device and device support I may be able to point you to an existing driver that most closely matches what you need to write.
Mark
Hi
With EPICS base 3.14.12.3 and 3.14.12.6 we had a driver developed using the EPICS devSup model and it was working fine. Now, with EPICS base 3.15.6, the driver runs fine for some period of time until, for an unknown reason, the server reaches 100% CPU usage. After restarting the server I noticed (through htop) that the IOC’s memory usage is actually increasing, although not at a high rate. We noticed the same behavior with another driver we developed: the first one uses a standard socket to communicate, while the second one uses SNMP v1.
The devSup structure is defined in the device support as:
struct devsup {
    long       number;
    DEVSUPFUN  report;
    DEVSUPFUN  init;
    DEVSUPFUN  init_record;
    DEVSUPFUN  get_ioint_info;
    DEVSUPFUN  io;
    DEVSUPFUN  misc;
} aiDevice = {
    6,
    NULL,         /* report */
    Init,         /* init */
    initRecord,   /* init_record */
    NULL,         /* get_ioint_info */
    ioRecord,     /* io */
    NULL          /* misc */
};
And the drvet structure is defined in the driver support as:
static drvet drvDevice = {
    2,
    NULL,   /* report */
    Init    /* init */
};
Here is how everything works in the driver (a short code sketch of steps 3–6 follows the list):
1. The drvet init function initializes communication to the configured devices and fills an array of structures, one per device.
2. The devSup init_record function parses the INP field, gets the device name, uses it to find the device in that array, and stores it in the record’s DPVT.
3. The devSup io record function reads DPVT and creates a POSIX thread.
4. This thread is detached; it reads the parsed INP field and calls the corresponding function from the driver support.
5. All driver support functions communicate with the corresponding device, fetch the value and store it in the record’s VAL field.
6. After that, dbScanLock is called, followed by a call to the record’s RECSUPFUN process function pointer, then dbScanUnlock is called.
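For illustration only, here is a minimal sketch of that pattern for an ai record on base 3.15 (devPvt, drvReadValue and the INP details are placeholders, not our real code, and error handling is stripped):

    #include <pthread.h>

    #include <dbAccess.h>    /* record access */
    #include <dbLock.h>      /* dbScanLock / dbScanUnlock */
    #include <dbCommon.h>
    #include <recSup.h>      /* struct rset (base 3.15) */
    #include <aiRecord.h>

    /* Placeholder for the per-record data that init_record stored in DPVT */
    typedef struct {
        void *pdevice;   /* entry from the driver's device array */
        int   signal;    /* whatever was parsed from the INP field */
    } devPvt;

    /* Placeholder for the driver-support call that fetches one value */
    extern long drvReadValue(void *pdevice, int signal, double *pvalue);

    /* Thread body: do the slow I/O outside the lock, then lock, process, unlock */
    static void *ioThread(void *arg)
    {
        aiRecord    *prec  = (aiRecord *)arg;
        devPvt      *ppvt  = (devPvt *)prec->dpvt;
        struct rset *prset = (struct rset *)prec->rset;
        double value;
        long status = drvReadValue(ppvt->pdevice, ppvt->signal, &value);

        dbScanLock((dbCommon *)prec);
        if (status == 0) {
            prec->val = value;
            prec->udf = 0;
        }
        (*prset->process)((dbCommon *)prec);   /* completes the asynchronous processing */
        dbScanUnlock((dbCommon *)prec);
        return NULL;
    }

    /* devSup io function: one detached POSIX thread per record processing */
    static long ioRecord(aiRecord *prec)
    {
        pthread_t      tid;
        pthread_attr_t attr;

        if (prec->pact)    /* second call, made from ioThread via process() */
            return 2;      /* I/O already done; 2 = do not convert VAL */

        prec->pact = 1;    /* mark asynchronous processing in progress */
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        pthread_create(&tid, &attr, ioThread, prec);
        pthread_attr_destroy(&attr);
        return 0;
    }

The PACT handling shown above is what asynchronous device support needs so that the second process() call completes the record instead of starting another read. Note also that this pattern creates a new detached thread for every record processing, which is easy to see in the per-thread view of top.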
Right now I am investigating the driver’s code for possible bugs, but I am sure we are doing something wrong that happened to work with base 3.14.12.3. Where could the problem be? Let me know if further info is needed.
We are aware of the benefits of asyn-based drivers, but we did not have time to build something from scratch. Any pointers to good material on developing asyn-based drivers would be very helpful.
Best Regards,
Abdalla Ahmad
Control Engineer
SESAME
Allan, Jordan.
Tel: (+962-5) 3511348 , ext. 265
Fax: (+962-5) 3511423
Mob: (+962-7)88183296
www.sesame.org.jo