Experimental Physics and Industrial Control System
On Mar 29, 2007, at 02:04, Terry Cornall wrote:
> Hi all.

Hi:

This naive question doesn't have as simple an answer as you might hope. One archive engine can usually collect about 10,000 samples/second; for example, it can monitor 1000 channels that all update at 10 Hz.

Problem 1: You won't know what you're missing. The Channel Access protocol can go into "flow control", skipping a few samples, but it's really hard to find out if, when, or how that's happening.

Problem 2: The amount of data. One 'double' sample uses a little over 20 bytes for the timestamp, status, severity, and value. For many values, the data file structure and index add relatively little to that, but you'll still get about 20 GB per day. How do you intend to back that up?

At the SNS, we run about 70 different engines. They are typically separated by subsystem, and often restarted daily to limit the possible data loss in case a sub-archive gets corrupted. Some store only a few values each second, others store the value of the IOC clock each second (don't ask me why), and others store about 1000 values each second, including waveforms. The total amount of data for Feb. 2007 is about 150 GB.

When you try to allow access to 'all', or as much as possible, somebody has to periodically create indices. This is mostly automated, but when there's a problem (for example, one index reaches the 2 GB file size limit and things need to get reorganized), that process takes a lot of time. The index mechanism certainly needs some improvement, but even after we eventually get that, moving those amounts of data over the network takes many days.

If somebody tells you that disks are cheap, please ask that person to take care of your archiving, then run as fast as you can.

-Kay
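The storage figures in the email can be checked with a quick back-of-envelope calculation. This is only a sketch of the arithmetic, assuming 20 bytes per archived 'double' sample (the email says "a little over 20 bytes") and a sustained rate of 10,000 samples/second:

```python
# Back-of-envelope check of the data volume quoted above.
# Assumed figures: 20 bytes per double sample (timestamp, status,
# severity, value) and 10,000 samples/second for one engine.

BYTES_PER_SAMPLE = 20          # "a little over 20 bytes" per sample
SAMPLES_PER_SECOND = 10_000    # e.g. 1000 channels updating at 10 Hz
SECONDS_PER_DAY = 86_400

bytes_per_day = BYTES_PER_SAMPLE * SAMPLES_PER_SECOND * SECONDS_PER_DAY
gb_per_day = bytes_per_day / 1e9

print(f"{gb_per_day:.2f} GB/day")  # prints "17.28 GB/day"
```

The raw samples alone come to roughly 17 GB/day; with file-structure and index overhead on top, that lands in the "about 20 GB per day" ballpark the email mentions.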
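On the "flow control" point, the email notes that dropped samples are hard to detect from the archive itself. A hedged sketch (not part of any EPICS tool, and the function name and tolerance are made up here) of one heuristic: compare consecutive archived timestamps against a channel's expected update period and flag suspiciously large gaps.

```python
# Hypothetical helper: flag gaps between consecutive archived samples
# that exceed the expected update period, which may indicate drops.

def find_gaps(timestamps, expected_period, tolerance=1.5):
    """Return (index, gap) pairs where spacing between consecutive
    samples exceeds tolerance * expected_period."""
    gaps = []
    for i in range(1, len(timestamps)):
        dt = round(timestamps[i] - timestamps[i - 1], 6)
        if dt > tolerance * expected_period:
            gaps.append((i, dt))
    return gaps

# Example: a 10 Hz channel with samples missing between t=0.2 and t=0.7
ts = [0.0, 0.1, 0.2, 0.7, 0.8]
print(find_gaps(ts, expected_period=0.1))  # prints "[(3, 0.5)]"
```

This only works for channels with a known, steady update rate; for event-driven channels, a gap in the archive is indistinguishable from a quiet channel, which is exactly the problem the email describes.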
ANJ, 10 Nov 2011