Task #821

Adjust latency and FADC window

Added by Alexandre Camsonne about 1 year ago. Updated 8 months ago.

Status: New
Priority: Normal
Start date: 09/18/2023
Due date:
% Done: 0%
Estimated time:
#1

Updated by Alexandre Camsonne about 1 year ago

  • Assignee set to Alexandre Camsonne
#2

Updated by Alexandre Camsonne 9 months ago

  • Priority changed from Normal to High
#3

Updated by Alexandre Camsonne 9 months ago

  • Tracker changed from Bug to Task

Hi Ed,

Almost every event has some low-energy pile-up in several of the blocks that belong to a cluster. Malek posted some results as a function of energy for 16 uA on LD2 at kinc_x36_6 here: https://logbooks.jlab.org/files/2024/02/4258330/TH2F_pileups_run4712_LH2_16uA_NPS-6m.gif

Even in elastics, which are very quiet, pile-up is present. One example is in slide 4 of Malek's presentation here: https://logbooks.jlab.org/files/2024/01/4243935/fit-method.pdf. Slide 6 of the same presentation shows the improvement in energy resolution provided by the removal of the pile-up pulses.

Carlos

On Sun, Mar 10, 2024 at 12:06 PM Edward R Kinney <> wrote:
Hi Carlos,

How often are you finding multiple waveforms in the 440 ns? Could you send us such a multiple waveform?
-Ed

From: Carlos Munoz Camacho <>
Date: Saturday, March 9, 2024 at 5:10 PM
To: Peter Bosted <>
Cc: Edward R Kinney <>, <>
Subject: Re: [Hallc_running] Suggestion to shorten NPS readout interval from 400 to 200 nsec
Dear Peter, all,

Thank you for looking into ways to decrease the data rate for the upcoming kinematic settings. Malek and I have discussed this proposal of reducing the FADC readout window, but we don't think we should do it.

As has been shown, waveform analysis is crucial to improving the energy resolution of the NPS calorimeter, which is in turn the key to the good missing-mass resolution needed for exclusive channels such as DVCS. It is incorrect to say that we only analyze events in a 100 ns time window; we are currently fitting all 110 samples (i.e., 440 ns). Attached is a sample (normalized) waveform.

The pulse itself extends over ~100 ns (25 samples). In order to remove pile-up before the coincidence pulse, we need enough samples to fit a pulse arriving before it.
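
[Editor's note: for illustration only, here is a minimal sketch of that kind of two-pulse fit, assuming the 4 ns sample spacing implied above (110 samples = 440 ns) and a generic CR-RC-style pulse shape; the shape, time constants, and amplitudes are hypothetical stand-ins, not the actual NPS fit template.]

```python
# Hypothetical sketch of pile-up removal by waveform fitting: fit the
# 110-sample (440 ns) window with a coincidence pulse plus one earlier
# pile-up pulse, then subtract the fitted pile-up pulse. The CR-RC-like
# shape and all parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

DT_NS = 4.0                       # 110 samples x 4 ns = 440 ns
t = np.arange(110) * DT_NS

def pulse(t, amp, t0, tau=20.0):
    """Single pulse spanning roughly 100 ns (~25 samples), peaking at t0+tau."""
    x = np.clip((t - t0) / tau, 0.0, None)
    return amp * x * np.exp(1.0 - x)

def model(t, a1, t1, a2, t2, baseline):
    """Coincidence pulse plus one pile-up pulse arriving before it."""
    return pulse(t, a1, t1) + pulse(t, a2, t2) + baseline

# Toy waveform: main pulse near the window center, low-energy pile-up
# ~80 ns earlier, plus noise.
rng = np.random.default_rng(0)
wave = model(t, 1.0, 220.0, 0.15, 140.0, 0.02)
wave += rng.normal(0.0, 0.01, size=t.size)

popt, _ = curve_fit(model, t, wave, p0=[1.0, 200.0, 0.1, 120.0, 0.0])
a1, t1, a2, t2, base = popt
clean = wave - pulse(t, a2, t2)   # waveform with fitted pile-up removed
print(f"coincidence: amp={a1:.2f} at t0={t1:.0f} ns; "
      f"pile-up: amp={a2:.2f} at t0={t2:.0f} ns")
```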

There are also channel-to-channel variations in the coincidence time of +/- 20 ns (5 samples) and variations due to the calorimeter distance as a function of the kinematic setting.

We also need a long enough time window to compute accidentals.

While we could scrap a few samples at the end (and possibly at the beginning) of the current readout window, this is probably not worth the trouble as it won't significantly change things (20-30% at most).
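
[Editor's note: a quick numerical check of that estimate; the 80-90 sample trims below are hypothetical, chosen only to bracket the quoted range.]

```python
# Event size scales with the number of samples read out, so trimming
# the 110-sample window to, e.g., 80-90 samples saves only ~18-27%.
FULL = 110
for kept in (90, 85, 80):
    print(f"keep {kept} samples -> data volume reduced by {1 - kept / FULL:.0%}")
```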

We are open to discussing other ways of reducing the data rate without compromising the experiment's feasibility.

Best,
Carlos

On Fri, Mar 8, 2024 at 11:45 PM Peter Bosted via Hallc_running <> wrote:
Background: we have generally been running considerably less beam current
than in the NPS and SIDIS proposals (mostly planned for 30 muA). As a
result, we are getting anywhere from 2 to 10 times fewer events than
we hoped for.

There are several factors that limit which current we use:
a) keep the average anode current of NPS columns 0 and 1 below 30 muA
b) keep the data rate low enough that no crate exceeds 80 MB/sec. Since the
crates all have roughly the same rates, we need to be well below
400 MB/sec in total to avoid this happening to any of the 4 and a half
VME crates (one of the 5 is only half-populated). Last night we did a
30-minute run at 300 MB/sec with no trips. I think there is general
agreement that keeping the rate below 200 MB/sec is acceptable.
c) keep the trigger rate low enough to avoid significant computer dead-time
corrections, and to avoid exceeding the maximum rate for transferring data
from the hall to the mass storage system.

For most of the settings for the rest of the experiment, factor c) will
be the limit that we reach before factors a) and b). Since the event size
(and hence the computer live time and transfer rates) is largely determined
by writing out the NPS FADC data, we can gain up to a factor of two
in the current we run by reducing the readout time for the FADC.
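
[Editor's note: a rough sketch of that arithmetic, using only the numbers quoted in this thread and assuming the event size is proportional to the FADC readout window.]

```python
# Back-of-the-envelope data-rate budget from the numbers in this email.
PER_CRATE_LIMIT = 80.0    # MB/s; no crate should exceed this
N_CRATE_EQUIV = 4.5       # 4 full VME crates + 1 half-populated
TARGET_TOTAL = 200.0      # MB/s; agreed comfortable total rate

# If the crates share the load roughly evenly, the fullest crate hits
# its limit when the total reaches ~4.5 x 80 = 360 MB/s.
ceiling = PER_CRATE_LIMIT * N_CRATE_EQUIV
print(f"total ceiling ~{ceiling:.0f} MB/s; target {TARGET_TOTAL:.0f} MB/s")

# Event size ~ readout window, so halving the window (400 -> 200 ns)
# halves the rate at fixed current, allowing up to ~2x the beam current.
gain = 400.0 / 200.0
print(f"possible beam-current gain: ~{gain:.0f}x")
```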

At present, we have a readout window of 400 nsec.

We only analyze events in a 100 nsec time window.

But we need to read out over an interval longer than 100 nsec in order
to catch the long tails of some pulses.

A reasonable compromise, as far as I can tell, would be to shorten the
readout interval from 400 to 200 nsec.

To keep the window centered on the coincidence time peak, we would
reduce the time by 75 nsec on the back end and 125 nsec on the front
end, as far as I understand.
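
[Editor's note: in FADC samples, assuming the 4 ns/sample spacing implied by 110 samples = 440 ns earlier in the thread, the proposed trim works out as follows.]

```python
# Proposed trim expressed in FADC samples (4 ns/sample assumed).
NS_PER_SAMPLE = 4.0
OLD, NEW = 400.0, 200.0            # ns
FRONT, BACK = 125.0, 75.0          # ns removed from each end

assert OLD - FRONT - BACK == NEW
print(f"front trim ~{FRONT / NS_PER_SAMPLE:.0f} samples, "
      f"back trim ~{BACK / NS_PER_SAMPLE:.0f} samples, "
      f"new window = {NEW / NS_PER_SAMPLE:.0f} samples")
```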

I propose that the experts get together on Tuesday after the Moller run
and implement this change, after taking one run with the 400 nsec
window, so we can make sure that no good data is being lost. As
far as I understand, the experts include Alex, Wassim, Ben, and Sanguang.

As I understand it, only the config file(s) need to be changed: no knobs
on the crates need to be turned.
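
[Editor's note: for concreteness, the edit would presumably look something like the fragment below. The parameter names follow typical JLab FADC250 DAQ config files (FADC250_W_OFFSET for the trigger lookback/latency, FADC250_W_WIDTH for the readout window width), but the exact names, units, and values are assumptions for the experts to confirm.]

```
# Hypothetical FADC250 config fragment -- parameter names, units, and
# values are assumptions, to be confirmed by the DAQ experts.
# Reduce the lookback by 125 ns (front trim) and the window to 200 ns:
FADC250_W_OFFSET   2275    # was 2400 (placeholder values, in ns)
FADC250_W_WIDTH     200    # was 400
```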

Prof. Peter Bosted
email:
phone: (808) 315-1297 (cell)
P.O. Box 6254, Ocean View, HI 96737

_______________________________________
Hallc_running mailing list

https://mailman.jlab.org/mailman/listinfo/hallc_running

#4

Updated by Alexandre Camsonne 8 months ago

  • Priority changed from High to Normal