Nov 20-21/25 PionLT/KaonLT Analysis Meeting Notes
-------------------------------------------------
(Notes by GH)

Today: PionLT will be discussed first

Please remember to post your slides at:
https://redmine.jlab.org/projects/kltexp/wiki/Kaon_LT_Meetings

Thursday: Present
-----------------
Regina - Garth Huber, Nathan Heinrich, Muhammad Junaid, Alicia Postuma, Nermin Sadoun
Virginia - Richard Trotta
CUA - Chi Kin Tam, Tanja Horn
JLab - Dave Gaskell
Ohio - Julie Roche

Junaid
------
PionLT pion absorption correction via Geant4
- NGC on, Aerogel n=1.011 simulation for P_shms=5.127 GeV/c
- added an uncertainty calculation
- correction is 0.9654 +/- 0.00005
RF cut PID efficiency
- added HGC to the PID selection cuts in both numerator (Did) and denominator (Should)
- get 0.99802 +/- 0.00007
- will apply both corrections to the normalized physics yields
- *NB* Garth: it would be great if you could put together a table of systematic
  uncertainties similar to what is in the Blok paper
  - please include Nathan's new systematic errors there as well
Next steps
- setting up for the next Q2 LT-sep, checking RF cut offsets for these data

Nathan
------
PionLT CoinLumi analysis
- sent everyone a copy of his Lumi report for comment
  - no comments received yet from Sameer
- question for Dave on where to find the documentation for the hodoscope gate width
- after some hunting, Dave finds an elog from 2019 that says the discriminator
  widths were 50 ns: https://logbooks.jlab.org/entry/3747761
- Dave also asks Bill Henry to recheck the gate widths manually, and gets back an
  answer surprisingly quickly:
  https://logbooks.jlab.org/files/2025/07/4423016/IMG_6761.jpeg
  which confirms that the widths are still 50 ns
- Nathan confirms that 50 ns is the value he used in his ELLT calculation, so
  everything should be good now

Alicia
------
pi+n BSA paper has been resubmitted to PLB
- awaiting final communication from the journal, hopefully about page proofs
- working on arXiv submission, filling in author list metadata
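The RF-cut PID efficiency Junaid quoted is a cut-survival ratio of events passing the full cut set (Did) over events expected to pass (Should). A minimal sketch of how such an efficiency and its binomial uncertainty might be computed (the event counts below are made-up illustrations, not the real yields):

```python
import math

def cut_efficiency(did, should):
    """Efficiency of a PID cut: events passing the full cut set (Did)
    over events expected to pass (Should), with a binomial error."""
    eff = did / should
    err = math.sqrt(eff * (1.0 - eff) / should)
    return eff, err

# hypothetical counts for illustration only, not the real yields
eff, err = cut_efficiency(did=998_020, should=1_000_000)
```

With counts of this size the binomial error comes out at the few-times-1e-5 level, consistent in scale with the quoted 0.99802 +/- 0.00007.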
- u-channel replay of Q2=3.0, W=2.32 data
  - the intent was to see if the new HMS matrix elements for P_hms=6.59 GeV/c
    made any improvement to the MM reconstruction resolution
  - differences in MM are quite small, we will have to live with the worse
    resolution for this setting
- pi+n contamination in the omega region is worse than for the Q2=3.0, W=3.14 setting
  - the two MM peaks are not fully resolved; comparison to clean pion data
    indicates omega is 1-1.5 sigma to the left of pi+n, so very close to the
    nominal 1.5 sigma optical criterion for resolving two close peaks
  - unfortunately, no RF cut for this setting
- also, the Python generator behaves poorly for this setting, as reported earlier
  (extraneous MM shape)
  - Garth: suggests consulting with Henry Klest on Pythia settings
- retained E_M^2 and P_M^2 so MM^2 can be calculated
  - SIMC predicts DVCS and pi- to be offset from each other by ~1 sigma, but the
    MM^2 data are shifted by ~1 sigma to the right of both peaks
  - doesn't trust the data peak positioning so far away from the omega, pi+n region
  - applying a shift of 4 MeV^2 to the data, to overlap data with the pi0
    simulation, shows reasonable agreement in peak shape; this lends credence that
    this region is mostly pi0, as expected
  - low epsilon data don't need any MM offset, as the peaks are already in the
    correct position, and an RF cut is also available
Next steps:
- will look in more detail at low epsilon MM^2 data
- will contact Henry Klest re. Pythia
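Since the missing energy and missing momentum are retained separately, MM^2 follows from simple four-vector arithmetic. A minimal sketch, assuming a fixed target at rest (the kinematic values in the demo are placeholders, not the analysis kinematics):

```python
def missing_mass_sq(E_beam, p_beam, E_det, p_det, m_target):
    """MM^2 = E_M^2 - |P_M|^2 from the missing energy and missing
    3-momentum; target assumed at rest."""
    E_M = E_beam + m_target - E_det
    P_M = [b - d for b, d in zip(p_beam, p_det)]
    return E_M**2 - sum(c * c for c in P_M)

# sanity check with placeholder kinematics: nothing detected, so
# MM^2 = (E + m)^2 - E^2 = m^2 + 2*E*m for a massless beam
MP = 0.938272  # proton mass, GeV
mm2 = missing_mass_sq(10.6, (0.0, 0.0, 10.6), 0.0, (0.0, 0.0, 0.0), MP)
```

Keeping E_M and P_M (rather than only MM) also makes it easy to apply small MM^2 shifts like the 4 MeV^2 offset discussed above.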
Richard
-------
KaonLT Q2=3.0, W=2.32 LT-sep
- looking at t-phi bins with anomalously low cross sections
- the issue is that these are bins with no pi+n leakthrough to anchor the
  background fit underneath the Lambda peak
  - Chebyshev polynomial fits for the background are poorly constrained, and
    clearly oversubtracting, giving outlier yields near zero
- now looking at 2 different types of background fits
  - quadratic w/ far edge fixed to MM=1.15 GeV data
  - Chebyshev poly fitting the rest
- still adjusting fit parameters, but things are looking better
- *NB* Garth: the error bars due to this background fitting uncertainty are
  significantly underestimated; the way you're calculating these errors is fine
  only for those settings where pi+n leakthrough gives a constraint, but you need
  to take into account the uncertainty in the background estimation where it is
  poorly constrained
- using the Q2=4.4 parameterization
- Goal: all ratios within 3 sigma of the mean, and the mean within 1.5 sigma of unity
- then will apply these fit functions also to the Q2=5.5 data

Chi Kin
-------
KaonLT Q2=3.0, W=3.14 LT-sep
- last week, theta_pq (CM frame) was not calculated correctly
  - wrong reference frame used, giving negative weights in the re-weighting script
  - this is fixed now, things are looking more consistent
- iterations looking a bit better now
  - did 8 iterations, fits stabilized but still not good enough
- looked at t-binning, with the goal of improving the yields in poorly-populated bins
  - narrowed the t-bin ranges; new bin limits: 0.19, 0.26, 0.31, 0.38, 0.49
  - previously had 0.17 to 0.6
  - Richard was using only data up to 0.4

Friday: Present
---------------
Regina - Garth Huber, Nathan Heinrich, Muhammad Junaid, Nacer Hamdi, Vijay Kumar, Nermin Sadoun
York - Stephen Kay
CUA - Sameer Jain, Tanja Horn, Chi Kin Tam
JMU - Ioana Niculescu, Gabriel Niculescu
Ohio - Julie Roche
JLab - Dave Gaskell
CSULA - Konrad Aniol

Nacer
-----
KaonLT Q2=0.5 LT-sep
- still looking for the right parameterization
- fitting only sigT = p1/Q2 + |t|^p2/(Q2+p3)^2, with L = LT = TT = 0
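The trial fit forms being discussed can be collected into a short sketch. This is only an illustration of the quoted functional forms (including Garth's suggested straight-line TT, below); the parameter values in the demo calls are placeholders, not fitted results:

```python
MP = 0.938272  # proton mass, GeV

def w_factor(W):
    """Simplified W dependence: Wfac = 1/(W^2 - mp^2)^2."""
    return 1.0 / (W**2 - MP**2) ** 2

def sig_T(Q2, t, p1, p2, p3):
    """Transverse trial form: sigT = p1/Q2 + |t|^p2/(Q2+p3)^2."""
    return p1 / Q2 + abs(t) ** p2 / (Q2 + p3) ** 2

def sig_TT(t, a, b):
    """Suggested straight-line TT form: TT = a + b*t, no Q2-dependence."""
    return a + b * t

# placeholder parameters for illustration only
demo_T = sig_T(Q2=0.5, t=-0.2, p1=1.0, p2=1.0, p3=0.5)
```

Note that for any p2 > 0 the |t|^p2 term is monotonic in |t|, which is exactly why this form cannot reproduce a non-monotonic t-dependence in sigT.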
- switched to the simpler Wfac = 1/(W^2-mp^2)^2
- did 4 iterations
  - Data/MC ratios looking quite a bit better, R near 1 and flatter than before
  - high epsilon has somewhat bigger oscillations than low epsilon
  - sigT has a non-monotonic t-dependence
- would like to try adding back sigL now
- *NB* Garth: suggests a polynomial for sigT; the fitted |t|^p2 form is monotonic
  and cannot reproduce the observed t-dependence
  - TT could be a simple straight line fit TT = a + b*t, no Q2-dependence
  - L could be something simple and monotonic, like L = p1 + |t|^p2
  - keep LT = 0
- kinematic and focal plane plots for Data and MC
  - overall the plots look good, but there are some mismatches between Data and MC
    that generated some discussion
  - SHMS_xpfp is a bit narrower than data
  - HMS xptar has a shift between data and MC for high epsilon, right SHMS
  - *NB* Gabriel: puzzled that the HMS_xptar shift is as large as it is
  - Dave: this is a known issue with HMS xptar reconstruction
    - the data should be corrected by a couple of mr; this could have an impact on
      the phi-distribution, i.e. at the edges of the acceptance Data and MC will
      mismatch, giving rise to oscillations in the Data/MC ratio
  - Junaid did not see this effect
  - *NB* Nacer and Junaid will compare the 0th order matrix offsets used in their replays

Vijay
-----
PionLT Low Q2 LT-sep
- systematic checks for the Coin Blocking correction
  - varied timing cuts +/- 4 ns
  - resulting uncertainties are correlated with epsilon for both Q^2 settings:
    - low epsilon: +/- 0.5%; mid and high epsilon: +/- 0.3%
- how should we apply these uncertainties to the data?
  - Nathan: what about calculating it run-by-run and adding it in quadrature with
    the statistical errors?
  - Garth: these are not random errors, and should not be added in quadrature with
    the statistical errors. The issue is that random errors are magnified by
    1/Delta-epsilon in the L/T-separation; non-random errors are not.
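Garth's point about error magnification can be made concrete: in a Rosenbluth-style separation sigma = sigma_T + epsilon*sigma_L, solving two epsilon settings for sigma_L propagates uncorrelated errors with a factor 1/(eps_hi - eps_lo). A minimal two-point sketch (all cross-section and epsilon values are placeholders):

```python
import math

def lt_separate(sig_lo, err_lo, eps_lo, sig_hi, err_hi, eps_hi):
    """Two-point L/T separation: sigma = sigma_T + eps*sigma_L.
    Uncorrelated (random) errors on sigma_L pick up a 1/Delta-eps factor."""
    d_eps = eps_hi - eps_lo
    sig_L = (sig_hi - sig_lo) / d_eps
    err_L = math.sqrt(err_hi**2 + err_lo**2) / d_eps  # magnified by 1/Delta-eps
    return sig_L, err_L

# placeholder numbers: ~1% random errors, Delta-eps = 0.4
sig_L, err_L = lt_separate(10.0, 0.10, 0.3, 14.0, 0.14, 0.7)
```

In this toy example the ~1% input errors become a ~4% relative error on sigma_L. A common scale (normalization) error, by contrast, multiplies both measured cross sections together and so scales sigma_L without the 1/Delta-eps magnification, which is why separating the scale and epsilon-correlated parts matters.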
- Nacer: wouldn't it be more conservative to take the biggest error and apply it
  with the statistical?
- Dave: the issue is that this is too conservative; it under-reports the quality
  of the data, and results in a lower quality L/T-separation
- *NB* Dave: need to break this uncertainty into parts (see the systematics table
  in the Blok paper); part of the uncertainty is global, and part of it is
  epsilon-correlated
- Garth: take the smallest (high epsilon) value ~0.3% as a scale systematic
  uncertainty, and the difference between the low and mid epsilon values and this
  smallest value as the epsilon-correlated part
  - Dave: agrees with this suggestion
- *NB* Garth needs to have a discussion with the students on how systematic
  uncertainties were treated in the Fpi-2 analysis (Blok paper)
- shows Data/MC overlays of kinematic and focal plane plots for 2 Q2=0.425 settings:
  - high epsilon, Right-2
  - low epsilon, Center
  - agreement looks fairly good; the HMS yptar shift seems smaller than Nacer's

Sameer
------
KaonLT Coin Blocking correction
- error bars are statistical and systematic added in quadrature; following our
  discussion he will quote them separately next time
  - Nathan also needs to separate them in his analysis
- systematic uncertainty is bigger than statistical, which indicates a problem in
  the analysis
  - *NB* Nathan: the right cut is too tight into the distribution, it needs to
    shift by ~20 ns
- Nacer: what boiling factor should we use for the KaonLT data?
  - *NB* Nathan: Richard's boiling factor is ~2x Nathan's
  - would like someone to redo this study using the same methodology as in
    Nathan's report
  - perhaps Richard used what we now understand to be the wrong LiveTime, given
    the newer studies?
  - hopefully he gets a boiling number similar to Nathan's, in which case use
    Nathan's number, as it has higher statistics
  - if it's significantly different, then we need to investigate in more detail
  - Garth: agrees with this suggestion, it would be good for Nacer to add this to
    his list. Nathan thinks reproducing his study should not take long to do

Gabriel
-------
Looking in more detail at KaonLT offsets
- ideally want a common set of offsets for all settings
- using physics data to investigate the variation in the offsets with setting,
  which will also be helpful in understanding the uncertainties in the offsets
- uses Lambda, Sigma, pi+n MM peaks as constraints
  - Lambda, Sigma: compare data to SIMC MM values
  - pi+n: recalculate MM using the pion mass and compare to SIMC
- follows a method similar to Richard's:
  - generates many offsets, evaluates MM for each
  - finds the best set of offsets, then creates a new generation of offsets near
    these values
  - the process is time consuming; only 1 generation done so far, expects 2-3
    generations to be necessary to converge
  - gets a nice set of histos of offsets satisfying some criteria for each generation
- not using hcana MM values, calculating everything on his own from the
  spectrometer vectors
- using the HallC:p value of the beam energy, not the value from standard.kinematics
  - *NB* Garth: we had a discussion about this at a meeting that Gabriel missed.
    The issue is that HallC:p is not corrected for the Arc Energy Measurement.
    Need to find the value of HallC:p at the time of the Arc Energy Measurement
    (AEM), and then correct all other values by the ratio
    Beam = (HallC:p_now)/(HallC:p_AEM) * (AEM in GeV)
  - also note that for 10.6 GeV beam energy, the bremsstrahlung of the beam in the
    Hall C Arc is too large to ignore. The Arc Energy Measurements are corrected
    for this loss (via a calculation), while HallC:p is not corrected for it.
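Garth's prescription can be sketched as a one-line correction. This assumes only the simple ratio scaling described above; the numerical values in the demo are placeholders, not actual EPICS readings or Arc Energy Measurement results:

```python
def corrected_beam_energy(hallc_p_now, hallc_p_at_aem, aem_gev):
    """Scale the current HallC:p reading by the calibration established
    at the time of the Arc Energy Measurement:
    Beam = (HallC:p_now / HallC:p_AEM) * AEM."""
    return (hallc_p_now / hallc_p_at_aem) * aem_gev

# placeholder numbers for illustration only
E_beam = corrected_beam_energy(
    hallc_p_now=10.602,     # hypothetical current HallC:p, GeV
    hallc_p_at_aem=10.598,  # hypothetical HallC:p at the time of the AEM, GeV
    aem_gev=10.5893,        # hypothetical Arc Energy Measurement, GeV
)
```

The point of the ratio is that the AEM value (already corrected for bremsstrahlung loss in the Hall C Arc) anchors the absolute scale, while the HallC:p ratio tracks any subsequent drift.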
Next Week Meetings
------------------
- Thurs, Nov 27: no meeting due to USA Thanksgiving
- Fri, Nov 28 @ 11:00 Eastern / 10:00 Regina
  - Canadian and UK collaborators are invited
  - Richard thought he would be available to attend