Particle Physics: The Boosted Regime

ttH, H → bb̄ in the Boosted Regime. May 7, 2018

1 Introduction

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) has generated extraordinary amounts of high-energy proton-proton collision data. In July 2012, the ATLAS and CMS experiments observed a new neutral scalar boson consistent with the Standard Model Higgs boson, a previously unobserved particle whose existence had been predicted theoretically since the 1960s. This particle is an essential part of the Standard Model (SM) of particle physics, which forms the basis of the modern understanding of physics on the smallest scales, owing to its role in explaining the spontaneous breaking of electroweak symmetry; it was also the last remaining undiscovered fundamental particle of the SM. The Standard Model is a mathematical description of the elementary particles: it describes the interactions between the fundamental matter particles (the spin-1/2 fermions) in terms of the fundamental force carriers (the integer-spin bosons). The Standard Model has succeeded in describing a multitude of decay and scattering processes at all energies experimentally reached so far [1].

This analysis searches for tt̄H, H → bb̄ in the semi-leptonic decay mode, involving jets, b-jets, and a hard lepton (an electron or a muon). Each of these objects is required to satisfy specific criteria. The analysis targets events in which one of the top quarks decays semileptonically, producing an electron or a muon that is used to trigger the event. This process is important because it measures the top quark's Yukawa coupling to the Higgs boson. The tt̄H mode contributes around 1% of the total Higgs-boson production cross-section, and approximately 58% of Higgs-boson decays are expected to be to two b-quarks, so this decay mode is also sensitive to the Yukawa coupling of the b-quark [2].

2 Detectors

2.1 The Large Hadron Collider

The Large Hadron Collider (LHC) constitutes the largest part of the CERN accelerator complex. It has been designed to provide extremely high-energy collisions, primarily proton-proton but also heavy-ion (Pb-Pb and p-Pb). It delivers collisions to four large experiments which collectively support a broad physics programme covering many areas of current research in high-energy physics. The main parameters of the LHC machine are listed in Table 2.1 with their nominal values [1].

Table 2.1: Large Hadron Collider parameters and their nominal values for pp collisions [1].

Parameter | Nominal value
Maximum energy per proton | 7 TeV
Maximum energy per beam | 366 MJ
Maximum number of bunches | 2808
Mean collisions per bunch crossing | 19
Maximum beam lifetime | 28 h
Luminosity | 10^34 cm^-2 s^-1
Protons per bunch | 1.15 × 10^11
Minimum bunch separation | 25 ns
Integrated cross-section (per machine cycle) | 100 mb
Proton energy loss per turn | 6.7 keV
Operating temperature | 1.9 K
Beam-pipe vacuum | 10^-8 - 10^-9 Pa

Most usually L is measured in terms of events per barn of cross section, where 1 b = 10^-24 cm². The event rate is then simply the product N = Lσ, and the integrated luminosity is L_int = ∫ L dt. To achieve this luminosity, magnets squeeze the beams, minimising their transverse widths. Luminosity measurement techniques for hadron colliders include beam-gas imaging and van der Meer scans; the luminosity can be computed according to the following formula [1]:

L = (N1 N2 n1 n2 f γ / 4π σx σy) F(φc)   (1)

where N1 and N2 are the numbers of particles in each bunch, n1 and n2 are the numbers of bunches in each beam, f is the revolution frequency, γ is the Lorentz factor, and σx and σy are the effective transverse dimensions of the beam as measured using van der Meer scans; 4π σx σy is the overlap cross-sectional area of the collision. Finally, F(φc) is a factor representing the impact of the crossing angle.
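Equation (1) can be evaluated numerically. The minimal sketch below uses the nominal parameters of Table 2.1 and expresses the transverse overlap area 4π σx σy through an assumed normalised emittance and β* (which is why the Lorentz factor γ appears explicitly, as in Eq. (1)); the revolution frequency, emittance, β* and crossing-angle factor are illustrative assumptions, not values quoted in the text.

```python
import math

# Nominal LHC parameters from Table 2.1
N1 = N2 = 1.15e11              # protons per bunch
n_bunches = 2808               # bunches per beam
f_rev = 11245.0                # revolution frequency [Hz] (assumed, not listed in Table 2.1)
gamma = 7000.0 / 0.938272      # Lorentz factor for 7 TeV protons

# Illustrative machine-optics assumptions (not quoted in the text)
eps_n = 3.75e-6                # normalised transverse emittance [m rad]
beta_star = 0.55               # beta function at the interaction point [m]
F_cross = 0.84                 # crossing-angle reduction factor F(phi_c)

# Eq. (1), with 4*pi*sigma_x*sigma_y written as 4*pi*eps_n*beta_star/gamma for
# round beams of equal size. Eq. (1) multiplies the bunch numbers of both
# beams; here a single factor n_bunches is used for identical beams colliding
# bunch-by-bunch.
area = 4.0 * math.pi * eps_n * beta_star / gamma       # effective overlap area [m^2]
lumi = N1 * N2 * n_bunches * f_rev * F_cross / area    # [m^-2 s^-1]

print(f"L ~ {lumi * 1e-4:.2e} cm^-2 s^-1")  # roughly 1e34, matching Table 2.1
```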
The LHC ring is divided into eight independent sections. The four main experiments are ATLAS, located on the CERN campus; CMS, situated diametrically opposite ATLAS; ALICE, at Point 2; and LHCb, at Point 8, as shown in Figure 2.1. The beams in the accelerator are bent by dipole magnets with a bending radius of about 2.8 km. The dipole magnets generate a magnetic field of strength 8.3 T and require a very considerable electric current, necessitating the use of superconducting coils, which are kept at a low temperature (approximately 1.9 K) that allows them to remain superconducting. The LHC ring contains 1232 of these 15-metre cryodipole magnet segments [1].

Figure 2.1: Diagram of the CERN accelerator complex [1].

2.2 The ATLAS Detector

ATLAS and CMS are the two general-purpose detectors at the LHC, serving a wide range of particle-physics measurements. The ATLAS detector, which has a cylindrical geometry, consists of four basic parts: the Inner Detector, the Calorimeters, the Muon Spectrometer and the Magnet Systems. Figure 2.2 displays the general layout of the ATLAS detector, whose dimensions are about 25 m in height and 44 m in length, and which weighs about 7000 tonnes [2].

Figure 2.2: Overview of the ATLAS detector [3].

2.2.1 The Inner Detector

The Inner Detector (ID) comprises three subsystems: the silicon Pixel detector, the SemiConductor Tracker (SCT) and the Transition Radiation Tracker (TRT). The precise measurement of charged-particle momenta is the principal function of the Inner Detector. It must also satisfy requirements that determine the granularity of the detector subsystems and their coverage in η (the pseudorapidity) and r (the radius in cylindrical coordinates, describing the distance from the beam line in the ATLAS coordinate system), as well as provide electron identification and the identification of jets produced by b-quarks. The ID has a length of 6.2 m and a diameter of 2.1 m; its angular coverage is the region |η| < 2.5 [1]. The pseudorapidity is defined as

η = -ln[tan(θ/2)]   (2)

where θ is the polar angle in this coordinate system, measured with respect to the beam axis, and the azimuthal angle φ is measured in the transverse (x-y) plane.

The silicon Pixel detector is the detector closest to the beam line. It has the highest granularity, with minimum element dimensions of 400 µm × 50 µm and over 80 million readout channels, designed to provide precise measurements over the full acceptance of the detector. As the first detection subsystem encountered by particles from the interaction point, the pixel layers play a very significant role in vertex reconstruction for particles that decay after comparatively short distances. Each pixel module contains 46,080 pixels, arranged as 16 arrays of 18 × 160 pixels, each array read out by its own chip. The barrel layers contain a total of 1456 modules and the end-cap disks 288 modules [1].
Each SCT module consists of four sensors; each sensor measures 63.6 mm by 64.0 mm and carries 768 readout strips. Within the Inner Detector, the SCT is subdivided into three regions: a barrel in the central area and two end-caps. In the barrel, the detector modules are arranged cylindrically in four concentric layers; in the end-caps, the sensor modules are arranged as discs centred on the beam axis, with the strips oriented radially. The SCT contributes to the measurement of vertex position, impact parameter and momentum, plays a very significant role in identifying jets, and has some 6.3 million readout channels [1].

The final layer of the Inner Detector is the Transition Radiation Tracker, which provides drift-time measurements (typically around thirty hits per track) with the aim of improving momentum resolution and pattern recognition. The TRT consists of radial tungsten-rhenium wires of 30 µm diameter. Each wire is isolated in its own non-flammable gas volume inside a 4 mm diameter straw, and the detector contains a total of 3 m³ of a Xe(70%)-CO2(27%)-O2(3%) gas mixture. Each straw provides a spatial resolution of 170 µm as well as two energy thresholds, which enable the detector to distinguish between tracking hits, which pass only the lower threshold, and transition-radiation hits, which pass the higher threshold [1].

The tracking precision is characterised by σ(1/pT) ≈ constant. More generally, the resolution of a given track parameter X can be written as a function of pT as

σ_X(pT) = σ_X(∞) (1 ⊕ p_X/pT)   (3)

where σ_X(∞) is the asymptotic resolution, p_X is the momentum value at which the intrinsic and multiple-scattering contributions to the parameter X are equal, and the symbol ⊕ denotes addition in quadrature. The resulting pT dependence describes the overall performance of the tracking:

σ_pT/pT = constant · pT   (4)

σ_pT/pT = 0.05% · pT ⊕ 1%  [pT in GeV]   (5)
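A short numerical sketch of Eq. (2) and Eq. (5): the pseudorapidity for a given polar angle, and the quadrature sum giving the relative momentum resolution at a few transverse momenta. The sample angles and momenta below are illustrative.

```python
import numpy as np

def pseudorapidity(theta_rad):
    """Eq. (2): eta = -ln(tan(theta/2)) for a polar angle theta in radians."""
    return -np.log(np.tan(theta_rad / 2.0))

def track_pt_resolution(pt_gev):
    """Eq. (5): sigma(pT)/pT = 0.05% * pT (+) 1%, terms added in quadrature (pT in GeV)."""
    return np.hypot(0.0005 * pt_gev, 0.01)

for theta_deg in (90.0, 45.0, 10.0):
    print(f"theta = {theta_deg:5.1f} deg  ->  eta = {pseudorapidity(np.radians(theta_deg)):5.2f}")

for pt in (10.0, 100.0, 1000.0):
    print(f"pT = {pt:6.0f} GeV  ->  sigma(pT)/pT = {100 * track_pt_resolution(pt):.2f}%")
```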
2.2.2 The Calorimeters

The calorimeters, which consist of electromagnetic (EM) and hadronic components, are designed to measure the energies of particles that interact electromagnetically or via the strong interaction. The measurements made by calorimeters are typically destructive, meaning that most of the energy of the particles is absorbed during the measurement [1]. Generally, the energy resolution of a calorimeter can be expressed as

σ_E/E = a/√E ⊕ b/E ⊕ c   (6)

where a is the stochastic term, b is the noise term, and c is a constant term that accounts for non-uniformities and miscalibration. High-energy photons convert into electron-positron pairs when they enter matter; these electrons and positrons emit further photons as they interact with the nuclei and electrons of the detector. This process continues until the energies of the particles in the shower fall below a critical energy Ec, at which point the shower stops [1].

Over the entire rapidity range in which EM calorimetry is performed, ATLAS uses liquid-argon technology: an accordion-geometry calorimeter covers the barrel region |η| < 1.475, and a similar layout is used in the forward regions (1.375 < |η| < 3.2), a lead-liquid-argon (LAr) sampling detector incorporating accordion-shaped electrodes and lead absorber plates. Hadronic and EM showers are distinguished by their profiles. Like the other detector sections, the calorimeter is divided into a central barrel and two end-caps [1]. Hadronic showers typically develop over a longer distance, and their particle multiplicity as a function of depth is different; they develop through strongly-interacting processes, hadronisation and decay, in which the initial particle's energy is divided among its daughter particles in multi-particle interactions in an approximately equal fashion [1]. The distance travelled within the calorimeter over which the average particle energy is reduced by a factor 1/e defines the depth of an electromagnetic calorimeter, expressed in terms of the radiation length X0. Energy lost by particles crossing the material upstream of the electromagnetic calorimeter is corrected for using a presampler detector covering |η| < 1.8. The energy resolution of the electromagnetic calorimeter is [1]

σ_E/E = (10.1 ± 0.4)%/√E ⊕ (0.2 ± 0.1)%  [E in GeV]   (7)

where the first (stochastic) term describes the statistical fluctuations of the shower sampling and the second (constant) term accounts for non-uniformity and miscalibration. The hadronic calorimeter is the coarser of the two ATLAS calorimeters: the electromagnetic calorimeter has a granularity of Δη × Δφ = 0.050 × 0.025 even in its coarsest layer, whereas the typical granularity of the hadronic calorimeter is Δη × Δφ = 0.1 × 0.1 [1]. The energy resolution of the tile calorimeter, measured using pion beams, is

σ_E/E = (56.4 ± 0.4)%/√E ⊕ (5.5 ± 0.1)%  [E in GeV]   (8)

while the energy resolution of the hadronic end-cap is

σ_E/E = (70.6 ± 1.5)%/√E ⊕ (5.8 ± 0.2)%  [E in GeV]   (9)
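The quadrature sums in Eqs. (6)-(9) are straightforward to evaluate. The sketch below uses the central values quoted above, with the noise term set to zero since none is quoted, at an illustrative energy of 100 GeV.

```python
import numpy as np

def calo_resolution(E_gev, stochastic, constant, noise=0.0):
    """Eq. (6): sigma_E/E = a/sqrt(E) (+) b/E (+) c, with terms added in quadrature."""
    return np.sqrt((stochastic / np.sqrt(E_gev)) ** 2
                   + (noise / E_gev) ** 2
                   + constant ** 2)

E = 100.0  # GeV, an illustrative energy
print(f"EM barrel (Eq. 7)        : {100 * calo_resolution(E, 0.101, 0.002):.2f}%")
print(f"Tile, pion beams (Eq. 8) : {100 * calo_resolution(E, 0.564, 0.055):.2f}%")
print(f"Hadronic end-cap (Eq. 9) : {100 * calo_resolution(E, 0.706, 0.058):.2f}%")
```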
2.2.3 The Muon System

The muon spectrometer of the ATLAS detector is designed to perform accurate measurements of muon properties at the TeV scale. It is immersed in a strong magnetic field (3.9 T at its peak) which bends the muon trajectories. The magnetic field provides a bending power of 1.5 to 5.5 T·m in the barrel and up to 7.5 T·m in the end-caps. Test-beam experiments have demonstrated that the design goal is achieved: a resolution of 2-4% for muons with pT in the 10-200 GeV range, and about 10% for 1 TeV muons. The muon detector uses two distinct trigger detector technologies: resistive plate chambers (RPC) in the barrel region and thin gap chambers (TGC) in the end-caps. One important design criterion of the muon spectrometer is its alignment, which aims for a precision of better than 10 µm so that momenta can be measured accurately [1]. The reconstruction accuracy of the muon spectrometer is expressed by

σ_pT/pT = 10%   (10)

This relation is valid at muon energies of about 1 TeV; the measurement accuracy degrades with increasing momentum, and for momenta in the range 10-200 GeV the resolution is 2-3%.

2.2.4 The Magnet System

The solenoid is the central magnet; it immerses the Inner Detector in a 2 T field that runs parallel to the beam line. The barrel region of the muon spectrometer is immersed in a magnetic field from the barrel toroid (0.5 T), and each of the muon spectrometer's end-caps has its own 1 T toroidal magnet. The radius of curvature of a charged particle depends directly on its momentum-to-charge ratio, so the magnet system is an essential part of the measurements made in the Inner Detector and the muon spectrometer. ATLAS features four large superconducting magnets, a solenoid and three toroids, whose primary purpose is to deflect the trajectories of charged particles via the Lorentz force. Table 2.2 summarises the overall performance targets of the ATLAS detector [1].

Table 2.2: Overall performance targets of the ATLAS detector [2].

Detector component | Required resolution | η coverage
Tracking | σ_pT/pT = 0.05% pT ⊕ 1% | ±2.5
EM calorimetry | σ_E/E = 10%/√E ⊕ 0.7% | ±3.2
Hadronic calorimetry (jets), barrel and end-cap | σ_E/E = 50%/√E ⊕ 3% | ±3.2
Hadronic calorimetry (jets), forward | σ_E/E = 100%/√E ⊕ 10% | 3.1 < |η| < 4.9
Muon spectrometer | σ_pT/pT = 10% at pT = 1 TeV | ±2.7

3 The tt̄H, H → bb̄ Channel

It is difficult to search for Higgs bosons produced by gluon fusion in the H → bb̄ channel because of the overwhelming dominance of the QCD multi-jet background, so the potential discovery channels usually involve an associated W or Z boson, or top quarks. These include Higgs production with an associated W or Z boson and with an associated tt̄ pair. Figure 3.1 shows the cross sections for Higgs production in the different channels at a centre-of-mass (CM) energy of √s = 8 TeV. In the gluon-initiated channel gg → tt̄H (with additional contributions from qq̄ → tt̄H), a top and anti-top pair is produced by gluon-gluon fusion, followed by Higgs-strahlung from a top quark; the top pair then decays through the weak interaction, t → W⁺b and t̄ → W⁻b̄ (decays to other down-type quarks are possible, but since the corresponding CKM matrix elements are very small these are heavily suppressed), as displayed in Figure 3.2 [1].

It is customary to characterise analyses of this channel by the decays of the W bosons originating from the decay of the tt̄ system. There are therefore three distinct kinds of final state: all-hadronic, in which both W bosons decay to quarks; semileptonic (or lepton-plus-jets), in which one W decays to quarks and the other to a lepton and a neutrino; and dileptonic, in which both W bosons decay to leptons. Of these, the lepton-plus-jets channel probably has the best discovery potential, since it contains one hard, isolated lepton to reduce the background dominance and avoids the additional difficulty of reconstructing the tt̄ system in the presence of two leptonic decays. Figure 3.2 shows the single leading-order contribution to the semileptonic final state of the tt̄H, H → bb̄ channel [1].

Figure 3.1: Cross sections at √s = 8 TeV for the various Higgs production modes at a pp collider [4].

Figure 3.2: A leading-order contribution to the semileptonic final state of the tt̄H, H → bb̄ channel (Feynman diagram) [1].

3.1 Cross-section of tt̄H

The initial calculations suffered from large theoretical uncertainties as a result of the dependence of the leading-order cross section on the choice of the renormalisation scale μR for the strong coupling αs and on the choice of the proton-PDF factorisation scale μF. These two scales are used to separate hard from soft QCD processes in the parton distribution functions.
More recent developments in phenomenology have enabled these cross sections to be calculated at next-to-leading order (NLO). Notably, by varying the renormalisation scale μR and the factorisation scale μF by a factor of two around their central value μ0 [1],

μR = μF = μ0 = (2mt + mH)/2,   (11)

where mt denotes the mass of the top quark and mH the Higgs mass, the scale dependence was found to be much lower at NLO than at LO, O(10%) as opposed to O(50%), indicating that the NLO predictions are theoretically more robust. The effect of this scale variation on the calculated cross section is the main historical uncertainty.

3.2 Background

To discover the Higgs boson it is necessary to discriminate the Higgs signal from the sum of all relevant backgrounds with a given statistical significance; 5σ is the conventional threshold of confidence for a discovery. This task can be complicated by a variety of factors, including the relative cross sections of the signal and background, and the degree to which the backgrounds resemble the signal (with respect to some set of variables that parametrises the events). Where the background processes closely resemble the signal, it is more difficult to distinguish the signal from random fluctuations of the background [1].

For the tt̄H, H → bb̄ channel there are two main kinds of background: the irreducible background, which consists of tt̄Z with Z → bb̄ as well as tt̄bb̄ production, and reducible backgrounds containing 'light' jets mistagged as b-jets. These backgrounds are termed reducible because they act as background processes only under imperfect b-tagging, and their size can therefore be reduced by improved b-tagging. By taking into account the dependence of the mistag rate on jet variables such as pT and |η|, it is possible to construct a mistag matrix, which allows one to calculate the probability that a given light jet will be mistagged, as a function of the associated jet variables. This is often necessary when producing simulated data sets, because it is crucial to replicate the b-tagging performance of the experiment accurately.
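As an illustration of such a mistag matrix, the sketch below stores light-jet mistag probabilities binned in (pT, |η|) and looks them up per jet. The bin edges and rates are invented placeholders, not measured ATLAS values.

```python
import numpy as np

# Toy mistag matrix: light-jet mistag probability binned in (pT, |eta|).
# Bin edges and rates are illustrative placeholders only.
pt_edges = np.array([25.0, 50.0, 100.0, 200.0, 1000.0])   # GeV
eta_edges = np.array([0.0, 1.0, 2.0, 2.5])
mistag_rate = np.array([
    [0.008, 0.010, 0.014],   # 25-50 GeV
    [0.010, 0.013, 0.018],   # 50-100 GeV
    [0.013, 0.017, 0.024],   # 100-200 GeV
    [0.018, 0.024, 0.035],   # 200-1000 GeV
])

def light_jet_mistag_prob(pt, abs_eta):
    """Return the probability that a light jet with (pt, |eta|) is mistagged as a b-jet."""
    i = np.clip(np.searchsorted(pt_edges, pt, side="right") - 1, 0, mistag_rate.shape[0] - 1)
    j = np.clip(np.searchsorted(eta_edges, abs_eta, side="right") - 1, 0, mistag_rate.shape[1] - 1)
    return mistag_rate[i, j]

print(light_jet_mistag_prob(pt=75.0, abs_eta=1.4))   # 0.013 with these placeholder rates
```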
In all experimental regions (characterised by jet multiplicity and number of b-tagged jets) of the single-lepton tt̄H, H → bb̄ channel, the dominant background contribution is tt̄bb̄ production, the continuum production of a bb̄ pair in association with a top pair; a sample Feynman diagram is displayed in Figure 3.3 [1].

Figure 3.3: A contribution to continuum tt̄bb̄ production (Feynman diagram) [1].

4 Monte Carlo (MC) and Modelling

4.1 Monte Carlo (MC)

Monte Carlo (MC) methods, a broad class of computational algorithms that obtain numerical results through repeated random sampling, are used to compare theoretical predictions quantitatively with observed data. In this process, Monte Carlo generators are used to produce sets of simulated events corresponding to the predictions for different processes. The generators simulate the events at the level of the hard scattering and deliver their output in the Les Houches Event Format (LHEF), in which each event is represented by a set of particle four-vectors. These four-vectors are then used as input to an independent piece of software that implements the showering [1].

It is essential to understand that the decisions about how the different theoretical models are implemented in a computational framework are rarely unique, and they often lead to variations among the predictions of different MC programs. It is therefore important to compare the predictions of the different MC generators directly and to understand which variables and parameters are most relevant to any observed discrepancy. The analysis also needs to be robust, meaning that its conclusions should not depend on the assumptions implicit in the Monte Carlo modelling [1].

4.2 tt̄bb̄ Modelling

Understanding how the irreducible background (i.e. tt̄bb̄), an important contribution to the overall tt̄+jets background, is modelled is essential when searching for the Higgs boson in the tt̄H, H → bb̄ channel. To this end, studies were undertaken to compare the modelling of this process in the MadGraph5 generator with that of Alpgen. This comparison was of particular interest because of possible limitations of the Alpgen modelling in specific aspects of the ongoing tt̄H search: notably the inability to simulate top-quark decays in Alpgen, and the necessity of generating independent exclusive samples for tt̄ plus light and heavy flavours, with no convenient parton-jet matching available. Because of the crucial role played by heavy-flavour jets in selecting events for the analysis, this matching is of central importance in the tt̄H, H → bb̄ study: jets must be matched consistently to the partons in the event [1]. These restrictions meant that the full tt̄+jets background could not be described satisfactorily by Alpgen, which motivated continuing these studies using the MadGraph Monte Carlo instead. The significant feature of MadGraph is that it allows fully inclusive tt̄ samples to be generated, containing tt̄ with both light- and heavy-flavour jets, and matching to all parton flavours is possible. The official ATLAS MC12 samples were generated with Alpgen and are available for running on the grid; locally, MadGraph was run with a set of parameters and internal settings chosen to reproduce the shapes of the event distributions predicted by Alpgen [1].

4.3 Background and Signal Modelling

The dominant background contribution in the Standard Model tt̄H analysis is tt̄-plus-jets production, where the additional jets consist of both light and heavy flavours. Further important contributions come from the production of single top quarks, from a vector boson (W±, Z) produced in association with jets, from diboson production (WW, ZZ, WZ), and from vector-boson production in association with a tt̄ pair, denoted tt̄+V. There are also multijet contributions due to jets or photons misidentified as electrons, and non-prompt leptons produced in b- and c-hadron decays [1].

W+jets and Z+jets events, together with diboson production, are simulated with Alpgen 2.14 using the CTEQ6L1 PDF set [89]. Pythia 6.425 [133] is used for the fragmentation and parton-shower evolution in the case of W+jets, and Herwig 6.520 for Z+jets and diboson production. To avoid double-counting between the matrix-element calculation and the fragmentation, the MLM matching technique is used.
This technique uses the ALPHA algorithm to perform the matrix-element computation and to extract colour information for multi-parton processes. The W+jets and Z+jets backgrounds are then reweighted as a function of pT to account for the differences between the pT spectra in data and MC [1].

The tt̄+jets background samples are produced with Powheg using the CT10 PDF set and the nominal top-quark mass mt = 172.5 GeV, interfaced to Pythia with the CTEQ6L1 PDF set and the Perugia2011C underlying-event tune. Single-top samples corresponding to the Wt and s-channel production mechanisms are likewise produced with Powheg and the CT10 PDF set; for the t-channel, the AcerMC v3.8 LO generator is used with the MRST LO** PDF set. These samples are interfaced to Pythia with the CTEQ6L1 PDF set and the Perugia2011C underlying-event tune, and the overlap between the Wt and tt̄ samples is removed. The single-top samples are normalised to the theoretical NNLO cross-sections using the MSTW2008 NNLO PDF set [155]. The tt̄V samples are produced in MadGraph 5 with the CTEQ6L1 PDF set, interfaced with Pythia 6.425, and normalised to their theoretical NLO cross-sections. The tt̄H signal is modelled using the HELAC-Oneloop package to produce the NLO matrix elements; these are then showered using Powheg BOX as an interface to Pythia 8.1, with the CTEQ6L1 PDF set and the AU2 underlying-event tune [1].

Table 4.1 lists the event generators used to model the signal and background processes; the normalisations of the generated samples are obtained from state-of-the-art calculations. In the table, 'PYTHIA' indicates that PYTHIA8 and PYTHIA6 are used for the simulation of √s = 8 TeV and √s = 7 TeV data, respectively. The same generators are used to model the Standard Model Higgs-boson signals (H → ZZ(*) → 4ℓ, H → γγ and H → WW(*) → eνμν) [5]. This section therefore briefly describes the simulation and data-driven techniques used to model the tt̄H signal and the background processes of the Standard Model Higgs boson, which are also used to train the multivariate discriminants. The SM Higgs-boson production processes relevant to this analysis are the dominant gluon fusion (gg → H, denoted ggF), vector-boson fusion (qq′ → qq′H, denoted VBF) and Higgs-strahlung, together with the small contribution from associated production with a tt̄ pair (qq̄/gg → tt̄H).

Table 4.1: The event generators used to model the signal and background processes [5].

Process | Generators
ggF, VBF | POWHEG + PYTHIA
WH, ZH, tt̄H | PYTHIA
W+jets, Z/γ* + jets | ALPGEN + HERWIG
tt̄, tW, tb | MC@NLO + HERWIG
tqb | AcerMC + PYTHIA
qq̄ → WW | MC@NLO + HERWIG
gg → WW | gg2WW + HERWIG
qq̄ → ZZ | POWHEG + PYTHIA
gg → ZZ | gg2ZZ + HERWIG
WZ | MadGraph + PYTHIA, HERWIG
Wγ + jets | ALPGEN + HERWIG
Wγ* | MadGraph + PYTHIA
qq̄/gg → γγ | SHERPA

The VBF process includes full QCD and EW corrections up to NLO, and approximate NNLO QCD corrections are used in addition. Next-to-leading-order (NLO) electroweak (EW) corrections and QCD soft-gluon resummations up to next-to-next-to-leading logarithm (NNLL) are also applied. Cross sections for the associated WH/ZH processes (VH) are calculated with QCD corrections up to NNLO and EW corrections up to NLO. The transverse-momentum (pT) spectrum of the Higgs boson in the ggF process follows the HqT calculation, which incorporates QCD corrections at NLO and QCD soft-gluon resummation up to NNLL.
The effects of finite quark masses are also taken into account. To generate the showers and their hadronisation and to simulate the underlying event, PYTHIA6 or PYTHIA8 is used; alternatively, HERWIG or SHERPA is used to generate the showers, with the HERWIG underlying-event simulation performed using JIMMY. Many different programs are used to generate the hard-scattering processes. The branching ratios of the Standard Model Higgs boson as a function of mH are calculated using the HDECAY and PROPHECY4F programs [5].

5 Multivariate Analysis Techniques

Physics analysis problems are often multivariate in nature. Many multivariate computational methods are used in particle physics, including boosted decision trees, discriminant methods, support vector machines and artificial neural networks. The mapping between the set of input information and the final outputs is typically unknown beforehand and difficult to determine a priori; accordingly, it is appropriate to use analysis techniques that are generally applicable. The advantage of these techniques is that they are adaptable [2].

In the tt̄H analysis, a boosted decision tree (BDT) is used in each of the signal regions to discriminate between the tt̄H signal and the backgrounds. Figure 5.1 shows the fractions of the various background components, as well as the tt̄H signal purity, for each of the signal and control regions in the dilepton and single-lepton channels. The distributions of the classification BDTs in the signal regions are used as the final discriminants in the profile-likelihood fit, while in the control regions the event yield is used as input to the fit. Although the individual techniques exploit similar information, they make use of it from different perspectives and under different assumptions, so their combination further improves the separation power of the classification BDT. The properties of the Higgs-boson and top-quark candidates from the reconstruction BDT are used to define further input variables to the classification BDT. Several combinations of jets are possible when reconstructing the Higgs-boson and top-quark candidates in order to explore their properties and the signal event topology. To improve the signal separation, three intermediate variable techniques are applied before the classification BDT. First, the reconstruction BDT is used to choose the best combination of jet assignments in each event and to build the Higgs-boson and top-quark candidates. Second, a likelihood discriminant (LHD) method combines the signal and background probabilities of all possible combinations in each event. Finally, a matrix element method (MEM) exploits the full matrix-element calculation to separate the signal from the background. The outputs of the three intermediate-variable methods are used as input variables to the classification BDT in one or more of the signal regions [2].

Figure 5.1: Fractional contributions of the various backgrounds to the total in each analysis category of the single-lepton channel [2].

5.1 Classification BDT and Reconstruction BDT

The outputs of the reconstruction BDT, the LHD and the MEM are the most important variables in the classification BDT.
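In the analysis itself the BDTs are trained with TMVA. Purely as an illustration of the classification step, the sketch below trains a gradient-boosted decision tree with scikit-learn on toy variables standing in for the kinematic and b-tagging inputs; all distributions, variable names and parameters are invented for the example and do not correspond to the analysis configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

def make_events(mean_mbb, mean_dr, mean_btag, label):
    """Generate toy events with three stand-in discriminating variables."""
    X = np.column_stack([
        rng.normal(mean_mbb, 25.0, n),    # "m_bb"-like variable [GeV]
        rng.normal(mean_dr, 0.8, n),      # "DeltaR(b,b)"-like variable
        rng.normal(mean_btag, 0.15, n),   # b-tagging-discriminant-like variable
    ])
    return X, np.full(n, label)

X_sig, y_sig = make_events(125.0, 2.0, 0.85, 1)   # ttH-like toy sample
X_bkg, y_bkg = make_events(90.0, 2.8, 0.60, 0)    # tt+jets-like toy sample
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([y_sig, y_bkg])

# Train on a statistically independent subset, as in the text, and evaluate on the rest.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)
print(f"test accuracy: {bdt.score(X_test, y_test):.3f}")
```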
The classification BDT is trained to separate the signal from the tt̄ background on a sample that is statistically independent of the one used in the analysis; the Toolkit for Multivariate Analysis (TMVA) is used to train both this BDT and the reconstruction BDT. The classification BDT is built by combining many input variables that exploit the different kinematics of signal and background events, as well as the b-tagging information. General kinematic variables, such as invariant masses and angular separations of pairs of reconstructed jets and leptons, are combined with the outputs of the intermediate-variable discriminants and the b-tagging discriminants of the selected jets [2].

W-boson, top-quark and Higgs-boson candidates are built from combinations of jets and leptons. The b-tagging information is used to discard combinations containing jet-parton assignments inconsistent with the expected candidate flavour. For this purpose, the reconstruction BDT is used in all dilepton and resolved single-lepton signal regions; it is trained to match reconstructed jets to the partons emitted in top-quark and Higgs-boson decays [2].

In leptonic decays, W-boson candidates in the single-lepton channel are assembled from the lepton and neutrino four-momenta (p_ℓ and p_ν respectively), where the transverse components of the neutrino momentum are taken from the missing transverse momentum. The unknown longitudinal component of the neutrino momentum is obtained by solving the following equation [2]:

m_W² = (p_ℓ + p_ν)²   (12)

where m_W is the W-boson mass. The equation is solved independently for each combination; if no real solutions exist, the discriminant of the equation is set to zero, giving a unique solution. Hadronically decaying W-boson candidates and hadronically decaying Higgs-boson candidates are each formed from pairs of jets. Top-quark candidates are formed from one jet and one W-boson candidate, where the top-quark candidate
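Returning to Eq. (12): with the neutrino taken to be massless and its transverse momentum fixed to the missing transverse momentum, the W-mass constraint becomes a quadratic equation in the neutrino longitudinal momentum. A minimal sketch of that solution is given below; the lepton mass is neglected and the numerical inputs are made up for illustration.

```python
import math

MW = 80.4  # W-boson mass in GeV

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    """Solve Eq. (12), m_W^2 = (p_lep + p_nu)^2, for the neutrino p_z.

    Assumes a massless neutrino with transverse momentum equal to the missing
    transverse momentum, and neglects the lepton mass. If the discriminant is
    negative it is set to zero, giving a unique solution, as described above.
    """
    mu = 0.5 * MW**2 + lep_px * met_x + lep_py * met_y
    pt_lep2 = lep_px**2 + lep_py**2
    met2 = met_x**2 + met_y**2
    a = mu * lep_pz / pt_lep2
    disc = a**2 - (lep_e**2 * met2 - mu**2) / pt_lep2
    if disc < 0.0:
        disc = 0.0
    root = math.sqrt(disc)
    return a + root, a - root

# Example with made-up lepton and missing-momentum components (GeV)
print(neutrino_pz(30.0, 10.0, 20.0, 37.4, 25.0, -5.0))
```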

 

Problem-Solving-with-Persona-Dolls

Problem Solving with Persona Dolls

Exploring ways in which to use persona dolls to help preschool children participate in the process of considering, understanding, and solving specific problems.

To begin, identify a problem related to an “–ism” (such as racism, sexism, heterosexism, religionism, and/or classism) that may come up as young children interact and express their feelings and emotions. For example, in the article “Problem Solving with Young Children Using Persona Dolls,” the teacher uses a persona doll, Tanisha, to address a problem related to racial prejudice that she is noticing in her classroom. The teacher explains that Tanisha’s feelings have been hurt because some children did not want to play with her because of the color of her skin.

Write a problem statement from the point of view of a persona doll (like the example with Tanisha: “No one will play with me because they don’t like the color of my skin. That hurts my feelings and makes me mad.”)

Reference: ARTICLE ATTACHED

Pierce, J., & Johnson, C. (2010). Problem solving with young children using persona dolls. YC: Young Children, 65(6), 106-108. Retrieved from the Walden Library using the Education Research Complete database

 

human-life-cycle-paragraph-3

check the grammar and your writing is good please

 

Assume that Bikesfriends Company manufactures entry-level bicycles. The company currently sells…

Assume that Bikesfriends Company manufactures entry-level bicycles. The company currently sells its product to wholesale distributors in Munich, Bavaria, and Stuttgart. Because of the popularity of bicycles, the company is considering distributing to northern wholesalers as well.

Although wholesale prices vary depending on the quantity of bicycles purchased by a distributor, revenue consistently averages $90 per bicycle sold.

They sell 900 bikes a month.

Pitsburg Company's monthly operating statistics are shown as follows:


 

please answer all parts. 4.30 In the SMD system shown in Fig. P4.30, vx(t) is the input velocity of.

please answer all parts.

4.30 In the SMD system shown in Fig. P4.30, vx(t) is the input velocity of the platform and vy(t) is the output velocity of the 100 kg mass.

Figure P4.30: SMD system of Problem 4.30.

(a) Draw the equivalent s-domain circuit.
(c) Determine the frequency response. Hint: Use two node equations.
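The parameter values in Fig. P4.30 did not survive the copy cleanly, so the sketch below only illustrates the general workflow: build the velocity-to-velocity transfer function of a base-excited single mass-spring-damper (assuming m = 100 kg, b = 100 N·s/m, k = 100 N/m purely as placeholders) and evaluate its frequency response numerically. The actual problem requires the two-mass system of the figure and two node equations.

```python
import numpy as np
from scipy import signal

# Placeholder parameters: a single mass driven through a spring and damper.
m, b, k = 100.0, 100.0, 100.0   # kg, N*s/m, N/m

# For base excitation, Vy(s)/Vx(s) = (b*s + k) / (m*s^2 + b*s + k),
# the same form as the displacement-to-displacement transfer function.
system = signal.TransferFunction([b, k], [m, b, k])

w = np.logspace(-2, 2, 500)                 # angular frequency grid [rad/s]
w, mag_db, phase_deg = signal.bode(system, w)
peak = np.argmax(mag_db)
print(f"peak gain {mag_db[peak]:.2f} dB at w = {w[peak]:.2f} rad/s")
```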

 

watch-video-and-answer-discussion-posts-in-2-paragraphs

Please write at least two large paragraphs describing your critical thoughts and reflections on the movie The Race to Rebuild America’s Infrastructure. Specifically, what do you think about the issue of an aging and deteriorating American public infrastructure system? What are the possible consequences of an aging and deteriorating American public infrastructure system? How do we take action to address this issue?

See the Movie Link:

 

the-independent-variable

Need all answers

 

Week7-MSProjectPlan-Final-Version

Executive Summary

As the primary audience for this work is executive management (for whom time is an ongoing concern), the custom is to provide an Executive Summary that gives that audience a quick but factual high-level view of the project status, both current and projected. The notion is that executive management will only look at the summary, unless there are particular items or issues they want to dive into in the detail, which you will provide in a separate essay.

The most effective way to write the executive summary is first to write the detailed essay (see Assignment 2 below) and then summarize that material into this shorter, more high-level document, but one still containing the pertinent data and proposed direction.

Your executive summary will highlight the current status of your project and your current EVA numbers. (The current status date is the 60% date we calculated in Week 5.)

Your executive summary will highlight any issues or problems you found with the project in terms of schedule and cost.

Your executive summary will then explain, in summary, the steps that you are taking to bring the project back to baseline, or if that is not completely possible, what will be the effect by project completion.

You will be using a template named “add your name here-Final Project Essays” for your Essay (you will be using the same template for all essays).

At LEAST 2 pages in length, with appropriate references (you didn’t invent the strategies for project control).

At least two different reference sources are required.

Formatting:

o Double-space (no additional space between paragraphs)

o One-inch margins

o Times Roman 12-point font size

o Left justified

Point value 20 points

Assignment 2

Controlling Your Project Essay.

We had you mess with your project plan by forcing a 40% completion with 60% schedule progress, and several other common issues (schedule slippage, cost overruns, etc.). You did your EVA forecast last week, and now it’s time to take steps, based on what you found, to bring the project back to baseline or as close to it as you can get. You may not simply back out the changes we had you put in. For the purposes of our educational goals, we are going to say those things happened: the current percent complete of your project is 40%, and we are 60% into your schedule (that is, your status date is the 60% calculated date). So MAKE SURE your STATUS DATE in the PROJECT INFORMATION window is the 60% calculated date, NOT BLANK, or all your EVA numbers will be bogus! Make sure your % complete is still 40%, or you have changed the scenario's starting point.
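As a reminder of how the forecast numbers are produced, here is a small sketch of the standard earned-value formulas (CV, SV, CPI, SPI, EAC, TCPI). The PV/EV/AC figures are placeholders chosen to mimic the 40%-complete / 60%-elapsed scenario, not values from any particular project plan.

```python
# Earned-value calculations with illustrative placeholder inputs.
BAC = 200_000.0   # budget at completion
PV = 120_000.0    # planned value at the status date (60% of the schedule)
EV = 80_000.0     # earned value (40% complete, per the scenario)
AC = 110_000.0    # actual cost to date

CV, SV = EV - AC, EV - PV            # cost variance, schedule variance
CPI, SPI = EV / AC, EV / PV          # cost and schedule performance indices
EAC = BAC / CPI                      # estimate at completion (CPI method)
TCPI = (BAC - EV) / (BAC - AC)       # to-complete performance index

print(f"CV={CV:,.0f}  SV={SV:,.0f}  CPI={CPI:.2f}  SPI={SPI:.2f}")
print(f"EAC={EAC:,.0f}  TCPI={TCPI:.2f}")
```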

In theory, we always need to balance project scope with project budget and the project schedule. In practice, that seldom happens; usually, something has to give. My normal prescription is: freeze one (for example, scope), manage another (for example, budget), and float the third, which in this example would be schedule.

So, in this example, we would guarantee to deliver to the original scope, PERIOD! We would try to stay as close to the original budget as possible, but the schedule may move out a bit (or a bunch).

You may not be facing such dire consequences, or you might be. At a conceptual level, facing such issues, I don’t know what your stakeholders would want to do in terms of the triple constraints; that is, which to freeze, which to manage, and which to float. You have to figure that out.

We’ve talked about the fundamental things you can do in such circumstances (resource leveling, getting schedule/scope/budget relief, swapping to more productive resources, fast-tracking, crashing, etc.). Now it’s up to you to analyze where you are and come up with a plan to fix the problems. You will be documenting your plan and implementing portions of it by modifying a version of your .MPP file and seeing what happens to your numbers. You will also be submitting that modified project plan (without any overallocated resources) for review.

You will write an essay on what the possible options are when a project is at odds with its baseline. What options are available?

You will then document how you plan on bringing your project back in line with your baseline.

You will explain in detail what your forecast numbers reveal as to your cost and schedule issues before corrective actions are taken.

You will explain in detail the changes you plan to make to task durations, resources, budget values, project scope, etc. to bring your project back in line

You will be using a template named “add your name here-Final Project Essays” for your Essay (you will be using the same template for all essays).

At least 3 pages or longer in length.

Formatting:

o Double-space (no additional space between paragraphs)

o One-inch margins

o Times Roman 12-point font size

o Left justified

At least two different reference sources are required.

Point value 60 points

Assignment 3

PROJ-592 Lessons Learned Essay.

For the last 7+ weeks, you have been working on your course project. You have selected what I hope has worked out to be an effective project subject, constructed a detailed project plan, and gone through a whole series of analysis and modifications, including simulating common project concerns and issues.

We have also covered a myriad of topics and ideas in our reading and in our threaded interactions. We have practiced multiple skills, for example creating AIBs, crashing projects at the least cost, calculating PERT means, and calculating, interpreting, and leveraging Earned Value statistics.

It is my hope that, wherever you sit in the range of prior project exposure, from rookie to experienced professional, this course and these activities have produced moments of insight and learning. Here is your chance to document your journey from Week 1 to Week 7 in terms of that educational achievement.

The expectation is that you will write a detailed review of your insights and takeaways, covering the large list of topics and concepts we’ve addressed from week one all the way through week seven. What concepts, skills, tools, etc. have you learned, and why are they important? Your essay should include your experience with your seven-part course project. What would you do again? What is on your list of things never to repeat? What insights and realizations have you reached about the challenges of real-world project management (even in our simplified and highly manipulated theoretical environment)?

At least 3 pages or longer in length.

Formatting:

o Double-space (no additional space between paragraphs)

o One-inch margins

o Times Roman 12-point font size

o Left justified

Point value 45 points

Assignment 4

Final Version of your Project Plan

You’ve updated your Microsoft Project plan to implement some of the ideas expressed in your essay, in an effort to bring your project back to your baseline. Now let’s take a look at your final plan product.

What changes did you make to the final plan?

o Did you eliminate all overallocated resources?

o Maybe you fast-tracked some tasks?

o Maybe you de-scoped some deliverables?

o What does your TCPI look like now?

Submit your final version of your project plan (.MPP file) as:

o Week7_MSProjectPlan Final Version_vn.mpp (where “vn” is your version number, as in Week7_MSProjectPlan Final Version_George_v8.mpp)

Point value 40 points

 

Module 7 Discussion

Choose an organization that you are interested in and familiar with (this may include your current job). Discuss how this particular organization currently uses short-term and long-term incentives to motivate employees. Be sure to address the following:

- What type of organization is it? (If you prefer, you may withhold the name of the organization.)
- List and briefly describe the short-term incentives the organization offers.
- List and briefly describe the long-term incentives the organization offers.
- How are these incentives used to motivate employees? (Are they effective? Why or why not?)
- Suggest several specific ways that new or different incentives would further motivate employees and benefit the organization as a whole.
- Use references from both the textbook and outside sources to support why you think your suggestions would benefit this organization.

Specifically:

- How realistic were your peers' suggestions for added incentives? Why?
- What were the best suggestions that your peers made?
- What suggestions would you add to your peers' incentive plan?
- What information did your peers overlook in their responses to this issue?

 
Do you need a similar assignment done for you from scratch? We have qualified writers to help you. We assure you an A+ quality paper that is free from plagiarism. Order now for an Amazing Discount!
Use Discount Code "Newclient" for a 15% Discount!

NB: We do not resell papers. Upon ordering, we do an original paper exclusively for you.

In your own words, define the terms fixed mindset and growth mindset . Describe a situation (i.e., p

    • In your own words, define the terms fixed mindset and growth mindset.

 

    • Describe a situation (i.e., personal, professional, or academic) where you had a fixed mindset. What thoughts did you have or comments did you make to reflect a fixed mindset?

 

    • How could you have changed your thoughts to reflect this idea of a growth mindset?

 

  • Explain how having a growth mindset will help you persevere to graduation.
     