Bayesian Inference for QTLs in Inbred Lines


Bayesian QTL Mapping
Brian S. Yandell, University of Wisconsin-Madison
www.stat.wisc.edu/~yandell/statgen
UW-Madison, April 2008

outline
1. What is the goal of QTL study?
2. Bayesian vs. classical QTL study
3. Bayesian strategy for QTLs
4. model search using MCMC: Gibbs sampler and Metropolis-Hastings
5. model assessment: Bayes factors & model averaging
6. analysis of hyper data
7. software for Bayesian QTLs

1. what is the goal of QTL study?
uncover underlying biochemistry: identify how networks function, break down; find useful candidates for (medical) intervention; epistasis may play a key role; statistical goal: maximize number of correctly identified QTL
basic science/evolution: how is the genome organized? identify units of natural selection; additive effects may be most important (Wright/Fisher debate); statistical goal: maximize number of correctly identified QTL
select elite individuals: predict phenotype (breeding value) using a suite of characteristics (phenotypes) translated into a few QTL; statistical goal: minimize prediction error

advantages of multiple QTL approach
improve statistical power, precision: increase number of QTL detected; better estimates of loci: less bias, smaller intervals
improve inference of complex genetic architecture: patterns and individual elements of epistasis; appropriate estimates of means, variances, covariances (asymptotically unbiased, efficient); assess relative contributions of different QTL
improve estimates of genotypic values: less bias (more accurate) and smaller variance (more precise); mean squared error = MSE = (bias)² + variance (a small numerical check follows below)

Pareto diagram of QTL effects
[figure: additive effect vs. rank order of QTL — major QTL (some on the linkage map), minor QTL (modifiers), polygenes; intuitive idea of ellipses: horizontal = significance, vertical = support interval]
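A small numerical illustration (not from the original slides) of the decomposition MSE = (bias)² + variance, using a made-up shrunken estimator of an effect; values are arbitrary.

# minimal sketch: verify MSE = bias^2 + variance for a deliberately biased estimator
set.seed(5)
true_effect <- 2
est <- replicate(10000, 0.8 * mean(rnorm(20, true_effect, 1)))  # shrunken: biased, low variance
bias <- mean(est) - true_effect
c(mse = mean((est - true_effect)^2), bias2_plus_var = bias^2 + var(est))  # nearly equal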

check QTL in context of genetic architecture
scan for each QTL adjusting for all others: adjust for linked and unlinked QTL; adjust for linked QTL to reduce bias; adjust for unlinked QTL to reduce variance; adjust for environment/covariates
examine entire genetic architecture: number and location of QTL, epistasis, GxE; model selection for best genetic architecture

2. Bayesian vs. classical QTL study
classical study: maximize over unknown effects; test for detection of QTL at loci; model selection in stepwise fashion
Bayesian study: average over unknown effects; estimate chance of detecting QTL; sample all possible models
both approaches: average over missing QTL genotypes; scan over possible loci

Who was Bayes?
Reverend Thomas Bayes (1702-1761): part-time mathematician; buried in Bunhill Cemetery, Moorgate, London; famous paper in 1763 Phil Trans Roy Soc London
Barnard (1958 Biometrika), Press (1989) Bayesian Statistics, Stigler (1986) History of Statistics, Carlin Louis (1996), Gelman et al. (1995) books
Was Bayes the first with this idea? (Laplace)
billiard balls on rectangular table: two balls tossed at random (uniform) on the table; where is the first ball if the second is to its right (left)?

Where is the first ball?
let θ be the position of the first ball; Y = 1 if the second ball lands to its left, Y = 0 otherwise
prior: pr(θ) = 1
likelihood: pr(Y | θ) = θ^Y (1 − θ)^(1−Y)
posterior: pr(θ | Y) = pr(Y | θ) pr(θ) / pr(Y), with pr(Y) = ∫ θ^Y (1 − θ)^(1−Y) dθ
so pr(θ | Y = 1) = 2θ and pr(θ | Y = 0) = 2(1 − θ)
[figure: prior pr(θ) and posteriors pr(θ | Y) for Y = 0 and Y = 1]
(now throw the second ball n times)

What is Bayes Theorem?
before and after observing data
prior: pr(θ) = pr(parameters)
posterior: pr(θ | Y) = pr(parameters | data)

posterior = likelihood * prior / constant
the usual likelihood of parameters given data
normalizing constant pr(Y) depends only on data; constant often drops out of calculation
pr(θ | Y) = pr(θ, Y) / pr(Y) = pr(Y | θ) pr(θ) / pr(Y)
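A quick grid check (not from the original slides) of Bayes' rule for the billiard-ball example above: uniform prior, Bernoulli likelihood, one observation Y = 1; the grid and step size are arbitrary.

# minimal sketch: posterior = likelihood * prior / constant, approximated on a grid
theta <- seq(0, 1, by = 0.001)
prior <- rep(1, length(theta))                   # pr(theta) = 1 on [0,1]
Y <- 1
lik  <- theta^Y * (1 - theta)^(1 - Y)            # pr(Y | theta)
post <- lik * prior / sum(lik * prior * 0.001)   # normalize to a density
max(abs(post - 2 * theta))                       # close to the exact answer 2*theta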

What is Probability?
Frequentist analysis: chance over many trials; long run average; long-term frequency of estimates and confidence intervals; hypothesis tests: p-values, Type I error rate (reject null when true), chance of extreme result
Bayesian analysis: uncertainty of true value; prior uncertainty before data, incorporating prior knowledge/experience; posterior uncertainty after analyzing current data; balance prior and data

Likelihood and Posterior Example (M. Newton, pers. comm.)
data: Y = 1, 3, 8; parameter: θ = ?
pr(Y = y | θ) = θ^y e^(−θ) / y!
[figure: likelihood and posterior over the parameter, three panels labeled parameter = 2, 4, 6]

Frequentist or Bayesian?
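A small sketch (not from the original slides) of the likelihood/posterior example above: Poisson counts Y = 1, 3, 8 with a flat prior on the mean over a grid; the prior and grid are assumptions for illustration only.

# minimal sketch: Poisson likelihood and grid posterior for data Y = 1, 3, 8
y <- c(1, 3, 8)
theta <- seq(0.01, 10, by = 0.01)
loglik <- sapply(theta, function(th) sum(dpois(y, lambda = th, log = TRUE)))
lik  <- exp(loglik - max(loglik))     # rescaled likelihood
post <- lik / sum(lik * 0.01)         # posterior density under a flat prior
theta[which.max(post)]                # posterior mode = sample mean = 4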

Frequentist approach: fixed parameters (a range of values); maximize likelihood (ML estimates; find the peak); confidence regions (random region; invert a test); hypothesis testing (2 nested models)
Bayesian approach: random parameters (a distribution); posterior distribution (posterior mean; sample from distribution); credible sets (fixed region given the data; HPD regions); model selection/critique (Bayes factors)

Frequentist or Bayesian?
Frequentist approach: maximize over mixture of QT genotypes

locus profile likelihood: max over effects; HPD region for locus: natural for locus; 1-2 LOD drop; work to get effects
Bayesian approach: joint distribution over QT genotypes; sample distribution of joint effects & loci; HPD regions for joint locus & effects; use density estimator; approximate shape of likelihood peak

Choice of Bayesian priors
elicited priors: higher weight for more probable parameter values; based on prior empirical knowledge; use previous study to inform current study (weather prediction, previous QTL studies on related organisms)
conjugate priors: convenient mathematical form; essential before computers, helpful now to simplify computation; large variances on priors reduce their influence on the posterior
non-informative priors: may have no information on unknown parameters; prior with all parameter values equally likely; may not sum to 1 (improper), which can complicate use
always check sensitivity of posterior to choice of prior

QTL model selection: key players
observed measurements: y = phenotypic trait; m = markers & linkage map; i = individual index (1,…,n)
missing data: missing marker data; q = QT genotypes

missing alleles QQ, Qq, or qq at locus
unknown quantities: λ = QT locus (or loci); θ = phenotype model parameters; A = QTL model/genetic architecture
pr(q | m, λ, A): genotype model — grounded by linkage map and experimental cross; recombination yields multinomial for q given m
pr(y | q, θ, A): phenotype model — distribution shape (assumed normal here); unknown parameters θ (could be non-parametric)
[diagram after Sen Churchill (2001): observed markers m and phenotypes y, missing genotypes q, unknowns λ, θ, A]

likelihood and posterior
posterior = likelihood * prior / constant (Bayes' rule):
pr(θ, λ, A | y, m) = pr(y | m, λ, θ, A) pr(θ | A) pr(λ | m, A) pr(A) / pr(y | m)
likelihood mixes over missing QTL genotypes:
pr(y | m, λ, θ) = sum_q pr(y | q, θ) pr(q | m, λ)

Bayes posterior vs. maximum likelihood (genetic architecture A = single QTL at λ)
LOD: classical Log ODds; maximize likelihood over effects; R/qtl scanone/scantwo: method = em
LPD: Bayesian Log Posterior Density; average posterior over effects; R/qtl scanone/scantwo: method = imp
LOD(λ) = log10 max_θ pr(y | m, λ, θ)
LPD(λ) = log10 [ pr(λ | m) ∫ pr(y | m, λ, θ) pr(θ) dθ ]
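A minimal sketch (assuming the R/qtl package is installed) comparing the two scanone methods named above on R/qtl's bundled hyper data; this is only a genome-scan illustration, not the Bayesian machinery of the later slides.

# minimal sketch: scanone with method = "em" (LOD-style) vs. method = "imp" (imputation)
library(qtl)
data(hyper)
hyper  <- calc.genoprob(hyper, step = 1)
out.em <- scanone(hyper, method = "em")
hyper   <- sim.geno(hyper, step = 1, n.draws = 64)
out.imp <- scanone(hyper, method = "imp")
plot(out.em, out.imp, col = c("blue", "red"))   # compare the two genome scans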

LOD & LPD: 1 QTL; n.ind = 100, 1 cM marker spacing [figure]

Simplified likelihood surface: 2-D for BC, locus and effect
locus λ and effect Δ = μ2 − μ1
profile likelihood along ridge: maximize likelihood at each λ; symmetric in Δ around the MLE given λ
weighted average of posterior: average likelihood at each λ with weight p(Δ); how does the prior p(Δ) affect symmetry?

LOD & LPD: 1 QTL; n.ind = 100, 10 cM marker spacing [figure]

likelihood and posterior
likelihood relates known data (y, m, q) to unknown values of interest (λ, θ, A):
pr(y, q | m, λ, θ, A) = pr(y | q, θ, A) pr(q | m, λ, A)
mix over unknown genotypes (q)
posterior turns likelihood into a distribution: weight likelihood by priors; rescale to sum to 1.0
posterior = likelihood * prior / constant

marginal LOD or LPD
What is the contribution of a QTL adjusting for all others? improvement in LPD due to QTL at locus λ; contribution due to main effects, epistasis, GxE?
How does adjusted LPD differ from unadjusted LPD? raised by removing variance due to unlinked QTL; raised or lowered due to bias of linked QTL; analogous to Type III adjusted ANOVA tests
can ask these same questions using classical LOD; see Broman's newer tools for multiple QTL inference

LPD: 1 QTL vs. multi-QTL: marginal contribution to LPD from QTL at λ [figure: 1st QTL, 2nd QTL]
substitution effect: 1 QTL vs. multi-QTL: single QTL effect vs. marginal effect from QTL at λ [figure: 1st QTL, 2nd QTL]

3. Bayesian strategy for QTLs
augment data (y, m) with missing genotypes q
build model for augmented data: genotypes (q) evaluated at loci (λ) depend on flanking markers (m); phenotypes (y) centered about effects (θ) depend on missing genotypes (q); λ and θ depend on genetic architecture (A)
How complicated is the model? number of QTL, epistasis, etc.
sample from model in some clever way
infer most probable genetic architecture: estimate loci, their main effects and epistasis; study properties of estimates

do phenotypes help to guess genotypes?
what are the probabilities for genotype q between markers D4Mit41 and D4Mit214? all recombinants; AA:AB have 1:1 prior ignoring y; what if we use y?
[figure: phenotype (bp) by genotype at flanking markers D4Mit41 / D4Mit214]

posterior on QTL genotypes q
full conditional of q given data and parameters: proportional to prior pr(q | m, λ) — weight toward q that agrees with flanking markers; proportional to likelihood pr(y | q, θ) — weight toward q with similar phenotype values; posterior balances these two; this is the E-step of EM computations
pr(q | y, m, θ, λ) = pr(y | q, θ) pr(q | m, λ) / pr(y | m, θ, λ)

Bayes for normal data: Y = G + E
posterior for a single individual: environment E ~ N(0, σ²), σ² known
likelihood pr(Y | G, σ²) = N(Y | G, σ²)
prior pr(G | σ², κ, μ) = N(G | μ, σ²/κ)
posterior pr(G | Y, σ², κ, μ) = N(G | μ + B1(Y − μ), B1 σ²)
posterior for a sample of n individuals, Yi = G + Ei:
pr(G | Y, μ, σ², κ) = N(G | μ + Bn(Ȳ − μ), Bn σ²/n), with Ȳ = sum_i Yi / n and Bn = n/(n + κ)
shrinkage weights Bn go to 1 as n grows
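A tiny numerical sketch (not from the original slides) of the shrinkage posterior above, using made-up values for the prior mean, variance and scale κ.

# minimal sketch: posterior mean/variance for G with prior N(mu, sigma2/kappa)
mu <- 10; sigma2 <- 4; kappa <- 2
y <- c(11.2, 9.8, 12.1, 10.5, 11.0)
n <- length(y); ybar <- mean(y)
Bn <- n / (n + kappa)                  # shrinkage weight, -> 1 as n grows
post_mean <- mu + Bn * (ybar - mu)
post_var  <- Bn * sigma2 / n
c(post_mean = post_mean, post_var = post_var)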

effect of prior variance on posterior
[figure: normal prior, posterior for n = 1, posterior for n = 5, true mean]

where are the genotypic means? (phenotype mean for genotype q is μ_q)
[figure: phenotype values y for genotypes qq, Qq, QQ; prior mean, data means, and posterior means for small and large n]

prior & posteriors: genotypic means μ_q
prior for genotypic means: centered at grand mean; variance related to heritability of effect; hyper-prior on variance (details omitted)
posterior: shrink genotypic means toward grand mean; shrink variance of genotypic mean
prior: E(μ_q) = ȳ (grand mean), with prior variance tied to V(y) h_q²
posterior: E(μ_q | y) = ȳ(1 − b_q) + ȳ_q b_q, V(μ_q | y) = V(ȳ_q) b_q
shrinkage weight b_q ≤ 1 (b_q → 1 as the data dominate)

Empirical Bayes: choosing hyper-parameters
How do we choose hyper-parameters μ, κ? Empirical Bayes: marginalize over prior; estimate μ, κ from marginal posterior
likelihood pr(Yi | Qi, G, σ²) = N(Yi | G(Qi), σ²)

prior pr(G_Q | σ², κ, μ) = N(G_Q | μ, σ²/κ)
marginal pr(Yi | σ², κ, μ) = N(Yi | μ, σ²(κ + 1)/κ)
estimates: μ̂ = Ȳ, s² = sum_i (Yi − Ȳ)² / (n − 1); hyper-parameter κ then estimated from σ²/s²
EB posterior: pr(G_Q | Y) = N(G_Q | Ȳ + B_Q(Ȳ_Q − Ȳ), B_Q σ²/n_Q)

What if variance σ² is unknown?
recall that sample variance is proportional to chi-square: pr(s² | σ²) = χ²(ns²/σ² | n), or equivalently ns²/σ² | σ² ~ χ²_n
conjugate prior is inverse chi-square: pr(σ² | ν, τ²) = inv-χ²(σ² | ν, τ²), or equivalently ντ²/σ² | ν, τ² ~ χ²_ν
empirical choice: τ² = s²/3, ν = 6, giving E(σ² | ν, τ²) = s²/2 and Var(σ² | ν, τ²) = s⁴/4
posterior given data: pr(σ² | Y, ν, τ²) = inv-χ²(σ² | ν + n, (ντ² + ns²)/(ν + n))
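A short sketch (not from the original slides) of the conjugate variance update above, drawing from the scaled inverse chi-square posterior; the data and the (ν, τ²) choice follow the empirical rule on the slide.

# minimal sketch: posterior draws of sigma^2 from inv-chisq(nu + n, (nu*tau2 + n*s2)/(nu + n))
set.seed(2)
y <- rnorm(25, mean = 0, sd = 1.5)
n <- length(y); s2 <- mean((y - mean(y))^2)
nu <- 6; tau2 <- s2 / 3                              # empirical choice from the slide
draws <- (nu * tau2 + n * s2) / rchisq(5000, df = nu + n)  # scaled inverse chi-square draws
c(posterior_mean = mean(draws), sample_var = s2)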

multiple QTL phenotype model
phenotype affected by genotype & environment: E(y | q) = μ_q = β0 + sum_{j in H} β_j(q)
number of terms in QTL model H ≤ 2^nqtl (3^nqtl for F2)
partition genotypic mean into QTL effects:
μ_q = β0 + β1(q1) + β2(q2) + β12(q1, q2) = mean + main effects + epistatic interactions
partition prior and posterior (details omitted)

QTL with epistasis: same phenotype model overview
Y = μ_q + e, var(e) = σ²
partition of genotypic value with epistasis: μ_q = μ + β_q1 + β_q2 + β_q12
partition of genetic variance & heritability:

σ_q² = σ_1² + σ_2² + σ_12² = var(μ_q)
h_q² = σ_q² / (σ_q² + σ²) = h_1² + h_2² + h_12²

partition of multiple QTL effects
partition genotype-specific mean into QTL effects: μ_q = mean + main effects + epistatic interactions
μ_q = μ + β_q = μ + sum_{j in A} β_qj
priors on mean and effects:
μ ~ N(μ0, κ0 σ²) — grand mean
β_q ~ N(0, κ1 σ²) — model-independent genotypic effect
β_qj ~ N(0, κ1 σ²/|A|) — effects down-weighted by size of A
determine hyper-parameters via empirical Bayes: μ0 = Ȳ and κ1 = h_q²/(1 − h_q²)

Recombination and Distance
assume map and marker distances are known; useful approximation for QTL linkage
Haldane map function: no crossover interference

independence implies crossover events are Poisson; all computations consistent in approximation; rely on given map with known marker locations; 1-to-1 relation of distance to recombination; all map functions are approximate anyway
Haldane: r = (1 − e^(−2d)) / 2, or d = −(1/2) log(1 − 2r), with d = distance (Morgans) and r = recombination rate
[figure: recombination rate r vs. distance d for Haldane, Kosambi, Morgan map functions]
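A minimal sketch (not from the original slides) of the Haldane map function above and its inverse; the distance grid is arbitrary.

# minimal sketch: Haldane map function and its inverse
haldane     <- function(d) (1 - exp(-2 * d)) / 2     # distance (Morgans) -> recombination r
haldane.inv <- function(r) -0.5 * log(1 - 2 * r)     # recombination r -> distance
d <- seq(0, 2, by = 0.01)
plot(d, haldane(d), type = "l", xlab = "distance (Morgans)",
     ylab = "recombination rate r", ylim = c(0, 0.5))
haldane.inv(haldane(0.1))                            # round trip recovers 0.1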

recombination model pr(Q | X, λ)
locus λ is the distance along the linkage map; identifies the flanking marker region
flanking markers provide a good approximation: map assumed known from earlier study; inaccuracy slight using only flanking markers; extend to next flanking markers if missing data; could consider a more complicated relationship, but little change in results
pr(Q | X, λ) = pr(geno | map, locus) ≈ pr(geno | flanking markers, locus)
[diagram: flanking markers X_k and X_(k+1) with Q? between them]

Where are the loci on the genome?
prior over genome for QTL positions: flat prior = no prior idea of loci; or use prior studies to give more weight to some regions
posterior depends on QTL genotypes q: pr(λ | m, q) = pr(λ) pr(q | m, λ) / constant
constant determined by averaging over all possible genotypes q and over all possible loci λ on the entire map; no easy way to write down the posterior

prior & posterior for QT locus
no prior information on locus: uniform prior over genome; use framework map: choose interval proportional to length, then pick a uniform position within the interval
prior information from other studies: concentrate on credible regions; use posterior of previous study as new prior
[figure: prior and posterior densities for the QT locus vs. distance (cM)]

model fit with multiple imputation (Sen and Churchill 2001)
pick a genetic architecture: 1, 2, or more QTL
fill in missing genotypes at pseudomarkers: use prior recombination model; use clever weighting (importance sampling)
compute LPD, effect estimates, etc.

4. QTL Model Search using MCMC
construct Markov chain around posterior: want the posterior as the stable distribution of the Markov chain; in practice, the chain tends toward the stable distribution (initial values may have low posterior probability; burn-in period to get the chain mixing well)
sample QTL model components from full conditionals: sample locus λ given q, A (Metropolis-Hastings step); sample genotypes q given λ, θ, y, A (Gibbs sampler); sample effects θ given q, y, A (Gibbs sampler); sample QTL model A given λ, θ, y, q (Gibbs or M-H)
(λ, q, θ, A) ~ pr(λ, q, θ, A | y, m): (λ, q, θ, A)_1 → (λ, q, θ, A)_2 → … → (λ, q, θ, A)_N

EM-MCMC duality
EM approaches can be redone with MCMC: EM estimates & maximizes; MCMC draws random samples; simulated annealing gradually cools towards the peak; both can address the same problem; sometimes EM is hard (impossible) to use
MCMC is a tool of last resort: use exact methods if you can; try other approximate methods; be clever! (math, computing tricks); very handy for hard problems in genetics

Why not Ordinary Monte Carlo?
independent samples of the joint distribution; chaining (or peeling) of effects:
pr(θ | Y, Q) = pr(G_Q | Y, Q, σ²) pr(σ² | Y, Q) — possible analytically here given genotypes Q
Monte Carlo: draw N samples from the posterior; sample variance σ²; sample genetic values G_Q given variance σ²
but we know markers X, not genotypes Q! would have a messy average over possible Q:
pr(θ | Y, X) = sum_Q pr(θ | Y, Q) pr(Q | Y, X)

What is a Markov chain?
future given present is independent of past; update chain based on current value; can make the chain arbitrarily complicated; chain converges to a stable pattern π() we wish to study
two-state example: move from 0 to 1 with probability p, from 1 to 0 with probability q (stay put otherwise); stationary distribution π(1) = p/(p + q)

Markov chain idea
[figure: simulated two-state chain ("Mitch's" vs. "other pubs") over steps 1-21, settling near π(1) = p/(p + q)]
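A small simulation (not from the original slides) of the two-state chain above; p and q are arbitrary values chosen for illustration.

# minimal sketch: long-run frequency of state 1 equals p/(p + q)
set.seed(4)
p <- 0.3; q <- 0.1; N <- 10000
x <- numeric(N); x[1] <- 0
for (i in 2:N) {
  x[i] <- if (x[i - 1] == 0) rbinom(1, 1, p) else 1 - rbinom(1, 1, q)
}
c(simulated = mean(x), theory = p / (p + q))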

Markov chain Monte Carlo
can study arbitrarily complex models: need only specify how parameters affect each other; can reduce to specifying full conditionals
construct Markov chain with the right model: joint posterior of unknowns as the limiting stable distribution; update unknowns given data and all other unknowns; sample from full conditionals; cycle at random through all parameters; next step depends only on current values
nice Markov chains have nice properties: sample summaries make sense; consider almost as a random sample from the distribution; ergodic theorem and all that stuff

MCMC sampling of (λ, q, θ)
Gibbs sampler: genotypes q, effects θ — but not loci λ:
q_i ~ pr(q | y_i, m_i, θ, λ)
θ ~ pr(θ | y, q) ∝ pr(y | q, θ) pr(θ)
λ ~ pr(λ | m, q) ∝ pr(q | m, λ) pr(λ | m)
Metropolis-Hastings sampler: extension of Gibbs sampler; does not require the normalization pr(q | m) = sum_λ pr(q | m, λ) pr(λ)

Gibbs sampler idea: toy problem — want to study two correlated effects

could sample directly from their bivariate distribution instead use Gibbs sampler: sample each effect from its full conditional given the other pick order of sampling at random repeat many times 0 1 1 ~ N , 2 0 1 1 ~ N 2 ,1 2 2 ~ N 1 ,1 2 April 2008 UW-Madison Brian S. Yandell 51 Gibbs sampler samples: = 0.6 N = 200 samples 3 2 1 Gibbs: mean 2 2 0

1 -1 0 -2 -1 -2 -2 -2 -1 -1 0 0 1 1 2 Gibbs: mean 1 Gibbs: mean 2 Gibbs: mean 1

3 2 N = 50 samples 20 30 40 50 -1 0 1 2 -1 0 1 2 50 100 150 200

50 100 150 200 Markov chain index -2 -1 -1 0 0 1 1 2 Gibbs: mean 2 Gibbs: mean 2 2 0 -2

-1 0 1 2 3 -1 0 1 2 3 Gibbs: mean 1 -2 -2 -1 0 1 2 Gibbs: mean 2 3

Gibbs: mean 1 2 10 -2 1 50 0 40 -1 30 Gibbs: mean 2 20 -2 10 Markov chain index 3 0 0

Markov chain index April 2008 -2 Gibbs: mean 1 0 Markov chain index UW-Madison Brian S. Yandell -2 Gibbs: mean 1 52 Metropolis-Hastings idea f() 0.4 want to study distribution f() unless too complicated 0.2 take Monte Carlo samples Metropolis-Hastings samples: 0.0 take samples using ratios of f
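A short Gibbs sampler (not from the original slides) for the toy problem above, alternating the two full conditionals with ρ = 0.6; starting values and sample size are arbitrary.

# minimal sketch: Gibbs sampler for two correlated effects
set.seed(3)
rho <- 0.6; N <- 200
theta <- matrix(NA, N, 2)
th1 <- 0; th2 <- 0                                  # arbitrary starting values
for (i in 1:N) {
  if (runif(1) < 0.5) {                             # pick order of sampling at random
    th1 <- rnorm(1, rho * th2, sqrt(1 - rho^2))
    th2 <- rnorm(1, rho * th1, sqrt(1 - rho^2))
  } else {
    th2 <- rnorm(1, rho * th1, sqrt(1 - rho^2))
    th1 <- rnorm(1, rho * th2, sqrt(1 - rho^2))
  }
  theta[i, ] <- c(th1, th2)
}
cor(theta)[1, 2]                                    # close to rho for large N
plot(theta, xlab = "theta1", ylab = "theta2")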

Metropolis-Hastings idea
want to study a distribution f(θ); take Monte Carlo samples unless too complicated
Metropolis-Hastings samples: take samples using ratios of f; propose a new value θ* near (?) the current value θ from some distribution g; accept the new value with probability a:
a = min{ 1, [f(θ*) g(θ | θ*)] / [f(θ) g(θ* | θ)] }
Gibbs sampler: a = 1 always
[figure: target f(θ) and proposal g centered at the current value]

Metropolis-Hastings samples
[figures: MCMC sequences and histograms of pr(θ | Y) for N = 200 and N = 1000 samples, with narrow and wide proposal g]

MCMC realization
[figure: MCMC sequence of 10,000 samples and histogram of pr(θ | Y)]
added twist: occasionally propose from the whole domain
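A minimal Metropolis-Hastings sketch (not from the original slides) for a generic target f; the target, proposal width and run length are made-up choices, and the symmetric random-walk proposal makes the g terms cancel in the acceptance ratio above.

# minimal sketch: random-walk Metropolis-Hastings for an arbitrary target f(theta)
f <- function(th) dgamma(th, shape = 2, rate = 0.5)  # illustrative target density
N <- 10000; width <- 1                               # "narrow" vs "wide" proposal
th <- numeric(N); th[1] <- 2
for (i in 2:N) {
  prop <- rnorm(1, th[i - 1], width)                 # propose near current value
  a <- min(1, f(prop) / f(th[i - 1]))                # symmetric g, so g terms cancel
  th[i] <- if (runif(1) < a) prop else th[i - 1]
}
hist(th, breaks = 50, main = "samples approximate f", xlab = "theta")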

What is the genetic architecture A?
components of genetic architecture: how many QTL? where are the loci (λ)? how large are the effects (θ)? which pairs of QTL are epistatic?
use priors to weight posterior: toward guess from previous analysis; improve efficiency of sampling from the posterior; increase samples from architectures of interest

Markov chain for number m
add a new locus, drop a locus, or update the current model
[diagram: states 0, 1, …, m−1, m, m+1 with birth, death and update moves]

jumping QTL number and loci
[figures: trace of the number of QTL and of QTL locations (cM) across the MCMC run]

Whole Genome Phenotype Model
E(y) = μ + β(q) = μ + X Γ β
y = n phenotypes; X = n × L design matrix — in theory covers the whole genome of size L cM; X determined by genotypes and model space; only need terms associated with q = n × n_QTL genotypes at QTL
Γ = diag(γ) = genetic architecture; γ = 0, 1 indicators for QTLs or pairs of QTLs; |γ| = sum(γ) = size of genetic architecture; λ = loci, determined implicitly by γ
β = genotypic effects (main and epistatic); μ = reference

Methods of Model Search
Reversible jump (transdimensional) MCMC: sample possible loci λ (λ determines possible γ); collapse to the model containing just those QTL; bookkeeping when model dimension changes
Composite model with indicators: include all terms in the model (λ and β); sample possible architecture γ (γ determines λ); can use LASSO-type prior for model selection
Shrinkage model: set γ = 1 (include all loci); allow variances of β to differ (shrink coefficients to zero)
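A toy construction (not from the original slides, and not the R/qtlbim internals) of the model E(y) = μ + XΓβ above; the genotype codes, architecture and effects are all made up.

# minimal sketch: simulate phenotypes from y = mu + X %*% (gamma * beta) + e
set.seed(1)
n <- 100; L <- 5                                   # individuals, candidate loci
X <- matrix(sample(c(-1, 1), n * L, replace = TRUE), n, L)   # backcross-type codes
gamma <- c(1, 0, 0, 1, 0)                          # architecture: loci 1 and 4 in the model
beta  <- c(0.8, 0, 0, -0.5, 0)                     # genotypic effects
mu    <- 10
y <- mu + X %*% (gamma * beta) + rnorm(n, sd = 1)
sum(gamma)                                         # |gamma| = size of genetic architecture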

UW-Madison Brian S. Yandell 60 sampling across QTL models A 0 1 m+1 2 m L action steps: draw one of three choices update QTL model A with probability 1-b(A)-d(A) update current model using full conditionals sample QTL loci, effects, and genotypes add a locus with probability b(A) propose a new locus along genome innovate new genotypes at locus and phenotype effect decide whether to accept the birth of new locus drop a locus with probability d(A) propose dropping one of existing loci decide whether to accept the death of locus April 2008 UW-Madison Brian S. Yandell 61 reversible jump MCMC consider known genotypes q at 2 known loci models with 1 or 2 QTL

M-H step between 1-QTL and 2-QTL models model changes dimension (via careful bookkeeping) consider mixture over QTL models H nqtl 1 : Y 0 1 ( q1 ) e nqtl 2 : Y 0 1 ( q1 ) 2 ( q 2 ) e April 2008 UW-Madison Brian S. Yandell 62 Gibbs sampler with loci indicators partition genome into intervals at most one QTL per interval interval = 1 cM in length assume QTL in middle of interval use loci to indicate presence/absence of QTL in each interval = 1 if QTL in interval = 0 if no QTL Gibbs sampler on loci indicators see work of Nengjun Yi (and earlier work of Ina Hoeschele) Y 0 1 1 ( q1 ) 2 2 ( q1 ) e April 2008 UW-Madison Brian S. Yandell 63

Bayesian shrinkage estimation soft loci indicators strength of evidence for j depends on variance of j similar to > 0 on grey scale include all possible loci in model pseudo-markers at 1cM intervals Wang et al. (2005 Genetics) Shizhong Xu group at U CA Riverside Y 0 1 ( q1 ) 2 ( q1 ) ... e 2 j ( q j ) ~ N ( 0 , j ), April 2008 2 j ~ inverse - chisquare UW-Madison Brian S. Yandell 64 epistatic interactions model space issues Fisher-Cockerham partition vs. tree-structured? 2-QTL interactions only? general interactions among multiple QTL?

retain model hierarchy (include main QTL)? model search issues epistasis between significant QTL check all possible pairs when QTL included? allow higher order epistasis? epistasis with non-significant QTL whole genome paired with each significant QTL? pairs of non-significant QTL? Yi et al. (2005, 2007) April 2008 UW-Madison Brian S. Yandell 65 Reversible Jump Details reversible jump MCMC details can update model with m QTL have basic idea of jumping models now: careful bookkeeping between models RJ-MCMC & Bayes factors Bayes factors from RJ-MCMC chain components of Bayes factors March 2011 UW-Madison Brian S. Yandell 66 reversible jump choices

action step: draw one of three choices (m = number of QTL in model)
update step with probability 1 − b(m+1) − d(m): update current model loci, effects, genotypes as before
add a locus with probability b(m+1): propose a new locus; innovate effect and genotypes at the new locus; decide whether to accept the birth of the new locus
drop a locus with probability d(m): pick one of the existing loci to drop; decide whether to accept the death of the locus

RJ-MCMC updates
[diagram: map m, traits y, genotypes Q, loci λ, effects θ; update with probability 1 − b(m+1) − d(m); add locus with probability b(m+1); drop locus with probability d(m)]

propose to drop a locus
choose an existing locus: equal weight for all loci? more weight to loci with small effects?
q_d(λ_r; m+1) = 1/(m+1)
drop effect & genotypes at the old locus; adjust effects at other loci for collinearity; this is the reverse jump of Green (1995)
check acceptance: do not drop the locus, effects & genotypes until the move is accepted

propose to add a locus
[diagram: existing loci λ1, λ2, …, λm and proposed λm+1 on a genome of length L]
propose a new locus uniformly over the genome: q_b(λ) = 1/L
actually need to be more careful (R van de Ven, pers. comm.): choose an interval between loci already in the model (include 0, L) with probability proportional to the interval length, (λ2 − λ1)/L, then uniform chance within this interval, 1/(λ2 − λ1)
need genotypes at the locus & a model effect: innovate effect & genotypes at the new locus; draw genotypes based on recombination (prior), no dependence on the trait model yet; draw effect as in Green's reversible jump; adjust for collinearity: modify other parameters accordingly
check acceptance

acceptance of reversible jump
accept birth of new locus with probability min(1, A); accept death of old locus with probability min(1, 1/A)
A = [ pr(λ_{m+1}, m+1 | Y, X) / pr(λ_m, m | Y, X) ] × [ d(m+1) q_d(λ_r; m+1) / (b(m) q_b(λ_{m+1})) ] × |J|
move probabilities b(m), d(m+1); birth & death proposals q_b(λ_{m+1}), q_d(λ_r; m+1)
Jacobian J between models — the "fudge factor"; see the stepwise regression example: J = σ / (s_{r|others} √n)

reversible jump idea
expand the idea of MCMC to compare models; adjust for parameters in different models: augment the smaller model with innovations; constraints on the larger model
calculus change of variables is key: add or drop parameter(s); carefully compute the Jacobian
consider stepwise regression: Mallick (1995) & Green (1995); efficient calculation with Householder decomposition

model selection in regression
known regressors (e.g. markers); models with 1 or 2 regressors

jump between models; centering regressors simplifies calculations
m = 1: Yi = μ + a (Qi1 − Q̄1) + ei
m = 2: Yi = μ + a1 (Qi1 − Q̄1) + a2 (Qi2 − Q̄2) + ei

slope estimate for 1 regressor
recall the least squares estimate of the slope; note the relation of slope to correlation
a = r_1y s_y / s1, with r_1y = sum_i (Qi1 − Q̄1)(Yi − Ȳ) / (n s1 s_y)
s1² = sum_i (Qi1 − Q̄1)² / n, s_y² = sum_i (Yi − Ȳ)² / n

2 correlated regressors: slopes adjusted for the other regressor
a1 = (r_1y − r12 r_2y) s_y / (s1 (1 − r12²)), a2 = (r_2y − r12 r_1y) s_y / (s2 (1 − r12²))
c21 = r12 s2 / s1, s_{2·1}² = sum_i (Qi2 − Q̄2 − c21(Qi1 − Q̄1))² / n

Gibbs Sampler for Model 1
mean: μ ~ N( Bn Ȳ, Bn σ²/n ), Bn = n/(n + κ)
slope: a ~ N( Bn sum_i (Qi1 − Q̄1)(Yi − Ȳ) / (n s1²), Bn σ²/(n s1²) )
variance: σ² ~ inv-χ²( ν + n, (ν τ² + n v̂)/(ν + n) ), v̂ = sum_i (Yi − Ȳ − a(Qi1 − Q̄1))² / n

Gibbs Sampler for Model 2
mean: μ ~ N( Bn Ȳ, Bn σ²/n )
slopes: a2 ~ N( Bn sum_i (Qi2 − Q̄2)(Yi − Ȳ − a1(Qi1 − Q̄1)) / (n s_{2·1}²), Bn σ²/(n s_{2·1}²) ), and similarly for a1
variance: σ² ~ inv-χ²( ν + n, (ν τ² + n v̂)/(ν + n) ), v̂ = sum_i (Yi − Ȳ − sum_k a_k(Qik − Q̄k))² / n

updates from 2 -> 1: drop the 2nd regressor, adjust the other regressor
a = a1 + a2 c21, a2 -> 0

updates from 1 -> 2: add the 2nd slope, adjusting for collinearity; adjust other slope & variance
z ~ N(0, 1), J = σ / (s_{2·1} √n)
a2 = â2 + z J, with â2 = sum_i (Qi2 − Q̄2)(Yi − Ȳ − a1(Qi1 − Q̄1)) / (n s_{2·1}²)
a1 = a − a2 c21 = a − (â2 + z J) c21

model selection in regression
known regressors (e.g. markers); models with 1 or 2 regressors; jump between models; augment with a new innovation z
m = 1: parameters (μ, a, σ²; z), innovation z ~ N(0, 1); transformation to model 2: a2 = â2 + z J, a1 = a − a2 c21
m = 2: parameters (μ, a1, a2, σ²); transformation to model 1: a = a1 + a2 c21, z -> 0

change of variables
change variables from model 1 to model 2; calculus issues for integration: need to formally account for the change of variables; infinitesimal steps in integration (da1 da2) involve partial derivatives (next page)
(a1, a2) = g(a; z) with a1 = a − (â2 + z J) c21, a2 = â2 + z J
∫∫ π(a1, a2 | Y, Q1, Q2) da1 da2 = ∫∫ π( g(a; z) | Y, Q1, Q2 ) |J| da dz

Jacobian & the calculus
the Jacobian sorts out the change of variables; careful: easy to mess up here!
∂g(a; z)/∂(a, z) = [ 1, −c21 J ; 0, J ], so det ∂g(a; z)/∂(a, z) = J
da1 da2 = |det ∂g(a; z)/∂(a, z)| da dz = J da dz
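A quick numerical check (not from the original slides) of the Jacobian above for the 1 -> 2 jump, using hypothetical values of c21, J and the map (a, z) -> (a1, a2).

# minimal sketch: the Jacobian determinant of the jump transformation equals J
a2hat <- 0.3; J <- 0.15; c21 <- 0.7
g <- function(a, z) c(a1 = a - (a2hat + z * J) * c21,   # a1 = a - a2*c21
                      a2 = a2hat + z * J)               # a2 = a2hat + z*J
Jmat <- matrix(c(1, 0, -c21 * J, J), 2, 2)   # columns: d/da, d/dz
det(Jmat)                                    # equals J = 0.15
eps <- 1e-6                                  # finite-difference check
num <- cbind((g(1 + eps, 0.5) - g(1, 0.5)) / eps,
             (g(1, 0.5 + eps) - g(1, 0.5)) / eps)
det(num)                                     # also approximately J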

geometry of reversible jump
[figures: "Move Between Models" and "Reversible Jump Sequence" — samples of (a1, a2) for m = 1 and m = 2, with c21 = 0.7]

QT additive reversible jump
[figures: a short sequence of (a1, a2) updates; first 1000 samples with m < 3]

credible set for additive effects
90% & 95% sets based on normal; regression line corresponds to the slope of updates
[figure: (a1, a2) samples with credible regions]

collinear QTL = correlated effects
[figures: additive effect 1 vs. additive effect 2 at 4 weeks (cor = −0.81) and 8 weeks (cor = −0.7)]
linked QTL = collinear genotypes; correlated estimates of effects (negative if in coupling phase); sum of linked effects usually fairly constant

multivariate updating of effects
before: more computations when m > 2; avoid matrix inverse — Cholesky decomposition of the matrix; simultaneous updates of effects at all loci

after: accept the new locus based on sampled new genotypes at the locus and sampled new effects at all loci; also long-range position updates

8 QTL simulation (Stephens, Fisch 1998)
n = 200, h² = .5; SF detected 3 QTL
Bayesian IM detection:
n     h²    detect
200   .5    2
200   .8    4
500   .9    7
500   .97   8
simulated QTL (locations λ_j and effects):
QTL no.  chrom c_j  marker m_j  position (cM)  additive effect α_j  dominance effect δ_j
1        1          1           11             -3                   0
2        1          3           10             -5                   0
3        3          4           2              2                    0
4        6          6           7              -3                   0
5        6          8           12             3                    0
6        8          2           12             -4                   0
7        8          3           14             1                    0
8        9          10          15             2                    0

posterior number of QTL
[figure: posterior frequency vs. number of QTL (7-13)]
geometric prior with mean 0.5 seems to have no influence on the posterior here

posterior genetic architecture: chromosome count vector
m    chr: 1 2 3 4 5 6 7 8 9 10   count
8         2 0 1 0 0 2 0 2 1 0    3371
9         3 0 1 0 0 2 0 2 1 0     751
7         2 0 1 0 0 2 0 1 1 0     377
9         2 0 1 0 0 3 0 2 1 0     218
9         2 0 1 0 0 2 0 2 2 0     198

Bayesian model averaging
average summaries over multiple architectures; avoid selection of a single best model; focus on better models; examples in data talk later

model averaging for 8 QTL

[figure: model-averaged (a) scaled QTL intensity, (b) additive effect, (c) dominance effect vs. genome position]

model averaging: focus on chr. 8
[figures: (a) scaled QTL intensity, all samples; (b) frequency, samples with 8 QTL; vs. position (cM)]

5. Model Assessment
balance model fit against model complexity (model fit, prediction, interpretation, parameters)
smaller model: may miss key features; may be biased; easier to interpret; low variance
bigger model: fits better; no bias; more complicated; high variance
information criteria: penalize likelihood by model size; compare IC = −2 log L(model | data) + penalty(model size)
Bayes factors: balance posterior by prior choice; compare pr(data | model)

Bayes factors
ratio of model likelihoods; ratio of posterior to prior odds for architectures; average over unknown effects (θ) and loci (λ)
BF = pr(data | model A1) / pr(data | model A2)
roughly equivalent to BIC: BIC maximizes over unknowns, BF averages over unknowns
2 log10(BF) ≈ 2 LOD − (change in model size) log10(n)
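A small sketch (not from the original slides) of estimating a Bayes factor as the ratio of posterior to prior odds, using made-up MCMC draws of the number of QTL and an assumed Poisson prior.

# minimal sketch: BF for 2 vs 1 QTL from sampled architectures and the prior
nqtl_samples <- c(rep(1, 300), rep(2, 600), rep(3, 100))   # hypothetical posterior draws
prior_mean <- 3
post  <- table(factor(nqtl_samples, levels = 0:10)) / length(nqtl_samples)
prior <- dpois(0:10, lambda = prior_mean)
bf_21 <- (post["2"] / prior[3]) / (post["1"] / prior[2])   # posterior odds / prior odds
bf_21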

issues in computing Bayes factors
BF insensitive to shape of prior on A: geometric, Poisson, uniform; precision improves when the prior mimics the posterior
BF sensitivity to prior variance on effects: prior variance should reflect data variability; resolved by using hyper-priors: automatic algorithm, no need for user tuning
easy to compute Bayes factors from samples: apply Bayes rule and solve for pr(y | m, A): pr(A | y, m) = pr(y | m, A) pr(A | m) / constant, so pr(data | model) = constant × pr(model | data) / pr(model); the posterior pr(A | y, m) is the marginal histogram

Bayes factors and genetic model A
|A| = number of QTL; pattern of QTL across genome; gene action and epistasis
prior pr(A) chosen by user; posterior pr(A | y, m) sampled as a marginal histogram, shape affected by prior pr(A)
BF_{A, A+1} = [ pr(A | y, m) / pr(A) ] / [ pr(A+1 | y, m) / pr(A+1) ]
[figure: prior probability vs. m = number of QTL for geometric, exponential, Poisson, and uniform priors]

BF sensitivity to fixed prior for effects
β_qj ~ N(0, σ_G²/m), σ_G² = h² σ_total², h² fixed
[figure: Bayes factors B12, B23, B34, B45 vs. fixed hyper-prior heritability h²]

BF insensitivity to random effects prior
β_qj ~ N(0, σ_G²/m), σ_G² = h² σ_total², (1/2) h² ~ Beta(a, b)
[figures: hyper-prior density 2 Beta(a, b) for (a, b) = (0.25, 9.75), (0.5, 9.5), (1, 9), (2, 10), (1, 3), (1, 1); Bayes factors B12, B23, B34 vs. E(h²)]

How sensitive is the posterior to choice of prior?
simulations with 0, 1 or 2 QTL; strong effects (additive = 2, variance = 1); linked loci 36 cM apart; clear differences by actual number of QTL; works well with 100,000 samples, better with 1M

effect of Poisson prior mean: a larger prior mean shifts the posterior up, but the prior does not take over

simulation study: prior
2 QTL at 15, 65 cM; n = 100, 200; h² = 40%; vary prior mean from 1 to 10 QTL (Poisson prior); 10 independent simulations; examine posterior mean, probability

posterior m depends on prior
[figures: posterior mean number of QTL and posterior probability of 2 QTL vs. prior mean number of QTL, for n = 100 and n = 200, h² = 40%]

cumulative posterior as prior mean changes
[figures: cumulative posterior probability for the number of QTL as the prior mean ranges from 1 to 10, for n = 100 and n = 200, h² = 40%]

effect of prior mean on posterior m
[figures: prior and posterior for m with prior mean 2 and 4, when 0, 1, or 2 QTL are present]

effect of prior shape on posterior
[figures: posterior density of nQtl under priors (a) Poisson(1), (b) Poisson(3), (c) Poisson(6), (d) Geom(2/3), (e) S&A(4), (f) uniform(0, 30)]

marginal BF scan by QTL
compare models with and without a QTL at λ; average over all possible models; estimate as a ratio of samples with/without a QTL at λ; scan over genome for peaks

2log(BF) seems to have similar properties to LPD
BF_λ = pr(y | m, model with λ) / pr(y | m, model without λ)

6. analysis of hyper data
marginal scans of genome: detect significant loci; infer main and epistatic QTL, GxE
infer most probable genetic architecture: number of QTL; chromosome pattern of QTL with epistasis
diagnostic summaries: heritability, unexplained variation

marginal scans of genome
LPD and 2log(BF) tests for each locus; estimates of QTL effects at each locus separately
infer main effects and epistasis: main effect for each locus (blue); epistasis for loci paired with another (purple)
identify epistatic QTL in 1-D scan; infer pairing in 2-D scan

UW-Madison Brian S. Yandell 109 hyper data: scanone April 2008 UW-Madison Brian S. Yandell 110 2log(BF) scan with 50% HPD region April 2008 UW-Madison Brian S. Yandell 111 2-D plot of 2logBF: chr 6 & 15 April 2008 UW-Madison Brian S. Yandell 112 1-D Slices of 2-D scans: chr 6 & 15 April 2008 UW-Madison Brian S. Yandell 113

1-D Slices of 2-D scans: chr 6 & 15 April 2008 UW-Madison Brian S. Yandell 114 What is best genetic architecture? How many QTL? What is pattern across chromosomes? examine posterior relative to prior prior determined ahead of time posterior estimated by histogram/bar chart Bayes factor ratio = pr(model|data) / pr(model) April 2008 UW-Madison Brian S. Yandell 115 How many QTL? posterior, prior, Bayes factor ratios prior strength of evidence MCMC error April 2008 UW-Madison Brian S. Yandell 116

most probable patterns
pattern               nqtl  posterior  prior      bf     bfse
1,4,6,15,6:15          5    0.03400    2.71e-05   24.30  2.360
1,4,6,6,15,6:15        6    0.00467    5.22e-06   17.40  4.630
1,1,4,6,15,6:15        6    0.00600    9.05e-06   12.80  3.020
1,1,4,5,6,15,6:15      7    0.00267    4.11e-06   12.60  4.450
1,4,6,15,15,6:15       6    0.00300    4.96e-06   11.70  3.910
1,4,4,6,15,6:15        6    0.00300    5.81e-06   10.00  3.330
1,2,4,6,15,6:15        6    0.00767    1.54e-05    9.66  2.010
1,4,5,6,15,6:15        6    0.00500    1.28e-05    7.56  1.950
1,2,4,5,6,15,6:15      7    0.00267    6.98e-06    7.41  2.620
1,4                    2    0.01430    1.51e-04    1.84  0.279
1,1,2,4                4    0.00300    3.66e-05    1.59  0.529
1,2,4                  3    0.00733    1.03e-04    1.38  0.294
1,1,4                  3    0.00400    6.05e-05    1.28  0.370
1,4,19                 3    0.00300    5.82e-05    1.00  0.333

what is the best estimate of QTL?
find the most probable pattern: 1,4,6,15,6:15 has posterior of 3.4%
estimate locus across all nested patterns: exact pattern seen ~100/3000 samples; nested pattern seen ~2000/3000 samples
estimate 95% confidence interval using quantiles:
      chrom  locus  locus.LCL  locus.UCL  n.qtl
247   1      69.9   24.44875   95.7985    0.8026667
245   4      29.5   14.20000   74.3000    0.8800000
248   6      59.0   13.83333   66.7000    0.7096667
246   15     19.5   13.10000   55.7000    0.8450000

how close are other patterns?
size & shade ~ posterior
distance between patterns: sum of squared attenuation; match loci between patterns; squared attenuation = (1 − 2r)²; sq.atten in scale of LOD & LPD
multidimensional scaling: MDS projects distance onto 2-D; think mileage between cities

how close are other patterns?
[figure: MDS plot of patterns]

UW-Madison Brian S. Yandell 120 diagnostic summaries April 2008 UW-Madison Brian S. Yandell 121 7. Software for Bayesian QTLs R/qtlbim publication CRAN release Fall 2006 Yandell et al. (2007 Bioinformatics) properties cross-compatible with R/qtl epistasis, fixed & random covariates, GxE extensive graphics April 2008 UW-Madison Brian S. Yandell 122 R/qtlbim: software history Bayesian module within WinQTLCart WinQTLCart output can be processed using R/bim Software history

initially designed (Satagopan Yandell 1996); major revision and extension (Gaffney 2001); R/bim to CRAN (Wu, Gaffney, Jin, Yandell 2003); R/qtlbim total rewrite (Yandell et al. 2007)

other Bayesian software for QTLs
R/bim*: Bayesian Interval Mapping; Satagopan Yandell (1996; Gaffney 2001); CRAN; no epistasis; reversible jump MCMC algorithm; version available within WinQTLCart (statgen.ncsu.edu/qtlcart)
R/qtl*: Broman et al. (2003 Bioinformatics); CRAN; multiple imputation algorithm for 1, 2 QTL scans & limited multi-QTL fits
Bayesian QTL / Multimapper: Sillanpää Arjas (1998 Genetics); www.rni.helsinki.fi/~mjs; no epistasis; introduced posterior intensity for QTLs
(no released code) Stephens & Fisch (1998 Biometrics)

no epistasis R/bqtl C Berry (1998 TR) CRAN no epistasis, Haley Knott approximation * Jackson Labs (Hao Wu, Randy von Smith) provided crucial technical support April 2008 UW-Madison Brian S. Yandell 124 many thanks Karl Broman Tom Osborn David Butruille Jackson Labs Marcio Ferrera Gary Churchill Josh Udahl Hao Wu Pablo Quijada Randy von Smith Alan Attie U AL Birmingham Jonathan Stoehr David Allison Hong Lan Nengjun Yi Susie Clee Tapan Mehta Jessica Byers Samprit Banerjee

Mark Keller Ram Venkataraman Daniel Shriner Michael Newton Hyuna Yang Daniel Sorensen Daniel Gianola Liang Li my students Jaya Satagopan Fei Zou Patrick Gaffney Chunfang Jin Elias Chaibub W Whipple Neely Jee Young Moon USDA Hatch, NIH/NIDDK (Attie), NIH/R01 (Yi, Broman) April 2008 UW-Madison Brian S. Yandell 125

Recently Viewed Presentations

  • LIN3022 Natural Language Processing Lecture 10 Albert Gatt

    LIN3022 Natural Language Processing Lecture 10 Albert Gatt

    What is NLG? Subfield of AI and Computational Linguistics that is concerned with systems to produce understandable texts (in English or other languages)
  • Introduction to - Mrs. Tully&#x27;s Website for Students

    Introduction to - Mrs. Tully's Website for Students

    The muses are typically invoked at or near the beginning of an epic poem or classical Greek hymn. An epic : Includes supernatural elements, involves an epic hero, was originally sung, often to a harp or lyre, ... Introduction to...
  • Ethical Trading - Lloyd&#x27;s of London

    Ethical Trading - Lloyd's of London

    U.s. Dodd-frank act: Practical Implications for Lloyd's Joe Gunset, General Counsel, Lloyd's America John P. Mulhern, Partner, Dewey & LeBoeuf
  • Horatio I, i

    Horatio I, i

    Horatio's diction through the play's initial scene shifts from a barrage of metaphors to action-oriented prose, foreshadowing further shifts in setting and mood. Shakespeare uses long-winded and figurative language to characterize Horatio as someone who is intelligent and not afraid...
  • Post-Impressionism:1886-1905

    Post-Impressionism:1886-1905

    Post-Impressionism (as a art name) was coined by writer, editor, and museum director, Roger Fry, who used the term in 1910 and 1912 for two shows at London's Grafton Galleries by works of Cezanne, Gauguin, and Matisse. The name stuck
  • Costs and Benefit Analysis of tax exemptions for

    Costs and Benefit Analysis of tax exemptions for

    Costs and Benefit Analysis of tax exemptions for export promotion schemes Sukumar Mukhopadhyay ICRIER Export Promotion Schemes namely, Drawback Duty Entitlement Pass Book (DEPB) Export Promotion Capital Goods To get an idea of the coverage of the schemes in relation...
  • The Respiratory System - West Virginia University

    The Respiratory System - West Virginia University

    Bacteria get labeled w/antibodies, killed by complement, macrophages, killer cells. This slide is just another way to organize things for immune responseto help study. I won't use in lecture. Combining non-specific and adaptive immune response
  • Exploring Majors and Careers 2013 - PWRFaculty.org

    Exploring Majors and Careers 2013 - PWRFaculty.org

    Why Do I Need A Resume? Demonstrate . qualifications and fit. Summary of education, experience, skills and activities. It's a marketing document. To get the interview!