New Releases by Tom Leonard

Tom Leonard is the author of The Effects of Lime and K on Coastal Bermudagrass Yield and Nutrient Concentrations at Two N Rates (1986), On Bayes' Theorem, Paternity Suits, and Wisconsin Law (1985), Intimate Voices (1984), Satires & Profanities (1984), and Glasgow, My Big Birdie (1983).

31 - 58 of 58 results

The Effects of Lime and K on Coastal Bermudagrass Yield and Nutrient Concentrations at Two N Rates

release date: Jan 01, 1986

On Bayes' Theorem, Paternity Suits, and Wisconsin Law

A Bayesian Approach to Model Checking

A Small Sample Evaluation of a Bayesian Design Method for Quantal Response Models

Bayes Estimation of a Multivariate Density
The problem addressed concerns the estimation of a p-dimensional multivariate density, given only a set of n observation vectors, together with information that the density function is likely to be reasonably smooth. A solution is proposed which employs up to n + p(p+1)/2 smoothing parameters, all of which may be estimated by their posterior means. This avoids the well-known difficulties, associated with even one-dimensional kernel estimators, of estimating the bandwidth or smoothing parameter by a mathematical procedure. The posterior mean value function, unconditional upon the smoothing parameters, turns out to be a data-based mixture of multivariate t-distributions. The corresponding estimate of the sampling covariance matrix may be viewed as a shrinkage estimator of the Bayes-Stein type. The results involve some finite series which may be evaluated by a straightforward simulation procedure. (Author).
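
As a rough illustration of the kind of estimator the abstract describes, the sketch below evaluates an equal-weight mixture of multivariate t kernels centred at the observations, with a shrunk sample covariance as the common scale. The degrees of freedom, the shrinkage weight, and the equal-weight mixture form are all assumptions made here for illustration, not the paper's actual construction.

```python
# Minimal sketch: a data-based mixture of multivariate t kernels with a
# Bayes-Stein-flavoured shrinkage covariance. Illustrative only; the paper's
# estimator determines its weights and scales from posterior calculations.
import numpy as np
from scipy.stats import multivariate_t

def mixture_t_density(x, data, df=5.0, shrinkage=0.1):
    """Evaluate a mixture-of-t density estimate at the point x.

    data      : (n, p) array of observation vectors
    df        : degrees of freedom of each t kernel (assumed value)
    shrinkage : weight pulling the sample covariance toward the identity,
                standing in for the shrinkage estimator mentioned above
    """
    p = data.shape[1]
    S = np.cov(data, rowvar=False)                        # sample covariance
    S_shrunk = (1 - shrinkage) * S + shrinkage * np.eye(p)
    # Equal-weight mixture of t kernels centred at the observations
    return np.mean([multivariate_t(loc=m, shape=S_shrunk, df=df).pdf(x)
                    for m in data])

rng = np.random.default_rng(0)
obs = rng.normal(size=(50, 2))
print(mixture_t_density(np.zeros(2), obs))
```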

An Inferential Approach to the Bioassay Design Problem
The bioassay design problem may usefully be considered within an inferential framework, rather than by reference to a formal decision-theoretic procedure based upon a number of special assumptions. Three graphical techniques are described to assist the user's selection of new design points. Firstly, a plot, against dose level, of the predictive probability of the death of the next rat will help the user to choose design points relating to particular regions of LD values; comparison with the maximum likelihood estimate of the response curve leads to informal stopping rules. Secondly, new approximations to the posterior density of the effective dose are proposed for each LD value. These are related to the marginal likelihood ideas of Sprott and Kalbfleisch. Thirdly, mixtures of these densities lead to design measures for the distribution of future dose levels. These seem to make criteria like D-optimality rather tangential to the real design issue. The ideas are illustrated graphically by reference to a fertility example due to Bliss.
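
A minimal sketch of the quantity behind the first graphical technique: the predictive probability of a response at each candidate dose, averaged over posterior draws. A two-parameter logistic response curve is assumed here, and the posterior draws are taken as given (e.g. from simulation); the model form and every name below are illustrative, not the paper's.

```python
# Sketch: predictive probability of the next subject responding at each dose,
# averaged over posterior draws of an assumed logistic dose-response model.
import numpy as np

def predictive_response_prob(doses, alpha_draws, beta_draws):
    """P(next subject responds | dose), averaged over posterior samples."""
    doses = np.asarray(doses)[:, None]            # (d, 1)
    logits = alpha_draws + beta_draws * doses     # (d, m) broadcast over draws
    return (1.0 / (1.0 + np.exp(-logits))).mean(axis=1)

# Toy posterior draws and a grid of candidate design points (all made up)
rng = np.random.default_rng(1)
a = rng.normal(-2.0, 0.5, size=1000)
b = rng.normal(1.5, 0.3, size=1000)
grid = np.linspace(0.0, 4.0, 9)
for d, pr in zip(grid, predictive_response_prob(grid, a, b)):
    print(f"dose {d:4.1f}: predictive P(response) = {pr:.3f}")
```

Plotting this curve against dose, alongside the maximum likelihood response curve, gives the informal comparison the abstract mentions.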

A Bayesian Approach to Markovian Models for Normal and Poisson Data
A Bayesian updating procedure is proposed for filtering the process parameters in the two-stage Markovian constant-variance model for time-varying normal data in the situation where the signal-to-noise ratio is unknown. A forecasting procedure is described which yields the entire predictive distribution of future observations; a numerical study involves an on-line analysis of chemical process concentration readings. A similar method is developed for Poisson data and applied to the analysis of an industrial control chart.
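
For flavour, here is a minimal filtering recursion for a local-level (random walk plus noise) model, a simple member of the class of models the abstract describes. Note that this sketch treats the signal-to-noise ratio q/v as known, whereas the paper's contribution is precisely the case where it is unknown; the readings below are made up.

```python
# Sketch: recursive (Kalman-style) updating for a local-level normal model.
# Illustrative stand-in for the two-stage Markovian constant-variance model.
def local_level_filter(ys, v=1.0, q=0.5, m0=0.0, c0=10.0):
    """Return the filtered state mean and variance after each observation.

    v : observation noise variance, q : state evolution variance (both assumed
    known here, so the signal-to-noise ratio q/v is fixed in advance).
    """
    m, c = m0, c0
    out = []
    for y in ys:
        c_pred = c + q                 # predict: state variance grows by q
        k = c_pred / (c_pred + v)      # gain: weight given to the new reading
        m = m + k * (y - m)            # update mean toward the observation
        c = (1 - k) * c_pred           # update variance
        out.append((m, c))
    return out

readings = [17.0, 16.6, 16.3, 16.1, 17.1, 16.9]   # toy process readings
for t, (m, c) in enumerate(local_level_filter(readings)):
    print(f"t={t}: filtered mean {m:.2f}, variance {c:.3f}")
```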

Applications of the EM Algorithm to the Estimation of Bayesian Hyperparameters
Applications of the EM algorithm to the estimation of Bayesian hyperparameters are discussed and reviewed in the context of the author's philosophy involving the inductive and pragmatic modelling of sampling distributions and prior structures. Frequently the hyperparameters may be estimated from the data, thus avoiding the subjective assessment of these values. The ideas are applied to multiple regression models, histograms, and multinomial distributions. A numerical example is described in the context of smoothing the cell probabilities of several multinomial distributions. (Author).
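
A small sketch in the same spirit: smoothing several multinomial tables with a Dirichlet prior whose hyperparameters are estimated from the data. It uses Minka's fixed-point update for the Dirichlet-multinomial marginal likelihood as a convenient stand-in for the EM-based estimation the abstract describes; the counts are invented.

```python
# Sketch: empirical-Bayes estimation of Dirichlet hyperparameters, then
# posterior-mean smoothing of the cell probabilities of several tables.
import numpy as np
from scipy.special import digamma

def fit_dirichlet_hyperparams(counts, iters=200):
    """counts: (J, K) array, one row of cell counts per multinomial table."""
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=1)
    a = np.ones(counts.shape[1])          # initial hyperparameters
    for _ in range(iters):
        # Minka's fixed point for the Dirichlet-multinomial likelihood
        num = digamma(counts + a).sum(axis=0) - counts.shape[0] * digamma(a)
        den = (digamma(totals + a.sum()) - digamma(a.sum())).sum()
        a = a * num / den
    return a

tables = np.array([[12, 3, 5], [9, 6, 2], [14, 1, 7]])
a_hat = fit_dirichlet_hyperparams(tables)
# Smoothed (posterior-mean) cell probabilities for each table
smoothed = (tables + a_hat) / (tables.sum(axis=1, keepdims=True) + a_hat.sum())
print(a_hat, smoothed, sep="\n")
```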

Some Penalized Likelihood Procedures for Smoothing Probability Densities
Some methods are considered for the estimation of probability densities. They employ a linear approximation to either the density or the logistic density transform. The coefficients in the approximation are estimated by maximum likelihood, and the number of terms is judged via an information criterion. Hence the traditionally difficult problem of judging the degree of smoothness is handled in a relatively simple manner. Criteria considered include the penalties proposed by Akaike and Schwarz for model complexity, together with an empirical criterion based upon a plot of the log-likelihoods. The practical procedures are related to an asymptotic consistency argument, and a number of numerical examples are presented.
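
A toy version of the idea: assume the log-density on [0, 1] is a polynomial (one simple form of linear approximation to the logistic density transform), fit the coefficients by maximum likelihood, and choose the number of terms by Akaike's criterion. The basis, the domain, and the grid normalisation are illustrative choices, not the paper's.

```python
# Sketch: exponential-family density with polynomial log-density, with the
# number of terms selected by AIC. Normalisation is done on a fixed grid.
import numpy as np
from scipy.optimize import minimize

GRID = np.linspace(0.0, 1.0, 400)

def neg_loglik(coefs, x):
    powers = np.arange(1, len(coefs) + 1)
    log_unnorm = lambda t: np.asarray(t)[..., None] ** powers @ coefs
    # Integral of exp(log f) over [0, 1] approximated by the grid mean
    log_z = np.log(np.exp(log_unnorm(GRID)).mean())
    return -(log_unnorm(x).sum() - len(x) * log_z)

def fit_by_aic(x, max_terms=6):
    best = None
    for m in range(1, max_terms + 1):
        res = minimize(neg_loglik, np.zeros(m), args=(x,))
        aic = 2 * res.fun + 2 * m        # Akaike's penalty for m coefficients
        if best is None or aic < best[0]:
            best = (aic, m, res.x)
    return best

rng = np.random.default_rng(2)
data = rng.beta(2.0, 5.0, size=200)      # toy sample on [0, 1]
aic, m, coefs = fit_by_aic(data)
print(f"chosen number of terms: {m}, AIC: {aic:.1f}")
```

Swapping the penalty 2*m for m*log(n) gives the Schwarz variant the abstract also mentions.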

Some Philosophies of Inference and Modelling
During the Spring semester of 1981, the Mathematics Research Center held a weekly statistical discussion series as a precursor to its special year on Scientific Inference, Data Analysis, and Robustness. The many discussants included G.E.P. Box, D.V. Lindley, B.W. Silverman, A. Herzberg, C.F. Wu, B. Joiner, and D. Rubin. Many aspects of statistics were discussed, including the Box philosophy of deductive and inductive reasoning, and Lindley's coherent Bayesian viewpoint. The present paper attempts to review the discussion series constructively, and to add a number of retrospective comments and suggestions. (Author).

Why Do We Need Significance Levels?
Significance tests are commonly used in many application areas as attempts to formally confirm or refute specific conclusions. For example, in the social sciences (e.g. psychology, sociology, and econometrics) there is often much more emphasis on data-fitting and seeking 'significant' results than on developing proper mathematical models which relate in an inductively sensible way to the real-life problem. In the present paper a new formulation is used to demonstrate that significance tests tend to be much too ready to reject the null hypothesis for large sample sizes. It is recommended that the usual percentage points should be replaced by quantities depending in a particular way upon sample size, but not upon a choice of significance level. The phenomena discussed would appear to be particularly relevant to the area of scientific reporting. For example, many results in applied journals which might have been viewed as 'significant', because they yield a low p-value, may in fact serve to detract from the very scientific theory which they claim to substantiate. For large sample sizes, the techniques proposed in this paper permit a larger range of viable null hypotheses than is experienced under fixed-size significance testing. It should therefore be easier to use them to find a data-credible model which is also reasonable in real-life terms.
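
The core phenomenon is easy to demonstrate numerically. Below, a fixed 5% z-test declares a practically negligible effect 'significant' once n is large enough, while a threshold that grows with sample size does not; the Schwarz/BIC-style cutoff sqrt(log n) used here is chosen only for illustration, and the paper's proposed quantities differ in detail.

```python
# Sketch: fixed-level testing versus a sample-size-dependent threshold.
import math

def z_statistic(effect, sigma, n):
    return effect / (sigma / math.sqrt(n))

effect, sigma = 0.02, 1.0           # a tiny, practically negligible effect
for n in (100, 10_000, 1_000_000):
    z = z_statistic(effect, sigma, n)
    fixed = abs(z) > 1.96                      # fixed 5% two-sided test
    scaled = abs(z) > math.sqrt(math.log(n))   # cutoff grows with n
    print(f"n={n:>9,}: z={z:6.2f}  fixed-level rejects: {fixed}  "
          f"scaled threshold rejects: {scaled}")
```

At n = 10,000 the fixed-level test already rejects the null for this negligible effect, while the size-dependent threshold does not.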

Cleveland Customs and Superstitions

Folklore Paper
A paper, researched at Houlton, Me., in Spring 1971, on jokes collected at Ricker College.

A Priest Came on at Merkland Street

The Roles of Inductive Modelling and Coherence in Bayesian Statistics


