## Progressive Hybrid and Adaptive Censoring and Related Inference

### The Art of Progressive Censoring (2014-01-01): 327-340

Inferential results for progressive hybrid and adaptive progressive Type-II censored data are shown. A special focus is given to one- and two-parameter exponential distributions.
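For the one-parameter exponential case mentioned above, the standard maximum likelihood estimator under (plain) progressive Type-II censoring has a simple closed form. The sketch below illustrates it; the function name, failure times, and censoring scheme are illustrative assumptions, not taken from the text.

```python
def exp_mle_progressive(x, scheme):
    """MLE of the exponential mean under progressive Type-II censoring.

    Standard result: theta_hat = (1/m) * sum_i (R_i + 1) * x_i, since each
    of the R_i units withdrawn at x_i contributes x_i to the total time on test.
    """
    m = len(x)
    return sum((r + 1) * xi for xi, r in zip(x, scheme)) / m

# Illustrative (made-up) ordered failure times and scheme, n = 3 + 3 = 6
theta_hat = exp_mle_progressive([0.3, 0.7, 1.1], [2, 0, 1])
print(theta_hat)
```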

## Progressive Type-II Censoring: Distribution Theory

### The Art of Progressive Censoring (2014-01-01): 21-66

The distribution theory of progressively Type-II censored order statistics is presented with a focus on particular baseline distributions like exponential and generalized Pareto distributions. The discussion includes joint, marginal, and conditional distributions as well as the fundamental quantile representation. The connection to generalized order statistics and sequential order statistics is highlighted. Further topics discussed are shapes of density functions, recurrence relations, exceedances, and discrete progressively Type-II censored order statistics.
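The quantile representation mentioned above also yields a simple simulation algorithm. The following sketch generates a progressively Type-II censored sample from a standard exponential baseline using the uniform-spacings method of Balakrishnan and Sandhu (1995); the function name and the censoring scheme used in the example are illustrative assumptions.

```python
import math
import random

def progressive_type2_sample(scheme, rng):
    """Progressively Type-II censored sample from Exp(1) via the
    uniform-spacings algorithm of Balakrishnan and Sandhu (1995)."""
    m = len(scheme)
    # Step 1: independent U(0,1) variates
    w = [rng.random() for _ in range(m)]
    # Step 2: V_i = W_i ** (1 / (i + R_m + R_{m-1} + ... + R_{m-i+1}))
    v = []
    for i in range(1, m + 1):
        expo = i + sum(scheme[m - i:])
        v.append(w[i - 1] ** (1.0 / expo))
    # Step 3: U_{i:m:n} = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    u, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= v[m - i]
        u.append(1.0 - prod)
    # Step 4: invert the Exp(1) quantile function
    return [-math.log(1.0 - ui) for ui in u]

x = progressive_type2_sample([2, 0, 1, 2], random.Random(42))  # n = 4 + 5 = 9
print(x)
```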

## Exact two-sample nonparametric confidence, prediction, and tolerance intervals based on ordinary and progressively type-II right censored data

### TEST (2010-05-01) 19: 68-91

It is shown how various exact nonparametric inferences based on an ordinary right or progressively Type-II right censored sample can be generalized to situations where two independent samples are combined. We derive the relevant formulas for the combined ordered samples to construct confidence intervals for a given quantile, prediction intervals, and tolerance intervals. The results are valid for every continuous distribution function. The key results are the derivations of the marginal distribution functions in the combined ordered samples. In the case of ordinary Type-II right censored order statistics, it is shown that the combined ordered sample is no longer distributed as order statistics. Instead, the distribution in the combined ordered sample is closely related to progressively Type-II censored order statistics.
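As background to such distribution-free intervals, the one-sample case already admits exact coverage valid for every continuous distribution. The sketch below computes the coverage of the interval formed by two order statistics for a given quantile (the function name and the example values of n, r, s are assumptions for illustration).

```python
from math import comb

def quantile_ci_coverage(n, r, s, p):
    """Exact coverage of the distribution-free interval (X_(r), X_(s)) for
    the p-th quantile of any continuous distribution:
    P(X_(r) <= xi_p <= X_(s)) = sum_{k=r}^{s-1} C(n,k) p^k (1-p)^(n-k)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(r, s))

# For n = 20 and the median (p = 0.5), the interval (X_(6), X_(15))
cov = quantile_ci_coverage(20, 6, 15, 0.5)
print(round(cov, 4))
```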

## Selecting the Best Population Using a Test for Equality Based on Minimal Wilcoxon Rank-sum Precedence Statistic

### Methodology and Computing in Applied Probability (2007-06-01) 9: 263-305

In this paper, we first give an overview of precedence-type test procedures. Then we propose a nonparametric test based on early failures for the equality of two lifetime distributions against two alternatives concerning the best population. This procedure utilizes the minimal Wilcoxon rank-sum precedence statistic (Ng and Balakrishnan, 2002, 2004), which can detect differences between populations based on early (100*q*%) failures. Hence, this procedure can be useful in life-testing experiments in biological as well as industrial settings. After proposing the test procedure, we derive the exact null distribution of the test statistic in the two-sample case with equal or unequal sample sizes. We also present the exact probability of correct selection under the Lehmann alternative. Then, we generalize the test procedure to the *k*-sample situation. Critical values for some sample sizes are presented. Next, we examine the performance of this test procedure under a location-shift alternative through Monte Carlo simulations. Two examples illustrate the use of our test procedure for selecting the best population.
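A minimal sketch of the statistic referred to above, under the standard convention that the minimal rank sum is obtained by placing all unobserved Y-failures immediately after the last observed one (the function name and input encoding are assumptions for illustration):

```python
def minimal_wilcoxon_precedence(m_counts, n_y):
    """Minimal Wilcoxon rank-sum precedence statistic (sketch).

    m_counts[j] = number of X-failures observed between the (j-1)-th and
    j-th Y-failures; the experiment stops at the r-th Y-failure, where
    r = len(m_counts). The minimal value of the Y-sample rank sum places
    all n_y - r censored Y-failures immediately after the r-th one.
    """
    r = len(m_counts)
    rank, w = 0, 0
    for m_j in m_counts:
        rank += m_j            # X-failures preceding this Y-failure
        rank += 1              # the Y-failure itself
        w += rank
    for _ in range(n_y - r):   # censored Y's placed right after the r-th
        rank += 1
        w += rank
    return w

print(minimal_wilcoxon_precedence([1, 0], 3))
```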

## Piecewise Linear Approximations for Cure Rate Models and Associated Inferential Issues

### Methodology and Computing in Applied Probability (2016-12-01) 18: 937-966

Cure rate models offer a convenient way to model time-to-event data by allowing a proportion of individuals in the population to be completely cured so that they never face the event of interest (say, death). The most studied cure rate models can be defined through a competing cause scenario in which the random variables corresponding to the time-to-event for each competing cause are conditionally independent and identically distributed, while the actual number of competing causes is a latent discrete random variable. The main interest is then in the estimation of the cured proportion as well as in developing inference about failure times of the susceptibles. The existing literature consists of parametric and non/semi-parametric approaches, while the expectation maximization (EM) algorithm offers an efficient tool for the estimation of the model parameters due to the presence of right censoring in the data. In this paper, we study the cases wherein the number of competing causes is either a binary or Poisson random variable and a piecewise linear function is used for modeling the hazard function of the time-to-event. Exact likelihood inference is then developed based on the EM algorithm, and the inverse of the observed information matrix is used for developing asymptotic confidence intervals. The Monte Carlo simulation study demonstrates the accuracy of the proposed non-parametric approach compared to the results obtained from the true parametric model. The proposed model and the inferential method are finally illustrated with a data set on cutaneous melanoma.
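The binary competing-cause case described above reduces to the classical mixture cure model. A minimal simulation sketch, assuming an exponential lifetime for the susceptibles and a fixed censoring time (the function name and all parameter values are illustrative assumptions):

```python
import math
import random

def simulate_mixture_cure(n, cure_prob, rate, cens_time, rng):
    """Simulate right-censored data from the binary (mixture) cure model:
    with probability cure_prob a subject is cured (no competing cause) and
    is censored; otherwise its lifetime is Exp(rate), censored at cens_time."""
    times, status = [], []
    for _ in range(n):
        if rng.random() < cure_prob:      # cured: event never occurs
            times.append(cens_time)
            status.append(0)              # 0 = censored
        else:
            t = -math.log(1.0 - rng.random()) / rate
            times.append(min(t, cens_time))
            status.append(1 if t <= cens_time else 0)
    return times, status

times, status = simulate_mixture_cure(200, 0.3, 1.0, 5.0, random.Random(7))
```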

## Quantile-Quantile Plots and Goodness-of-Fit Test

### Handbook of Tables for Order Statistics from Lognormal Distributions with Applications (1999-01-01): 39-40

In any statistical study based on the assumption of a particular distribution for the data at hand, one will naturally be interested in assessing the validity of that assumption; more specifically, one will be interested in testing the hypothesis that the data have come from that specific distribution, wherein only the functional form of the distribution is assumed to be known while it may involve some unknown parameters. For example, we may be interested in testing whether the data at hand have possibly arisen from the three-parameter lognormal distribution in (4.1), wherein we may assume that all three parameters μ, σ and *k* are unknown.
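For the simpler two-parameter lognormal case, the points of a quantile-quantile plot are just the sorted data paired with the hypothesized quantiles at the plotting positions. A minimal sketch (the helper name, plotting positions, and example data are assumptions for illustration):

```python
import math
from statistics import NormalDist

def lognormal_qq_points(data, mu=0.0, sigma=1.0):
    """Points for a lognormal Q-Q plot: the sorted data against the
    hypothesized quantiles F^{-1}((i - 0.5)/n), where for a two-parameter
    lognormal F^{-1}(p) = exp(mu + sigma * z_p)."""
    n = len(data)
    xs = sorted(data)
    qs = [math.exp(mu + sigma * NormalDist().inv_cdf((i - 0.5) / n))
          for i in range(1, n + 1)]
    return list(zip(qs, xs))

pts = lognormal_qq_points([0.5, 1.2, 0.9, 2.3, 1.0])
```

A roughly linear pattern in these points supports the distributional assumption; systematic curvature suggests a lack of fit.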

## Precedence-type tests based on record values

### Metrika (2008-09-01) 68: 233-255

Precedence-type tests based on order statistics are simple and efficient nonparametric tests that are very useful in the context of life-testing, and they have been studied quite extensively in the literature; see Balakrishnan and Ng (Precedence-type tests and applications. Wiley, Hoboken, 2006). In this paper, we consider precedence-type tests based on record values and specifically develop the record precedence test, the record maximal precedence test and the record-rank-sum test. We derive their exact null distributions and tabulate some critical values. Then, under the general Lehmann alternative, we derive the exact power functions of these tests and discuss their power under the location-shift alternative. We also establish that the record precedence test is the uniformly most powerful test for testing against the one-parameter family of Lehmann alternatives. Finally, we discuss the situation when we have an insufficient number of records to apply the record precedence test and then make some concluding remarks.


## Empirical phi-divergence test statistics for the difference of means of two populations

### AStA Advances in Statistical Analysis (2017-02-13): 1-28

Empirical phi-divergence test statistics have been shown to be a useful technique for testing a simple null hypothesis, improving the finite-sample behavior of the classical likelihood ratio test statistic, as well as for model misspecification problems, in both cases for the one-population problem. This paper introduces this methodology for two-sample problems. A simulation study illustrates situations in which the new test statistics become a competitive tool with respect to the classical *z* test and the likelihood ratio test statistic.

## Estimation and Modelling

### Quantile-Based Reliability Analysis (2013-01-01): 327-359

Earlier, in Chaps. 3 and 7, several types of models for lifetime data were discussed through their quantile functions. These will be candidate distributions in specific situations. The selection of one of them, or of a new one, is dictated by how well it reflects the data-generating mechanism and satisfies other criteria such as goodness of fit. Once the question of an initial choice of the model is resolved, the problem is then to test its adequacy against the observed data. This is accomplished by first estimating the parameters of the model and then carrying out a goodness-of-fit test. This chapter addresses the problem of estimation as well as some other modelling aspects.

In choosing the estimates, our basic objective is to obtain estimated values that are as close as possible to the true values of the model parameters. One method is to seek estimates that match the basic characteristics of the model with those of the sample. These include the method of percentiles and methods of moments involving conventional moments, *L*-moments and probability weighted moments. These methods of estimation are explained along with a discussion of the properties of the estimates. In the quantile form of analysis, the method of maximum likelihood can also be employed. The approach of this method, when there is no tractable distribution function, is described. Many functions required in reliability analysis are estimated by nonparametric methods. These include the quantile function itself and other functions such as the quantile density function, the hazard quantile function and the percentile residual quantile function. We review some important results in these cases that furnish the asymptotic distribution of the estimates and the proximity of the proposed estimates to the true values.
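The moment-matching methods mentioned above can be made concrete with the first two sample *L*-moments, computed from the standard probability-weighted-moment estimators (the function name and example data are assumptions for illustration):

```python
def sample_l_moments(data):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0 (the sample mean) and l2 = 2*b1 - b0 (the L-scale), where
    b1 = (1/n) * sum_{i=2}^{n} ((i-1)/(n-1)) * x_(i) over the ordered sample."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i - 1) / (n - 1) * x[i - 1] for i in range(2, n + 1)) / n
    return b0, 2 * b1 - b0

l1, l2 = sample_l_moments([2.0, 4.0, 6.0, 8.0])
print(l1, l2)
```

Matching these sample values to the model's theoretical *L*-moments, expressed through its quantile function, yields the parameter estimates.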