Lunch Seminar MESIO UPC-UB, 2015-2016

 

Speaker:

Guadalupe Gómez

 

Department of Statistics and Operations Research, UPC

Title:

Sample size determination when the hazard ratio is not constant

Date:

11-11-2015

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
Standard methods for summarizing the treatment difference in a comparative, randomized clinical study with a specific event time as the primary endpoint (PE) are based on Kaplan-Meier curves, the logrank test and the hazard ratio, which is assumed to be approximately constant over time. When designing the study, one usually uses an event-driven scheme to determine the sample size, and this formulation is often based on the proportional hazards (PH) assumption.
In this talk we will discuss sample size determination for the comparison of two groups when the PH assumption does not hold. The starting point is the asymptotic relative efficiency (ARE) method (Gómez and Lagakos, 2013), which derives efficiency guidelines for deciding whether to expand a study's primary endpoint from E1 to E* = E1 ∪ E2. The ARE method assumes, among other realistic assumptions, constant hazard ratios HR1 and HR2 for E1 and E2, and can be used as a tool to compute the required sample size if E* is chosen as the PE.
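The event-driven calculation that serves as the starting point can be sketched under the PH assumption using Schoenfeld's formula, which converts a target hazard ratio into a required number of events and then into a sample size. This is a minimal illustration of the standard PH-based design, not the ARE-based procedure discussed in the talk; the function names, the hazard ratio 0.7 and the 40% event probability are hypothetical.

import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Events needed to detect hazard ratio `hr` with the logrank test
    under proportional hazards (Schoenfeld's formula)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided type I error
    z_b = norm.ppf(power)           # power = 1 - type II error
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

def required_sample_size(hr, p_event, **kw):
    """Turn the event count into a sample size, given the probability that a
    patient experiences the (possibly composite) endpoint during the study."""
    return math.ceil(required_events(hr, **kw) / p_event)

# Hypothetical design values: HR = 0.7 and a 40% event probability for E*.
print(required_sample_size(0.7, p_event=0.40))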

 

 

Speaker:

Elena Fernández

 

Department of Statistics and Operations Research, UPC

Title:

Uncertainty in Discrete Location

Date:

16-12-2015

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
One of the main challenges currently facing Operations Research is the integration of elements related to uncertainty into the models under study. In this talk we address several aspects of this topic through discrete location. Using the Facility Location Problem, we illustrate the qualitative changes brought about by incorporating uncertainty in the demand. We redefine the concept of a solution and analyze different modeling alternatives, together with their corresponding advantages and difficulties. For some cases we present equivalent deterministic models that allow exact solution via integer linear programming techniques. We also discuss alternative solution methods, such as those based on Sample Average Approximation.
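As a minimal illustration of the Sample Average Approximation idea mentioned at the end of the abstract, the sketch below replaces the expectation over random demand by an average over sampled scenarios and then optimizes a tiny uncapacitated facility location problem by exhaustive search. All costs, the Poisson demand model and the instance size are made-up assumptions; real instances would require an integer programming solver rather than enumeration.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 3 candidate facilities, 4 customers.
open_cost = np.array([10.0, 12.0, 8.0])              # fixed opening costs
unit_cost = np.array([[1, 2, 4, 3],                  # per-unit shipping costs
                      [3, 1, 2, 4],
                      [2, 3, 1, 2]], dtype=float)

def scenario_cost(open_set, demand):
    """Cost of one demand scenario: each customer is served by its cheapest open facility."""
    facilities = list(open_set)
    serve = unit_cost[facilities].min(axis=0) * demand
    return open_cost[facilities].sum() + serve.sum()

def saa_solution(n_scenarios=500):
    """Sample Average Approximation: optimize the average cost over sampled demand scenarios."""
    demands = rng.poisson(lam=5.0, size=(n_scenarios, 4))
    best = None
    for k in range(1, 4):
        for subset in itertools.combinations(range(3), k):
            avg = np.mean([scenario_cost(subset, d) for d in demands])
            if best is None or avg < best[1]:
                best = (subset, avg)
    return best  # (facilities to open, estimated expected cost)

print(saa_solution())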

 

 

 

Speaker:

Jan Graffelman

 

Department of Statistics and Operations Research, UPC

Title:

The Statistics of Hardy-Weinberg Equilibrium

Date:

24-03-2016

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
The Hardy-Weinberg law is a cornerstone principle of modern genetics. The law is more than a century old, and was independently stated in 1908 by the English mathematician Godfrey Hardy and the German physician Wilhelm Weinberg. The law states that, in the absence of disturbing factors (migration, differential survival and others), allele and genotype frequencies in a biological population will achieve equilibrium values within one generation and remain stable thereafter. The latter condition is known as Hardy-Weinberg equilibrium (HWE).

The principle is still relevant today, as testing markers for HWE is a standard step in almost all genetic studies. It is known that deviation from equilibrium is often associated with genotyping errors (misclassification of homozygotes as heterozygotes or the reverse), and equilibrium testing is an effective device to detect such errors. However, disequilibrium can also arise from other causes.

Moreover, Hardy-Weinberg equilibrium is typically assumed in many other statistical procedures that use genetic marker data, such as gene-disease association studies, relatedness investigations, and others. The HWE law has been a topic of intense research, and hundreds of research papers have been dedicated to it. Research related to the principle continues as new types of genetic data arise. Pearson's chi-square test has been the most popular procedure for testing genetic markers for equilibrium for decades, though nowadays computer-intensive exact procedures have become increasingly popular.
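For a biallelic marker, the Pearson chi-square test mentioned above compares the observed genotype counts with the counts expected under HWE given the estimated allele frequency. The sketch below is a minimal version with one degree of freedom; the genotype counts are hypothetical and no continuity correction is applied.

from scipy.stats import chi2

def hwe_chisq(n_AA, n_AB, n_BB):
    """Pearson chi-square test for Hardy-Weinberg equilibrium at a biallelic marker."""
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)                   # estimated frequency of allele A
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    observed = [n_AA, n_AB, n_BB]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)                  # statistic and p-value (1 df)

# Hypothetical genotype counts (AA, AB, BB)
print(hwe_chisq(298, 489, 213))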

In this talk I will explain some basic genetics, address the issue of statistically testing markers for Hardy-Weinberg equilibrium in large genomic databases, and comment upon recent work related to markers that reside on the X chromosome.

 

 

 

Speaker:

Marta Pérez-Casany

 

Department of Statistics and Operations Research, UPC

Title:

The Marshall-Olkin Transformation Applied to Count Probability Distributions

Date:

06-04-2016

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
In 1997, Marshall and Olkin defined a way of generalizing a family of probability distributions by increasing its number of parameters by one. This mechanism has been used extensively to generalize continuous probability distributions, but very little research has been done on the discrete case.

The talk has three parts. In the first, the Marshall-Olkin transformation is defined and some of the main properties that result from applying it in the discrete case are presented. The second part is devoted to the Marshall-Olkin Extended Zipf distribution and its application to social networks. Finally, in the third part a natural way of generalizing the Marshall-Olkin transformation is presented.
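For reference, the Marshall and Olkin (1997) transformation replaces a survival function S(x) by S_alpha(x) = alpha*S(x) / (1 - (1 - alpha)*S(x)), with alpha > 0. The sketch below applies this to a count distribution through its survival function and recovers the transformed probability mass function; the Poisson baseline is used purely for illustration (the talk focuses on the Zipf case), and the function names are my own.

import numpy as np
from scipy.stats import poisson

def marshall_olkin_pmf(base_sf, alpha, kmax):
    """Apply S_alpha(k) = alpha*S(k) / (1 - (1 - alpha)*S(k)) to a count
    distribution given by its survival function S(k) = P(X > k), and return
    the transformed pmf on 0, 1, ..., kmax."""
    k = np.arange(kmax + 1)
    sf = base_sf(k)                                 # S(k) of the baseline
    sf_mo = alpha * sf / (1 - (1 - alpha) * sf)     # transformed survival function
    sf_prev = np.concatenate(([1.0], sf_mo[:-1]))   # S_alpha(k-1), with S_alpha(-1) = 1
    return sf_prev - sf_mo                          # pmf(k) = S_alpha(k-1) - S_alpha(k)

# Illustrative baseline: Poisson(3); alpha < 1 shifts mass toward small counts.
pmf = marshall_olkin_pmf(lambda k: poisson.sf(k, mu=3.0), alpha=0.5, kmax=20)
print(pmf.round(4), pmf.sum())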

 

 

 

Speaker:

Pedro Delicado

 

Department of Statistics and Operations Research, UPC

Title:

Modeling functional stochastic processes, with applications to fertility dynamics

Date:

27-04-2016

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
A functional stochastic process is a function that, for each value of its argument, returns a random function (instead of a scalar random variable). In this work we propose a simple and interpretable model for the analysis of functional stochastic processes, based on a tensor-product representation of the process. To this end, the marginal covariance operator is defined and its eigenfunctions are computed. We show that the resulting marginal principal components and product principal components yield representations of the functional stochastic process that are optimal in a well-defined sense. Given a sample of independent realizations of the underlying functional stochastic process, we propose a simple method to obtain the components of this model. Consistency and asymptotic convergence rates for the proposed estimates are also established. The methods are illustrated by modeling the dynamics of annual age-specific fertility rates for 17 countries over 56 years. This analysis shows that the proposed approach leads to revealing interpretations of the model components and to interesting conclusions.
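A hedged sketch of the tensor-product representation described above, in notation of my own choosing (the exact formulation in the talk may differ): the marginal covariance operators are eigendecomposed and the process is expanded in products of their eigenfunctions.

% Marginal covariance in the first argument (integrating out the second)
% and its eigendecomposition:
G_S(u, v) = \int \operatorname{Cov}\bigl(X(u, t), X(v, t)\bigr)\, dt
          = \sum_{j \ge 1} \lambda_j \, \psi_j(u) \, \psi_j(v),
% with an analogous marginal operator G_T and eigenfunctions \varphi_k(t).
% Tensor-product (product principal component) representation of the process:
X(s, t) \approx \mu(s, t) + \sum_{j=1}^{J} \sum_{k=1}^{K} \chi_{jk} \, \psi_j(s) \, \varphi_k(t),
\qquad
\chi_{jk} = \iint \bigl(X(s, t) - \mu(s, t)\bigr) \, \psi_j(s) \, \varphi_k(t) \, ds \, dt.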

 

(Joint work with Kehui Chen and Hans-Georg Müller)

 

Speaker:

Miguel Santolino

 

Departament d'Econometria, Estadística i Economia Espanyola, UB

Title:

Challenges in risk quantification

Date:

11-05-2016

Hour:

14:00-15:00 (Pizza and drinks provided!)

Room:

Sala de Juntes de l’FME

 



Abstract:
The choice of a risk measure is a major issue in decision-making in many areas, including insurance, finance, health, safety, environmental, adversarial and catastrophic risks. Many different risk measures are available to practitioners, but the selection of the most suitable risk measure for a given context is generally controversial. We recently proposed a new class of four-parameter distortion risk measures, called GlueVaR risk measures, and investigated their tail properties. We showed that these GlueVaR risk measures can be matched to a wider variety of contexts than traditional risk measures.

 

A key element in characterizing a risk measure is the underlying risk attitude that is assumed when this measure is used for risk assessment. We design a set of instruments which provide a precise portrait of the underlying risk position of a decision-maker when selecting a particular risk measure. There is a strong relationship between risk measures and capital allocation problems. Risk measures are often required to fulfill aggregation properties. Capital allocation problems fall on the disaggregation side of risk management. We explore the connection between capital allocation principles and compositional data.
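As background for the measures discussed above, the sketch below computes empirical VaR and TVaR (expected shortfall) for a sample of losses and combines them linearly in the spirit of GlueVaR; my understanding is that GlueVaR can be written as a weighted combination of TVaR at two confidence levels and VaR at one level, but the confidence levels, weights, loss model and function names used here are illustrative assumptions, not the authors' specification.

import numpy as np

def var(losses, level):
    """Empirical Value-at-Risk: the `level`-quantile of the loss distribution."""
    return np.quantile(losses, level)

def tvar(losses, level):
    """Empirical Tail Value-at-Risk: average loss at or beyond VaR at `level`."""
    q = var(losses, level)
    return losses[losses >= q].mean()

def gluevar(losses, alpha=0.95, beta=0.995, w1=0.3, w2=0.5):
    """Weighted combination of TVaR at two levels and VaR at one level,
    in the spirit of GlueVaR (weights illustrative; w3 = 1 - w1 - w2)."""
    w3 = 1.0 - w1 - w2
    return w1 * tvar(losses, beta) + w2 * tvar(losses, alpha) + w3 * var(losses, alpha)

# Hypothetical heavy-tailed loss sample
rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(var(losses, 0.95), tvar(losses, 0.95), gluevar(losses))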