Penalized function-on-function linear quantile regression

Ufuk Beyaztas*, Han Lin Shang, Semanur Saricam

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

We introduce a novel function-on-function linear quantile regression model to characterize the entire conditional distribution of a functional response for a given functional predictor. A tensor cubic B-spline expansion is used to represent the regression parameter functions, and a derivative-free optimization algorithm is used to obtain the estimates. Quadratic roughness penalties are applied to the coefficients to control the smoothness of the estimates, and the optimal degree of smoothness depends on the quantile of interest. An automatic grid-search algorithm based on the Bayesian information criterion is used to select the optimal values of the smoothing parameters. Via a series of Monte Carlo experiments and an empirical data analysis using Mary River flow data, we evaluate the estimation and predictive performance of the proposed method, and the results compare favorably with several existing methods.
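To make the ingredients of the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation): the coefficient surface beta(s, t) is expanded in a tensor product of cubic B-splines, a quantile (pinball) loss with quadratic second-difference roughness penalties is minimized by a generic derivative-free optimizer, and the two smoothing parameters are chosen from a small grid with a BIC-style score. The synthetic data, basis dimensions, the choice of Powell's method, and the BIC proxy are all assumptions made here for brevity and are not taken from the paper.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def bspline_design(grid, n_basis, degree=3):
    """Design matrix of a clamped cubic B-spline basis evaluated on 'grid'."""
    a, b = grid.min(), grid.max()
    inner = np.linspace(a, b, n_basis - degree + 1)
    knots = np.r_[[a] * degree, inner, [b] * degree]
    return BSpline(knots, np.eye(n_basis), degree)(grid)   # (len(grid), n_basis)

def pinball(u, tau):
    """Quantile (check) loss rho_tau(u), averaged over all residuals."""
    return np.mean(u * (tau - (u < 0)))

# Toy functional data (assumed, for illustration): n curves, X on grid s, Y on grid t.
n, p, q, tau = 50, 30, 25, 0.5
s, t = np.linspace(0, 1, p), np.linspace(0, 1, q)
X = np.array([a0 * np.sin(2 * np.pi * s) + a1 * np.cos(2 * np.pi * s)
              for a0, a1 in rng.normal(size=(n, 2))])
beta_true = np.outer(np.cos(np.pi * s), np.sin(np.pi * t))
Y = (X @ beta_true) * (s[1] - s[0]) + 0.2 * rng.standard_normal((n, q))

# Tensor-product cubic B-spline representation of beta(s, t).
Ks, Kt = 6, 6
Bs, Bt = bspline_design(s, Ks), bspline_design(t, Kt)
Z = (X @ Bs) * (s[1] - s[0])            # Z[i, j] approximates the integral of X_i(s) B_j(s) ds
Ds = np.diff(np.eye(Ks), 2, axis=0)     # second-difference penalty matrices in s and t
Dt = np.diff(np.eye(Kt), 2, axis=0)

def objective(par, lam_s, lam_t):
    a, Theta = par[:Kt], par[Kt:].reshape(Ks, Kt)
    fit = Bt @ a + Z @ Theta @ Bt.T     # alpha(t) + integral of X(s) beta(s, t) ds, on the grid
    pen = lam_s * np.sum((Ds @ Theta) ** 2) + lam_t * np.sum((Theta @ Dt.T) ** 2)
    return pinball(Y - fit, tau) + pen

# Crude grid search over the two smoothing parameters with a BIC-style score;
# the paper's automatic BIC grid search is more refined than this proxy.
best = None
for lam_s in (1e-3, 1e-1):
    for lam_t in (1e-3, 1e-1):
        res = minimize(objective, np.zeros(Kt + Ks * Kt), args=(lam_s, lam_t),
                       method="Powell", options={"maxfev": 20000})
        a_hat, Th = res.x[:Kt], res.x[Kt:].reshape(Ks, Kt)
        loss = pinball(Y - (Bt @ a_hat + Z @ Th @ Bt.T), tau)
        bic = n * q * np.log(loss) + np.log(n * q) * (Kt + Ks * Kt)   # rough proxy
        if best is None or bic < best[0]:
            best = (bic, lam_s, lam_t, res.x)

Theta_hat = best[3][Kt:].reshape(Ks, Kt)
beta_hat = Bs @ Theta_hat @ Bt.T        # estimated coefficient surface on the (s, t) grid
print("chosen (lambda_s, lambda_t):", best[1:3])
```

The two penalty terms act separately along the s and t directions of the coefficient array, which is why two smoothing parameters are tuned; refitting with a different tau would generally lead the criterion to select different amounts of smoothing, in line with the abstract's remark that the optimal degree of smoothness depends on the quantile of interest.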
Original language: English
Pages (from-to): 301-329
Number of pages: 29
Journal: Computational Statistics
Volume: 40
Issue number: 1
Early online date: 17 Apr 2024
DOIs
Publication status: Published - Jan 2025

Keywords

  • Functional data
  • Derivative-free optimization
  • Quantile regression
  • Smoothing parameter
