Uniform Linear Representation for Post-selection Inference

One of the main hurdles in solving the problem of post-selection inference (PoSI) is the non-smoothness introduced in the functional by data-dependent model selection. Selective inference, a method of conditional inference put forward by the Stanford group, overcomes this hurdle by explicitly approximating the distribution of the estimator conditional on the selected model. Uniform inference, proposed by Berk et al. (2013), overcomes it by turning the PoSI problem into a simultaneous inference problem. While selective inference requires a very specific model-selection procedure, uniform inference solves the PoSI problem for all model-selection procedures at once.

Although the framework of Berk et al. (2013) is classical and commonly used in the literature, it is restrictive, allowing only fixed covariates and normally distributed homoscedastic errors in linear regression. Bachoc et al. (2016) extended this methodology to allow for non-Gaussian heteroscedastic errors, but still work with fixed covariates. Because of the Gaussian-errors assumption of Berk et al. (2013), the main technical tool, an asymptotic uniform linear representation, was buried in their analysis until it was made explicit by Bachoc et al. (2016). The analysis of Bachoc et al. (2016), however, is limited to inference about a fixed number of functionals or, in other words, a fixed number of covariates.

The aim of this work is to extend the framework of Berk et al. (2013) and Bachoc et al. (2016) to allow for a diverging number of covariates (almost exponential in the sample size) and for general M-estimators (with possible misspecification), including linear regression and generalized linear models. This is achieved by proving an asymptotic uniform-in-model linear representation for all these problems.
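In notation not taken from the abstract itself, a uniform-in-model linear representation of the kind described above can be sketched as follows: writing $\hat\beta_M$ for the M-estimator in submodel $M$, $\beta_M$ for its target, and $\psi_M$ for a mean-zero influence function, one shows that

\[
\sup_{M \in \mathcal{M}} \left\| \hat\beta_M - \beta_M - \frac{1}{n}\sum_{i=1}^{n} \psi_M(Z_i) \right\|_\infty = o_p\!\left(n^{-1/2}\right),
\]

where the supremum runs over the collection $\mathcal{M}$ of candidate submodels. The point of the uniformity over $M$ is that the representation holds simultaneously for every model a selection procedure could pick, which is what reduces PoSI to a simultaneous inference problem about averages.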
An application of a high-dimensional central limit theorem and bootstrap consistency then yields valid post-selection inference in all these M-estimation problems.

This is joint work with the Wharton group on Linear Models, including Lawrence D. Brown, Andreas Buja, Edward George, and Linda Zhao.

References:
Francois Bachoc, David Preinerstorfer and Lukas Steinberger (2016). Uniformly valid confidence intervals post-model-selection. arXiv:1611.01043.
Richard Berk, Lawrence Brown, Andreas Buja, Kai Zhang, and Linda Zhao (2013). Valid post-selection inference. Ann. Statist. 41(2), 802-837.
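As an illustration of the bootstrap step mentioned above, the following is a minimal multiplier-bootstrap sketch, not the paper's actual procedure: in a toy OLS problem it estimates the linear-representation (influence) terms and bootstraps the maximum |t|-statistic over all coordinates to obtain a simultaneous critical value, in the spirit of the simultaneous-inference reduction. The simulation setup and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10

# Toy data: pure-noise linear model (true beta = 0); illustrative only.
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Full-model OLS fit and residuals.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Estimated linear-representation terms: beta_hat - beta ~ mean of psi_i,
# with psi_i = n * (X'X)^{-1} x_i r_i (rows of the n x p array below).
psi = n * (XtX_inv @ (X * resid[:, None]).T).T

# Sandwich-type standard error for each coordinate.
se = np.sqrt(np.sum(psi**2, axis=0)) / n

# Gaussian-multiplier bootstrap of the max-|t| statistic over coordinates,
# mimicking the supremum over functionals in the simultaneous reduction.
B = 2000
max_stats = np.empty(B)
for b in range(B):
    e = rng.standard_normal(n)                 # multiplier weights
    boot = (psi * e[:, None]).sum(axis=0) / n  # bootstrapped averages
    max_stats[b] = np.max(np.abs(boot / se))

# Simultaneous (PoSI-style) critical value and confidence intervals
# beta_hat_j +/- K * se_j, valid for every coordinate at once.
K = np.quantile(max_stats, 0.95)
lower, upper = beta_hat - K * se, beta_hat + K * se
```

Because the critical value K is a quantile of a maximum over all coordinates, it exceeds the marginal normal quantile 1.96, which is exactly the price of simultaneity that makes the resulting intervals valid after any model selection among these coordinates.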