Journal Description
Econometrics is an international, peer-reviewed, open access journal on econometric modeling and forecasting, as well as new advances in econometrics theory, and is published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), EconLit, EconBiz, RePEc, and other databases.
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 29.6 days after submission; acceptance to publication takes 6.2 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.1 (2023); 5-Year Impact Factor: 1.4 (2023)
Latest Articles
How Financial Stress Can Impact Fiscal and Monetary Policies: Threshold VAR Analysis for Brazilian Economy
Econometrics 2024, 12(4), 37; https://doi.org/10.3390/econometrics12040037 - 5 Dec 2024
Abstract
This study examines economic policy responses in Brazil during periods of financial stress, with a particular emphasis on the dynamics of both the impulse and rule components of fiscal policy. We offer novel empirical evidence on policy responses under both low and high stress conditions, utilizing monthly data that span the past two decades. To this end, we construct a Financial Stress Index (FSI) and integrate it into a threshold-VAR framework. Additionally, we employ five distinct methodologies to decompose fiscal policy into its impulse and rule components. Our analysis yields two main findings. First, fiscal policy exhibits procyclical behavior in its impulse component and countercyclical behavior in its rule component across both regimes. Second, while monetary policy is countercyclical during high stress conditions, its impact remains largely statistically non-significant. These results suggest that policymakers should exercise caution when timing the implementation of expansionary fiscal policies, carefully considering the phase of the business cycle. Moreover, our findings carry significant implications for the ongoing discourse on fiscal stimulus and debt stabilization strategies, particularly in the context of financial stress.
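As a rough, self-contained illustration of the regime-splitting idea behind a threshold VAR, the sketch below builds a synthetic stress index, splits the sample at an assumed 80th-percentile cutoff, and fits a separate VAR in each regime. All series, variable names, and the cutoff are invented, and subsampling by regime ignores the transition dynamics that a full threshold VAR handles.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 240  # roughly two decades of monthly data
data = pd.DataFrame({
    "fsi": rng.normal(size=n).cumsum() * 0.1,  # stand-in stress index
    "fiscal_impulse": rng.normal(size=n),
    "policy_rate": rng.normal(size=n),
    "output_gap": rng.normal(size=n),
})

threshold = data["fsi"].quantile(0.8)  # assumed regime cutoff
regimes = {
    "low stress": data[data["fsi"] <= threshold].drop(columns="fsi"),
    "high stress": data[data["fsi"] > threshold].drop(columns="fsi"),
}
for name, block in regimes.items():
    res = VAR(block).fit(2)   # fixed lag order for the toy example
    irf = res.irf(12)         # 12-month IRFs, comparable across regimes
    print(name, "- obs:", res.nobs, "- IRF array:", irf.irfs.shape)
```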
Open Access Article
Instrument Selection in Panel Data Models with Endogeneity: A Bayesian Approach
by Álvaro Herce and Manuel Salvador
Econometrics 2024, 12(4), 36; https://doi.org/10.3390/econometrics12040036 - 2 Dec 2024
Abstract
This paper proposes the use of Bayesian inference techniques to search for and obtain valid instruments in dynamic panel data models where endogenous variables may exist. The use of Principal Component Analysis (PCA) allows for obtaining a reduced number of instruments in comparison to the high number of instruments commonly used in the literature, and Markov Chain Monte Carlo (MCMC) methods enable efficient exploration of the instrument space, deriving accurate point estimates of the elements of interest. The proposed methodology is illustrated in a simulated case and in an empirical application, where the partial effect of a series of determinants on the attraction of international bank flows is quantified. The results highlight the importance of promoting and developing the private sector in these economies, as well as the importance of maintaining good levels of creditworthiness.
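The dimension-reduction step is easy to mimic: compress a large instrument matrix with PCA and use the leading components as instruments in a two-stage least squares fit. A minimal sketch with simulated data follows; the names are invented, and the paper's Bayesian MCMC search over instruments is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n, n_inst = 500, 40
Z = rng.normal(size=(n, n_inst))               # many candidate instruments
e = rng.normal(size=n)                         # endogeneity source
x = Z[:, :5].sum(axis=1) + e + rng.normal(size=n)
y = 2.0 * x + 2.0 * e + rng.normal(size=n)     # true slope: 2

Z_pc = PCA(n_components=3).fit_transform(Z)    # reduced instrument set

# Stage 1: project the endogenous regressor on the PC instruments.
Z1 = np.column_stack([np.ones(n), Z_pc])
x_hat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
# Stage 2: regress the outcome on the fitted values.
X2 = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X2, y, rcond=None)[0][1]
beta_ols = np.polyfit(x, y, 1)[0]              # biased benchmark
print(f"OLS: {beta_ols:.2f}, 2SLS with PCA instruments: {beta_iv:.2f}")
```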
Open Access Article
Bayesian Inference for Long Memory Stochastic Volatility Models
by Pedro Chaim and Márcio Poletti Laurini
Econometrics 2024, 12(4), 35; https://doi.org/10.3390/econometrics12040035 - 27 Nov 2024
Abstract
We explore the application of integrated nested Laplace approximations for the Bayesian estimation of stochastic volatility models characterized by long memory. The logarithmic variance persistence in these models is represented by a Fractional Gaussian Noise process, which we approximate as a linear combination of independent first-order autoregressive processes, lending itself to a Gaussian Markov Random Field representation. Our results from Monte Carlo experiments indicate that this approach exhibits small sample properties akin to those of Markov Chain Monte Carlo estimators. Additionally, it offers the advantages of reduced computational complexity and the mitigation of posterior convergence issues. We employ this methodology to estimate volatility dependency patterns for both the S&P 500 index and major cryptocurrencies. We thoroughly assess the in-sample fit and extend our analysis to the construction of out-of-sample forecasts. Furthermore, we propose multi-factor extensions and apply this method to estimate volatility measurements from high-frequency data, underscoring its exceptional computational efficiency. Our simulation results demonstrate that the INLA methodology achieves comparable accuracy to traditional MCMC methods for estimating latent parameters and volatilities in LMSV models. The proposed model extensions show strong in-sample fit and out-of-sample forecast performance, highlighting the versatility of the INLA approach. This method is particularly advantageous in high-frequency contexts, where the computational demands of traditional posterior simulations are often prohibitive.
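The core approximation is easy to examine outside any estimation framework: the autocorrelation of fractional Gaussian noise can be matched by a nonnegative mixture of AR(1) autocorrelations. The sketch below does this by nonnegative least squares on an assumed grid of AR coefficients; the grid and lag horizon are illustrative choices, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import nnls

H = 0.8                      # Hurst parameter (long memory for H > 0.5)
lags = np.arange(0, 50)
# FGN autocorrelation: rho(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
rho_fgn = 0.5 * (np.abs(lags + 1) ** (2 * H)
                 - 2 * np.abs(lags) ** (2 * H)
                 + np.abs(lags - 1) ** (2 * H))

phis = np.array([0.5, 0.9, 0.99, 0.999])   # assumed AR(1) coefficient grid
A = np.column_stack([phi ** lags for phi in phis])
w, resid = nnls(A, rho_fgn)                # nonnegative mixture weights
print("weights:", w.round(3), "residual norm:", round(resid, 4))
```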
Open Access Article
Forecasting Wind–Photovoltaic Energy Production and Income with Traditional and ML Techniques
by Giovanni Masala and Amelie Schischke
Econometrics 2024, 12(4), 34; https://doi.org/10.3390/econometrics12040034 - 12 Nov 2024
Abstract
Hybrid production plants harness diverse climatic sources for electricity generation, playing a crucial role in the transition to renewable energies. This study aims to forecast the profitability of a combined wind–photovoltaic energy system. Here, we develop a model that integrates predicted spot prices and electricity output forecasts, incorporating relevant climatic variables to enhance accuracy. The jointly modeled climatic variables and the spot price constitute one of the innovative aspects of this work. Regarding practical application, we consider a hypothetical wind–photovoltaic plant located in Italy and use the relevant climate series to determine the quantity of energy produced. We forecast the quantity of energy, as well as income, using machine learning techniques and more traditional statistical and econometric models. We evaluate the results by splitting the dataset into estimation and test windows and using a backtesting technique. In particular, we find evidence that ML regression techniques outperform traditional econometric models. The objective is not to propose original models but to verify the effectiveness of the most recent machine learning models for this important application and to compare them with more classic linear regression techniques.
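A minimal sketch of the evaluation design: an expanding estimation window, a held-out test window, and a comparison of a classic linear regression against an ML regressor by test MSE. The synthetic features stand in for climatic variables; nothing here reproduces the paper's data or tuned models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))          # e.g., wind speed, irradiance, temperature
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=n)

models = {"ols": LinearRegression(),
          "rf": RandomForestRegressor(n_estimators=200, random_state=0)}
errors = {name: [] for name in models}
for split in range(600, n, 100):     # expanding estimation window
    for name, m in models.items():
        m.fit(X[:split], y[:split])
        pred = m.predict(X[split:split + 100])   # next test window
        errors[name].append(mean_squared_error(y[split:split + 100], pred))
for name, e in errors.items():
    print(name, "mean test MSE:", round(float(np.mean(e)), 3))
```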
Open Access Article
Likert Scale Variables in Personal Finance Research: The Neutral Category Problem
by Blain Pearson, Donald Lacombe and Nasima Khatun
Econometrics 2024, 12(4), 33; https://doi.org/10.3390/econometrics12040033 - 6 Nov 2024
Abstract
Personal finance research often utilizes Likert-type items and Likert scales as dependent variables, frequently employing standard probit and ordered probit models. If inappropriately modeled, the “neutral” category of discrete dependent variables can bias estimates of the remaining categories. Through the utilization of hierarchical models, this paper demonstrates a methodology that accounts for the econometric issues of the neutral category. We then analyze the technique through an empirical exercise relevant to personal finance research using data from the National Financial Capability Study. We demonstrate that ignoring the “neutral” category bias can lead to incorrect inferences, hindering the progression of personal finance research. Our findings underscore the importance of refining statistical modeling techniques when dealing with Likert-type data. By accounting for the neutral category, we can enhance the reliability of personal finance research outcomes, fostering improved decision-relevant insights.
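To see the mechanics, the sketch below simulates 5-point Likert responses from an ordered probit and compares the estimated slope when the middle "neutral" category is kept versus naively dropped. It uses statsmodels' OrderedModel; the data-generating values are invented, and the paper's hierarchical treatment of the neutral category is not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
latent = 0.8 * x + rng.normal(size=n)            # true slope: 0.8
y = np.digitize(latent, [-1.5, -0.5, 0.5, 1.5])  # categories 0..4; 2 = neutral

def ordered_probit(y, x):
    endog = pd.Series(pd.Categorical(y, ordered=True))
    return OrderedModel(endog, x[:, None], distr="probit").fit(
        method="bfgs", disp=False)

full = ordered_probit(y, x)
mask = y != 2                                    # naive: drop neutral answers
drop = ordered_probit(np.where(y[mask] > 2, y[mask] - 1, y[mask]), x[mask])
print("slope keeping neutral: ", np.asarray(full.params)[0])
print("slope dropping neutral:", np.asarray(drop.params)[0])
```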
Open Access Article
Enhancing Efficiency: Halton Draws in the Generalized True Random Effects Model
by David H. Bernstein
Econometrics 2024, 12(4), 32; https://doi.org/10.3390/econometrics12040032 - 6 Nov 2024
Abstract
This paper measures the impact of the number of Halton draws in excess of on technical efficiency in the generalized true random effects (four-component) stochastic frontier model estimated by simulated maximum likelihood. A substantial set of Monte Carlo simulations demonstrates that increasing the number of Halton draws to ( ) decreases the mean squared error of the total technical efficiency estimates by ( ) percent. Furthermore, increasing the number of Halton draws either improves or has no detrimental impact on correlation, mean squared error, relative bias, and upward bias for persistent, transient, and total technical efficiency. An energy sector application is included, to demonstrate how these issues can arise in practice, and how increasing Halton draws can improve parameter and efficiency estimates in empirical work.
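For intuition on why the number of Halton draws matters in simulated maximum likelihood, the sketch below approximates a one-dimensional random-effects integral with Halton draws and tracks the error as the number of draws grows. The integrand is a toy with a closed form, not the four-component frontier likelihood.

```python
import numpy as np
from scipy.stats import norm, qmc

def simulated_prob(n_draws, seed=0):
    # E[ Phi(0.5 + u) ], u ~ N(0, 1), approximated with Halton draws
    h = qmc.Halton(d=1, scramble=True, seed=seed).random(n_draws)
    u = norm.ppf(h).ravel()
    return norm.cdf(0.5 + u).mean()

exact = norm.cdf(0.5 / np.sqrt(2))   # closed form: Phi(a / sqrt(1 + s^2))
for r in (10, 100, 1000, 10000):
    print(r, "draws -> abs error:", abs(simulated_prob(r) - exact))
```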
Open Access Article
Exploring the Role of Global Value Chain Position in Economic Models for Bankruptcy Forecasting
by Mélanie Croquet, Loredana Cultrera, Dimitri Laroutis, Laetitia Pozniak and Guillaume Vermeylen
Econometrics 2024, 12(4), 31; https://doi.org/10.3390/econometrics12040031 - 5 Nov 2024
Abstract
This study addresses a significant gap in the literature by comparing the effectiveness of traditional statistical methods with artificial intelligence (AI) techniques in predicting bankruptcy among small and medium-sized enterprises (SMEs). Traditional bankruptcy prediction models often fail to account for the unique characteristics of SMEs, such as their vulnerability due to lean structures and reliance on short-term credit. This research utilizes a comprehensive database of 7104 Belgian SMEs to evaluate these models. Belgium was selected due to its unique regulatory and economic environment, which presents specific challenges and opportunities for bankruptcy prediction in SMEs. Our findings reveal that AI techniques significantly outperform traditional statistical methods in predicting bankruptcy, demonstrating superior predictive accuracy. Furthermore, our analysis highlights that a firm’s position within the Global Value Chain (GVC) impacts prediction accuracy. Specifically, firms operating upstream in the production process show lower prediction performance, suggesting that bankruptcy risk may propagate upward along the value chain. This effect was measured by analyzing the firm’s GVC position as a variable in the prediction models, with upstream firms exhibiting greater vulnerability to the financial distress of downstream partners. These insights are valuable for practitioners, emphasizing the need to consider specific performance factors based on the firm’s position within the GVC when assessing bankruptcy risk. By integrating both AI techniques and GVC positioning into bankruptcy prediction models, this study provides a more nuanced understanding of bankruptcy risks for SMEs and offers practical guidance for managing and mitigating these risks.
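A stripped-down version of the horse race: a logit benchmark against a gradient-boosting classifier, scored by out-of-sample AUC on simulated firm data with a toy GVC-position flag. Only the sample size echoes the study; features, effects, and models are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 7104                                 # matches the study's sample size
X = rng.normal(size=(n, 6))              # toy ratios: liquidity, leverage, ...
upstream = (X[:, 5] > 0).astype(float)   # toy GVC-position flag
logit_p = -2 + X[:, 0] - 1.5 * X[:, 1] * X[:, 2] + 0.5 * upstream
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # bankruptcy indicator

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, m in [("logit", LogisticRegression(max_iter=1000)),
                ("gboost", GradientBoostingClassifier())]:
    m.fit(Xtr, ytr)
    auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
    print(name, "AUC:", round(auc, 3))
```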
Open Access Article
Impact of Areal Factors on Students’ Travel Mode Choices: A Bayesian Spatial Analysis
by Amin Azimian and Alireza Azimian
Econometrics 2024, 12(4), 30; https://doi.org/10.3390/econometrics12040030 - 26 Oct 2024
Abstract
A preliminary analysis of the 2018/2019 Austin Travel Survey indicated that most off-campus students in Travis County, TX, tend to use cars rather than more sustainable transportation modes, significantly contributing to traffic congestion and environmental impact. This study aims to analyze the impacts of areal factors, including environmental and transportation factors, on students’ choices of travel mode in order to promote more sustainable transport behaviors. Additionally, we investigate the presence of spatial correlation and unobserved heterogeneity in travel data and their effects on students’ travel mode choices. We have proposed two Bayesian models—a basic model and a spatial model—with structured and unstructured random-effect terms to perform the analysis. The results indicate that the inclusion of spatial random effects considerably improves model performance, suggesting that students’ choices of mode are likely influenced by areal factors often ‘unobserved’ in many individual travel mode choice surveys. Furthermore, we found that the average slope, sidewalk density, and bus-stop density significantly affect students’ travel mode choices. These findings provide insights into promoting sustainable transport systems by addressing environmental and infrastructural factors in an effort to reduce car dependency among students, thereby supporting sustainable urban development.
(This article belongs to the Special Issue Innovations in Bayesian Econometrics: Theory, Techniques, and Economic Analysis)
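The structured spatial term in such models is typically an (intrinsic) conditional autoregressive effect. The sketch below builds the corresponding precision matrix from a made-up 4-area adjacency matrix and evaluates the unnormalized log density; the precision parameter is an arbitrary assumption.

```python
import numpy as np

W = np.array([[0, 1, 1, 0],          # binary adjacency for areas 0-3
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(W.sum(axis=1))           # number of neighbours per area
tau = 2.0                            # assumed precision parameter
Q = tau * (D - W)                    # ICAR precision matrix (rank-deficient)

def icar_logpdf_unnorm(phi, Q):
    """Unnormalized log density of the structured spatial random effect."""
    return -0.5 * phi @ Q @ phi

phi = np.array([0.3, -0.1, 0.2, -0.4])
print(icar_logpdf_unnorm(phi, Q))
```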
Open Access Article
Econometric Analysis of the Sustainability and Development of an Alternative Strategy to Gross Value Added in Kazakhstan’s Agricultural Sector
by Azat Tleubayev, Seyit Kerimkhulle, Manatzhan Tleuzhanova, Aigul Uchkampirova, Zhanat Bulakbay, Raikhan Mugauina, Zhumagul Tazhibayeva, Alibek Adalbek, Yerassyl Iskakov and Daniyar Toleubay
Econometrics 2024, 12(4), 29; https://doi.org/10.3390/econometrics12040029 - 17 Oct 2024
Abstract
Based on the systematization of relevant problems in the agricultural sector of Kazakhstan and other countries, the purpose of the research is to aid in the development and implementation of a methodology for the econometric analysis of sustainability, the classification of economic growth, and an alternative strategy for gross value added, depending on time phases with lags of 0, 1, and 2 years and on gross fixed capital formation in the agricultural sector of Kazakhstan. The research uses a variety of quantitative techniques, including the logistic growth difference equation, applied statistics, econometric models, operations research, nonlinear mathematical programming models, economic modeling simulations, and sustainability analysis. Using three criteria (equilibrium, balanced, and optimal growth), we define the main growth trends of gross value added in agriculture, hunting, and forestry: first, depending on the time phases, and second, depending on gross fixed capital formation transactions, for equilibrium growth, the growth of an alternative strategy, the endogenous growth rate, and the growth of exogenous flows. We also obtain a classification of productive, moderate, and critical growth trends for the agricultural industry, depending on the correlated, linked industries of the national economy of Kazakhstan. The results of this work can be used in data analytics and artificial intelligence, digital transformation and technology in agriculture, as well as in the areas of sustainability and environmental impact.
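As an illustration of the first tool named above, the sketch below simulates a logistic growth difference equation and recovers its parameters from the linearized growth-rate regression. The series and parameter values are invented, not Kazakhstan's data.

```python
import numpy as np

rng = np.random.default_rng(5)
r_true, K_true, T = 0.4, 100.0, 40
x = np.empty(T)
x[0] = 5.0
for t in range(T - 1):               # logistic growth difference equation
    x[t + 1] = x[t] + r_true * x[t] * (1 - x[t] / K_true)
x_obs = x * np.exp(rng.normal(scale=0.02, size=T))   # noisy observations

# Linearized estimation: (x_{t+1} - x_t) / x_t = r - (r/K) * x_t
growth = np.diff(x_obs) / x_obs[:-1]
A = np.column_stack([np.ones(T - 1), x_obs[:-1]])
a, b = np.linalg.lstsq(A, growth, rcond=None)[0]
print(f"r = {a:.2f} (true {r_true}), K = {-a / b:.1f} (true {K_true})")
```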
Open Access Article
Long-Term Care in Germany in the Context of the Demographic Transition—An Outlook for the Expenses of Long-Term Care Insurance through 2050
by Patrizio Vanella, Christina Benita Wilke and Moritz Heß
Econometrics 2024, 12(4), 28; https://doi.org/10.3390/econometrics12040028 - 9 Oct 2024
Abstract
Demographic aging results in a growing number of older people in need of care in many regions all over the world. Germany has witnessed steady population aging for decades, prompting policymakers and other stakeholders to discuss how to fulfill the rapidly growing demand for care workers and finance the rising costs of long-term care. Informed decisions on this matter to ensure the sustainability of the statutory long-term care insurance system require reliable knowledge of the associated future costs. These need to be simulated based on well-designed forecast models that holistically include the complexity of the forecast problem, namely the demographic transition, epidemiological trends, concrete demand for and supply of specific care services, and the respective costs. Care risks heavily depend on demographics, both in absolute terms and according to severity. The number of persons in need of care, disaggregated by severity of disability, in turn, is the main driver of the remuneration that is paid by long-term care insurance. Therefore, detailed forecasts of the population and care rates are important ingredients for forecasts of long-term care insurance expenditures. We present a novel approach based on a stochastic demographic cohort-component approach that includes trends in age- and sex-specific care rates and the demand for specific care services, given changing preferences over the life course. The model is executed for Germany until the year 2050 as a case study.
(This article belongs to the Special Issue Advancements in Macroeconometric Modeling and Time Series Analysis)
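A minimal sketch of the cohort-component logic: age a population forward with survival probabilities and apply age-specific care rates to count people in need of care. Every rate below is invented; the paper's stochastic trends in care rates and service demand are not modeled.

```python
import numpy as np

ages = np.arange(101)
pop = np.full(ages.size, 800_000.0)          # toy initial population by age
surv = np.clip(1 - 0.00005 * np.exp(0.095 * ages), 0, 1)  # toy survival
care_rate = np.clip(0.001 * np.exp(0.09 * (ages - 40)), 0, 1)
births = 700_000.0                           # assumed constant birth cohort

for year in range(2024, 2051):
    in_care = (pop * care_rate).sum()
    if year % 5 == 0:
        print(year, f"population in need of care: {in_care / 1e6:.2f}m")
    pop[1:] = pop[:-1] * surv[:-1]           # age the cohorts forward
    pop[0] = births
```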
Open Access Article
Estimating the Effects of Credit Constraints on Productivity of Peruvian Agriculture
by Tiemen Woutersen, Katherine Hauck and Shahidur R. Khandker
Econometrics 2024, 12(4), 27; https://doi.org/10.3390/econometrics12040027 - 26 Sep 2024
Abstract
This paper proposes an estimator for endogenous switching regression models with fixed effects. The decision to switch from one regime to the other may depend on unobserved factors, which would cause the state, such as being credit constrained, to be endogenous. Our estimator allows for this endogenous selection and for conditional heteroscedasticity in the outcome equation. Applying our estimator to a dataset on agricultural productivity substantially changes the conclusions compared to an earlier analysis of the same dataset. Intuitively, our estimate of the impact of switching between states is smaller than previously estimated because we account for the selection issue: switching between being credit constrained and credit unconstrained may be endogenous to farm production. In particular, we find that being credit constrained has the substantial effect of reducing yield by 11%, but not the previously estimated, far more dramatic effect of reducing yield by 26%.
Open Access Article
Estimating Treatment Effects Using Observational Data and Experimental Data with Non-Overlapping Support
by Kevin Han, Han Wu, Linjia Wu, Yu Shi and Canyao Liu
Econometrics 2024, 12(3), 26; https://doi.org/10.3390/econometrics12030026 - 20 Sep 2024
Abstract
When estimating treatment effects, the gold standard is to conduct a randomized experiment and then contrast outcomes associated with the treatment group and the control group. However, in many cases, randomized experiments are either conducted at a much smaller scale than the _target population or accompanied by ethical issues, and are thus hard to implement. Therefore, researchers usually rely on observational data to study causal connections. The downside is that the unconfoundedness assumption, which is the key to validating the use of observational data, is untestable and almost always violated. Hence, any conclusion drawn from observational data should be analyzed with great care. Given the richness of observational data and the usefulness of experimental data, researchers hope to develop credible methods to combine the strengths of the two. In this paper, we consider a setting where the observational data contain the outcome of interest as well as a surrogate outcome, while the experimental data contain only the surrogate outcome. We propose an easy-to-implement estimator to estimate the average treatment effect of interest using both the observational data and the experimental data.
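The combination idea admits a very small illustration under a linear surrogate-outcome link: the experiment identifies the treatment effect on the surrogate, and the observational data identify the surrogate-to-outcome link, so their product estimates the effect on the outcome. All data and the linearity assumption are toy choices, not the paper's estimator in full generality.

```python
import numpy as np

rng = np.random.default_rng(6)

# Observational data: surrogate s and long-term outcome y
s_obs = rng.normal(size=5000)
y_obs = 1.5 * s_obs + rng.normal(size=5000)     # true link slope: 1.5

# Experimental data: randomized treatment, only the surrogate is measured
d = rng.binomial(1, 0.5, size=1000)
s_exp = 0.4 * d + rng.normal(size=1000)         # true effect on surrogate: 0.4

gamma = np.polyfit(s_obs, y_obs, 1)[0]          # surrogate-to-outcome link
tau_s = s_exp[d == 1].mean() - s_exp[d == 0].mean()
print("estimated ATE on y:", round(gamma * tau_s, 3), "(truth: 0.6)")
```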
Open Access Article
Score-Driven Interactions for “Disease X” Using COVID and Non-COVID Mortality
by Szabolcs Blazsek, William M. Dos Santos and Andreco S. Edwards
Econometrics 2024, 12(3), 25; https://doi.org/10.3390/econometrics12030025 - 4 Sep 2024
Abstract
The COVID-19 (coronavirus disease of 2019) pandemic is over; however, the probability of such a pandemic is about 2% in any year. There are international negotiations among almost 200 countries at the World Health Organization (WHO) concerning a global plan to deal with the next pandemic on the scale of COVID-19, known as “Disease X”. We develop a nonlinear panel quasi-vector autoregressive (PQVAR) model for the multivariate t-distribution with dynamic unobserved effects, which can be used for out-of-sample forecasts of cause-of-death counts in the United States (US) when a new global pandemic starts. We use panel data from the Centers for Disease Control and Prevention (CDC) for the cross section of all US states from March 2020 to September 2022, covering all death counts for (i) COVID-19 deaths, (ii) deaths that may be medically related to COVID-19, and (iii) the remaining causes of death. We compare the t-PQVAR model with its special cases, the PVAR moving average (PVARMA) and PVAR models. The t-PQVAR model provides robust evidence on dynamic interactions among (i), (ii), and (iii), and may be used for out-of-sample forecasting purposes at the outbreak of a future “Disease X” pandemic.
Open Access Article
Signs of Fluctuations in Energy Prices and Energy Stock-Market Volatility in Brazil and in the US
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Gabriela Mayumi Saiki, Matheus Noschang de Oliveira, Guilherme Fay Vergara, Pedro Augusto Giacomelli Fernandes, Vinícius Pereira Gonçalves and Clóvis Neumann
Econometrics 2024, 12(3), 24; https://doi.org/10.3390/econometrics12030024 - 23 Aug 2024
Abstract
Volatility reflects the degree of variation in a time series, and measuring stock performance in the energy sector can help one understand the pattern of fluctuations within this industry, as well as the factors that influence it. One of these factors could be the COVID-19 pandemic, which led to extreme volatility within the stock market in several economic sectors. It is essential to understand this volatility regime so that robust financial strategies can be adopted to handle it. This study used stock data from the Yahoo! Finance API and data from the US Energy Information Administration's energy-price database to conduct a comparative analysis of volatility in the energy sector in Brazil and in the United States, as well as of energy prices in California. The volatility in these time series was modeled using GARCH. The stock volatility regimes, both before and after COVID-19, were identified with a Markov switching model; the spillover index between the energy markets in the USA and in Brazil was evaluated with the Diebold–Yilmaz index; and the causality between energy stock prices and energy prices was measured with the Granger causality test. The findings of this study show that (i) the volatility regime introduced by COVID-19 is still prevalent in Brazil and in the USA, (ii) changes in the US energy market affect the Brazilian market significantly more than the reverse, and (iii) there is a causality relationship between the energy stock markets and energy prices in California. These results may assist in the achievement of effective regulation and economic planning, while also supporting better market interventions. Also, acknowledging the persistent COVID-19-induced volatility can help with developing strategies for future crisis resilience.
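Two of the tools above are available off the shelf: the sketch below fits a GARCH(1,1) with the arch package and runs a Granger causality test with statsmodels on simulated return series. The series, lag choice, and direction of causality are invented for illustration.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
n = 1500
stock_ret = rng.standard_t(df=5, size=n)          # toy energy-stock returns
energy_ret = 0.3 * np.roll(stock_ret, 1) + rng.normal(size=n)

garch = arch_model(stock_ret).fit(disp="off")     # default: constant mean, GARCH(1,1)
print(garch.params)                               # mu, omega, alpha[1], beta[1]

df = pd.DataFrame({"energy": energy_ret, "stock": stock_ret})
# Tests whether the second column helps predict the first (stock -> energy)
res = grangercausalitytests(df[["energy", "stock"]], maxlag=3)
```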
Open Access Article
Transient and Persistent Technical Efficiencies in Rice Farming: A Generalized True Random-Effects Model Approach
by Phuc Trong Ho, Michael Burton, Atakelty Hailu and Chunbo Ma
Econometrics 2024, 12(3), 23; https://doi.org/10.3390/econometrics12030023 - 12 Aug 2024
Abstract
This study estimates transient and persistent technical efficiencies (TEs) using a generalized true random-effects (GTRE) model. We estimate the GTRE model using maximum likelihood and Bayesian estimation methods, then compare it to three simpler models nested within it to evaluate the robustness of our estimates. We use a panel data set of 945 observations collected from 344 rice farming households in Vietnam's Mekong River Delta. The results indicate that the GTRE model is more appropriate than the restricted models for understanding heterogeneity and inefficiency in rice production. The mean estimate of overall technical efficiency is 0.71, with transient rather than persistent inefficiency being the dominant component. This suggests that rice farmers could increase output substantially and would benefit from policies that pay more attention to addressing short-term inefficiency issues.
Open Access Article
Is It Sufficient to Select the Optimal Class Number Based Only on Information Criteria in Fixed- and Random-Parameter Latent Class Discrete Choice Modeling Approaches?
by Péter Czine, Péter Balogh, Zsanett Blága, Zoltán Szabó, Réka Szekeres, Stephane Hess and Béla Juhász
Econometrics 2024, 12(3), 22; https://doi.org/10.3390/econometrics12030022 - 8 Aug 2024
Abstract
Heterogeneity in preferences can be addressed through various discrete choice modeling approaches. The random-parameter latent class (RLC) approach offers a desirable alternative for analysts due to its advantageous properties of separating classes with different preferences and capturing the remaining heterogeneity within classes by including random parameters. For latent class specifications, however, more empirical evidence on the optimal number of classes is needed in order to develop a more objective set of criteria. To investigate this question, we tested cases with different class numbers (for both fixed- and random-parameter latent class modeling) by analyzing data from a discrete choice experiment conducted in 2021, which examined preferences regarding COVID-19 vaccines. We compared models using commonly used indicators, such as the Bayesian information criterion, and we also took into account a seemingly simple but often overlooked indicator: the ratio of significant parameter estimates. Based on our results, it is not sufficient to decide on the optimal number of classes in latent class modeling based only on information criteria. We considered aspects such as the ratio of significant parameter estimates (it may be interesting to examine this both between and within specifications to find out which model type and class number has the most balanced ratio); the validity of the coefficients obtained (focusing on whether the conclusions are consistent with our theoretical model); whether including random parameters is justified (finding a balance between the complexity of the model and its information content, i.e., examining when, and to what extent, the introduction of within-class heterogeneity is relevant); and the distributions of marginal rate of substitution (MRS) calculations (since these often function as a direct measure of preferences, it is necessary to test how consistent the distributions are across specifications with different class numbers; if they are relatively stable in explaining consumer preferences, it is probably worth putting more emphasis on the aspects mentioned above when choosing a model). The results of this research raise further questions that should be addressed by further model testing in the future.
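The two screening statistics discussed above are cheap to compute once models are fitted. In the sketch below, `fits` is a hypothetical stand-in for estimation output (log likelihood, estimates, standard errors) at different class numbers; no latent class model is actually fitted here.

```python
import numpy as np

n_obs = 1200
fits = {   # hypothetical results for 2- and 3-class specifications
    2: {"loglik": -1510.0, "est": np.array([0.9, -0.4, 0.2, 1.1]),
        "se": np.array([0.10, 0.10, 0.15, 0.30])},
    3: {"loglik": -1465.0, "est": np.array([0.8, -0.5, 0.1, 1.2, 0.05, -0.3]),
        "se": np.array([0.10, 0.12, 0.20, 0.40, 0.30, 0.25])},
}
for k, f in fits.items():
    bic = len(f["est"]) * np.log(n_obs) - 2 * f["loglik"]   # BIC = k*ln(n) - 2*lnL
    sig = np.mean(np.abs(f["est"] / f["se"]) > 1.96)        # share with |t| > 1.96
    print(f"{k} classes: BIC={bic:.1f}, share significant={sig:.0%}")
```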
Open Access Article
Instrumental Variable Method for Regularized Estimation in Generalized Linear Measurement Error Models
by Lin Xue and Liqun Wang
Econometrics 2024, 12(3), 21; https://doi.org/10.3390/econometrics12030021 - 12 Jul 2024
Abstract
Regularized regression methods have attracted much attention in the literature, mainly due to their application in high-dimensional variable selection problems. Most existing regularization methods assume that the predictors are directly observed and precisely measured. It is well known that, in a low-dimensional regression model, if some covariates are measured with error, then the naive estimators that ignore the measurement error are biased and inconsistent. However, the impact of measurement error on regularized estimation procedures is not clear. For example, it is known that the ordinary least squares estimate of the regression coefficient in a linear model is attenuated towards zero while, on the other hand, the variance of the observed surrogate predictor is inflated. It is therefore unclear how the interaction of these two factors affects the selection outcome. To correct for the measurement error effects, some researchers assume that the measurement error covariance matrix is known or can be estimated using external data. In this paper, we propose a regularized instrumental variable method for generalized linear measurement error models. We show that the proposed approach yields a consistent variable selection procedure and root-n consistent parameter estimators. Extensive finite-sample simulation studies show that the proposed method performs satisfactorily in both linear and generalized linear models. A real data example is provided to further demonstrate the usage of the method.
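A linear toy version of the idea: project the error-prone predictors on instruments (here, assumed repeat measurements) and run the lasso on the projections, contrasted with a naive lasso on the noisy predictors. The paper's estimator for generalized linear models and its theory are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(8)
n, p = 1000, 10
X_true = rng.normal(size=(n, p))
beta = np.array([1.5, -2.0] + [0.0] * (p - 2))       # sparse truth
y = X_true @ beta + rng.normal(size=n)
W = X_true + rng.normal(scale=0.7, size=(n, p))      # measured with error
Z = X_true + rng.normal(scale=0.7, size=(n, p))      # instruments: repeat
                                                     # measurements (assumed)
X_hat = LinearRegression().fit(Z, W).predict(Z)      # first-stage projection
naive = Lasso(alpha=0.05).fit(W, y).coef_
iv = Lasso(alpha=0.05).fit(X_hat, y).coef_
print("naive lasso:", naive.round(2))
print("IV lasso:   ", iv.round(2))
```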
Open Access Article
Comparing Estimation Methods for the Power–Pareto Distribution
by Frederico Caeiro and Mina Norouzirad
Econometrics 2024, 12(3), 20; https://doi.org/10.3390/econometrics12030020 - 11 Jul 2024
Abstract
Non-negative distributions are important tools in various fields. Given the importance of achieving a good fit, the literature offers hundreds of different models, from the very simple to the highly flexible. In this paper, we consider the power–Pareto model, which is defined by its quantile function. This distribution has three parameters, allowing the model to take different shapes, including symmetric, left-skewed, and right-skewed. We provide different distributional characteristics and discuss parameter estimation. In addition to the already-known maximum likelihood method and least squares estimation based on the logarithm of the order statistics, we propose several additional methods. A simulation study and an application to two datasets illustrate the performance of the estimation methods.
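Because the model is defined by its quantile function, sampling and one of the estimators are both immediate. The sketch below assumes the power–Pareto quantile form Q(u) = C * u^a / (1 - u)^b (treat this form as my reading of the model, not a quotation from the paper), draws by inverse transform, and estimates the parameters by least squares on the logarithm of the order statistics.

```python
import numpy as np

C, a, b = 2.0, 0.5, 0.3                      # assumed true parameters
rng = np.random.default_rng(9)
u = rng.uniform(size=2000)
x = C * u ** a / (1 - u) ** b                # inverse-transform sample

# log Q(p) = log C + a*log(p) - b*log(1 - p): linear in the parameters
xs = np.sort(x)
p = (np.arange(1, xs.size + 1) - 0.5) / xs.size   # plotting positions
A = np.column_stack([np.ones_like(p), np.log(p), -np.log(1 - p)])
logC, a_hat, b_hat = np.linalg.lstsq(A, np.log(xs), rcond=None)[0]
print(f"C = {np.exp(logC):.2f}, a = {a_hat:.2f}, b = {b_hat:.2f}")
```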
Open Access Article
Stochastic Debt Sustainability Analysis in Romania in the Context of the War in Ukraine
by Gabriela Dobrotă and Alina Daniela Voda
Econometrics 2024, 12(3), 19; https://doi.org/10.3390/econometrics12030019 - 5 Jul 2024
Cited by 1
Abstract
Public debt is determined by borrowings undertaken by a government to finance its short- or long-term financial needs and to ensure that macroeconomic objectives are met within budgetary constraints. In Romania, public debt has been on an upward trajectory, a trend that has been further exacerbated in recent years by the COVID-19 pandemic. Additionally, a significant non-economic event influencing Romania’s public debt is the war in Ukraine. To analyze this, a stochastic debt sustainability analysis was conducted, incorporating the unique characteristics of Romania’s emerging market into the research methodology. The projections focused on achieving satisfactory results by following two lines of research. The first direction involved developing four scenarios to assess the risks presented by macroeconomic shocks. Particular emphasis was placed on an unusual negative shock, specifically the war in Ukraine, with forecasts indicating that the debt-to-GDP ratio could reach 102% by 2026. However, if policymakers implement discretionary measures, this level could be contained below 88%. The second direction of research aimed to establish the maximum safe limit of public debt for Romania, which was determined to be 70%. This threshold would allow the emerging economy to manage a reasonable level of risk without requiring excessive fiscal efforts to maintain long-term stability.
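A minimal stochastic debt-sustainability sketch: propagate the standard debt dynamics b_{t+1} = b_t * (1 + r_t) / (1 + g_t) - pb_t under random shocks and read off percentile fan-chart bands. The calibration below is illustrative, not Romania's.

```python
import numpy as np

rng = np.random.default_rng(10)
n_paths, horizon = 10_000, 5                  # e.g., a 5-year projection
b0 = 0.50                                     # initial debt-to-GDP ratio
paths = np.full(n_paths, b0)
bands = []
for t in range(horizon):
    r = rng.normal(0.04, 0.01, n_paths)       # nominal interest rate
    g = rng.normal(0.03, 0.02, n_paths)       # nominal GDP growth
    pb = rng.normal(-0.02, 0.01, n_paths)     # primary balance (deficit)
    paths = paths * (1 + r) / (1 + g) - pb
    bands.append(np.percentile(paths, [10, 50, 90]))
for t, (p10, p50, p90) in enumerate(bands, start=1):
    print(f"year +{t}: debt ratio 10/50/90 pct: "
          f"{p10:.2f} / {p50:.2f} / {p90:.2f}")
```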
Open Access Article
Investigation of Equilibrium in Oligopoly Markets with the Help of Tripled Fixed Points in Banach Spaces
by Atanas Ilchev, Vanya Ivanova, Hristina Kulina, Polina Yaneva and Boyan Zlatanov
Econometrics 2024, 12(2), 18; https://doi.org/10.3390/econometrics12020018 - 17 Jun 2024
Abstract
In this study, we explore an oligopoly market for equilibrium and stability based on statistical data, with the help of response functions rather than payoff maximization. To achieve this, we extend the concept of coupled fixed points to tripled fixed points. We propose a new model that leads to generalized tripled fixed points. We present a possible application of the generalized tripled fixed point model to the study of market equilibrium in an oligopolistic market dominated by three major competitors. The task of maximizing the payoff functions of the three players is replaced by the concept of generalized tripled fixed points of response functions. The presented model for generalized tripled fixed points of response functions is equivalent to Cournot payoff maximization, provided that the market price function and the three players' cost functions are differentiable. Furthermore, we demonstrate that the contractive condition corresponds to the second-order constraints in payoff maximization. Moreover, the model under consideration is stable in the sense that it ensures the stability of the consecutive production process, as opposed to the payoff maximization model, with which the market equilibrium may not be stable. A possible gap in the applications of the classical technique for maximizing payoff functions is that the price function in the market may not be known, and any approximation of it may lead to the solution of a task different from the one generated by the market. We use empirical data from Bulgaria's beer market to illustrate the model. The statistical data give fair information on how the players react without knowing the price function, their cost functions, or their aims in a specific market. We present two models based on the real data and their approximations, respectively. The two models, although different, show similar behavior in terms of time and the stability of the market equilibrium. Thus, the notion of response functions and tripled fixed points seems to present a justified way of modeling market processes in oligopoly markets when investigating whether the market has reached equilibrium and whether this equilibrium is unique and stable over time.
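The response-function view reduces to a simple iteration in practice: start from arbitrary outputs and repeatedly apply each firm's response to the other two; a contraction guarantees convergence to the tripled fixed point. The linear response functions below are invented for illustration, not estimated from the beer-market data.

```python
def f(y, z):            # firm 1's response to the other two outputs
    return 10 - 0.3 * y - 0.2 * z

def g(x, z):            # firm 2's response
    return 12 - 0.25 * x - 0.3 * z

def h(x, y):            # firm 3's response
    return 8 - 0.2 * x - 0.25 * y

x, y, z = 1.0, 1.0, 1.0
for _ in range(100):    # coefficients sum to < 1, so this is a contraction
    x, y, z = f(y, z), g(x, z), h(x, y)
print(f"tripled fixed point: x={x:.3f}, y={y:.3f}, z={z:.3f}")
```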
Special Issues
Special Issue in Econometrics: Innovations in Bayesian Econometrics: Theory, Techniques, and Economic Analysis. Guest Editor: Deborah Gefang. Deadline: 31 May 2025.
Special Issue in Econometrics: Advancements in Macroeconometric Modeling and Time Series Analysis. Guest Editor: Julien Chevallier. Deadline: 31 December 2025.