Acuna, J. A., Cantarino, D., Martinez, R., & Zayas-Castro, J. L. (2024). A two-stage stochastic game model for elective surgical capacity planning and investment. Socio-Econ. Plan. Sci., 91, 101786.
Abstract: Waiting for elective procedures has become a major health concern in both rich and poor countries. The inadequate balance between the demand for and the supply of health services negatively affects the quality of life, mortality, and government appraisal. This study presents the first mathematical framework shedding light on how much, when, and where to invest in health capacity to end waiting lists for elective surgeries. We model the healthcare system as a two-stage stochastic capacity expansion problem in which government investment decisions are represented as a non-symmetric Nash bargaining solution. In particular, the model assesses the capacity requirements, optimal allocation, and corresponding financial investment per hospital, region, specialty, and year. We use the proposed approach to target Chile's elective surgical waiting lists (2021-2031), considering patients' priorities, 10 regional health services, 24 hospitals, and 10 surgical specialties. We generate uncertain future demand scenarios using historical data (2012-2021) and 100 autoregressive integrated moving average prediction models. The results indicate that USD 3,331.677 million is necessary to end the waiting lists by 2031 and that the Nash approach provides a fair resource distribution with a 6% efficiency loss. Additionally, a smaller budget (USD 2,000 million) was identified as sufficient to end the waiting lists over a longer planning horizon. Further analysis revealed the impact of investment in patient transfer and a decline in investment yield.
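The scenario-generation step described above (historical series fed into autoregressive prediction models to produce uncertain demand paths) can be sketched in miniature. This is a hedged toy stand-in, not the paper's method: it fits a plain AR(1) model by ordinary least squares instead of the 100 ARIMA models, and all data, function names, and parameters below are illustrative.

```python
# Toy demand-scenario generator: fit y_t = a + b*y_{t-1} by ordinary
# least squares, then roll the model forward, resampling residuals to
# inject uncertainty. A simplified stand-in for the paper's ARIMA setup.
import random

def simulate_demand_scenarios(history, n_scenarios, horizon, seed=0):
    """Return `n_scenarios` simulated demand paths of length `horizon`."""
    x, y = history[:-1], history[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Ordinary least squares for the AR(1) slope and intercept.
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        path, last = [], history[-1]
        for _ in range(horizon):
            # Bootstrap a past residual to perturb each forecast step.
            last = a + b * last + rng.choice(residuals)
            path.append(last)
        scenarios.append(path)
    return scenarios

# Ten years of (made-up) annual demand -> 100 scenarios over a
# 10-year horizon, mirroring the paper's 2012-2021 data and
# 2021-2031 planning window.
hist = [100, 104, 109, 113, 118, 124, 129, 135, 141, 148]
scens = simulate_demand_scenarios(hist, n_scenarios=100, horizon=10)
```

Bootstrapping residuals rather than assuming a noise distribution keeps the sketch self-contained; the paper's ARIMA models would instead be fit and forecast per series.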
Bertossi, L., & Geerts, F. (2020). Data Quality and Explainable AI. ACM J. Data Inf. Qual., 12(2), 11.
Abstract: In this work, we provide some insights and develop some ideas, with few technical details, about the role of explanations in Data Quality in the context of data-based machine learning (ML) models. In this direction, there are, as expected, roles for causality and explainable artificial intelligence. The latter area sheds light not only on the models but also on the data that support model construction. There is also room for defining, identifying, and explaining errors in data, in particular in ML, and for suggesting repair actions. More generally, explanations can be used as a basis for defining dirty data in the context of ML, and for measuring or quantifying dirtiness. We think of dirtiness as relative to the ML task at hand, e.g., classification.
Keywords: Machine learning; causes; fairness; bias
Cominetti, R., Quattropani, M., & Scarsini, M. (2022). The Buck-Passing Game. Math. Oper. Res., Early Access.
Abstract: We consider two classes of games in which players are the vertices of a directed graph. Initially, nature chooses one player according to some fixed distribution and gives the player a buck. This player passes the buck to one of the player's out-neighbors in the graph. The procedure is repeated indefinitely. In one class of games, each player wants to minimize the asymptotic expected frequency of times that the player receives the buck. In the other class of games, the player wants to maximize it. The PageRank game is a particular case of these maximizing games. We consider deterministic and stochastic versions of the game, depending on how players select the neighbor to which to pass the buck. In both cases, we prove the existence of pure equilibria that do not depend on the initial distribution; this is achieved by showing the existence of a generalized ordinal potential. If the graph on which the game is played admits a Hamiltonian cycle, then this is the outcome of a prior-free Nash equilibrium in the minimizing game. For the minimizing game, we then use the price of anarchy and the price of stability to measure the fairness of these equilibria.
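The payoff in the abstract above, the asymptotic frequency with which a player receives the buck, is the stationary visit frequency of the Markov chain induced by the players' passing strategies. A minimal sketch of that computation, with an illustrative graph and function names of my own (not from the paper):

```python
# Hedged sketch: long-run buck-receiving frequencies under fixed
# (possibly stochastic) passing strategies, computed by iterating the
# induced Markov chain. The three-player cycle below is illustrative.

def buck_frequencies(strategy, n_iter=10_000):
    """strategy: dict mapping each vertex to a dict of
    out-neighbor -> passing probability. Returns the long-run
    frequency with which each vertex receives the buck, starting
    from nature's uniform initial distribution."""
    vertices = list(strategy)
    dist = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(n_iter):
        new = {v: 0.0 for v in vertices}
        for v, p in dist.items():
            for w, q in strategy[v].items():
                new[w] += p * q  # buck flows from v to w with prob. q
        dist = new
    return dist

# A directed 3-cycle (a Hamiltonian cycle): each player has a unique
# out-neighbor, so every player carries the buck an equal share of
# the time, the fair outcome the abstract associates with such cycles.
cycle = {"a": {"b": 1.0}, "b": {"c": 1.0}, "c": {"a": 1.0}}
freqs = buck_frequencies(cycle)
```

On the cycle, the uniform distribution is preserved at every step, so each player's frequency is 1/3; on general graphs the iteration approximates the stationary frequencies of the induced chain (assuming it converges, which periodic chains need not for arbitrary starting distributions).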