Open access peer-reviewed chapter - ONLINE FIRST

Incorporating Risk into Optimization Models

Written By

Siddhartha Sampath

Submitted: 08 April 2024 Reviewed: 15 April 2024 Published: 14 May 2024

DOI: 10.5772/intechopen.1005386

From the Edited Volume

The Future of Risk Management [Working Title]

Dr. Larisa Ivascu, Dr. Marius Pislaru and Dr. Lidia Alexa

Abstract

In modern industry, risk is often understood, communicated, and acted upon through decision support tools. These tools frequently incorporate machine learning, statistical, and optimization models. Such models, especially optimization models, are often based on point estimates and must be carefully extended to incorporate uncertainty, yielding robust, risk-adjusted models that provide a sound basis for optimal, long-term, goal-based decision making. Probability estimation, simulation, and chance-constrained optimization are three well-known techniques that, used alone or in combination, can increase the power of optimization models by accounting for the underlying risk of the process being optimized when recommending alternatives to decision makers.

Keywords

  • optimization
  • risk
  • chance-constrained
  • simulation
  • probability

1. Introduction

People comprehend reality in two primary fashions: an intuitive one and an analytical one [1]. These two approaches shape how we interpret risk in general and how we interpret it in a business context. In an intuitive sense, we tend to associate risk with the fear of experiencing a negative outcome. In an analytical sense, we associate risk with negative outcomes and, additionally, with the probability of those outcomes occurring. Although the analytical definition of risk is commonly agreed upon, risk is often highly dependent on the specific industry in which it is referenced [2]. Furthermore, the perceived "negativity" of an outcome can be subjective; individual decision makers have their own utility functions governing how they perceive gains (or losses) [3]. One way to interpret this is through the following thought exercise evaluating the utility of a $1 gain in two different situations. Suppose you have $10. An additional dollar is a 10% gain in your total holdings. Now suppose you have $100. Would $1 have the same meaning to you? Probably not. As a motivating factor, $1 may be less attractive in the second situation than in the first. Furthermore, the motivation provided by the potential to gain or lose $1 may differ between individuals as well.

The second complexity arises from how humans perceive probability, and from the aversion to ambiguity evidenced by the Allais and Ellsberg paradoxes [4]. These paradoxes capture behaviors typically exhibited by decision makers in business, where uncertainty is actively avoided despite the potential for larger utility payoffs [5].

Thus, when we talk about "risk" in a business context, a question arises: are we talking about the inherent variance associated with the outcomes of a business proposition? Or about how two different decision makers would arrive at a decision when faced with the same proposition, that is, how they perceive and handle risk?

It is thus important to properly define risk before proceeding with the rest of this chapter. To do so, we borrow the classic definition of a monetary gamble as a set of outcomes X = {x1, x2, ..., xn} and a probability distribution {p1, p2, ..., pn} over those outcomes, where pi denotes the probability of obtaining xi and Σi pi = 1. The "risk" here is that some of the outcomes in X are undesirable and may occur with non-zero probability. How decision makers incorporate this information and arrive at a decision, whether using the classic expected-utility framework based on rational choice, prescriptive methods such as the analytic hierarchy process, or descriptive methods such as prospect theory, is beyond the scope of this chapter.
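To make the gamble definition concrete, here is a short sketch (the outcome values and probabilities are illustrative choices of our own, not from the chapter) computing the expected value and variance of a discrete monetary gamble:

```python
# Expected value and variance of a discrete monetary gamble,
# defined by outcomes x_i and probabilities p_i with sum(p_i) = 1.

def expected_value(outcomes, probs):
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probs))

def variance(outcomes, probs):
    mu = expected_value(outcomes, probs)
    return sum(p * (x - mu) ** 2 for x, p in zip(outcomes, probs))

# A gamble with a small chance of a large loss: it has the same mean
# as a sure $5, but substantial variance -- the "risk" in the
# definition above.
gamble = ([-50.0, 10.0], [1 / 12, 11 / 12])
```

The two gambles having equal means but very different variances is precisely why point estimates alone can mislead a decision maker.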

This chapter will instead focus on how to deal with uncertainty in business and three main approaches, probability assessment, simulation, and lastly chance constrained optimization under the assumption that the results of these techniques will be presented to a decision maker via a decision support tool. The decision maker will then use their preferred decision-making strategy to arrive at a decision.

2. Simulation and optimization

Like "risk," the terms "simulation" and "optimization" themselves have many different meanings depending on the context in which they are used. In general, we will use simulation here to denote a Monte Carlo simulation, in which many realizations of the random variable, as defined by the monetary gambles above, are generated. A thousand-sample simulation of a risky gamble X is thus the set of outcomes {x1, x2, ..., xn} sampled a thousand times according to their associated probabilities of being realized.
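Such a sampling step can be sketched with Python's standard library; the outcomes and probabilities below are illustrative placeholders, not values from the chapter:

```python
import random

def simulate_gamble(outcomes, probs, n_samples=1000, seed=42):
    """Draw n_samples realizations of a discrete gamble X, where
    outcome outcomes[i] occurs with probability probs[i]."""
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.choices(outcomes, weights=probs, k=n_samples)

# A thousand-sample simulation of a simple risky gamble.
samples = simulate_gamble([-10, 0, 25], [0.2, 0.5, 0.3])
```

Summary statistics of `samples` (mean, percentiles, frequency of losses) then stand in for the analytic distribution in everything that follows.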

Similarly, the term optimization in this chapter will denote a constrained optimization problem, with constraints and an objective that must be maximized. A constrained optimization problem can generally be expressed as follows.

Max c.x (1)
subject to: aj.x <= bj, for j = 1, ..., m

One of the simplest examples of a constrained optimization problem is probably the knapsack problem. It can be stated as follows.

"A hiker about to embark on a long trek has a knapsack whose weight capacity is W kgs. He has in front of him the option to pick from a set of items {1, ..., n}. Each of these items has a weight wi and a value vi associated with it. How can the hiker maximize the value of the items he carries on the trek without exceeding the weight limit W of the knapsack?"

This can be translated into a binary integer program as follows.

Let {1, ..., n} be the N possible items the hiker can choose from.

Let W be the weight limit of the knapsack.

Let wi be the weight associated with each item and vi be the value of each item.

We define a decision variable xi, a binary variable indicating whether the hiker chooses item i or not. In other words, xi = 1 indicates that item i was chosen to be included in the knapsack, and xi = 0 indicates that the item was excluded.

Solving

M1: Max Σi∈N vi xi (2)
subject to: Σi∈N wi xi <= W (3)
xi ∈ {0, 1} (4)

Provides us the optimal set of items that the hiker can choose for a knapsack of weight limit W. We call this the solution vector, or solution. The objective term Σi∈N vi xi, the sum of the values of the selected items, is maximized. The constraint Σi∈N wi xi <= W ensures that the sum of the weights of the items in the knapsack does not exceed the limit W.

The knapsack problem is a common problem found in many fields including logistics, transportation, warehousing, finance and so on. It can also have many different versions depending on the business context, including having multiple objectives and quadratic constraints. In general, for simpler versions of the problem, many algorithms and solvers exist that can solve large instances (∼ 1 million decision variables) in a reasonable amount of time. The knapsack problem is considered an NP-hard problem, although it scales much better than other NP-hard problems. Many heuristics exist for solving large instances of the knapsack problem to near optimality in polynomial time [6, 7, 8].
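For readers who want to experiment, the 0/1 knapsack in M1 can be solved exactly on modest instances with the textbook dynamic program over weight capacity (a standard method shown here as a sketch, not one prescribed by the chapter; integer weights are assumed):

```python
def knapsack(values, weights, W):
    """Classic 0/1 knapsack dynamic program: best[w] holds the
    maximum value achievable with total weight at most w.
    Assumes non-negative integer weights."""
    best = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(W, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]
```

For example, `knapsack([60, 100, 120], [10, 20, 30], 50)` returns the optimal value 220 (taking the second and third items). The O(nW) running time is pseudo-polynomial, which is why the problem, though NP-hard, scales well in practice.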

The model described by M1 is a deterministic model: we use point estimates of the value and the cost to specify it. What if these estimates were fuzzy instead? What if the weight and value of the items to choose from were random variables that could take on several different values? Based on our previous discussion, we would then be introducing risk into any solution the decision maker adopts. But sometimes it is not enough to consider the risk in the model coefficients alone, since we also need to evaluate how the model will perform under different scenarios.

To illustrate this, let us modify M1 further with additional complexity. Consider an inventory problem where a manager must decide the quantity of a particular item to manufacture based on a forecasted demand and a budget. The demand here can be viewed as a random variable: there is a set of possible values it can assume, with associated probabilities, and we assume that any combination of the items will fulfill this overall demand number. If the forecasted demand is higher than the actual demand, the manager incurs an inventory and salvage cost for the unsold items. If the forecasted demand is lower than the actual demand, the manager incurs an opportunity cost. How should the manager approach the problem? The problem is further complicated if the value proposition of the item, i.e., the different costs and benefits from manufacturing it, consists of random variables themselves. The discerning reader will appreciate that even for a single item, this becomes a complicated problem to solve.

Assuming the above problem can be modeled as a knapsack problem, two methods are generally adopted in industry. One is to simulate the random variable associated with the overall demand, sample different scenarios (percentiles such as the 25th, 50th, and 90th, for example), and then solve the knapsack problem for each demand value. For each solution of the optimization exercise, we can run a simulation on the output to analyze how that solution would fare under different demand scenarios (keeping in mind that the solution was optimized for only one particular demand scenario). The results of this simulation on the solutions can then be compared using appropriate metrics (average objective value, for example), and one of the solutions is chosen. This is typically referred to as "what-if" scenario analysis [9, 10]. The deterministic version of this problem can be modeled very simply by slightly modifying M1.

M2: Max Σi∈N vi xi (5)
subject to: Σi∈N xi = F (6)
Σi∈N ci xi <= B (7)
xi ∈ {0, 1, ..., 10} (8)

Here, F is the forecasted demand, and the constraint Σi∈N xi = F requires the total quantity across items to add up to F. However, F is not the actual demand, which we denote by D. If F < D, we assume a unit opportunity cost of P, proportional to the deficit D − F. If F > D, we assume a unit inventory cost of E on the excess F − D. Note that we need to solve M2 first and then evaluate it for different values of D. Two additional terms, −P·max(D − F, 0) and −E·max(F − D, 0), can be added to evaluate the true objective. The constraint Σi∈N ci xi <= B ensures that the cost to manufacture all the items does not exceed the budget B. In this problem, each item can be manufactured multiple times, and therefore the decision variable xi can take on any integer value between 0 and 10.

M2 also still assumes point values for vi and ci. Solving M2 without the P·max(D − F, 0) and E·max(F − D, 0) terms for different values of F (sampled from the distribution of D) will yield different answers. We would then have to evaluate each of these solutions for different values of D sampled from its distribution. Ultimately, a decision maker would have to compare the different solutions and choose one. However, none of these solutions would individually be guaranteed to be the "best" solution for the decision maker's risk appetite. To keep things simple, we assume that we use the mean value of the cost and value associated with each item. Figure 1 below illustrates this.
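The evaluation phase described here amounts to subtracting the two penalty terms from each solution's objective for every sampled demand. A minimal sketch, with function names of our own choosing:

```python
def realized_objective(base_value, F, D, P, E):
    """Evaluate a solution built for forecast F when actual demand is D.
    base_value: the objective value sum(v_i * x_i) of the solution;
    P: unit opportunity cost for unmet demand (D > F);
    E: unit inventory/salvage cost for excess production (F > D)."""
    return base_value - P * max(D - F, 0) - E * max(F - D, 0)

def mean_realized(base_value, F, demand_samples, P, E):
    """Mean realized objective of one solution over sampled demands."""
    vals = [realized_objective(base_value, F, D, P, E) for D in demand_samples]
    return sum(vals) / len(vals)
```

For instance, a solution worth 20 built for a forecast of 10 yields a realized objective of 20 − 3·max(15 − 10, 0) = 5 when demand turns out to be 15 and P = 3.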

Figure 1.

Solving the knapsack problem with inventory constraints for different forecasted demand values and presenting them to decision makers to choose between them.

For an example of the above problem, which is a useful illustration of problems commonly encountered when dealing with risk in industry [11, 12, 13], let us define a simple problem instance. Assume five products a company is looking to sell. The values associated with these items are normally distributed, N(μ, σ), with the means and standard deviations detailed in Table 1 below. Note that the value can assume negative values, signifying that the manufacturer takes a loss on those items. We model the costs associated with manufacturing each item as truncated normal variables, NTRUNC(μ, σ), so that we do not incur negative costs, with the parameters specified in Table 1 and depicted in Figure 2.

Figure 2.

The probability distributions of the cost (weight) and value of the five items specified in the problem instance, depicted using boxplots.

Item | Mean cost | Std cost | Mean value | Std value
A    | 1         | 1        | 1          | 1
B    | 1.5       | 2        | 2          | 1.5
C    | 3         | 2        | 3          | 2
D    | 4         | 4        | 4          | 2.5
E    | 5         | 3        | 5          | 3

Table 1.

The mean and standard deviation of the various knapsack items. The costs are modeled using a truncated normal distribution and the values using a normal distribution.

Let us say that the demand D is a random variable taking values in {5, 10, 15, 20, 25}, that the unit opportunity cost P takes values in {2, 3, 4, 5}, as does the unit inventory cost E, and that the budget B is 40.
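Sampling the truncated normal costs of Table 1 can be done by simple rejection sampling from a normal distribution, assuming truncation at zero (consistent with the stated goal of avoiding negative costs):

```python
import random

def sample_truncated_normal(mu, sigma, lower=0.0, rng=None):
    """Rejection sampling: redraw from N(mu, sigma) until the draw
    respects the lower truncation bound (here, cost >= 0)."""
    rng = rng or random.Random()
    while True:
        x = rng.gauss(mu, sigma)
        if x >= lower:
            return x

rng = random.Random(7)
# Simulated unit costs for item A in Table 1: mean 1, std 1,
# truncated at zero.
costs = [sample_truncated_normal(1.0, 1.0, rng=rng) for _ in range(1000)]
```

Rejection sampling is adequate here because little mass lies below zero for these parameters; for heavily truncated cases, inverse-CDF sampling would be more efficient.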

Assume J samples from the relevant distributions, such that D1, ..., DJ and F1, ..., FJ are the values of the actual demand D and the forecasted demand target F in each simulation. Assume also that each simulation j has an associated value vij for each item i.

We are now finally at a point where we can play "what-if" scenarios. We could solve the problem for the five values of F, assuming the mean values of vi, P, and E, obtaining five different solutions; then run a secondary simulation on each of these five results for various realized values of D; and compare the five solutions using the average objective value from the simulations, essentially an "evaluation" phase for the generated results. But note the assumptions we are making when solving the problem. First, we assume the mean value of vi in each optimization. We could simulate the values of vi in the evaluation phase, but we are not inherently considering the riskiness of each item's value in any of the five solutions. Furthermore, we are forced to ignore the P·max(D − F, 0) and E·max(F − D, 0) terms when solving the model, since these can only be computed fully during the evaluation phase.

The other method is less common but gaining traction in industry: running a chance-constrained optimization on the knapsack problem [14, 15, 16]. In a chance-constrained optimization model, the simulation is run first and its results are incorporated into the model itself. The chance-constrained model then replaces the single constraint of the deterministic case with J different constraints, one for each simulation. We can now pose conditions on the model to find solutions that increase the robustness of the solution to risk in an objective manner. For example, solving a chance-constrained model can provide a single solution that satisfies conditions like "do not exceed the demand D more than 10% of the time and maximize the mean value of the objective across all simulations," or "do not exceed D more than 50% of the time, but when you do, do not exceed it by more than 10% of D, and maximize the mean value across all simulations." This is, in general, a more robust approach and one that is easier to communicate to decision makers, since it yields a single solution satisfying a given risk appetite rather than multiple solutions, produced by the first method, that a decision maker must choose between. Moreover, it incorporates all aspects of the variability of the problem. For example, if there were additional cost attributes associated with manufacturing each item that were themselves random variables, these could simply be added to the chance-constrained formulation. Figure 3 illustrates the general solution procedure describing this process.

Figure 3.

Solving the knapsack problem with inventory constraints, incorporating the risk in value and cost, using a chance-constrained model. The model produces a single solution.

For the case where we want an 80% probability that the solution does not exceed the budget, while maximizing the mean value of the solution, we can set up the chance-constrained formulation the following way:

M3: Max (Σi∈N Σj∈J vij xi)/J − P(Σj∈J yj)/J − E(Σj∈J zj)/J (9)
subject to: Σi∈N xi = r (10)
Σi∈N cij xi <= B + bj·B, for j = 1, ..., J (11)
Σj∈J bj <= 0.2J (12)
yj >= Dj − r − Dmax·uj, for j = 1, ..., J (13)
yj <= Dmax·(1 − uj), for j = 1, ..., J (14)
zj >= r − Dj − Dmax·(1 − uj), for j = 1, ..., J (15)
zj <= Dmax·uj, for j = 1, ..., J (16)
xi ∈ {0, 1, ..., 10} (17)
r ∈ {0, 1, ..., 50} (18)
bj ∈ {0, 1} (19)
yj ∈ [0, Dmax] (20)
zj ∈ [0, Dmax] (21)
uj ∈ {0, 1} (22)

This model can be explained as follows. To set up M3, we first assume that we have run J simulations, where each simulation run j specifies sample values for vi, ci, and D.

The iNjJvijxiJ term in the objective is the mean value over all simulations. The r variable is how many units we decide to manufacture in total (specified in constraint (10)). Note that we do not need a forecasted demand to manufacture to. Then yjand zj are the excess and underage numbers that are equal to Djr and 0 respectively if Dj>r and 0 and rDj if r>Dj. Constraints (13)(16) specify the values of yjand zj ensuring only one of them can be non-zero for any simulation j using binary indicator variable uj. Constraints (11) and (12) ensures that the budget B can only be exceeded for 20% of the simulations.

Notice the differences between models M2 and M3. In M2, no variability in the costs or values of the items was incorporated, and the number of units produced was equal to the forecasted demand. M2 would tend to prefer "efficient" items, that is, items with a higher ratio of mean value to mean cost. M3, however, considers the distributions of cost and value. It also does not assume how many units to produce; this is treated as a decision variable and is an output derived from solving the model.

Solving M2 for a forecasted demand of 10 and a budget of 40 produces a result specifying three units of item 2 and seven units of item 5. For a forecasted demand of 20, the model recommends 17 units of item 2, 1 unit of item 4, and 2 units of item 5. Which of these is the better solution? The decision maker must evaluate these for different values of the demand and calculate the objective value. If we use the mean values of P and E specified above, the objective values of the two solutions are 16 and 23, respectively. In this case we would pick the second solution, where we manufactured 20 units. But what if we were willing to violate our budget 20% of the time? Model M2 does not give us a framework to easily answer such questions. This is a common case in business, where budgets are malleable and a case can be made for additional funds to manufacture if needed.

In contrast, solving the chance-constrained problem gives us a more diversified solution for the specified problem instance: 5 units of item 1, 4 units of item 2, 4 units of item 3, and 2 units of item 4, with 20% of the cases violating the budget. The objective value with 100 simulations is approximately 10. Notice how this is significantly lower than either of the two scenarios we modeled using expected values, which did not consider the possibility that the combination could exceed the budget and, further, ended up overestimating the average objective value. The mean we used in the original problem was replaced by sampling the truncated normal distribution, leading to a very different strategy.

Chance constraints are often preferred to more complex stochastic formulations of the knapsack problem because they incorporate the results of simulation directly and do not require a closed-form formulation, which is difficult to obtain with complex distributions such as the truncated normal used for the item costs in the problem instance above [17].

3. Probability assessment

In the discussion above, we assumed that the probability distribution of various random variables describing costs and benefits was known. In the business world, probability assessment plays a crucial role in many different areas by directing resource allocation, risk management techniques, and decision-making procedures. Probability assessment is used in a wide range of industries, including manufacturing, insurance, healthcare, and finance, to measure uncertainty and make decisions. Probabilistic models are used in finance, for example, to predict market trends, evaluate investment risks, and maximize portfolio diversification. Similarly, probabilities are used in medicine to help diagnose conditions, forecast how treatments will work, and assess the efficacy of various medical procedures. Probability assessment is also used in manufacturing and other industries to predict equipment failures, enhance production processes, and reduce downtime. Industry professionals, regardless of the domain, frequently use statistical modeling, expert judgment, and historical data analysis to effectively assess probability. By using an interdisciplinary approach, enterprises may reduce risks, take advantage of opportunities, and predict possible outcomes, which improves overall operational efficiency and competitiveness.

Probability estimation is not a trivial exercise. In many cases, where no data are available, businesses default to expert opinion to estimate probabilities [18, 19, 20, 21]. In fact, the act of estimating accurate probabilities is so important that some companies even hire chief probability officers to measure the inherent riskiness of various activities in the company. As noted by Savage et al., "planning for an uncertain future calls for a shift in information management--from single numbers to probability distributions--in order to correct the 'flaw of averages.' This, in turn, gives rise to the prospect of a Chief Probability Officer to manage the distributions that underlie risk, real portfolios, real options and many other activities in the global economy" [22].

The assumptions behind various probability estimates must be regularly revisited and corrected using statistical analysis based on historical data to ensure optimal decision making.

Advertisement

4. Conclusion

Incorporating risk into optimization models is essential for modern industry decision-making, where uncertainties are prevalent and can significantly impact outcomes. Various techniques for integrating risk considerations into optimization models, including probability estimation, simulation, and chance-constrained optimization can help decision makers make choices that are robust to risk. Probability assessment plays a crucial role in quantifying uncertainties across industries, guiding resource allocation, risk management strategies, and decision-making processes. While empirical data, expert judgment, and statistical analysis are commonly employed for probability estimation, the inherent complexity of uncertainty requires continuous refinement and validation of assumptions to ensure accuracy.

Understanding risk involves both intuitive and analytical perspectives, considering subjective perceptions and objective probabilities of outcomes. Decision-makers interpret risk differently, influenced by individual utility functions and attitudes toward uncertainty. Therefore, defining risk within a business context necessitates careful consideration of industry-specific factors and decision-maker preferences.

Simulation and optimization techniques offer powerful tools for addressing uncertainty in decision-making. Simulation, particularly Monte Carlo simulation, generates multiple realizations of uncertain variables, providing insights into potential outcomes and associated risks. Optimization, on the other hand, identifies optimal decisions under uncertainty, leveraging constraints and objectives to achieve desired outcomes.

Traditional what-if scenarios tend to fall short of providing robust, risk-aware solutions, since they present choices that decision makers must still choose between and may not be equipped to evaluate. This can happen because the solutions analyzed may not truly represent the risk involved, relying on assumptions such as point estimates, or because the choices must be evaluated using additional metrics that decision makers must weigh. Often, decision makers may still end up making a subjective decision between the various choices.

Chance constraints provide a more objective method to consider risk by incorporating uncertainty into the model itself. They allow intuitive questions to be posed and modeled easily, and they provide a single solution satisfying the various conditions that a decision maker may want from an ideal risk-aware solution. By formulating optimization problems with probabilistic constraints, decision-makers can ensure that solutions meet predefined risk thresholds while maximizing expected outcomes. Chance-constrained optimization offers a unified framework for handling uncertainty, integrating simulation results directly into the optimization process. As illustrated using the knapsack problem, a very common problem in industry, by considering the distributional characteristics of uncertain parameters and evaluating solutions across multiple simulations, decision-makers can make informed choices that balance risk and reward effectively. By leveraging probability estimation, simulation, and chance-constrained optimization techniques, industries can mitigate risks, exploit opportunities, and enhance operational efficiency, ultimately achieving their strategic objectives in a competitive landscape.

Acknowledgments

Many thanks to my former director at Intel, Karl G Kempf for the many discussions around this topic that led to me shaping my views. Thanks must be given also to Esma Gel, John Fowler and Jorge Sefair for their advice on the topic during my stint at ASU.

References

  1. Epstein S. Integration of the cognitive and the psychodynamic unconscious. American Psychologist. 1994;49:709-724
  2. Pablo AL. Managerial risk interpretations: Does industry make a difference? Journal of Managerial Psychology. 1999;14(2):92-108. DOI: 10.1108/02683949910255142
  3. Crundwell FK. Decision tree analysis and utility theory. In: Finance for Engineers: Evaluation and Funding of Capital Projects. 2008. pp. 427-456
  4. Munthe C. The Price of Precaution and the Ethics of Risk. Springer Science & Business Media; 2011
  5. Zappia C. Leonard Savage, the Ellsberg paradox, and the debate on subjective probabilities: Evidence from the archives. Journal of the History of Economic Thought. 2021;43(2):169-192
  6. Pferschy U, Schauer J, Thielen C. Approximating the product knapsack problem. Optimization Letters. 2021;15(8):2529-2540
  7. Baldo A et al. The polynomial robust knapsack problem. European Journal of Operational Research. 2023;305(3):1424-1434
  8. Faenza Y, Segev D, Zhang L. Approximation algorithms for the generalized incremental knapsack problem. Mathematical Programming. 2023;198(1):27-83
  9. Arsham H et al. Sensitivity analysis and the "what if" problem in simulation analysis. Mathematical and Computer Modelling. 1989;12(2):193-219
  10. Bisogno S et al. Combining modelling and simulation approaches: How to measure performance of business processes. Business Process Management Journal. 2016;22(1):56-74
  11. Cappanera P, Trubian M. A local-search-based heuristic for the demand-constrained multidimensional knapsack problem. INFORMS Journal on Computing. 2005;17(1):82-98
  12. Garcia-Guarin J et al. Schedule optimization in a smart microgrid considering demand response constraints. Energies. 2020;13(17):4567
  13. Jian C, Wu Y, Li M. Energy efficient allocation of virtual machines in cloud computing environments based on demand forecast. In: Advances in Grid and Pervasive Computing: 7th International Conference, GPC 2012, Hong Kong, China, May 11-13, 2012. Proceedings. Berlin Heidelberg: Springer; 2012
  14. Song Y, Luedtke JR, Küçükyavuz S. Chance-constrained binary packing problems. INFORMS Journal on Computing. 2014;26(4):735-747
  15. Li X, Liu S, Wang J, Chen X, Ong YS, Tang K. Data-driven chance-constrained multiple-choice knapsack problem: Model, algorithms, and applications. 2023. arXiv preprint arXiv:2306.14690
  16. Küçükyavuz S, Jiang R. Chance-constrained optimization: A review of mixed-integer conic formulations and applications. 2021. arXiv preprint arXiv:2101.08746
  17. Li P, Arellano-Garcia H, Wozny G. Chance constrained programming approach to process optimization under uncertainty. Computers & Chemical Engineering. 2008;32(1-2):25-45
  18. Ouchi F. A Literature Review on the Use of Expert Opinion in Probabilistic Risk Analysis. Policy Research Working Paper 3201. World Bank; 2004. Available from: http://econ.worldbank.org/
  19. Ramler R, Felderer M. Experiences from an initial study on risk probability estimation based on expert opinion. In: 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 8th International Conference on Software Process and Product Measurement. IEEE; 2013. pp. 93-97
  20. Clemen RT, Winkler RL. Combining probability distributions from experts in risk analysis. Risk Analysis. 1999;19:187-203
  21. Sampath S, Gel ES, Kempf KG, Fowler JW. A generalized decision support framework for large-scale project portfolio decisions. Decision Sciences. 2022;53(6):1024-1047
  22. Savage S, Scholtes S, Zweidler D. Probability management. OR/MS Today. 2006;33(1):20-28
