Divide Timesteps Model

This part of the documentation explains the parts of the Divide Timesteps model that differ from the normal model.

Sets, Variables, Parameters and Expressions

  • support_timesteps: The support time steps are defined as a set in both the master and the sub problems, and give the time steps at which the original problem is split into sub problems.
  • e_co_stock_res defines the restrictions on the stock commodity for the sub problems.
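To illustrate the role of the support time steps, here is a small toy sketch (not urbs code; the function name is hypothetical) of how consecutive support steps partition the original time steps into sub problems that share their boundary steps:

```python
# Toy illustration (not urbs code): splitting an original timestep range
# into sub problems at the support time steps.
def split_at_support_steps(timesteps, support_steps):
    """Return one tuple of time steps per sub problem.

    Consecutive support steps become the boundaries of the sub problems,
    so each sub problem shares its first and last step with its neighbours.
    """
    pairs = zip(support_steps[:-1], support_steps[1:])
    return [tuple(t for t in timesteps if start <= t <= end)
            for start, end in pairs]

timesteps = list(range(0, 9))   # original problem: t = 0..8
support_steps = [0, 4, 8]       # hypothetical support time steps
sub_problems = split_at_support_steps(timesteps, support_steps)
print(sub_problems)
# [(0, 1, 2, 3, 4), (4, 5, 6, 7, 8)]
```

Note that neighbouring sub problems overlap in exactly one support step, which is why the master problem can enforce the coupling constraints at these steps.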

Rules

  • def_costs_rule:
    • The costs of the master problem are the investment and fixed costs for all capacity variables plus the sum of the costs of all sub problems, which are stored in FutureCosts.
    • The costs of the sub problem consist of the three cost types not accounted for in the master problem: variable costs, which vary with the usage of commodities; fuel costs, which depend on the use of stock commodities; and environmental costs, which depend on the use of taxed environmental commodities. Also compare with Cost Variables (note that not all cost variables of the master branch are supported yet).
  • res_storage_state_by_capacity_rule: This rule is the same as in the normal model, except that the constraints for the support steps in the sub problems are skipped, because they are also included in the master problem and the constraint is enforced there.
  • res_co2_generation_rule/res_global_co2_limit_rule: Ensure that the master problem and the sub problems, respectively, do not exceed the global CO2 limit.
  • sub_costs_rule: Ensures that the costs of a sub problem cannot exceed the cost restriction given by the master problem plus omega times Lambda.
  • sub_commodity_source: Enforces that a sub problem cannot use more of a stock commodity than allowed by the restriction e_co_stock_res plus the relaxing term omega times Lambda.
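The two relaxed sub-problem constraints above can be sketched as plain inequalities (hypothetical function and parameter names, not the actual urbs implementation); omega times Lambda is the slack that lets a sub problem violate the master problem's restriction:

```python
# Sketch of the two relaxed sub-problem constraints (hypothetical names,
# not the actual urbs implementation).
def sub_costs_rule(sub_costs, cost_restriction, omega, Lambda):
    # costs of the sub problem <= master restriction + omega * Lambda
    return sub_costs <= cost_restriction + omega * Lambda

def sub_commodity_source_rule(stock_use, e_co_stock_res, omega, Lambda):
    # stock commodity use <= master restriction + omega * Lambda
    return stock_use <= e_co_stock_res + omega * Lambda

# With Lambda = 0 the sub problem must meet the restriction exactly;
# a positive Lambda relaxes it.
print(sub_costs_rule(105.0, 100.0, omega=1.0, Lambda=0.0))   # False
print(sub_costs_rule(105.0, 100.0, omega=1.0, Lambda=10.0))  # True
```

Because each sub problem minimizes Lambda, the optimal Lambda measures by how much the master problem's restrictions must be relaxed for the sub problem to be feasible.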

Functions

Cut Generation

This section explains the function add_cut() in the Divide Timesteps Master in detail.

def add_cut(self, cut_generating_problem, readable_cuts=False):
    """
    Adds a cut to the master problem, which is generated by a sub problem

    Args:
        cut_generating_problem: sub problem which generates the cut
        readable_cuts:  scale cuts to make them easier to read (may cause numerical issues)
    """
    if cut_generating_problem.Lambda() < 0.000001:
        print('Cut skipped for subproblem ' + str(cut_generating_problem) + ' (Lambda = ' + str(
            cut_generating_problem.Lambda()) + ')')
        return

First, check whether Lambda is very close to zero. A Lambda of zero means that the sub problem does not violate any of the constraints passed to it by the master problem. This in turn means that the sub problem yields a feasible solution and does not add a new constraint to the master problem. In this case we do not add a cut and simply return.

# dual variables
multi_index = pd.MultiIndex.from_tuples([(t,) + sto
                                         for t in self.t
                                         for sto in self.sto_tuples],
                                        names=['t', 'sit', 'sto', 'com'])
dual_sto = pd.Series(0, index=multi_index)
dual_sto_help = get_entity(cut_generating_problem, 'res_initial_and_final_storage_state')
dual_sto = dual_sto.add(-abs(dual_sto_help.loc[[cut_generating_problem.ts[1]]]), fill_value=0)
dual_sto = dual_sto.add(abs(dual_sto_help.loc[[cut_generating_problem.ts[-1]]]), fill_value=0)

Next, we initialize the dual variables. For every constraint, the corresponding dual variable states how much the objective would change if the constraint were changed by one unit. Note that this means the duals are not really variables in the mathematical sense, but fixed numbers. The storage dual is made negative for the first time step of the cut generating (sub) problem, because increasing the storage available at the beginning would decrease the objective function. Similarly, the dual is made positive for the last time step of the cut generating problem, because increasing the storage that must be left at the end would increase the objective function.
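The interpretation of a dual as "change in objective per unit change in the constraint" can be seen on a toy linear program (this is a standalone illustration, not urbs code):

```python
# Toy illustration of what a dual variable measures (not urbs code):
# for the LP  min 2x + 3y  s.t.  x + y >= b,  x, y >= 0,
# the optimum is x = b, y = 0 with objective 2b, so tightening the
# constraint by one unit (b -> b + 1) raises the objective by 2.
def optimal_objective(b):
    # Solve the tiny LP by hand: put everything on the cheaper variable x.
    x, y = b, 0.0
    return 2 * x + 3 * y

dual = optimal_objective(5.0) - optimal_objective(4.0)
print(dual)  # 2.0 -- the dual of the constraint x + y >= b
```

In the cut generation code the duals are not computed by hand like this, but read from the solved sub problem with get_entity() and the Pyomo dual suffix.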

dual_pro = get_entity(cut_generating_problem, 'def_process_capacity')
dual_tra = get_entity(cut_generating_problem, 'def_transmission_capacity')
dual_sto_cap = get_entity(cut_generating_problem, 'def_storage_capacity')
dual_sto_capl = get_entity(cut_generating_problem, 'def_storage_capacity_l')
dual_sto_pow = get_entity(cut_generating_problem, 'def_storage_power')
dual_com_src = get_entity(cut_generating_problem, 'sub_commodity_source')
dual_env = get_entity(cut_generating_problem, 'res_global_co2_limit')

dual_zero = cut_generating_problem.dual[cut_generating_problem.sub_costs]
Lambda = cut_generating_problem.Lambda()

Next, we initialize all other dual variables. For every constraint there is exactly one dual. Note that one rule can describe more than one constraint, in which case the corresponding dual is actually a vector of duals. As an example, consider def_process_capacity: this rule defines one constraint per process, so dual_pro contains one dual variable for each of these constraints. In Divide Timesteps these are the capacity constraints, the commodity constraint (sub_commodity_source), the CO2 constraint (res_global_co2_limit) and the cost constraint. To generate the cut we also need the value of Lambda of the cut generating problem.

cut_expression = - 1 * (
    sum(dual_pro[pro] * self.cap_pro[pro] for pro in self.pro_tuples) +
    sum(dual_tra[tra] * self.cap_tra[tra] for tra in self.tra_tuples) +
    sum((dual_sto_cap[sto] - dual_sto_capl[sto]) * self.cap_sto_c[sto]
        for sto in self.sto_tuples) +
    sum(dual_sto_pow[sto] * self.cap_sto_p[sto] for sto in self.sto_tuples) +
    sum([dual_sto[(t,) + sto] * self.e_sto_con[(t,) + sto]
         for t in self.t
         for sto in self.sto_tuples]) +
    sum([dual_com_src[com] * self.e_co_stock[(cut_generating_problem.tm[-1],) + com]
         for com in self.com_tuples
         if com[1] in self.com_stock
         and not math.isinf(self.commodity.loc[com]['max'])]) +
    sum([dual_env[0] * self.e_co_stock[(cut_generating_problem.tm[-1],) + com]
         for com in self.com_tuples
         if com[1] in self.com_env]) +
    dual_zero * self.eta[cut_generating_problem.tm[-1]])

With the dual variables we can generate the cut expression: the cut expression is minus one times the sum of all dual variables multiplied by the corresponding variables of the master instance. This reflects that increasing one variable in the master instance (e.g. a process capacity cap_pro[pro]) changes the objective function of the sub problem by the corresponding dual (e.g. dual_pro[pro]). As increasing the capacity would decrease the objective function and decreasing it would increase the objective function, we have to multiply by minus one. The same holds for the constraints on commodities, CO2 and costs (allowing more commodities/CO2/costs decreases the objective function).
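A stripped-down numeric sketch of this assembly (hypothetical dual and variable values, and only two of the terms, to keep it short):

```python
# Numeric sketch of how the cut is assembled (hypothetical values, not
# taken from a real run).  cut_expression is -1 times the dual-weighted
# sum of the master variables; once evaluated at the current master
# solution it becomes a fixed number.
duals = {'cap_pro': -0.5, 'sub_commodity_source': -0.2}   # example duals
current = {'cap_pro': 40.0, 'e_co_stock': 100.0}          # current master values
Lambda = 3.0

def cut_expression(values):
    return -1 * (duals['cap_pro'] * values['cap_pro'] +
                 duals['sub_commodity_source'] * values['e_co_stock'])

rhs = Lambda + cut_expression(current)   # right-hand side of the cut
print(rhs)  # 43.0
# Any future master solution must satisfy cut_expression(values) >= rhs.
```

In the real code the left-hand side stays a symbolic Pyomo expression in the master variables, while the right-hand side is the fixed number obtained by evaluating it at the current solution and adding Lambda.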

# cut generation
if readable_cuts and dual_zero != 0:
    cut = 1 / (-dual_zero) * cut_expression >= 1 / (-dual_zero) * (Lambda + cut_expression())
else:
    cut = cut_expression >= Lambda + cut_expression()
self.Cut_Defn.add(cut)

The cut expression can be evaluated (with cut_expression()) for the current values of the master variables. We know that with the current values of the master variables the sub problem cannot be solved without violating at least one constraint by Lambda (because the sub problem minimizes Lambda). This implies that, for the sub problem to become feasible (i.e. Lambda (almost) zero), the cut expression has to be at least the evaluated cut expression plus Lambda in future iterations. This is the cut we add to the master problem.

If readable_cuts is True, we multiply both sides by one divided by minus dual_zero, which corresponds to scaling both sides with the negative of the dual of the sub problem costs. This gives a different representation of the cuts which is helpful for their mathematical interpretation. On the other hand, it can lead to numerical problems, because a multiplication with a very small or very large number can occur, so the feature is turned off by default.
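A quick sanity check of this scaling with toy numbers: dividing both sides of the cut by -dual_zero preserves the inequality as long as -dual_zero is positive, i.e. dual_zero is negative (the values below are made up for illustration):

```python
# Sanity check of the readable_cuts scaling (toy numbers): scaling both
# sides of the cut by 1 / -dual_zero does not change its feasible region
# as long as -dual_zero > 0.
dual_zero = -0.01           # hypothetical dual of the sub-problem costs
lhs, rhs = 50.0, 43.0       # hypothetical cut: lhs >= rhs

original = lhs >= rhs
scaled = (1 / -dual_zero) * lhs >= (1 / -dual_zero) * rhs
print(original, scaled)  # True True -- same feasible region
```

With a dual_zero close to zero, however, the scaling factor 1 / -dual_zero becomes very large, which is exactly the numerical issue the docstring warns about.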