
# Lagrangian Relaxation

Authors: Benjamin Qi, Alex Liang, Dong Liu

aka Aliens Trick

### Prerequisites


### Resources

- Mamnoon Siam
- Serbanology

## Lagrangian Relaxation

Lagrangian Relaxation involves transforming a constraint on a variable into a cost $\lambda$ and binary searching for the optimal $\lambda$.

Focus Problem – try your best to solve this problem before continuing!


The problem gives us an array of length $N$ ($1 \le N \le 3 \cdot 10^5$) containing integers in the range $[-10^9,10^9]$. Given some $K$ ($1 \le K \le N$), we are asked to choose at most $K$ disjoint subarrays so that the sum of the elements in the chosen subarrays is maximized.

### Intuition

The main bottleneck of any dynamic programming solution to this problem is having to store the number of subarrays we have created so far.

Let's try to find a way around this. Instead of storing the number of subarrays we have created so far, we assign a penalty of $\lambda$ for creating a new subarray (i.e. every time we create a subarray we penalize our sum by $\lambda$).

This leads us to the sub-problem of finding the maximal sum and number of subarrays used if creating a new subarray costs $\lambda$. We can solve this in $\mathcal{O}(N)$ time with dynamic programming.

Dynamic Programming Solution

Let $v$ be the maximal achievable sum with $\lambda$ penalty and $c$ be the number of subarrays used to achieve $v$. Then the maximal possible sum achievable if we use exactly $c$ subarrays is $v+\lambda c$. Note that we add $\lambda c$ to undo the penalty.
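One way this DP might be written is sketched below (the helper name `penalized_dp` is ours, not from the problem). It scans the array while tracking the best state outside and inside a subarray; comparing $\{\text{sum}, \text{count}\}$ pairs lexicographically with `std::max` maximizes the sum first and breaks ties toward more subarrays.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

/**
 * Returns {v, c} for penalty lam: v is the maximal total sum when
 * opening a subarray costs an extra lam, and c is the largest number
 * of subarrays that achieves v.
 */
pair<ll, ll> penalized_dp(const vector<ll> &a, ll lam) {
	pair<ll, ll> closed = {0, 0};            // best state outside any subarray
	pair<ll, ll> open = {LLONG_MIN / 2, 0};  // best state inside a subarray
	for (ll x : a) {
		// extend the running subarray, or open a new one (paying lam)
		pair<ll, ll> nxt_open = max(make_pair(open.first + x, open.second),
		                            make_pair(closed.first + x - lam, closed.second + 1));
		// stay outside, or close the running subarray here
		closed = max(closed, open);
		open = nxt_open;
	}
	return max(closed, open);
}
```

For example, on the array $[1, -2, 3]$ with $\lambda = 0$ this returns $\{4, 2\}$ (take $[1]$ and $[3]$), while with $\lambda = 3$ it returns $\{0, 1\}$ (the subarray $[3]$ exactly pays for its penalty).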

Our goal is to find some $\lambda$ such that $c=K$ (assuming $K$ is at most the number of positive elements). As we increase $\lambda$, it makes sense for $c$ to decrease since we are penalizing subarrays more. Thus, we can try to binary search for $\lambda$ to make $c=K$ and set our answer to be $v+\lambda c$ at the optimal $\lambda$.

This idea almost works but there are still some very important caveats and conditions that we have not considered.

### Geometry

Let $f(x)$ be the maximal sum if we use at most $x$ subarrays. We want to find $f(K)$.

The first condition is that $f(x)$ must be concave or convex. Since $f(x)$ is increasing in this problem, this means that we need $f(x)$ to be concave: $f(x) - f(x - 1) \ge f(x + 1) - f(x)$. In other words, the more subarrays we add, the less each additional subarray increases the sum. We can intuitively see that this is true.
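To check the concavity claim on small cases, we can compute $f(x)$ directly with a slower $\mathcal{O}(NK)$ DP that tracks the number of subarrays explicitly and verify that consecutive differences are non-increasing. A sketch (the name `brute_f` is illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// f(x) computed directly: maximal sum using at most x disjoint
// subarrays, via an O(N * K) DP over the exact subarray count.
vector<ll> brute_f(const vector<ll> &a, int maxk) {
	const ll NEG = LLONG_MIN / 2;
	// closed[j] / open[j]: best sum using exactly j subarrays while
	// currently outside / inside a subarray
	vector<ll> closed(maxk + 1, NEG), open(maxk + 1, NEG);
	closed[0] = 0;
	for (ll x : a) {
		for (int j = maxk; j >= 1; j--) {
			// extend the running subarray, or open a new one here
			open[j] = max(open[j] + x, closed[j - 1] + x);
		}
		for (int j = 0; j <= maxk; j++) {
			// optionally close the running subarray
			closed[j] = max(closed[j], open[j]);
		}
	}
	// "at most x" = prefix maximum over exact counts
	vector<ll> f(maxk + 1);
	f[0] = 0;
	for (int j = 1; j <= maxk; j++) { f[j] = max(f[j - 1], closed[j]); }
	return f;
}
```

On $[1, -2, 3]$ this gives $f = [0, 3, 4, 4]$, whose consecutive differences $3, 1, 0$ are indeed non-increasing.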

Proof that our function is concave

Consider the following graphs of $f(x)$ and $f(x)-\lambda x$. In this example, we have $\lambda=5$.

Here is where the fact that $f(x)$ is concave comes in. Because the slope is non-increasing, we know that $f(x) - \lambda x$ will first increase, then stay the same, and finally decrease.

Let $v(\lambda)$ be the optimal maximal achievable sum with $\lambda$ penalty and $c(\lambda)$ be the number of subarrays used to achieve $v(\lambda)$ (note that if there are multiple such possibilities, we set $c(\lambda)$ to be the maximal number of subarrays to achieve $v(\lambda)$). These values can be calculated in $\mathcal{O}(N)$ time using the dynamic programming approach described above.

When we assign the penalty of $\lambda$, we are trying to find the maximal sum if creating a subarray reduces our sum by $\lambda$. So $v(\lambda)$ will be the maximum of $f(x) - \lambda x$, and $c(\lambda)$ will equal the rightmost $x$ that maximizes $f(x) - \lambda x$.

Given the shape of $f(x) - \lambda x$, we know that $f(x) - \lambda x$ will be maximized at all points where $\lambda$ is equal to the slope of $f(x)$ (these points are red in the graph above). If there are no such points, it will be maximized at the rightmost point where the slope is greater than $\lambda$. So this means that $c(\lambda)$ will be the rightmost $x$ at which the slope of $f(x)$ is still greater than or equal to $\lambda$.

Now we know exactly what $\lambda$ represents: $\lambda$ is a slope, and $c(\lambda)$ is the rightmost $x$ at which the slope of $f(x)$ is still greater than or equal to $\lambda$.

We binary search for $\lambda$ and find the highest $\lambda$ such that $c(\lambda) \ge K$. Let the optimal value be $\lambda_{\texttt{opt}}$. Then our answer is $v(\lambda_{\texttt{opt}}) + \lambda_{\texttt{opt}} K$. Note that this works even if $c(\lambda_{\texttt{opt}}) \neq K$ since $c(\lambda_{\texttt{opt}})$ and $K$ will be on the same line with slope $\lambda_{\texttt{opt}}$ in that case.

Because calculating $v(\lambda)$ and $c(\lambda)$ with the dynamic programming solution described above will take $\mathcal{O}(N)$ time, this solution runs in $\mathcal{O}(N\log{\sum A[i]})$ time.

```cpp
#include <bits/stdc++.h>
using namespace std;
#define ll long long

int main() {
	int n, k;
	cin >> n >> k;
	vector<ll> a(n);
	for (ll &x : a) { cin >> x; }
	// (the rest of the solution is truncated here)
}
```
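Putting the pieces together, a self-contained sketch of the approach described above might look as follows (function names are illustrative, not the official solution). It binary searches for the largest $\lambda$ with $c(\lambda) \ge K$ and returns $v(\lambda) + \lambda K$; since $f$ is integer-valued and concave, every slope is an integer, so an integer binary search suffices.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// {v, c} for penalty lam: maximal penalized sum, and the largest
// number of subarrays that achieves it (std::max on {sum, count}
// pairs breaks ties toward more subarrays)
pair<ll, ll> solve_lambda(const vector<ll> &a, ll lam) {
	pair<ll, ll> closed = {0, 0};            // outside any subarray
	pair<ll, ll> open = {LLONG_MIN / 2, 0};  // inside a subarray
	for (ll x : a) {
		pair<ll, ll> nxt_open = max(make_pair(open.first + x, open.second),
		                            make_pair(closed.first + x - lam, closed.second + 1));
		closed = max(closed, open);
		open = nxt_open;
	}
	return max(closed, open);
}

// maximal sum of at most k disjoint subarrays
ll max_k_subarrays(const vector<ll> &a, ll k) {
	// every slope of f lies in [0, sum of positive elements]
	ll lo = 0, hi = 0;
	for (ll x : a) {
		if (x > 0) { hi += x; }
	}
	// find the largest lambda with c(lambda) >= k
	while (lo < hi) {
		ll mid = lo + (hi - lo + 1) / 2;
		if (solve_lambda(a, mid).second >= k) {
			lo = mid;
		} else {
			hi = mid - 1;
		}
	}
	return solve_lambda(a, lo).first + lo * k;
}
```

Note that when $K$ exceeds the number of useful subarrays, the search settles on $\lambda = 0$ and the answer formula still returns $f(K)$, since $f$ is flat there.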

## Problems

| Source | Difficulty |
| --- | --- |
| Platinum | Easy |
| CF | Normal |
| CF | Normal |
| FHC | Normal |
| Kattis | Normal |
| Balkan OI | Normal |
| IOI | Hard |

### Join the USACO Forum!

Stuck on a problem, or don't understand a module? Join the USACO Forum and get help from other competitive programmers!