Thursday, November 6, 2014

SquareCovering in SRM 474

At first, I found that there is an optimal solution in which each square's left and upper edges both touch a red point. Otherwise, we could simply shift such a square right and down until they do, without uncovering any point.

This led to a straightforward bit-mask DP. Suppose we sort the points by x-coordinate, breaking ties by y-coordinate. Then for each mask (a 1 bit means the corresponding point is already covered), we find the lowest 0 bit (the leftmost uncovered point, whose x-coordinate fixes the left edge of the square), iterate over the points, treating those to its upper right as candidate upper edges, iterate over all possible squares, and compute the new mask for the DP transition. This gives O(2^n * n^2 * m), which barely passed the time limit with some careful pruning (skipping points that are already covered when iterating over candidate upper edges).
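Below is a rough sketch of that DP. The input layout (coordinates px/py, m available squares with sides side[k] and prices cost[k]) and the helper coveredBy() are my own assumptions for illustration, not the actual SRM interface.

#include <vector>
#include <algorithm>
using namespace std;

int n, m;                                       // number of points / of available squares
vector<long long> px, py, side, cost;           // hypothetical input layout
vector<long long> memo;                         // memo.assign(1 << n, -1) before calling go(0)

// Mask of points inside the square of side side[k] whose left edge has the
// x-coordinate of point p and whose upper edge has the y-coordinate of point q.
int coveredBy(int p, int q, int k) {
    int cov = 0;
    for (int i = 0; i < n; ++i)
        if (px[i] >= px[p] && px[i] <= px[p] + side[k] &&
            py[i] <= py[q] && py[i] >= py[q] - side[k])
            cov |= 1 << i;
    return cov;
}

long long go(int mask) {                        // mask = points already covered
    if (mask == (1 << n) - 1) return 0;
    long long &res = memo[mask];
    if (res != -1) return res;
    res = (long long)1e18;
    int p = 0;
    while (mask >> p & 1) ++p;                  // lowest 0 bit: leftmost uncovered point
    for (int q = 0; q < n; ++q) {
        if (mask >> q & 1) continue;            // pruning: skip covered points as upper edges
        for (int k = 0; k < m; ++k) {
            int cov = coveredBy(p, q, k);
            if (cov >> p & 1)                   // the square must actually cover point p
                res = min(res, cost[k] + go(mask | cov));
        }
    }
    return res;
}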

As I looked at others' solutions in the practice room, I found a clever technique to optimize. Instead of explicitly determining the positions of the squares, we first compute all the masks that can be covered by exactly one square. Then, using the technique for iterating through subsets, we set dp[mask] = min(dp[i] + dp[mask ^ i]) over all submasks i of mask (those with (mask | i) == mask). Inductively, if two subsets can each be covered by some number of squares, then so can their union. This results in O(3^n).
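A sketch of this combination step is below. Here one[mask] is assumed to already hold the minimum cost of a single square that covers (at least) every point in mask, or INF if no single square can; computing it is the problem-specific part and is omitted.

#include <vector>
#include <algorithm>
using namespace std;

long long combine(int n, const vector<long long>& one) {
    const long long INF = (long long)1e18;
    vector<long long> dp(1 << n, INF);
    dp[0] = 0;
    for (int mask = 1; mask < (1 << n); ++mask) {
        dp[mask] = one[mask];                               // one square for the whole subset
        // i runs over the non-empty proper submasks of mask, i.e. (mask | i) == mask
        for (int i = (mask - 1) & mask; i != 0; i = (i - 1) & mask)
            if (dp[i] < INF && dp[mask ^ i] < INF)
                dp[mask] = min(dp[mask], dp[i] + dp[mask ^ i]);
    }
    return dp[(1 << n) - 1];                                // all n points covered
}

Since submasks are numerically smaller than mask, processing masks in increasing order guarantees that dp[i] and dp[mask ^ i] are already final when they are used.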

Tuesday, August 19, 2014

Optimal Proof - Nisoku in SRM 463

I'll prove a subtle part of the problem:

Given a sorted array of even length $latex \{a_1, a_2, \ldots, a_{2n}\}$, we need to pair up these elements and multiply the sums of the pairs together. The claim is that the largest value is $latex (a_1 + a_{2n}) \times (a_2 + a_{2n-1}) \cdots$.

Let's consider an easier case. Suppose we have four numbers $latex a<b<c<d$. Then $latex (a+d)(b+c)>(a+b)(c+d)$ and $latex (a+d)(b+c)>(a+c)(b+d)$; both follow by expanding, since the differences factor as $latex (d-b)(c-a)$ and $latex (d-c)(b-a)$ respectively. How can we generalize this property?

Suppose $latex a_1$ is paired with some $latex a_i$ and $latex a_{2n}$ with some $latex a_j$, where $latex i \ne 2n$. A single exchange, re-pairing them as $latex (a_1, a_{2n})$ and $latex (a_i, a_j)$, gives a solution that is at least as good. After that, the pair containing the smallest and largest elements stays fixed, and we can apply induction to the remaining inner elements.
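To spell out why the exchange cannot hurt (all other pair sums are untouched by it):

$latex \displaystyle (a_1+a_{2n})(a_i+a_j) - (a_1+a_i)(a_{2n}+a_j) = (a_{2n}-a_i)(a_j-a_1) \ge 0, $

since $latex a_1$ is the smallest element and $latex a_{2n}$ is the largest.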

Proving pairwise exchange inequalities like these is often the key to showing that an arrangement of a sequence is optimal.

Monday, February 11, 2013

Iterate through the Subsets of Subsets

Bit-mask DP

The main loop:

for (int mask = 0; mask < (1 << n); ++mask)
    go(mask);    // process every mask

Inside the memoization routine, enumerate every non-empty submask i of mask:

for (int i = mask; i != 0; i = (i - 1) & mask)
    // the state transition, e.g. dp[mask] = min(dp[mask], dp[i] + dp[mask ^ i]);


Why does this enumerate exactly the submasks of mask? Consider any bit that is 1 in mask. While that bit is 1 in i, the loop runs through all combinations of the 1 bits of mask below it; once those are exhausted (all 0), subtracting 1 clears this bit and sets all lower bits, and the bit-wise AND with mask restores exactly the lower 1 bits of mask. Then all combinations of those lower bits are enumerated again, this time with the bit fixed at 0. Note that the loop stops just before reaching 0, so the empty submask is never visited.
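A minimal, self-contained demonstration of the enumeration (the mask value is arbitrary; in a real DP the printf would be replaced by the transition):

#include <cstdio>

int main() {
    int mask = 11;                               // binary 1011
    // every non-empty submask of mask, in decreasing numeric order
    for (int i = mask; i != 0; i = (i - 1) & mask)
        printf("%d\n", i);                       // prints 11 10 9 8 3 2 1
    return 0;
}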

Complexity

The overall time complexity measured in the number of transitions is $latex \sum_{i=0}^{n}(\binom{n}{i} \sum_{j=0}^{i}\binom{i}{j}) - 2^n = 3^n - 2^n$: each of the $latex \binom{n}{i}$ masks with $latex i$ set bits contributes $latex 2^i$ submasks, minus the empty submask that the loop skips, once per mask.

Friday, March 9, 2012

SRM536 Div1 Level2 500 Problem

The solution is ready for download at https://sites.google.com/site/boyouepii/document/SRM-536-Div1-Medium.pdf?attredirects=0&d=1.
I'm practicing $latex \textrm{\LaTeX}$ these days and this solution happens to be quite mathematical.

Monday, February 20, 2012

Enable LaTeX in Blogger

The method is as follows:

1) Register on Blogger: http://www.blogger.com
2) Create a site on Google sites for keeping your files: https://sites.google.com
(Take a look at mine: https://sites.google.com/site/boyouepii/)
3) Create a Picasa album for including photos in your blog. (Optional)

You can do all of these steps with the same Google Account.

Now if you want to add $latex \textrm{\LaTeX} $ functionality then:

Download this file: https://sites.google.com/site/boyouepii/mathtex.js?attredirects=0&d=1

And, if you want, rename it to, for example, mylatex.js.

Upload this file to your Google sites account and copy the link URL.
Then, inside Blogger, in the configuration:

Design>Page Elements>Add a Gadget>HTML/JavaScript

Create a Gadget named, for example, LaTeX and include inside it this HTML Code:

<script src="here include the link to your mylatex.js file" type="text/javascript">
</script>
<script type="text/javascript">
replaceMath( document.body );
</script>


And that's everything. Now you can try to write a post including (without line breaks):



$latex \displaystyle \lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k^2} = \frac{\pi^2}{6} $

Thursday, November 17, 2011

One Possible Implementation of the Generic Push-Relabel Algorithm

The book says that we can select an applicable operation in O(1) time, and perform a relabel operation in O(V) time and a push operation in O(1) time.

First note that all operations are performed on overflowing vertices other than s or t. We may store all these vertices in a data structure; when that structure becomes empty, the algorithm terminates. While it is nonempty, pick an arbitrary vertex u from it. If there is some admissible edge (u,v) (that is, (u,v) is in the residual network and h[u]=h[v]+1), a push operation is applicable; otherwise, a relabel operation is applicable.

So we may wish to maintain a list of admissible edges for every vertex. Each time, a push is applicable if the list for u is nonempty, and a relabel otherwise. After a push, the only possible change to these lists is that (u,v) has to be removed from u's list after a saturating push. If we associate with every edge a pointer to its node in these lists, the removal can be done in O(1) time. Each edge (u,v) can appear in these lists at most once, since h[u]=h[v]+1 and h[v]=h[u]+1 cannot hold at the same time.

But after a relabel operation on u, some edges (u,v) may have to be inserted into u's list, and all edges (v,u) appearing in other vertices' lists have to be removed. The insertion is simple, but removing the edges ending at u would require checking every other vertex's list, resulting in O(E) time. The solution is to use orthogonal lists: intuitively, the admissible edges leaving u are threaded on a horizontal line, while those entering u are threaded on a vertical line. Now the insertions and removals after a relabel take time proportional to the out-degree and in-degree of vertex u. A single operation of either kind becomes slightly more complicated, but still requires only O(1) time.
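A sketch of what one node of such an orthogonal structure might look like (the names are mine, not the book's): each admissible edge (u,v) is linked into two doubly-linked lists at once, the list of admissible edges leaving u and the list of admissible edges entering v, so unlinking it from both takes O(1).

// One node per admissible edge (u, v).
struct AdmissibleEdge {
    int u, v;
    AdmissibleEdge *prevOut, *nextOut;   // neighbours in u's list of admissible out-edges
    AdmissibleEdge *prevIn,  *nextIn;    // neighbours in v's list of admissible in-edges
};

// O(1) removal: unlink the node from both lists. outHead[u] and inHead[v] are the
// per-vertex list heads (assumed to exist).
void removeEdge(AdmissibleEdge *e, AdmissibleEdge *outHead[], AdmissibleEdge *inHead[]) {
    if (e->prevOut) e->prevOut->nextOut = e->nextOut; else outHead[e->u] = e->nextOut;
    if (e->nextOut) e->nextOut->prevOut = e->prevOut;
    if (e->prevIn)  e->prevIn->nextIn   = e->nextIn;  else inHead[e->v] = e->nextIn;
    if (e->nextIn)  e->nextIn->prevIn   = e->prevIn;
}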

Finally, a simple list suffices for the data structure mentioned in the second paragraph. Each time, pick the vertex at the end of that list. A relabel doesn't change the excess of any vertex, so no update is necessary. After a push from u to v, remove u if it is no longer overflowing, and insert v if its excess was previously 0.
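As a sanity check of this control flow, here is a simplified, self-contained sketch of the generic push-relabel algorithm (not the O(1)-selection version described above: it keeps the simple list of overflowing vertices, but just scans all residual edges of u to find an admissible one, and it uses a dense capacity matrix for brevity):

#include <vector>
#include <algorithm>
using namespace std;

long long maxFlow(vector<vector<long long>> cap, int s, int t) {
    int n = cap.size();
    vector<vector<long long>> f(n, vector<long long>(n, 0));
    vector<long long> excess(n, 0);
    vector<int> h(n, 0), active;                 // active = overflowing vertices other than s, t
    h[s] = n;
    for (int v = 0; v < n; ++v)                  // initialization: saturate every edge leaving s
        if (v != s && cap[s][v] > 0) {
            f[s][v] = cap[s][v]; f[v][s] = -cap[s][v];
            excess[v] += cap[s][v]; excess[s] -= cap[s][v];
            if (v != t) active.push_back(v);
        }
    while (!active.empty()) {
        int u = active.back();                   // pick any overflowing vertex
        int v = -1;                              // look for an admissible edge (u, v)
        for (int w = 0; w < n && v == -1; ++w)
            if (cap[u][w] - f[u][w] > 0 && h[u] == h[w] + 1) v = w;
        if (v == -1) {                           // relabel: 1 + minimum height of a residual neighbour
            int newH = 2 * n;
            for (int w = 0; w < n; ++w)
                if (cap[u][w] - f[u][w] > 0) newH = min(newH, h[w] + 1);
            h[u] = newH;
        } else {                                 // push as much excess as the residual capacity allows
            long long d = min(excess[u], cap[u][v] - f[u][v]);
            f[u][v] += d; f[v][u] -= d;
            excess[u] -= d; excess[v] += d;
            if (excess[u] == 0) active.pop_back();                       // u was at the back of the list
            if (v != s && v != t && excess[v] == d) active.push_back(v); // v just became overflowing
        }
    }
    return excess[t];                            // when no vertex overflows, the preflow is a maximum flow
}

Replacing the inner scan over w with a lookup in u's admissible-edge list, maintained as described, is what brings the operation selection down to O(1).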

Now, the generic push-relabel algorithm is implemented with a total running time of O(V^2 E).

The Initialization of the Generic Push-Relabel Algorithm

When we apply the generic push-relabel algorithm, we first label s with |V| and t with 0, and push as many units of flow as possible into every edge leaving s, i.e. we saturate them. But why these operations are necessary and reasonable is not stated explicitly.

What if we labeled s and t with different values? The difference h[s]-h[t] must be at least |V|, for otherwise the property that there is no path from s to t in the residual network would not be guaranteed to hold, and when the preflow finally becomes a flow, it would not necessarily be a maximum flow, since augmenting paths might remain. If s and t are labeled with values larger than |V| and 0, the bounds on the number of relabel operations, saturating pushes, and hence non-saturating pushes all become larger. Intuitively, because h[s] and h[t] are higher, there must be redundant relabel operations before the extra flow is pushed on to t or back to s, and during that process extra push operations between vertices other than s and t occur.

Suppose that for some neighbor u of s, we initially push fewer units of flow into edge (s,u) than c(s,u). Then (s,u) remains in the residual network, but h[s] = |V| >= 2 > h[u] + 1 = 0 + 1 = 1, i.e. h is no longer a valid height function.