Algo & Math, by <a href="http://www.blogger.com/profile/16926620632272616369">youbo</a><br /><br /><b>SquareCovering in SRM 474</b> (2014-11-06)<br /><br />At first, I found that there is an optimal solution in which each square's left and upper edges touch a red point; otherwise we could simply shift those squares.<br /><br />This led to a straightforward bit-mask DP. Suppose we sort the points by x and then by y. For each mask (a 1 bit marks a point that is already covered), we find the lowest 0 bit (the leftmost uncovered point, which fixes the left edge of the square), iterate over the points to its upper right as candidate upper edges, iterate over all possible squares, and compute the new mask for the DP transition. This runs in O((2^n)*(n^2)*m), which barely passed the time limit with some careful pruning (skip points that are already covered when iterating).<br /><br />Looking at others' solutions in the practice room, I found a clever optimization. Instead of explicitly determining the positions of the squares, we can first compute all the masks coverable by exactly one square. Then, using the technique to <a href="http://youbo2010.blogspot.com/2013/02/iterate-through-subsets-of-subsets.html">iterate through subsets</a>, we set dp[mask] to the minimum of dp[i] + dp[mask ^ i] over all submasks i of mask (i.e. all i with (mask | i) == mask). Inductively, if two subsets can each be covered by some number of squares, so can their union. 
This results in O(3^n) time.<br /><br /><b>Optimal Proof - Nisoku in SRM 463</b> (2014-08-19)<br /><br />I'll prove a subtle part of the problem:<br /><br /><blockquote class="tr_bq">Given a sorted array of even length $latex \{a_1, a_2, \ldots, a_{2n}\}$, we need to pair the elements and multiply the sums of the pairs. The largest value is $latex (a_1 + a_{2n}) \times (a_2 + a_{2n-1}) \cdots$.</blockquote><br />Let's consider an easier case first. Suppose we have 4 numbers $latex a<b<c<d$. It is easy to verify that $latex (a+d)(b+c)>(a+b)(c+d)$ and $latex (a+d)(b+c)>(a+c)(b+d)$: expand both sides, and the differences factor as $latex (d-b)(c-a)$ and $latex (d-c)(b-a)$ respectively. How can we generalize this property?<br /><br />Suppose $latex a_1$ is paired with some $latex a_i$ and $latex a_{2n}$ is paired with some $latex a_j$. By a single exchange of partners, pairing $latex a_1$ with $latex a_{2n}$ and $latex a_i$ with $latex a_j$, we get a solution that is at least as good; this is exactly the four-element inequality above. After the exchange, the pair containing the smallest and largest elements is fixed, and we can apply induction to the inner elements.<br /><br />Proving a pairwise exchange inequality and then inducting is often exactly what sequence-optimality proofs need. <b>Iterate through the Subsets of Subsets</b> (2013-02-11)<br /><br /><h2>Bit-mask DP</h2>The main loop:<br /><blockquote class="tr_bq">for (int mask = 0; mask < (1 << n); ++mask)</blockquote><blockquote class="tr_bq"><blockquote class="tr_bq">go(mask);</blockquote></blockquote>In the memoization process:<br /><blockquote class="tr_bq">for (int i = mask; i != 0; i = (i - 1) & mask)</blockquote><blockquote class="tr_bq"><blockquote class="tr_bq">// the state transition</blockquote></blockquote><br /><br />For any bit that is 1 in <i>mask</i>, all combinations of the lower 1 bits are enumerated first, ending in all 0s. 
Then that bit becomes 0, and the initial lower 1 bits are restored by the bit-wise AND. The combinations of those bits are enumerated again, ending in all 0s, while this bit stays 0.<br /><br /><h2>Complexity</h2>Measured in the number of transitions, the overall time complexity is $latex \sum_{i=0}^{n}(\binom{n}{i} \sum_{j=0}^{i}\binom{i}{j}) - 2^n = 3^n - 2^n$.<br /><br /><b>SRM536 Div1 Level2 500 Problem</b> (2012-03-09)<br /><br />The solution is available for download at <a href="https://sites.google.com/site/boyouepii/document/SRM-536-Div1-Medium.pdf?attredirects=0&d=1">https://sites.google.com/site/boyouepii/document/SRM-536-Div1-Medium.pdf?attredirects=0&d=1</a>.<br />I'm practicing $latex \textrm{\LaTeX}$ these days, and this solution happens to be quite mathematical.<br /><br /><b>Enable LaTeX in Blogger</b> (2012-02-20)<br /><br />The method is as follows:<br /><br />1) Register on Blogger: <a href="http://www.blogger.com/">http://www.blogger.com</a><br />2) Create a site on Google Sites for keeping your files: <a href="https://sites.google.com/">https://sites.google.com</a><br />(Take a look at mine: <a href="https://sites.google.com/site/boyouepii/">https://sites.google.com/site/boyouepii/</a>)<br />3) Create a Picasa album for including photos inside your blog. (Optional)<br /><br />You can do all of these steps with the same Google account.<br /><br />Now, if you want to add $latex \textrm{\LaTeX} $ functionality:<br /><br />Download this file: <a 
href="https://sites.google.com/site/boyouepii/mathtex.js?attredirects=0&d=1">https://sites.google.com/site/boyouepii/mathtex.js?attredirects=0&d=1</a><br /><br />If you want, rename it, for example to mylatex.js.<br /><br />Upload this file to your Google Sites account and copy the link URL.<br />Then inside Blogger, in the configuration:<br /><br />Design>Page Elements>Add a Gadget>HTML/JavaScript<br /><br />Create a gadget named, for example, LaTeX and put this HTML code inside it:<br /><br /><blockquote class="tr_bq" style="font-family: inherit;"><script src="here include the link to your mylatex.js file" type="text/javascript"><br /></script><br /><script type="text/javascript"><br />replaceMath( document.body );<br /></script></blockquote><br /><br />And that's everything. Now you can try to write a post including (without line breaks):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-iNrFcwS88WE/T0ISHygMi0I/AAAAAAAAAwk/NdWoLkVsa5M/s1600/sample-code.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-iNrFcwS88WE/T0ISHygMi0I/AAAAAAAAAwk/NdWoLkVsa5M/s1600/sample-code.png" /></a></div><br /><br />$latex \displaystyle \lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{k^2} = \frac{\pi^2}{6} $<br /><br /><b>One Possible Implementation of the Generic Push-Relabel Algorithm</b> (2011-11-17)<br /><br />The book says that we can select an applicable operation in O(1) time, and that a relabel and a push operation take O(V) and O(1) time respectively.<br /><br />First note that all operations are performed on overflowing vertices other than s or t. 
We may store all these vertices in a data structure; when that structure becomes empty, the algorithm terminates. While it is nonempty, pick an arbitrary vertex u from it. If there is some <i>admissible</i> edge (u,v) (that is, (u,v) is in the residual network and h[u] = h[v] + 1), a push operation is applicable; otherwise, a relabel operation is applicable.<br /><br />So we may wish to maintain a list of admissible edges for every vertex. At each step, a push or a relabel is applicable depending on whether the corresponding list is nonempty or empty. After a push, the only possible effect on these lists is that (u, v) has to be removed from u's list after a saturating push. If we simply associate with every edge a pointer to its node in these lists, the removal can be done in O(1) time. Each edge (u,v) may appear in these lists at most once, since h[u] = h[v] + 1 and h[v] = h[u] + 1 cannot hold at the same time.<br /><br />But after a relabel operation on u, some edges (u,v) have to be inserted into u's list, and all edges (v,u) appearing in other lists have to be removed. The insertion is simple, but we would have to scan every other vertex's list to remove the edges ending at u, resulting in O(E) time. The solution is to use <i>orthogonal lists</i>: intuitively, all admissible edges leaving u lie on a horizontal line, while those entering u lie on a vertical line. Now the insertions and removals triggered by a relabel take time proportional to the out-degree and in-degree of u. A single insertion or removal is a little more complicated, but still takes only O(1) time.<br /><br />Finally, a simple list suffices for the data structure mentioned in the second paragraph. Each time, pick the vertex at the end of that list. A relabel doesn't change the excess of any vertex, so no updates are necessary. 
After a push from u to v, remove u if it is no longer overflowing, and insert v if its excess was previously 0 (and v is neither s nor t).<br /><br />With this bookkeeping, the generic push-relabel algorithm runs in O(V<sup>2</sup>E) total time.<br /><br /><b>The Initialization of the Generic Push-Relabel Algorithm</b> (2011-11-17)<br /><br />When we apply the generic push-relabel algorithm, we first label s with |V| and t with 0, and push as many units of flow as possible into every edge leaving s, i.e. saturate them. But why these operations are necessary and reasonable is not stated explicitly.<br /><br />What if we labeled s and t with different values? The difference h[s] - h[t] must be at least |V|, for otherwise we could not conclude that there is no path from s to t in the residual network: such a path has at most |V| - 1 edges, and a valid height function drops by at most 1 along each residual edge. And if the preflow becomes a flow while an augmenting path still exists, it is not a maximum flow. If s and t are labeled with values greater than |V| and 0, the bounds on the number of relabels and saturating pushes, and hence on non-saturating pushes, become higher. Intuitively, since h[s] and h[t] are greater, there must be redundant relabel operations before the extra flow is pushed on to t or back to s, and during that process extra pushes between vertices other than s and t occur.<br /><br />Suppose that for some neighbor u of s we initially pushed less flow into edge (s,u) than c(s,u). Then (s,u) is in the residual network, but h[s] = |V| >= 2 > h[u] + 1 = 0 + 1 = 1, i.e. 
h is no longer a valid height function.<br /><br /><b>SRM522 Div1 Level1 250 Problem</b> (2011-10-26)<br /><br />Given a board of n (n >= 2) cells, each marked with either an 'A' or a 'B', Alice and Bob alternately place coins on a contiguous block of empty cells, with the restriction that at least one cell must be left empty. When only one empty cell is left, Alice wins if that cell is marked 'A' and Bob wins otherwise.<br /><br />First of all, if at least one end is marked 'A', then Alice wins. Without loss of generality, suppose the board is of the form "A??...?" ('?' means an empty cell marked 'A' or 'B' arbitrarily). Alice can produce "ACC...C" ('C' means covered by a coin) in one move and win.<br /><br />Then what if the board is of the form "B??...?B"?<br />If Alice covers either end with a coin, again without loss of generality, the board becomes "CC...CB" or "CC...C??..?B". Bob wins immediately in the former case; in the latter, he wins by covering all empty cells except the rightmost 'B'. So Alice has to produce "B??...?CC...C??...?B" if she wants to win.<br />Now consider a simpler situation, "B??...?CC...CB". <b>(I)</b> If it is Bob's turn, he only needs to cover all the empty cells on the left to win. <b>(II)</b> If it is Alice's turn, she has up to four kinds of moves. If she covers the rightmost 'B' or either end of the empty block on the left, Bob obviously wins. So again she has to make the board of the form "B??...?CC...C??...?CC...CB". If Bob then covers all empty cells of the second interval, we reach the same configuration with fewer empty cells. 
So we can prove by <b><i>induction</i></b> that no matter whose turn it is, Bob wins whenever the board is of the form "B??...?CC...CB".<br />Now return to "B??...?CC...C??...?B". If the second interval is empty, it corresponds to (I). Otherwise, Bob can cover all empty cells on the right except the rightmost 'B', and it corresponds to (II). Either way, Bob wins.<br /><br />So all we have to do is check the two cells at the ends.<br /><br />I coded a bit-mask DP during the contest. Finding patterns and proving them is still a hard task for me; I need more practice.<br /><br /><b>Why recursive languages are NOT closed under homomorphism</b> (2011-04-27)<br /><br />Refer to this <a href="http://infolab.stanford.edu/%7Eullman/ialc/spr10/slides/tm2.pdf">slide</a>.<br /><br />First let's consider a much simpler problem, <i>inverse homomorphism</i>, in which w is in L' if and only if h(w) is in L. We can just apply h to w and run the TM accepting L on input h(w). So L' is recursively enumerable (resp. recursive) if L is recursively enumerable (resp. recursive).<br /><br />Now we'll attack <i>homomorphism</i>, in which w is in L' if and only if for some x, both h(x) = w and x is in L.<br /><br />For recursively enumerable languages, the slide says that we can design an NTM M that takes input w, guesses an x such that h(x) = w, and simulates M<sub>1</sub> on x (where L = L(M<sub>1</sub>)). I will give a detailed sketch of how M behaves. 
M:<br />0) sets l to 1;<br />1) guesses one x of length l with h(x) = w;<br />2) if there is such an x, invokes M<sub>1</sub> on it; otherwise goes to 4);<br />3) if M<sub>1</sub> accepts, M accepts; otherwise goes to 4);<br />4) increases l by 1 and goes back to 1).<br />So if L is recursively enumerable, so is L'.<br />And of course we can use a DTM instead, generating each x in a systematic way and dovetailing the simulations of M<sub>1</sub> so that one non-halting simulation does not block the others.<br /><br />Now what about recursive languages? Even though L is recursive, the above process may not halt, since there may be infinitely many x'es such that h(x) = w but none of them is in L.<br /><br />Finally, I will give an example in which for every w there are infinitely many x'es with h(x) = w. Consider the function h: N*N->N with h(a,b) = a. Then {x | h(x) = w} = {(w,b) | b in N}, which is infinite for every w in N. And N*N is countable, as is explained in this <a href="http://infolab.stanford.edu/%7Eullman/ialc/spr10/slides/tm1.pdf">slide</a>.<br /><br />P.S. Since every TM (either deterministic or nondeterministic) has an encoding, it is possible to "design" different NTMs (as subroutines) for different input strings, as with the "break" operation in the discussion of concatenation. Even though the potential break positions are infinite in general, for any specific string they are finite.<br /><br />P.S. <a href="http://infolab.stanford.edu/%7Eullman/ialcsols/sol9.html">This page</a> gives a concrete example for exercise 9.2.6(e).<br /><br /><b>SRM502 Div1 Level2 500 Problem</b> (2011-04-09)<br /><br />We define the relation problem[i] < problem[j] if pointsPerMinute[i]*requiredTime[j] > pointsPerMinute[j]*requiredTime[i]. 
This is a <a href="http://en.wikipedia.org/wiki/Total_order">total order</a>, as is explained in this <a href="http://apps.topcoder.com/forums/?module=Thread&threadID=704043&start=0">thread</a>.<br /><br />If two adjacent problems, say i and i+1, satisfy problem[i+1] < problem[i], then by swapping them we get a better solution: the swap doesn't affect the points of any other problem, increases problem[i+1]'s score by pointsPerMinute[i+1]*requiredTime[i], and decreases problem[i]'s score by pointsPerMinute[i]*requiredTime[i+1]. By definition, the former is larger.<br /><br />Many people conclude at this point that the best solution must be exactly the one satisfying problem[i] <= problem[i+1] for all i in the range [0, n-2], e.g. <a href="http://wywcgs.wordpress.com.cn/2011/04/08/srm-502-d1l2-theprogrammingcontestdivone/">this post</a> by <a href="http://www.topcoder.com/tc?module=MemberProfile&cr=22629546">wywcgs</a>. I used to just believe this after a USACO contest, but I was not fully convinced, so I did some careful analysis. (I still hope for a more intuitive explanation.)<br /><br /><i><b>Lemma 1.</b></i> If S is not sorted, then there is at least one adjacent pair with S[i] > S[i+1].<br /><i><b>Lemma 2.</b></i> From any sequence S, we can reach the sorted sequence by repeatedly swapping adjacent pairs with S[i] > S[i+1]: each such swap reduces the inversion number by exactly 1, so the process terminates. (I'm eagerly looking forward to a more elegant explanation.)<br /><i><b>Theorem 3.</b></i> The sorted sequence is the best solution. This follows directly, since every swap performed in Lemma 2 increases the total points.<br /><br />Note that we may not want to include some problems, since their contribution may be negative. But every time we do decide to include a problem, it must be placed at the end if we consider the problems in order. 
So the optimal-substructure and independent-subproblem properties are preserved. The recurrence is then the classic 0-1 knapsack: best[i][j] = max(best[i-1][j], best[i-1][j-requiredTime[i]] + maxPoints[i] - j*pointsPerMinute[i]), where i means we have considered problems [0, i] so far and j means the last included problem is finished at time j.<br /><br />P.S. See Petr's perfect code <a href="http://www.topcoder.com/stat?c=problem_solution&cr=10574855&rd=14431&pm=11357">here</a>.<br /><br /><b>Finding one good chip using less than n pairwise tests: solution to CLRS problem 4-6</b> (2011-03-16)<br /><br />Part (b) says that when more than half of the chips are good and the goal is to find ONE of them, you can use divide and conquer: floor(n/2) pairwise tests reduce the problem size to nearly half.<br /><br />Notice that when a test reports "Good-Good", the two chips must be either both good or both bad; otherwise, they cannot both be good. So we keep one chip from each "Good-Good" pair and discard all the other pairs.<br /><br />When the number of chips is even, the situation is straightforward. Consider the extreme case with (n-1) bad chips and (n+1) good ones: the number of pairs in which both chips are good must exceed the number of pairs in which both are bad. So after the "divide", the property that more than half of the chips are good is preserved.<br /><br />When the number of chips is odd, the problem becomes a little intricate. Again consider the extreme case with n bad chips and (n+1) good ones. We must set one chip aside, call it X, and test the others pairwise. When X is bad, everything is fine. But what if X is good? 
It is easy to see that after the "divide", the number of good chips is no smaller than the number of bad ones, no matter which chip was set aside. So when an odd number of chips remains after the "divide", we can conclude that the property is actually preserved (note that if exactly 1 chip is left, it must be good) and we can simply ignore X. Otherwise, we cannot tell whether X is good or bad; and from part (a) we know that with equal numbers of good and bad chips, we may not be able to identify any chip as good if the bad chips conspire. So we keep this unknown chip X and proceed. If X is good and the bad chips do conspire, we eventually learn that X is in fact good; otherwise, we will find another good chip.<br /><br />This <a href="http://qianyf1985.spaces.live.com/blog/cns%212E3278A538B387BE%21146.entry">post</a> gives a more detailed explanation in Chinese.<br /><br /><b>Finding the second smallest element: solution to CLRS exercise 9.1-1</b> (2011-03-09)<br /><br />The exercise says that this can be done with n + ceiling(lg(n)) - 2 comparisons in the worst case. I thought about it for a long time but didn't get the right idea.<br />This <a href="http://n1b-algo.blogspot.com/2008/12/finding-second-smallest-element.html">post</a> by <a href="http://n1b-algo.blogspot.com/">n1b-algo</a> gives a detailed explanation.<br />The main idea is to run a knockout tournament and afterwards examine the elements "knocked out" by the smallest one. 
The smallest element ends up at the root and beats exactly one element in each round, so with ceiling(lg(n)) rounds (a knockout tree with n leaves at the bottom level has ceiling(lg(n)) levels excluding that bottom level) there are at most ceiling(lg(n)) candidates to examine. The tournament itself takes n - 1 comparisons, and finding the smallest among the candidates takes at most ceiling(lg(n)) - 1 more, for a total of n + ceiling(lg(n)) - 2.