• Advanced Algorithm

  • NPC problems list

    Understand NPC problems through the real-life scenarios in which they appear.
    What are the important NPC problems and the methods for attacking them?
    1. Linear Programming
    2. Independent Set (ISP)
    3. Clique
    4. SAT
    5. TSP
      • Related to TSP are the Hamiltonian Cycle and Hamiltonian Path problems.
      • A Hamiltonian Cycle starts at a vertex, visits every other vertex exactly once, and returns to the starting vertex. A Hamiltonian Path visits every vertex exactly once, starting and ending at different vertices.
      • General TSP has no constant-ratio approximation algorithm (unless P = NP).
    6. 3SAT
      • Basically we have three steps to prove a problem is NPC.
    7. Dominating Set
    8. Subset Sum

      We have a pseudo-polynomial-time algorithm via DP for the Subset Sum problem. Then what is its exact time complexity?

      It's O(nt), where n is the number of elements in the set and t is the target value the subset must sum to.

      • Solvable in pseudo polynomial time.
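
The O(nt) dynamic program above can be sketched as follows (a minimal sketch; the function name and the one-dimensional reachability table are my own choices):

```python
def subset_sum(nums, t):
    """Decide whether some subset of nums sums to t, in O(n*t) time."""
    reachable = [True] + [False] * t   # reachable[s]: some subset sums to s
    for x in nums:
        # scan downward so each element is used at most once
        for s in range(t, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[t]
```

The table has t+1 entries and is scanned once per element, which is exactly the O(nt) bound: pseudo-polynomial because t is a value, not an input length.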
    9. Vertex Cover

      The relationship between the decision problem and the optimization problem: "Does a solution of size k exist?" versus "What is the maximum/minimum size?"

      • Constant ratio approximation algorithm.
    10. TSP-TRI
      • Constant ratio approximation algorithm.
    11. 0-1 Integer Linear problem
    12. Set-Partition
      • Partition the whole set into two sets of equal weight.
    13. Bin Packing
  • 0-1 Integer Linear problem

    1. The LP relaxation of a 0-1 ILP (for minimization) gives a lower bound on the optimum.
  • Branch and Bound for Vertex Cover I (Lecture 5)

    1. If we have a lower bound telling us that every vertex cover in some branch must contain at least n vertices, and we already have a solution with fewer than n vertices, then we can cut that branch, i.e., not continue computing it. Where does this information come from? For example, we can efficiently solve a relaxed version of vertex cover by Linear Programming and use its value as the bound.
    2. Recursive method:

      i. Two cases: a. the MVC contains vertex v; b. the MVC contains all of v's neighbors.

    3. Simple Branch and Bound:
      1. Basically, in the recursive method of 2. we need to compare the solution of case a with that of case b. If we use Linear Programming to solve case b and obtain a bound, we compare that bound with a's solution: if a's solution is smaller than the bound, we choose a directly without computing b's exact solution.
    4. Modification: Use Maximum Matching
      1. Clearly, the number of edges in a maximum matching is at most the number of vertices in any vertex cover.
      2. How can Maximum Matching be solved in polynomial time? (See the marriage problem.)

    5. Basically, in vertex cover we have two branches. We solve one exactly, and for the other we solve a relaxed version to obtain a lower bound. Then we compare the two branches: if the exact branch's solution is smaller even than the lower bound of the relaxed one, we choose the exact branch.
  • Use ILP solve Vertex Cover problem (Lecture 6/7)

    1. The form of an ILP.
      1. Minimize a linear objective subject to linear constraints and bounds on the variables.
    2. Transfer Vertex Cover to ILP
  • Marriage problem

    1. Augment path
      1. Start at an unmatched vertex, alternate between unmatched and matched edges, and end at another unmatched vertex. The path contains one more unmatched edge than matched edges. If we flip the path (unmatched edges become matched, matched become unmatched), we gain one more matching edge.
      2. We can use breadth-first search to find an augmenting path; the key point is to search only along alternating paths. BFS is O(n+m), with n=|V| and m=|E|, and we run it once per vertex, so the total complexity is O(n(n+m)).
    2. Stable Marriage
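
The augmenting-path idea above can be sketched as follows (my own sketch; it uses a DFS rather than the BFS described in the notes, but the O(n(n+m)) bound is the same):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side vertices adjacent to left vertex u.
    For each left vertex, search for an augmenting path and flip it."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or v's current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching += 1
    return matching
```

Each call to `try_augment` explores an alternating path; flipping a successful path increases the matching by exactly one, as described in 1. above.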
  • Assignment problem

    1. Minimum/maximum weight perfect matching
    2. Hungarian algorithm
      1. Two types of calculation: one works on the bipartite graph, one on a matrix. Both methods are O(n^4) as presented here, but an O(n^3) implementation exists (e.g., on TopCoder).
      2. In the bipartite version we have two kinds of weights: 1) edge weights, which never change; 2) vertex labels. The vertices split into a person set and a job set. Initially each job vertex's label is 0, and each person vertex's label is the minimum weight among its incident edges.
      3. Keep the tight edges, i.e., those whose weight equals the sum of their two endpoints' labels. Find a maximum matching on the tight edges; if it is a perfect matching, we are done. If not, find a minimum vertex cover of the tight subgraph. Among the edges not covered, find the minimum slack Delta between an edge's weight and the sum of its endpoints' labels. Then update the labels: for a person vertex not in the cover, add Delta; if it is in the cover, leave it unchanged. For a job vertex not in the cover, do nothing; if it is in the cover, subtract Delta. Then repeat from the beginning of step 3.
      4. For the matrix version:
      5. Subtract each row's minimum from that row, then each column's minimum from that column. Cover all zeros with the minimum number of horizontal/vertical lines. If the number of lines equals the number of persons (jobs), pick out an independent set of zeros (choose a first 0, then a second 0 not in the same row or column as the first, and so on); this gives the assignment. If the number of lines is smaller, find the minimum value Delta among the entries not covered by any line.
      6. Subtract Delta from every row not covered by a line, and add Delta to every column covered by a line. Then cover all zeros with a minimum number of lines again.
    3. Shortest path algorithm in our textbook
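
As a reference point for the Hungarian algorithm, a brute-force O(n!) solver for small instances (my own sketch, not from the lecture) is handy for checking results:

```python
from itertools import permutations

def min_assignment_brute(cost):
    """Minimum-cost perfect matching in an n x n cost matrix, by trying
    every permutation of jobs.  Only feasible for tiny n."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

Running this alongside a Hungarian implementation on random 5x5 matrices is a quick sanity check that the label updates were coded correctly.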
  • Use ILP solve Set Cover problem

    1. Transfer Set Cover problem to ILP
  • Use ILP solve assignment problem

    1. Transfer assignment problem to ILP
    2. If we formulate the ILP by focusing on vertices, we need "if ... else ..." logic: if an incident edge is in the optimal solution, the vertex variable is 1, else it is 0.
    3. We solve the dual problem. The goal is to select a subset of edges while maximizing the sum of vertex values (each edge has a 0/1 variable meaning chosen or not). So we assign a value $$y_i$$ to each vertex $$v_i$$ and require that the sum of the values at the two endpoints of an edge is at most the weight of that edge.
    4. The object function for ILP is:
      • $$\max\,\,\,\sum_i y_i x_i$$
      • $$s.t.\,\,\, y_i x_i + y_j x_j \leq w_{ij}$$
      • $$0 \leq x_i \leq 1$$
    5. Not sure whether 3) is correct; 3) just leads to the bipartite Hungarian algorithm.
  • Branch and Bound for Vertex Cover II (Lecture 8)

    1. Strongly NP-C: if the problem remains NP-complete even with the extra information that the largest number in the input is bounded by a polynomial in the input length, then the problem is strongly NP-complete. Otherwise, like Subset Sum, it is not strongly NP-complete.
    2. A recursive method to solve the Vertex Cover problem. (Divide and conquer)
      1. Two parameters: one is the number of input elements, the other is k (whether the graph has a vertex cover of size k).
      2. Pick an edge (u, v); any vertex cover must contain u or v, so recurse on the graph with u removed and on the graph with v removed, each with budget k-1. Also, on each return we need to check whether any edges remain after removing the picked vertex. (This is the decision version, not minimum vertex cover: if no edges remain and k >= 0, the answer is true. This check needs O(n^2).) So the recurrence is T(n,k) = 2T(n-1,k-1) + cn^2, which gives O(2^k n^2).
      3. We can modify it to find minimum vertex cover.
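
The bounded search tree above can be sketched as follows (a minimal sketch of the decision version; the edge-list representation is my choice):

```python
def has_vertex_cover(edges, k):
    """Does the graph have a vertex cover of size <= k?
    Branch on an edge (u, v): any cover must contain u or v, so the
    recursion depth is at most k and the tree has O(2^k) nodes."""
    if not edges:
        return True           # no edges left: nothing remains to cover
    if k == 0:
        return False          # edges remain but no budget left
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # put u in the cover
    rest_v = [e for e in edges if v not in e]   # put v in the cover
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)
```

Trying k = 0, 1, 2, ... until this returns true is the modification mentioned in 3. for finding a minimum vertex cover.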
    3. So now we have several methods for Minimum Vertex Cover. One exact algorithm branches on a vertex versus all of its neighbors, with complexity O(1.47^n). Another is branch and bound, where the bound for one branch can come from an ILP/LP relaxation, from maximum matching, or from this recursive method.
  • TSP

    1. Hamiltonian Cycle and Hamiltonian Path are both NP-C.
    2. HP can be obtained by a polynomial reduction from HC. (Check the steps in the notebook...)
  • Kernelization

    In computer science, a kernelization is a technique for designing efficient algorithms that achieve their efficiency by a preprocessing stage in which inputs to the algorithm are replaced by a smaller input, called a "kernel".

    1. What did they do in this chapter? Prove a theorem and then derive a method from it.
    2. By kernelization we can shrink the graph, transferring the question from 'Does G have a vertex cover of size k?' to 'Does G' have a vertex cover of size k'?'

    3. NT-Theorem (Nemhauser-Trotter). Basically it says we can partition the graph G into parts such that a vertex cover of one part, together with the other part, is also a vertex cover of G, and it can be the case that this union is a minimum vertex cover.
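
The NT theorem itself is LP-based; as a simpler concrete illustration of kernelization for vertex cover, here is a sketch of the classical high-degree rule (Buss's kernel; this example is my addition, not from the lecture):

```python
def kernelize_vc(edges, k):
    """Shrink an instance of 'does G have a vertex cover of size k?'.
    Rule: a vertex of degree > k must be in any size-k cover, so take
    it, delete its incident edges, and decrease k.  Returns the reduced
    edge list and the remaining budget k'."""
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        for w, d in degree.items():
            if d > k:
                edges = [e for e in edges if w not in e]
                k -= 1
                changed = True
                break
    return edges, k
```

After these rules apply, every remaining vertex has degree at most k, which bounds the kernel size; the reduced question is exactly the shifted question 'Does G' have a vertex cover of size k'?' mentioned above.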

  • Solve TSP by MST (Lecture 9a)

    1. MST can be solved in polynomial-time.
    2. Two methods for MST: Prim's algorithm and Kruskal's algorithm.
    3. Kruskal's algorithm is O(|E|log|E|) (dominated by sorting the edges); Prim's algorithm with a binary heap is O(|E|log|V|).
    4. Not directly solve TSP, but solve TSP-TRI.
    5. If the optimal Hamiltonian tour is H*, then removing one edge from it leaves a spanning tree. So for a minimum spanning tree T we have c(T) <= c(H*). For this MST we do a preorder traversal.
    6. Tree traversal splits into depth-first search and breadth-first search. Depth-first search has three orders: preorder, in-order, and postorder. Preorder visits the root before its subtrees; in-order visits the left subtree, then the root, then the right subtree; postorder visits both subtrees before the root.
    7. From 5. we get a walk W. (The preorder walk traverses each tree edge exactly twice, so c(W) = 2c(T).)
    8. But we cannot build a tour just by walking this MST, because some vertices have odd degree. For the odd-degree vertices we find a minimum-weight perfect matching; its cost is <= 1/2 c(H*).
    9. Combining 5. and 8. we can build a tour. The tour has two parts: the MST, whose cost is <= the Hamiltonian tour, and the minimum matching, whose cost is <= 1/2 of the Hamiltonian tour. So the answer is at most 3/2 times the optimal solution. By the triangle inequality, we can shortcut the repeated vertices.
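
Using only the MST part of steps 5.-7. (walk the tree in preorder and shortcut the repeats) already gives a 2-approximation; a minimal sketch, assuming a symmetric distance matrix satisfying the triangle inequality:

```python
import heapq

def tsp_mst_tour(dist):
    """2-approximation for metric TSP: build an MST with Prim's
    algorithm, then visit vertices in MST preorder (the shortcutting
    of repeated vertices is what the triangle inequality permits)."""
    n = len(dist)
    in_tree = [False] * n
    children = [[] for _ in range(n)]
    heap = [(0, 0, -1)]               # (edge weight, vertex, parent)
    while heap:
        w, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if p != -1:
            children[p].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    # preorder walk of the MST gives the visiting order of the tour
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost
```

The 3/2 ratio of steps 8.-9. additionally needs a minimum-weight perfect matching on the odd-degree vertices (Christofides), which is omitted here.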
  • Solve Vertex Cover by Maximum Matching (Lecture 9b)

    Maximum matching: how do we compute it? Augmenting paths.

    1. Arbitrarily choose an edge. Put both of its endpoints into the vertex cover, then remove all edges incident to either endpoint. Repeat.
    2. This method uses at most two times as many vertices as the optimal solution.
    3. Analysis: this algorithm builds a maximal matching in the graph. Any vertex cover must contain at least one endpoint of each edge in the matching, so the size of any vertex cover is >= the number of matching edges. Our cover takes 2 vertices per matching edge, so the approximation ratio is 2.
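
The algorithm in 1. can be sketched as follows (edge-list input is my assumption):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: take both endpoints of any edge whose
    endpoints are still uncovered.  Result is at most 2x the minimum
    vertex cover, since each taken pair matches one edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))      # the edge (u, v) joins the matching
    return cover
```

Iterating over the edge list in order implicitly performs the "remove incident edges" step: any edge touching the pair just added is skipped by the `if` test.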
  • EPTAS, FPTAS, PTAS

    1. The definitions have two parts: one about running time, one about how close the solution is to the optimal solution.
    2. For FPTAS, the running time is polynomial in both the input length and 1/epsilon, and the approximation ratio is (1+epsilon).
    3. For EPTAS, the running time is f(1/epsilon) * O(|I|^c), where c is a constant not depending on epsilon. The approximation ratio is also (1+epsilon).
    4. For PTAS, the running time only needs to be polynomial for each fixed epsilon, so it may look like O(n^{1/epsilon}).
  • PTAS of Greedy Algorithm on Knapsack

    1. Then there is an example about Knapsack and an algorithm for it whose approximation ratio is not fixed.
      1. First a greedy algorithm is introduced: 1) sort the items by value/size; 2) pack greedily; 3) return the max of the greedy result and the single item with the largest value.
      2. This greedy method has solution >= 1/2 of the optimal solution.
        1. Two lemmas. 1) With the same items, if the knapsack capacity is larger, the optimal value is at least as large. 2) If the items are sorted by value/size and the first several items exactly fill the knapsack, then they are an optimal solution.
        2. From lemma 2 we see that the greedy and optimal solutions differ only in the last few items. Sort the items by value/size and focus on the (l+1)-th item, the first one that would overflow capacity k. Let k' be the total size of the first l+1 items; by lemma 2 this prefix is optimal for capacity k'. From lemma 1, p(k') >= p(k). So p(k) <= the total value of the first l+1 items. The first l items are worth at most the greedy solution, and the (l+1)-th item is also worth at most the greedy solution (greedy returns at least the most valuable single item), so OPT <= GA + GA.
      3. If we change it a little: first find the optimal solution over a small set of k items, then complete it with the greedy method; the approximation ratio is 1+1/k and the complexity is O(n^{k+1}).
        1. Try all choices of 1/epsilon items as the initial set. These items then coincide between OPT and GA. For the remaining part of the solution we know OPT <= GA + (next item).
      4. Actually Knapsack has a pseudo-polynomial-time solution via dynamic programming. It looks like a polynomial algorithm, but the bound depends on the numeric value of the input rather than its bit length. For a bound like O(n), we should be clear whether n counts the inputs or measures a value (the number of bits encoding it). For Knapsack the time complexity is O(nW): n counts the items, W is a value. If we double the number of items, n doubles regardless of their sizes or values; but if we double W, its encoding grows by only one bit, so O(nW) is exponential in the bit length of W.
      5. A good explanation of pseudo polynomial time at here: http://stackoverflow.com/questions/3907545/how-to-understand-the-knapsack-problem-is-np-complete
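
The greedy algorithm of 1.1 can be sketched as follows (representing items as (value, size) pairs is my choice):

```python
def knapsack_greedy(items, capacity):
    """items: list of (value, size) pairs.
    1) sort by value density, 2) pack greedily, 3) return the better of
    the greedy pack and the single most valuable item that fits.
    Guarantees at least 1/2 of the optimal value."""
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total_value, used = 0, 0
    for value, size in items:
        if used + size <= capacity:
            total_value += value
            used += size
    best_single = max((v for v, s in items if s <= capacity), default=0)
    return max(total_value, best_single)
```

Step 3) is what makes the 1/2 bound work: without it, a single huge-value item that the density ordering skips could make the ratio arbitrarily bad.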
  • Path width solve Minimum Vertex Cover (Lecture 10a)

    1. Path decomposition of a graph:
      1. The union of all subsets (bags) is the whole vertex set of the graph.
      2. Each edge is contained in at least one bag.
      3. If a bag lies between bags a and b on the path, then it must contain all vertices that a and b share.
    2. Path width solve MVC:
    3. Three scenarios: 1) the first bag; 2) a vertex is deleted relative to the previous bag; 3) a vertex is added relative to the previous bag.
    4. In the first scenario, list all vertex subsets of the bag. If a subset is not a vertex cover of the bag's edges, its cost is infinity.
    5. In the second scenario, consider each subset both with and without the deleted vertex, and take the minimum of the two values.
    6. In the third scenario, if the subset contains the new vertex, take the value of the subset minus the new vertex and add the cost of the new vertex. If the subset does not contain the new vertex and is a vertex cover of the bag, it just keeps the previous value.
    7. Minimum Vertex Cover can be solved in linear time on graphs of bounded pathwidth via Dynamic Programming.
    8. Nice pathwidth decomposition.
  • Treewidth solve Minimum Vertex Cover (Lecture 10)

    1. Two questions: minimum vertex cover in a tree and minimum dominating set on a tree.
    2. Nice tree decomposition: four types of nodes: 1) join node, whose two children have the same bag as the node itself; 2) insert (introduce) node; 3) delete (forget) node; 4) start node (leaf), containing only one element.
    3. The insert and delete nodes are oriented from leaf to root, and the start nodes are the leaves, because the computation on the tree starts from the leaves.
    4. The tree-decomposition DP for vertex cover is similar to the path-decomposition DP. For a join node, the left and right children each describe a partial vertex cover, and they share the vertices of the node's bag, so the value is their sum minus the cost of the shared bag vertices (to avoid double counting).
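
For the special case mentioned above, minimum vertex cover on a tree, the leaf-to-root computation can be sketched directly (my own sketch; a nice tree decomposition generalizes this take/skip idea to bounded treewidth):

```python
def tree_min_vertex_cover(n, edges, root=0):
    """DP on a tree (unit vertex costs): for each vertex, the cheaper of
    (take it, children free to be taken or not) versus (skip it, every
    child must be taken to cover the connecting edge)."""
    children = [[] for _ in range(n)]
    parent = [-1] * n
    order = [root]
    for u in order:                      # BFS that orients the tree
        for a, b in edges:
            for p, c in ((a, b), (b, a)):
                if p == u and c != parent[u]:
                    parent[c] = u
                    children[u].append(c)
                    order.append(c)
    take = [1] * n                       # cost if the vertex is in the cover
    skip = [0] * n                       # cost if it is not
    for u in reversed(order):            # leaves first
        for c in children[u]:
            take[u] += min(take[c], skip[c])
            skip[u] += take[c]
    return min(take[root], skip[root])
```

The two states per vertex here play the same role as the per-bag subsets in the decomposition DP, only with bags of size one.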
  • Dynamic programming for Knapsack (Lecture 13)

    1. Basically the idea is to build a table.
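
The table-building idea can be sketched as follows (a one-dimensional rolling table; representing items as (value, size) pairs is my choice):

```python
def knapsack_dp(items, capacity):
    """0/1 knapsack in O(n*W) time: table[w] holds the best value
    achievable with total size at most w."""
    table = [0] * (capacity + 1)
    for value, size in items:
        # scan downward so each item is used at most once
        for w in range(capacity, size - 1, -1):
            table[w] = max(table[w], table[w - size] + value)
    return table[capacity]
```

This is the pseudo-polynomial O(nW) algorithm discussed in the Knapsack PTAS section: the table has W+1 columns, so the running time is exponential in the bit length of W.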
  • Bin Packing problem (Lecture 13/14)

    1. There are three algorithms: 1) First Fit (FF), with performance FF(I) <= (17/10)OPT(I) + 2; 2) Best Fit, with the same performance as FF; 3) First Fit Decreasing, which sorts the items in decreasing order and then applies First Fit, with performance FFD(I) <= (11/9)OPT(I) + 4.
    2. Two definitions: 1) Absolute approximation ratio, which must hold for every input I (for Bin Packing it is 3/2); 2) Asymptotic performance ratio, which only concerns inputs larger than some threshold (here 11/9).
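
First Fit can be sketched as follows (integer sizes and capacity are my choice, to avoid floating-point comparisons):

```python
def first_fit(sizes, capacity):
    """First Fit bin packing: place each item into the first open bin
    with enough room, opening a new bin only if none fits.
    Guarantee: FF(I) <= (17/10)OPT(I) + 2."""
    bins = []                       # bins[b] = load currently in bin b
    for s in sizes:
        for b in range(len(bins)):
            if bins[b] + s <= capacity:
                bins[b] += s
                break
        else:                       # no existing bin had room
            bins.append(s)
    return len(bins)
```

First Fit Decreasing is the same routine applied to `sorted(sizes, reverse=True)`, which is what improves the asymptotic factor to 11/9.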
