Algorithms CC-BY-NC

Maintainer: admin

1 Introduction to algorithms

1.1 Computing complexity

Under the uniform cost model, all primitive operations are considered to take one unit of time. We don't consider things like the number of bits of an input, even if it's actually the number of bits that determines the amount of time taken; we just consider the number of operations performed.

1.2 Runtimes of simple algorithms

$O(1)$
Accessing an element in an array, parity-bit checking, comparisons, basic arithmetic operations
$O(\log n)$ (the base could be 2, but asymptotically it doesn't matter)
Binary search, the ternary-search coin-weighing problem, any other search where the search space shrinks by a constant factor at every step
$O(n)$
Accessing the $n$th element in a linked list, finding the maximum or minimum value in an unsorted array
$O(n \log n)$
Mergesort, finding the closest pair of points in the plane (achievable; a simpler divide-and-conquer approach gives $O(n \log^2 n)$)
$O(n^2)$
Naive integer multiplication (where $n$ is the number of digits), traversing a two-dimensional array, selection sort
$O(a^n)$
Exponential time

2 Binary search trees

2.1 Standard BST operations

min(), max(), search(), insert(), delete(), next() and prev() all have a running time of $O(h)$, where $h$ is the height of the tree.

2.2 Random BSTs

A binary search tree built from a random permutation of $n$ keys has an expected height of $O(\log n)$.

2.3 Two-dimensional binary search

Given a binary search tree in which each node contains a point and a specification of whether it divides its subtrees horizontally or vertically, find all the points that fall within a specified range.

Implementation: A binary search, implemented recursively, with the condition that the search doesn't stop until all of the potentially matching nodes have been investigated. At each node, check whether that node's coordinates fall within the specified range (and report it if so); then go left, right, or both, depending on how the query range compares with the node's coordinate along the axis it divides its subtrees on, and repeat until all candidate nodes have been searched.
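
A minimal sketch in Python, assuming each node stores a point (x, y), an axis ('x' or 'y') saying how it divides its subtrees, and left/right children (hypothetical field names, not from the assignment):

def range_search(node, x_lo, x_hi, y_lo, y_hi, found=None):
    # Recursively collect all points inside [x_lo, x_hi] x [y_lo, y_hi].
    if found is None:
        found = []
    if node is None:
        return found
    x, y = node.point
    if x_lo <= x <= x_hi and y_lo <= y <= y_hi:
        found.append(node.point)
    # Only descend into subtrees whose half-plane can still intersect the range.
    coord, lo, hi = (x, x_lo, x_hi) if node.axis == 'x' else (y, y_lo, y_hi)
    if lo <= coord:
        range_search(node.left, x_lo, x_hi, y_lo, y_hi, found)
    if coord <= hi:
        range_search(node.right, x_lo, x_hi, y_lo, y_hi, found)
    return found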

Runtime: $O(n)$ where $n$ is the number of nodes in the tree, as no node needs to be visited more than once (but up to the entire tree may need to be visited).

From assignment 1, question 4.

2.4 Red-black trees

Properties

A red-black tree is a type of self-balancing binary search tree with the following properties:

  1. Every node is either red or black.
  2. The root node is black.
  3. Every leaf is a black nil node (wherever a child is missing, a black nil leaf is added in its place).
  4. If a node is red, both of its children are black.
  5. Every path from a given node down to a nil leaf contains the same number of black nodes.

Operations

A red-black tree is $O(\log n)$ high, so all of the standard operations take $O(\log n)$, as before.

Rotation is $O(1)$ time, as it just involves changing three pointers:

[Figure: red-black tree rotation]
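
A minimal left-rotation sketch in Python, assuming nodes carry left, right and parent pointers and the tree object has a root field (hypothetical names); the right rotation is the mirror image:

def rotate_left(tree, x):
    # Rotate the edge between x and its right child y, so that y takes x's place.
    y = x.right
    x.right = y.left            # y's left subtree becomes x's right subtree
    if y.left is not None:
        y.left.parent = x
    y.parent = x.parent         # attach y where x used to be
    if x.parent is None:
        tree.root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                  # x becomes y's left child
    x.parent = y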

Colouring a red-black tree takes $O(n)$ time.

Inserting a new element as the root can be done in $O(\log n)$ time, by inserting the new element as a leaf in the standard way and then rotating the tree until the new element becomes the root. Since the tree has a height of $O(\log n)$, at most $O(\log n)$ rotations are needed, so the total runtime is $O(2 \log n) = O(\log n)$. From assignment 1, question 2.

Lethargic delete

Implement a "lethargic delete" operation which is performed by finding the node and marking it as deleted, and rebuilding the red-black tree from scratch using the non-deleted nodes only if more than 50% of nodes are deleted.

Implementation: The lethargic delete operation can be performed by first finding the element to delete using a binary search $O(\log n)$ and then marking the element as deleted, which can be done in constant time. Once 50% of the nodes have been marked as deleted, it is then necessary to pull out the non-deleted nodes in a sorted order, which can be done by doing an in-order traversal of the tree $O(n)$. Then, the tree can be rebuilt from the array of the nodes above, using the following recursive tree-building algorithm:

build_tree(array):
    n = length of array
    # Base cases: an empty array, or an array with only one element
    if n == 0:
        return null
    if n == 1:
        return array[0]

    # Otherwise, we build the tree recursively
    # Get the "middle" element (either the middle, or the one next to it)
    mid = floor(n / 2)
    root = array[mid]
    # Build left subtree using the elements to the left of the "middle"
    root.left = build_tree(array[0:mid])
    # Build right subtree using the elements to the right of the "middle"
    root.right = build_tree(array[mid + 1:])
    return root

This algorithm runs in $O(n)$ time because each node only needs to be visited once, with a constant number of operations (i.e. splitting the array) done at every node. We can then colour the tree using a $O(n)$ breadth-first search and the following instructions:

  1. Save the number of nodes in the tree (obtained during the initial in-order tree traversal) as $n$.
  2. Number the root node $i = 1$.
  3. For each new node encountered during the breadth-first search, increment $i$. If $i \geq 2^{\lfloor \log_2 n \rfloor}$, then the node is in the last row, and the node should be coloured red. Otherwise, the node should be coloured black.

(In other words, if the last row is not complete, then all of its nodes should be coloured red. All other nodes should be coloured black. Note that any nodes missing a child will gain a black nil leaf as that child, to ensure that the root is the same colour as the leaves.)

Runtime: $O(n\log n)$

From assignment 1, question 3.

3 Graph algorithms

3.1 Breadth-first search

Starting at any node, visit all of that node's neighbours, then all of their unvisited neighbours, and so on until the entire graph has been traversed. "Breadth-first" because all the neighbours of a node are visited before moving any deeper.

Implementation: Iteratively, using a queue.
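
A minimal sketch in Python, assuming the graph is given as an adjacency-list dict (an assumption, not specified in the notes):

from collections import deque

def bfs(graph, start):
    # graph: dict mapping each vertex to a list of its neighbours
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order  # vertices in the order they were visited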

Runtime: $O(n+m)$, where $n$ is the number of vertices and $m$ is the number of edges

3.2 Testing bipartiteness

Given a graph, determine whether or not it is a bipartite graph (such that the nodes can be divided into two groups, with no node having an edge to a node in the same group as itself).

Implementation: This involves just running BFS on the graph and alternately colouring the vertices. If this is possible, then the graph is bipartite; if there are any nodes that are coloured one way but need to be coloured the other way, then the graph is not bipartite.

Runtime: $O(n+m)$

3.3 Depth-first search

Same as breadth-first, except you visit the first unvisited neighbour of each node until you get to a node with no unvisited neighbours. Kind of like having a low attention span.

Implementation: Recursively, or iteratively using a stack.

Runtime: $O(n+m)$

3.4 Finding a path between two vertices

Given two vertices in a graph, determine whether or not there exists a path from one to the other and if so, output a possible path.

Implementation: Start from the first vertex and proceed along a DFS or BFS until you've either visited all the nodes that you can or have reached the second vertex.

Runtime: $O(n+m)$

3.5 Checking for strong connectedness

Given a graph, determine whether or not the graph is strongly connected - that is, if there is a path from each vertex to every other.

Implementation: Starting from any vertex, run BFS. If all the vertices are traversed, run BFS again, only on the reverse graph (so with all the edges flipped); if all the vertices are again traversed, then G is strongly connected. Other methods too.

Runtime: $O(n+m)$

3.6 Topological sorting and DAG detection

A graph has a topological ordering if and only if it is a directed acyclic graph (DAG). Implement an algorithm for determining whether or not a given graph is a DAG.

Implementation: Iteratively remove nodes that have no incoming edges, along with their outgoing edges; each iteration generates a topological level. The graph is a DAG if and only if there are no nodes remaining at the end.

An optimised method keeps track of a list of vertices with no incoming edges, and after each iteration adds and removes vertices from that list accordingly. Since we always have the list of vertices we need, we no longer have to go through the entire graph at each iteration, so the main loop does a constant amount of work per vertex and edge rather than $n^2$ work overall. It takes $O(n+m)$ to initialise the list of vertices with no incoming edges, so the optimised runtime is $O(n+m)$.
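
A sketch of the optimised method in Python (adjacency-list dict assumed), keeping the vertices with no incoming edges in a queue:

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each vertex to a list of its outgoing neighbours
    in_degree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            in_degree[w] += 1
    queue = deque(v for v in graph if in_degree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:          # "remove" v's outgoing edges
            in_degree[w] -= 1
            if in_degree[w] == 0:
                queue.append(w)
    if len(order) < len(graph):
        return None                 # vertices remain, so the graph is not a DAG
    return order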

Runtime: $O(n^2)$, optimised to $O(n+m)$

An example situation is given in assignment 1, question 7.

3.7 Computing the width of a rooted tree

Given a rooted tree, for which the descendants of each node are represented by a linked list of children, determine its width (i.e. the maximal path distance between any two nodes).

Implementation: Traverse the nodes recursively using a post-order DFS; at each node, compute the maximum distance from that node down to a leaf, and combine the two largest such distances among its children to get the longest path passing through that node.
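
A sketch in Python, assuming each node exposes an iterable children field (a hypothetical name); the height computation and the running maximum are combined in one post-order pass:

def tree_width(root):
    # Returns the largest number of edges on a path between any two nodes.
    best = 0

    def height(node):
        nonlocal best
        top_two = [0, 0]                 # two largest (child height + 1) values
        for child in node.children:
            h = height(child) + 1
            if h > top_two[0]:
                top_two = [h, top_two[0]]
            elif h > top_two[1]:
                top_two[1] = h
        best = max(best, top_two[0] + top_two[1])   # longest path through this node
        return top_two[0]

    height(root)
    return best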

Runtime: $O(n)$ (since DFS is $O(m+n)$ and in a tree, $m = n - 1$)

From assignment 1, question 1.

3.8 Deleting a vertex from a path

A graph with $n$ nodes contains two vertices $s$ and $t$ such that the distance between them is greater than $n/2$. Give an $O(m+n)$ algorithm to find a node $v$ in the graph which, when deleted from the graph, results in the destruction of all paths between $s$ and $t$.

Implementation: First of all, such a node $v$ will always exist: since the distance between $s$ and $t$ is greater than $n/2$, a breadth-first search from $s$ has more than $n/2$ layers between $s$ and $t$, and there are not enough remaining nodes for every one of those layers to contain two or more vertices. We are essentially looking for a node common to all paths between $s$ and $t$, and this can be found using a breadth-first search, as we are looking for the node which is the only one in its layer of the BFS tree:

find_common_node(graph, starting_point):
    initialise queue
    queue.enqueue(starting_point)

    while queue is not empty:
        vertex = queue.dequeue()  # take from the front (FIFO)
        mark vertex as visited
        # Check all the neighbouring vertices of this vertex
        for neighbour in neighbours of vertex:
            if neighbour not visited and not in queue:
                queue.enqueue(neighbour)

        # If the queue has only one element at this point, it's the common node
        # So stop and return that node
        if queue has only one element:
            return queue.dequeue()

This works because whenever the breadth-first search gets to a point where only one node is present in the queue (even after having pushed all the neighbours of the current node into the queue), it means that there is only one path forward, and so we have found a common node. The node remaining in the queue is thus $v$.

Runtime: $O(m+n)$

From assignment 1, question 5.

3.9 Eulerian cycles

Give an algorithm that either computes an Eulerian cycle in a graph, or correctly reports that no such cycle exists.

Implementation: First, split the graph into edge-disjoint cycles as follows: starting at an arbitrary node, perform a depth-first search until we return to the original node, storing and marking every traversed edge as "deleted" along the way. (If it's not possible to return to the original node, then no Eulerian cycle exists.) Repeat this for every node with edges remaining. We can then build up the Eulerian cycle from the stored cycles by merging them (arbitrarily) at common vertices until every edge of the graph has been used.

Incidentally, a connected graph can only have an Eulerian cycle if all of its vertices are of even degree. The converse is true as well.

Runtime: $O(n+m)$, since every edge is traversed a constant number of times.

4 Greedy algorithms

4.1 Interval scheduling

Given tasks which have a start and end time, maximise the number of tasks completed within a time constraint, with the condition that no tasks overlap.

Implementation: Sort the tasks by finish time, and do the one which finishes the earliest, then remove the tasks that overlap with the one we've done, and repeat until no tasks are left.

Alternatively, we could prioritise tasks that start the latest; this would still result in an optimal schedule.
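
A sketch in Python of the earliest-finish-time version, assuming each task is a (start, finish) tuple:

def schedule(tasks):
    # Greedily pick the task that finishes earliest among those that
    # don't overlap with what has already been chosen.
    chosen = []
    last_finish = float('-inf')
    for start, finish in sorted(tasks, key=lambda t: t[1]):
        if start >= last_finish:          # doesn't overlap the previous pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen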

Runtime: $O(n \log n)$ ($O(n \log n)$ to sort the tasks by finish time, $O(n)$ to greedily pick them)

Example of the alternative implementation in assignment 2, question 1.

4.2 Weighted interval scheduling

Same as above, except each task has a weight (and a duration); you want to minimise the sum, over all tasks, of the product of the task's weight and its finishing time.

Implementation: Order the tasks by their weight-to-time ratio, and prioritise the tasks with the highest such ratio. In pseudocode:

# Returns the weighted sum of completion times
schedule_jobs(jobs):
    # Sort the jobs by weight-to-time ratio, highest first
    job_queue = sorted(jobs, key=lambda job: job.weight / job.time, reverse=True)
    time = 0
    weighted_duration = 0
    for job in job_queue:
        process(job)
        time += job.time                      # this job's finishing time
        weighted_duration += time * job.weight

    return weighted_duration

From assignment 2, question 2.

4.3 Graph coloring

We have to colour a graph with k colours such that no adjacent vertices have the same colour.

Implementation: Run BFS on the graph, and colour each vertex the first available colour.

Incidentally, if the graph is planar, then this can be done with 4 colours. But that's very hard to prove.

Runtime: $O(n+m)$

4.4 Kruskal's algorithm

Implementation: Start with a set of 'trees' that are just the individual vertices, then for every iteration, do:

  1. Take the remaining edge with the lowest cost.
  2. Check the two trees that would be connected by the edge; if both endpoints are already in the same tree (i.e. the edge would form a cycle), throw this edge out. If it doesn't,
  3. Merge the two trees which are connected by this edge.
  4. GOTO 1.

In other words, add the smallest edge that, when added to the graph, does not result in a cycle. If there are $n$ vertices, continue until $n-1$ edges have been added.

If the edge costs are all distinct, then Kruskal's algorithm always finds the (unique) minimum spanning tree. Otherwise, we can modify the algorithm to produce all the possible minimum spanning trees, as follows:

find_msts(graph):
    msts = [[]]
    edges = sorted(graph.edges) # sort the set of edges by their weight, ascending

    # First use the original Kruskal's algorithm to find one minimum spanning tree
    for edge in edges: # go through the edges one by one, starting with the lowest
        # If the edge set so far is not yet a tree and does not contain any cycles:
        mst_in_progress = msts[0] + [edge]
        if not is_tree(mst_in_progress) and not has_cycles(mst_in_progress):
            msts[0].append(edge) # add the edge to the edge set and keep going
        # Otherwise, simply keep going, ignoring this edge

    # Now pull out the edges with non-unique weights and put them elsewhere
    # But only the second (and higher) edges - so 1 2 2 3 3 3 --> 1 2 3 and 2 3 3
    dupes = remove_dup_weights(edges) # removes them from edges, pulls into dupes

    # Now we have a minimum spanning tree in msts[0]
    # Check if there are potentially more in the graph
    if len(dupes) == 0:
        # No duplicates - all edges have distinct weights; the original MST is unique
        return msts
    else:
        # For each duplicate, try to insert that duplicate edge, then the others
        for dupe in dupes:
            possible_mst = [dupe]
            for edge in edges:
                # Use Kruskal's basic algorithm to try and build up an MST
                # Try to add each edge after the list containing dupe until we get a tree
                mst_in_progress = possible_mst + [edge]
                if not is_tree(mst_in_progress) and not has_cycles(mst_in_progress):
                    possible_mst.append(edge)
            # Now we have a tree - check if it's indeed minimum-spanning
            # For that to be true, it must have the same weight as the original, valid MST
            if get_total_weight(possible_mst) == get_total_weight(msts[0]):
                # Make sure that it's not the same MST as our original one
                if possible_mst != msts[0]:
                    msts.append(possible_mst) # it is indeed distinct - add it to the set
        # Return the set of MSTs found
        return msts

Runtime: $O(m \log n)$ ($O(m \log m) = O(m \log n)$ to sort the edges by weight; every iteration an edge is removed from the list (constant time) and a union of two trees is made (if possible), which is effectively constant time with a union-find structure)

Some theory and applications in assignment 2, question 3.

4.5 Anti-Kruskal's algorithm

The same problem statement as the above, only done in reverse.

Implementation: Start with the full graph and keep a list of all the edges sorted by weight; then, for each iteration, do:

  1. Remove the highest-cost edge in the sorted list from the graph.
  2. Check the graph to see if it's still connected.
  3. If not, add the edge back into the graph.
  4. Remove the edge from the sorted list.
  5. GOTO 1.

In other words, keep removing the highest-cost edges until there are only $n-1$ edges left (and the graph is still connected).

Runtime: $O(n^2)$ ($O(n \log n)$ to sort the edges by weight, then $n$ iterations to check for connectedness, with each iteration running in $O(n)$ time)

4.6 Dijkstra's algorithm

Find the shortest path between a single node and every other node in the graph, assuming non-negative edge weights.

Implementation: Create a list of distances from the source to each vertex, each of which is initialised to infinity (except the distance from the source to itself, which is 0). This list is kept in a heap so that the minimum element can be retrieved in $O(\log n)$ time. We can also create a list of the vertices before each vertex in every optimal path, so that we can later generate the paths themselves by backtracking.

For every iteration, do:

  1. Take the vertex with the lowest distance to the source out of the list.
  2. For each of its adjacent vertices, if the distance to this vertex plus the cost of the edge leading to the adjacent vertex is lower than the adjacent vertex's current distance to the source, update it, and change the previous-vertex list to point to this vertex.
  3. Update the heap accordingly, as distances to the source have changed.
  4. GOTO 1.
  5. If there are no vertices left, or the lowest remaining distance is infinity, the algorithm has completed.
  6. Backtrack from the destination by looking up the list of previous vertices until we get back to the source.
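
A sketch in Python using the standard heapq module, assuming the graph is a dict mapping each vertex to a list of (neighbour, weight) pairs:

import heapq

def dijkstra(graph, source):
    # Returns the distance and predecessor of every vertex reachable from source.
    dist = {v: float('inf') for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry; a better path was found
        for v, weight in graph[u]:
            if d + weight < dist[v]:      # relax the edge u -> v
                dist[v] = d + weight
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev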

Implemented in assignment 2, question 5.

Runtime: $O((n+m) \log n)$ with a heap, $O(n^2)$ without (each heap operation takes $O(\log n)$, and there are at most $n$ extractions and $m$ edge relaxations)

4.7 Huffman coding

Design an optimal binary, variable-length encoding algorithm (specifically, a prefix code) that uses the frequency of each character to determine its corresponding code.

Implementation: We start with a list of nodes (each one corresponding to a character), keyed by the frequency of occurrence of the character. We then repeatedly combine nodes using a priority queue (min-heap) as follows:

  1. Remove the two nodes with the lowest frequency.
  2. Create a new node whose frequency is the sum of those two frequencies and whose two children are those two nodes, and put the new node back into the heap.
  3. GOTO 1.

The structure produced at the end should be a binary tree that determines the Huffman code. To find the codeword for each character, simply traverse the tree, taking the left edge of any node as a 0, and the right as a 1 (or vice versa).
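
A sketch in Python using heapq, going from character frequencies to codewords (the tie counter only keeps the heap comparisons well-defined):

import heapq
from itertools import count

def huffman_codes(freqs):
    # freqs: dict mapping each character to its frequency
    tie = count()
    heap = [(f, next(tie), ch) for ch, f in freqs.items()]   # the leaves
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):           # internal node: recurse
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:                                 # leaf: record the codeword
            codes[node] = code or '0'
    walk(heap[0][2], '')
    return codes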

We could also create a ternary Huffman code using a ternary tree. The algorithm is similar, only instead of removing two nodes with the lowest frequency, we remove three, and create a new node with three children. This algorithm then ends when there is only one node left in the queue, as before, although if the number of characters to encode is even, it is necessary to create an empty node initially and add it to the queue (as a full ternary tree can only have an odd number of leaves).

Runtime: $O(n\log n)$ ($O(n\log n)$ to sort the nodes at the beginning, then at most $n$ iterations, each of which takes $O(\log n)$ time to properly insert the new node into the heap)

Example of a ternary Huffman code in assignment 2, question 4.

5 Divide and conquer

Divide-and-conquer algorithms work by taking a problem and breaking it down into smaller, more manageable subproblems. Once all the subproblems have been solved, their solutions can be amalgamated to obtain the solution for the entire problem.

5.1 Mergesort

Implementation: We can create a sorted list in $O(n)$ time by merging two sorted sublists of half the length. These two sorted sublists can be created in the same way, until the sublists are of length 1 or 0, in which case they are already sorted.

Two sorted sublists can be merged with the use of a temporary array and a linear number of comparisons. That is, for each position in the fully sorted list, take the smaller of the first remaining elements of the two sorted sublists, and remove that element from its sublist. And so on.
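
A sketch in Python:

def mergesort(lst):
    if len(lst) <= 1:
        return lst                        # already sorted
    mid = len(lst) // 2
    left = mergesort(lst[:mid])
    right = mergesort(lst[mid:])
    # Merge the two sorted halves with a linear number of comparisons.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])               # one of these is already empty
    merged.extend(right[j:])
    return merged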

Runtime: $O(n\log n)$ (from the recurrence $T(n) = 2T(\frac{n}{2}) + n$)

5.2 Exponentiation

Implementation: The naive, brute-force method is to multiply the number $n$ times, which takes $O(n)$. Using a divide-and-conquer approach, however, we can define exponentiation in a way that scales much better: if $n$ is even, $x^n = x^{n/2} \times x^{n/2}$, and if $n$ is odd, $x^n = x^{\lfloor n/2 \rfloor} \times x^{\lfloor n/2 \rfloor} \times x$. Then $x^{\lfloor n/2 \rfloor}$ can be calculated the same way, and so on; the key point is that the half power only needs to be computed once and then squared.
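
A sketch in Python, computing the half power only once per level of the recursion:

def power(x, n):
    # Computes x**n for a non-negative integer n in O(log n) multiplications.
    if n == 0:
        return 1
    half = power(x, n // 2)      # computed once, then reused
    if n % 2 == 0:
        return half * half
    return half * half * x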

Runtime: $T(n) = T(\frac{n}{2}) + O(1)$, which gives $O(\log n)$ (provided the half power is computed only once per level and then squared).

5.3 Chip testing

Two chips are put in the tester at once. The chip in slot 1 will test the condition of the chip in slot 2. A good chip will always be correct, while a bad chip will give a random result. We know that in our pile, there are more good chips than bad chips. How can we accurately determine the quality of all of the chips?

Implementation: The naive method is to test a chip against the $n-1$ other chips, and the chip's quality will be indicated by the majority result, since there are more good chips than bad ones. This takes $O(n^2)$ time, as in the worst case you will need to test $\frac{n}{2}$ chips against $n-1$ others each.

Instead, we can group chips into pairs, and test them with each other. This will either produce results GG, BB, or GB. If we only keep the GG results, we preserve the |good| > |bad| property. If the number of chips is odd, we can simply test that chip against every other one. This way we are cutting the number of chips to test by at least half every iteration.

Runtime: $O(n)$ (recurrence: $T(n) = \frac{n}{2} + T(n/2)$ if n is even, $T(n) = n-1 + \frac{n-1}{2} + T(\frac{n-1}{2})$ if n is odd)

5.4 Finding the kth smallest element

Given an unsorted list, find the $k$th smallest element.

Implementation: The naive approach would be to sort the list in $O(n \log n)$ time, then simply access the $k$th element. However, we can develop a more efficient algorithm using a divide-and-conquer approach. We begin by splitting the list into $\lceil \frac{n}{5} \rceil$ sublists of at most 5 elements each. For each sublist, we find the median, which can be done in constant time (so $O(n)$ for all of the sublists). Next, we recursively find the median of all the medians found so far, then split the original list into two, with one partition containing only elements greater than the median and the other containing only elements less than or equal to the median. We then recurse on the sublist that the kth smallest element will be found in (based on the number of elements in each sublist and the value of $k$).

Runtime: $O(n)$ (recurrence: $\displaystyle T(n) = T\left(\frac{n}{5}\right) + T\left(\frac{7n}{10}\right) + cn$, since the recursive call on the medians has size $n/5$ and the partition we recurse into has at most $7n/10$ elements)

5.5 Counting inversions

?

Implementation: ?

Runtime: $O(n\log n)$

5.6 Closest pair of points

?

Implementation: ?

Runtime: $O(n \log n)$

5.7 Karatsuba multiplication

?

Implementation: ?

Runtime: $O(n^{\log_2 3}) = O(n^{1.585})$

5.8 Matrix multiplication

?

Implementation: ?

Runtime: $O(n^{2.81})$

5.9 Recurrences

Given a recurrence in the form $\displaystyle T(n) = aT\left ( \frac{n}{b} \right ) + cn$, find a suitable upper bound.

This isn't really an algorithm as much as it is a type of question likely to appear on the exam. Some situations:

  • When $a = b$, the answer is $O(n \log n)$ no matter what $c$ is.
  • When $b = 2$ and $a > 2$, the answer is $O(n^{\log_2a})$.
  • When $b = 2$ and $a = 1$, the answer is $O(n)$.
  • Generally, if $a < b$, the upper bound is $O(n)$.
  • If $a = b = 2$ and $n$ is replaced by $n^2$, then $O(n^2)$ is an upper bound.

From KT, chapter 5.

6 Dynamic programming

6.1 Computing the Fibonacci numbers

Give an algorithm to compute the $n$th Fibonacci number.

Implementation: The method is the same as the traditional recursive method, $F(n) = F(n-1) + F(n-2)$, except we use memoization. This entails storing the results we have for $F(n)$ in a data structure, so that they don't need to be recomputed every time. Much more efficient than the naive method, which has a runtime of $O(\varphi^n)$ (where $\varphi = 1.618 \ldots$, interestingly enough).
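
A sketch in Python, using the standard library's cache decorator as the memo table:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each F(k) is computed at most once; later calls are just table look-ups.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)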

Runtime: $O(n)$ time, since now we only calculate $F(0) \ldots F(k)$ once for all $k<n$.

6.2 Edit distance

Given two strings, find the minimum number of operations necessary to transform one string into the other. Allowed operations are inserting, deleting and substituting characters.

Implementation: Edit(i,j) means the edit distance between the first i letters of A and the first j letters of B.

Then the general recurrence can be described as:

Edit(i,j) = i, if j = 0
Edit(i,j) = j, if i = 0
Edit(i,j) = min(Edit(i-1,j)+1, Edit(i,j-1)+1, Edit(i-1,j-1)+diff(A[i],B[j])) // diff(a,b) returns 1 if a != b, 0 otherwise

To backtrack to find the actual edits, we can simply start at the bottom-right corner, look at the cell above, the cell to the left, and the cell diagonally above-left, and move to whichever has the lowest value. We repeat this until we've reached the beginning. This takes $O(n+m)$ time.
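
A sketch in Python filling the $(m+1) \times (n+1)$ table described above:

def edit_distance(a, b):
    m, n = len(a), len(b)
    # table[i][j] = Edit(i, j): distance between the first i letters of a
    # and the first j letters of b
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        table[i][0] = i                   # delete all i characters
    for j in range(n + 1):
        table[0][j] = j                   # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            table[i][j] = min(table[i - 1][j] + 1,         # delete
                              table[i][j - 1] + 1,         # insert
                              table[i - 1][j - 1] + diff)  # substitute / match
    return table[m][n]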

Runtime: From the implementation, it's easy to see that Edit(i,j) depends on Edit(i-1,j), Edit(i,j-1) and Edit(i-1,j-1). If we have an $n \times m$ table where each cell is the result of Edit(i,j), filling one cell requires 3 look-ups, so filling the entire thing requires $3mn = O(mn)$ operations.

6.3 Subset sum

Given a list of items with weights and a maximum weight, we want to achieve the heaviest combined weight from the list of items without going over the maximum weight.

Implementation: We define the function opt(i,w) as the optimal weight using only the first i items, with a maximum weight of w. It's easy to see that opt(0,w) = 0 for any w, and opt(i,0) = 0 for any i. If the ith item weighs more than w, there's no way we can put it in the bag, so if w_i > w, then opt(i,w) = opt(i-1,w). Otherwise, we can either put the ith item in or not, and we pick whichever option ends up giving us the better weight:

opt(i,w) = max(opt(i-1,w), opt(i-1,w-w_i) + w_i)

The first option is not including the current item; the second option is putting it in the bag. This recurrence reduces the problem until it becomes trivial to solve.
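
A sketch in Python, assuming the weights are given as a list and W is the maximum weight:

def subset_sum(weights, W):
    n = len(weights)
    # opt[i][w] = best achievable weight using the first i items with capacity w
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi = weights[i - 1]
        for w in range(W + 1):
            if wi > w:
                opt[i][w] = opt[i - 1][w]                 # item can't fit
            else:
                opt[i][w] = max(opt[i - 1][w],            # leave it out
                                opt[i - 1][w - wi] + wi)  # put it in
    return opt[n][W]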

Runtime: The table would be $n \times W$, mapping out all the possible results of opt(i,w). We can see that opt(i,w) depends only on opt(i-1,w) and opt(i-1,w-w_i), so filling each cell requires only two look-ups. The final runtime would be $O(nW)$. It looks almost like polynomial time, except $W$ can be arbitrarily large, so we call it pseudopolynomial. Backtracking is similar to the other dynamic programming algorithms: we start at the bottom-right corner and move towards the top-left depending on which option was optimal. Backtracking takes $O(n)$.

6.4 Longest common subsequence

Given two strings with lengths m and n, we define LCS(m,n) as the longest common subsequence between them. Design an algorithm to compute the LCS.

Implementation: LCS(i,j) can be defined recursively:

LCS(i,j) = 0 if i or j is zero (this case is trivial)
or
LCS(i,j) = LCS(i-1,j-1) + 1, when the ith character of the first string and the jth character of the second are the same. We can simply remove that character from both and find the LCS of the remaining two strings.
or
LCS(i,j) = max(LCS(i-1,j), LCS(i,j-1)), when the last characters of the two strings differ. The common subsequence must leave out the last character of one string or the other, so we take the longer of the two options.

Runtime: LCS(i,j) then depends on LCS(i-1,j-1), LCS(i-1,j) and LCS(i,j-1). We can set up an $m \times n$ table and fill it in; filling each cell requires 3 look-ups, so filling the entire table takes $3mn = O(mn)$ time. Backtracking is similar to finding the edit distance: we start at the bottom right of the table and traverse up and to the left. We go diagonally (up and left) if the ith and jth letters of the two strings are the same, and otherwise go either up or left depending on which cell is larger.

6.5 The knapsack problem

Maximise the total value carried without exceeding a maximum weight. Like choosing what objects to carry in a knapsack.

Implementation: This is the same as the subset sum problem, except the quantity being maximised is the value rather than the weight: the recurrence becomes max(opt(i-1,w), opt(i-1,w-w_i) + v_i). If you think about it, the subset sum problem is just the case of the knapsack problem where the value of each item is equal to its weight.

Runtime: Same as subset sum, $O(nW)$

6.6 The knapsack problem with repetition

Same as above, except each item may be taken any number of times.

Implementation: It's the knapsack/subset sum recurrence with a subtlety: when we put the ith item in the sack, we don't advance i, since the item remains available. So instead of max(opt(i-1,w), opt(i-1,w-w_i)+w_i), we use max(opt(i-1,w), opt(i,w-w_i)+w_i).

Runtime: The runtime is the same, $O(nW)$.

6.7 Making change

Given several coin denominations and an amount, make change for that amount using as few coins as possible.

Implementation: Again it's slightly different from subset sum, but the idea is the same. This time you're trying to minimise the number of coins used, and, similar to knapsack with repetition, you can use as many coins of the same type as you want. The recurrence is now:

opt(i,0) = 0 for all i; opt(0,v) = infinity for all v > 0 (no denominations available)
opt(i,v) = min(opt(i-1,v), opt(i, v - d_i) + 1) // d_i is the value of the ith coin denomination; the +1 counts the coin we put in

Runtime: The runtime is $O(nV)$, where $V$ is the amount of change we want to make and $n$ is the number of coin denominations.

6.8 Bellman-Ford algorithm

If we have a directed graph with edge weights that could be negative, Dijkstra's algorithm no longer works correctly (and a naive shortest-path search could just keep going around and around a cycle with negative total weight). Note, however, that if the graph doesn't have negative weights, Dijkstra's algorithm is much faster.

Implementation: Unlike Dijkstra's algorithm, which repeatedly relaxes the edges out of the closest unprocessed vertex, this algorithm relaxes all of the edges, and does this $n-1$ times, allowing the shortest distances to propagate from the source to every destination. Negative cycles can then be detected (with one extra pass over the edges) rather than causing the algorithm to fail to terminate.
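
A sketch in Python over an edge list, with the extra pass at the end used to detect a negative cycle:

def bellman_ford(vertices, edges, source):
    # edges: list of (u, v, weight) tuples
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):     # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # any further improvement means
        if dist[u] + w < dist[v]:          # a negative cycle is reachable
            raise ValueError("negative cycle reachable from the source")
    return dist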

Runtime: Relaxing all the edges takes $O(m)$ time, and we do it $n-1$ times, so it's $O(nm)$.

6.9 Floyd-Warshall algorithm

A similar setting to the Bellman-Ford algorithm (negative edge weights are allowed), but here we compute the shortest distance between every pair of vertices.

Implementation: This algorithm calculates the shortest distances between all pairs of vertices. We define opt(i,j,k) as the shortest distance between i and j using only vertices 1 up to k as intermediate points. We then observe several properties of this function:

opt(i,j,k) for i = j is 0, unless there is a negative cycle through i, which the algorithm detects (as a negative value appearing on the diagonal).

When going from a vertex to a different vertex, there are two options for opt(i,j,k). Either the kth vertex is not used in the path, in which case opt(i,j,k) = opt(i,j,k-1); or the kth vertex is used in the path, so opt(i,j,k) = opt(i,k,k-1) + opt(k,j,k-1) // we go from i to k, then from k to j, effectively including k in the path.

We take the minimum of these two, and so we have defined a recurrence for opt(i,j,k).
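
A sketch in Python over an $n \times n$ distance matrix (dist[i][j] = weight of the edge from i to j, infinity if there is no edge, 0 on the diagonal); the k dimension is collapsed by updating in place:

def floyd_warshall(dist):
    # dist is modified in place to hold the all-pairs shortest distances.
    n = len(dist)
    for k in range(n):            # allow vertex k as an intermediate point
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative value on the diagonal afterwards signals a negative cycle.
    return dist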

Runtime: Since opt(i,j,k) takes 3 parameters, each of which can be up to $n$, there are $n^3$ values to compute. For each, we perform 2 look-ups, so the running time is $2n^3$, which is $O(n^3)$.

6.10 Optimal binary search trees

Given a set of elements and the frequency with which each element is retrieved, construct a binary search tree which has the lowest expected search time.

Implementation: We make the observation that an optimal BST must be made of subtrees which are themselves optimal. Thus if we choose an optimal root, finding its subtrees is just a smaller instance of the same problem. We start by finding the optimal subtrees of size 2, which can then be used to find optimal subtrees of size 3, and so on until we find the optimal BST for all the elements: finding an optimal subtree of size 3 involves checking all possibilities for its root and attaching the already-computed optimal smaller subtrees.

If we define P(i,j) as the expected search time of the optimal BST for the elements spanning from i to j, then

P(i,j) = min over k of [ P(i,k-1) + P(k+1,j) ] + (sum of the frequencies of elements i to j)

where the added frequency sum accounts for every element in the range sitting one level deeper than in its own subproblem (and covers the root k itself). We can only figure out the value of k which gives us the lowest search time by brute force, so we loop through all possible values of k from i to j.

Runtime: The table is $n \times n$, so we know we have to fill at least $n^2$ cells. At each cell, we look for an optimal value of k by checking all the possibilities from i to j, which is at worst $n$ values. Therefore the running time is $O(n^3)$.

6.11 Chain matrix multiplication

Given a long chain of matrices to multiply, figure out the order of multiplications (the parenthesisation) that requires the lowest number of scalar multiplications.

Implementation: The approach is similar to optimal BSTs. Once again we define a function M(i,j) as the minimum number of scalar multiplications needed to multiply matrices i through j together. If matrix l has dimensions $d_{l-1} \times d_l$, then

M(i,j) = min over k of [ M(i,k) + M(k+1,j) + d_{i-1} d_k d_j ] // split the chain between matrices k and k+1

Once again, we have to figure out the optimal value for k, which means iterating through all possibilities for k from i to j-1.

Runtime: The runtime is the same as finding an optimal BST because they're fundamentally the same kind of problem, so the running time is $O(n^3)$.

6.12 Maximum independent set in a tree

Problem statement

Implementation: ?

Runtime: $O(n+n) = O(n)$

6.13 Diameter (width) of a tree

Problem statement

Implementation: ?

Runtime: O(n)

7 Network Flow

7.1 The Ford-Fulkerson algorithm

Given a flow network with integer edge capacities, and two vertices designated as the source and the sink, find the maximum amount of flow between the source and the sink.

Implementation: We iteratively do:

  1. Use DFS to find a valid path from source to sink.
  2. Find the lowest capacity of the edges along that path, and augment the graph by that amount along that path. (Augmenting the graph means lowering the remaining capacity of every edge along the path by that amount, while at the same time adding the same amount of capacity in the opposite direction of each of those edges.)
  3. Add the amount to a running total that keeps track of the total flow.
  4. GOTO 1., using the augmented graph.
  5. When no more paths can be found from source to sink, we are done.

Runtime: If all the edge capacities are integers, we always increase the flow by at least 1, so in the worst case we have to iterate $F$ times, where $F$ is the maximum flow. Each iteration takes $O(n+m)$ time to find a path and augment along it, so the runtime is $O(F(n+m))$, usually written $O(Fm)$.
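
A sketch in Python using DFS to find augmenting paths, with capacities stored in a nested dict (the reverse edge of every edge must be present with capacity 0 so it can receive residual capacity):

def ford_fulkerson(capacity, source, sink):
    # capacity[u][v] = remaining capacity of edge u -> v (residual graph)
    def find_path(u, bottleneck, visited):
        if u == sink:
            return bottleneck
        visited.add(u)
        for v, cap in capacity[u].items():
            if cap > 0 and v not in visited:
                pushed = find_path(v, min(bottleneck, cap), visited)
                if pushed > 0:
                    capacity[u][v] -= pushed    # use capacity along the path
                    capacity[v][u] += pushed    # add it in the reverse direction
                    return pushed
        return 0

    max_flow = 0
    while True:
        pushed = find_path(source, float('inf'), set())
        if pushed == 0:                         # no augmenting path remains
            return max_flow
        max_flow += pushed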

7.2 The Edmonds-Karp algorithm

Problem statement

Implementation: It is identical to Ford-Fulkerson, except that instead of using DFS to find a path from source to sink, you use BFS. This always finds a shortest (fewest-edges) augmenting path.

Runtime: Every iteration takes $O(E)$ time (a BFS), where $E$ is the number of edges, and in each iteration at least one edge becomes saturated. Because the shortest augmenting path is always chosen, the length of that path never decreases, and each edge can become saturated at most $O(V)$ times, so there are $O(VE)$ iterations. Thus the running time is $O(VE^2)$.

7.3 Ford-Fulkerson with scaling capacities

Problem statement

Implementation: You keep a value $\delta$, initially the largest power of 2 no greater than the maximum capacity of any edge coming out of the source, and you only use augmenting paths whose edges all have remaining capacity of at least $\delta$. After each such phase you scale $\delta$ down by a factor of two, until it reaches 1.

Runtime: This turns a runtime of $O(Fm)$ into a runtime of $O(m^2 \log_2 C)$. You scale the factor $\log_2 C$ times, and there are at most $2m$ augmentations at every scale. Each augmentation takes $O(m)$ time, which gives us the result of $O(m^2 \log_2 C)$ for the running time.

7.4 Maximum bipartite matching

Problem statement

Implementation: You give every edge a flow capacity of 1, connect a super-source to every vertex on one side and every vertex on the other side to a super-sink, and then run Ford-Fulkerson on the result.

Runtime: $O(VE)$; the maximum flow is at most $V$ (every capacity is 1, so each vertex on one side can be matched only once), and finding and augmenting along each path takes $O(E)$ per iteration.