# 2.4. An Anagram Detection Example¶

A good example problem for showing algorithms with different orders of
magnitude is the classic anagram detection problem for strings. One
string is an anagram of another if the second is simply a rearrangement
of the first. For example, `"heart"` and `"earth"` are anagrams. The
strings `"python"` and `"typhon"` are anagrams as well. For the sake
of simplicity, we will assume that the two strings in question are of
equal length and that they are made up of symbols from the set of 26
lowercase alphabetic characters. Our goal is to write a Boolean function
that will take two strings and return whether they are anagrams.

## 2.4.1. Solution 1: Checking Off¶

Our first solution to the anagram problem will check the lengths of the
strings and then see whether each character in the first string actually
occurs in the second. If it is possible to “check off” each character, then
the two strings must be anagrams. Checking off a character will be
accomplished by replacing it with the special C++ null character `'\0'`.
The first step in the process will be to copy the second string into a
local string for checking off. Each character from the first string can
then be checked against the characters in that local copy and, if found,
checked off by replacement. ActiveCode 1 shows this function.

To analyze this algorithm, we need to note that each of the *n*
characters in `s1` will cause an iteration through up to *n*
characters in the array from `s2`. Each of the *n* positions in the
array will be visited once to match a character from `s1`. The number
of visits then becomes the sum of the integers from 1 to *n*. We stated
earlier that this can be written as

\[\sum_{i=1}^{n} i = \frac {n(n+1)}{2} = \frac {1}{2}n^{2} + \frac {1}{2}n\]

As \(n\) gets large, the \(n^{2}\) term will dominate the \(n\) term and the \(\frac {1}{2}\) can be ignored. Therefore, this solution is \(O(n^{2})\).

## 2.4.2. Solution 2: Sort and Compare¶

Another solution to the anagram problem will make use of the fact that
even though `s1` and `s2` are different, they are anagrams only if
they consist of exactly the same characters. So, if we begin by sorting
each string alphabetically, from a to z, we will end up with the same
string if the original two strings are anagrams. ActiveCode 2 shows
this solution.

At first glance you may be tempted to think that this algorithm is
\(O(n)\), since there are three consecutive simple iterations:
the first two to convert the strings to char arrays and the last
to compare the *n* characters after the sorting process. However, the
two calls to the C++ `sort` function are not without their own cost. As
we will see in a later chapter, sorting is typically either
\(O(n^{2})\) or \(O(n\log n)\), so the sorting operations dominate the
iteration. In the end, this algorithm will have the same order of
magnitude as that of the sorting process.

## 2.4.3. Solution 3: Brute Force¶

A **brute force** technique for solving a problem typically tries to
exhaust all possibilities. For the anagram detection problem, we can
simply generate an array of all possible strings using the characters from
`s1` and then see if `s2` occurs. However, there is a difficulty
with this approach. When generating all possible strings from `s1`,
there are *n* possible first characters, \(n-1\) possible
characters for the second position, \(n-2\) for the third, and so
on. The total number of candidate strings is
\(n*(n-1)*(n-2)*...*3*2*1\), which is \(n!\). Although some
of the strings may be duplicates, the program cannot know this ahead of
time and so it will still generate \(n!\) different strings.

It turns out that \(n!\) grows even faster than \(2^{n}\) as
*n* gets large. In fact, if `s1` were 20 characters long, there would
be \(20!=2,432,902,008,176,640,000\) possible candidate strings.
If we processed one possibility every second, it would still take us
77,146,816,596 years to go through the entire array. This is probably not
going to be a good solution.

## 2.4.4. Solution 4: Count and Compare¶

Our final solution to the anagram problem takes advantage of the fact that any two anagrams will have the same number of a’s, the same number of b’s, the same number of c’s, and so on. In order to decide whether two strings are anagrams, we will first count the number of times each character occurs. Since there are 26 possible characters, we can use an array of 26 counters, one for each possible character. Each time we see a particular character, we will increment the counter at that position. In the end, if the two arrays of counters are identical, the strings must be anagrams. ActiveCode 3 shows this solution.

Again, the solution has a number of iterations. However, unlike the
first solution, none of them are nested. The first two iterations used
to count the characters are both based on *n*. The third iteration,
comparing the two arrays of counts, always takes 26 steps since there are
26 possible characters in the strings. Adding it all up gives us
\(T(n)=2n+26\) steps. That is \(O(n)\). We have found a
linear order of magnitude algorithm for solving this problem.

Before leaving this example, we need to say something about space requirements. Although the last solution was able to run in linear time, it could only do so by using additional storage to keep the two arrays of character counts. In other words, this algorithm sacrificed space in order to gain time.

This is a common occurrence. On many occasions you will need to make decisions between time and space trade-offs. In this case, the amount of extra space is not significant. However, if the underlying alphabet had millions of characters, there would be more concern. As a computer scientist, when given a choice of algorithms, it will be up to you to determine the best use of computing resources given a particular problem.

Self Check

Q-1: Given the following code fragment, what is its Big-O running time?

```
int main(){
    int test = 0;
    for (int i = 0; i < n; i++){
        for (int j = 0; j < n; j++){
            test = test + i * j;
        }
    }
    return 0;
}
```

- O(n)
- No. In an example like this you want to count the nested loops, especially the loops that are dependent on the same variable, in this case, n.
- O(n^2)
- Right! A nested loop like this is O(n^2).
- O(log n)
- No. log n typically is indicated when the problem is iteratively made smaller.
- O(n^3)
- No. In an example like this you want to count the nested loops, especially the loops that are dependent on the same variable, in this case, n.

Q-2: Given the following code fragment, what is its Big-O running time?

```
int main(){
    int test = 0;
    for (int i = 0; i < n; i++){
        test = test + 1;
    }
    for (int j = 0; j < n; j++){
        test = test - 1;
    }
    return 0;
}
```

- O(n)
- Right! Even though there are two loops they are not nested. You might think of this as O(2n), but we can ignore the constant 2.
- O(n^2)
- No. Be careful; in counting loops you want to look carefully at whether or not the loops are nested.
- O(log n)
- No. log n typically is indicated when the problem is iteratively made smaller.
- O(n^3)
- No. Be careful; in counting loops you want to look carefully at whether or not the loops are nested.

Q-3: Given the following code fragment, what is its Big-O running time?

```
int main(){
    int i = n;
    int count = 0;
    while (i > 0){
        count = count + 1;
        i = i / 2;
    }
    return 0;
}
```

- O(n)
- No. Look carefully at the loop variable i. Notice that the value of i is cut in half each time through the loop. This is a big hint that the performance is better than O(n).
- O(n^2)
- No. Check again; is this a nested loop?
- O(log n)
- Right! The value of i is cut in half each time through the loop, so it will only take log n iterations.
- O(n^3)
- No. Check again; is this a nested loop?