cplusplus.co.il

Odd man out

Posted on: 24/04/2010

There’s nothing like the “Eureka” moment when you finally manage to solve a challenging puzzle. I’m a man of puzzles myself, and the ones I like most are computer-science and programming-related puzzles.

I’ve recently heard a pretty intriguing puzzle I would like to share with you.

First part

There’s an array of integers, of odd size N (N%2 == 1). The array contains numbers such that every number appears exactly twice in the array (each number has a duplicate), except one number.
Your mission (should you choose to accept it) is to find the one number that does not have a pair.

Example: Array=[1,4,6,7,8,1,8,11,11,4,6] Result=7.

Requirements: O(1) memory, O(N) runtime.

Second part

Same array, but this time its size is even (N%2 == 0). Again, every number appears exactly twice, except this time two numbers don’t have a pair.
Your mission is to find those two numbers.

Example: Array=[1,4,6,7,8,1,8,11,11,32,4,6] Result=7, 32.

Requirements: O(1) memory, O(N) runtime.

Food for thought

What if we had three such numbers? Or even K of them?
Could we generalize this any further?

Good luck!

Please post the answers (or questions, if you have any) in the comments section below.

The correct answers now appear in the comments section below (congratulations to the solvers who supplied real elegant solutions!) Do not look there if you are still thinking about this puzzle.

49 Responses to "Odd man out"

I figure for the first part you start with 0 and XOR with each number in the array. XORing with the same number twice puts you back at 0, so at the end your variable contains the non-duplicated number.

Really?

What if your data is:

[2, 1, 2]

Then you end up with a result of 1:

0 xor 2 = 2
2 xor 1 = 3
3 xor 2 = 1

paste this into a python shell: ((0 ^ 2 ) ^1 ) ^ 2

Answer: 1

Nice… But how can you apply the same method in the second part? You get the XOR of both unique numbers.

XOR 2^(each number) instead of the numbers. Then you will get a reversible result for any K. You will end up with a long integer where the index of each 1 in the integer is one of the unique numbers.

I guess I should have written 2**(each number) instead of 2^(each number). Or perhaps pow(2, each number).
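For what it’s worth, that 2**x trick can be sketched in a few lines of Python (a hedged sketch, assuming non-negative inputs; note the mask needs about 2**max(a) bits of state, so it is not O(1) space):

```python
from functools import reduce
from operator import xor

def uniques_via_bitmask(nums):
    # XOR one-hot masks (1 << x) instead of the values themselves; paired
    # values cancel, so the surviving set bits index the unique numbers.
    # The mask needs 2**max(nums) bits, so this is NOT O(1) space.
    mask = reduce(xor, (1 << x for x in nums), 0)
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]
```

It works unchanged for any number K of uniques, which is exactly the appeal of the idea.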

Here’s a cheeky C99 solution that works for both problems (albeit with a severe restriction on the size of the numbers):

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main (int argc, char **argv) {
  int i;
  long long trinum = 0;

  for (i = 1; i < argc; i++) {
    trinum += llround (pow (3, atoi(argv[i])));
  }

  i = 0;
  while (trinum) {
    lldiv_t q = lldiv (trinum, 3);
    if (q.rem == 1) {
      printf("Odd one out: %d\n", i);
    }
    trinum = q.quot;
    i++;
  }
  return EXIT_SUCCESS;
}

(the lack of a 64-bit type makes it somewhat hard to write in standard C++)

sorry, not O(1) space…

Well done, Fletcher.

To generalize: let’s split the source into M arrays, the i-th array containing the i-th bit of each element. Then, for each array, we compute parity. If it’s 1, the corresponding bit of the result is 1; otherwise 0.

Now for the second part, let’s compute parity as well. If it’s 1, it means one number has 1 and the other has 0 in that bit. So we make another parity computation over the original array, but only consider numbers that have 1 in the given bit; this will give us one number. Now we’ve reduced the problem to the first part.

Python offers a neat way to implement this:

import operator

def first(a):
    return reduce(operator.xor, a, 0)

print first([1,4,6,7,8,1,8,11,11,4,6])

def second(a):
    parity = reduce(operator.xor, a, 0)
    assert parity != 0
    bit = parity & -parity  # lowest set bit
    x1 = reduce(operator.xor, filter(lambda x: x & bit, a))
    return x1, parity ^ x1

print second([1,4,6,7,8,1,8,11,11,32,4,6])

I must admit I like the elegance of your sourcecode.

Needless to say that the solution is flawless, like others posted here.
Congratulations to all the solvers 🙂

For the second, we need two iterations.
First, XOR the whole array; you get A^B as a result. Let the N-th bit of A^B be 1 (which means this bit is set differently in A and B).

Then XOR those elements of the array for which X & (1<<N) is nonzero, and those for which it is zero, separately; that gives you both numbers.

How about this for the second part: as per first part, but since you’ve got two unique numbers A and B you end up with A XOR B. Since A != B, this will have at least one bit set. Divide and conquer: Pick a bit. Find the XOR of all the values where that bit is set … that’ll be one of them. Find the XOR of all the values where that bit is clear … that’ll be the other.

Nice one Alex, beat me to it!
—–nick.zoic.org

I was thinking about how to do this in one pass: If these are N-bit integers, you could keep 2N running XOR accumulators and get the answer out that way.

This possibly opens up a way to solve the K unique numbers case too I think … but I haven’t thought that one through.

you don’t have space for O(2N) accumulators…

Sorry, confused the Ns. Let me rephrase that:

You have N integers, each of which is B bits long.
You keep 2B running XOR accumulators, one for “where bit b is 0” and one for “where bit b is 1”.

Each accumulator pair holds either (A, B) if that bit is “useful” or (0, A XOR B) if that bit isn’t (in either order). pick a pair where neither side is zero, that’s your answer.

I couldn’t come up with a way to get 3 unique numbers out of this though … you get (A, B XOR C), (B, A XOR C), (C, A XOR B) and (0, A XOR B XOR C) out which isn’t very helpful.

—–nick.zoic.org
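Nick’s corrected scheme (2B accumulators, a single pass) might be sketched in Python like this; assuming non-negative inputs that fit in `bits` bits and, as the (0, A XOR B) case above implies, that neither unique value is zero:

```python
def two_uniques_one_pass(nums, bits=32):
    # One pass with 2*bits XOR accumulators: acc[b][v] collects the XOR
    # of every element whose bit b equals v. Paired values land in the
    # same accumulator twice and cancel.
    acc = [[0, 0] for _ in range(bits)]
    for x in nums:
        for b in range(bits):
            acc[b][(x >> b) & 1] ^= x
    # At any bit where the two uniques differ, the pair (acc[b][0],
    # acc[b][1]) is exactly the two uniques, so both sides are nonzero
    # (this is where the "neither unique is zero" assumption bites).
    for zero_side, one_side in acc:
        if zero_side and one_side:
            return zero_side, one_side
```

As the follow-up comments note, this still uses O(B·B) bits of state rather than O(B), so it trades the second pass for extra space.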

that’s using O(2B*B) bits, which still isn’t O(1). O(1) is effectively O(B) bits here…

nick, why do you need 2B accumulators? why isn’t B+1 good enough? one for all the numbers, and B for taking only the numbers where the i-th bit is set.

Alon: I think you are correct.


#include <stdio.h>

int a[] = {1,4,6,7,8,1,8,11,11,32,4,6};

// int a[] = {1,4,6,7,8,1,8,11,11,4,6};

#define ASIZE (sizeof(a)/sizeof(a[0]))

int main (int argc, char ** argv)
{
  int i,j,ignore, can_ignore = 0;

  for (i = 0; i < ASIZE - 1; i++){
    if (can_ignore && (a[i] == ignore)) continue;
    for (j = i + 1; j < ASIZE; j++){
      if (can_ignore && (a[j] == ignore)) continue;
      if (a[i] == a[j]){
        if (can_ignore){
          a[i] = a[j] = ignore;
        } else {
          can_ignore = 1;
          ignore = a[i];
        }
        break;
      }
    }
    if ((!can_ignore) || (can_ignore && (a[i] != ignore))) printf("%d doesn't have a pair\n", a[i]);
  }
}

hello all,

sorry about the mangled code 🙂

i think i’ve got a fencepost bug in this, i.e. not handling the case where the last integer doesn’t have a pair

my first version simply zeroed out and ignored pairs and spit out singles, but then i realized zero might be one of the input integers, so i figured the value of the first pair found could be special and ignorable.

anyway, it was fun and seemed simple and elegant…to me at least 🙂

cheers,

p.

after re-reading the problem, and studying bodq’s answer, i’ve realized my ignorance. i’ve implemented O(n**2) (brute force), haven’t i?

ahhhh….good ol’ brute force. *laffin’

cheers,

p.

How does this generalize? The xor solution won’t work without modification, since the 3 odd numbers out might xor to 0…

One very simple solution that’s fully generalizable is to keep an array of counters, one for each possible value in the array. The size of that array only depends on the “width” of the integers, rather than the length of the list.

So, for 32-bit integers, you need a set of 2^32 1-bit counters, which lets you detect any number of “duplicate” or “loner” entries in a single pass through the list.

not O(1) space…

My solution in Ruby:

def solve(a)
  n = 0
  a.each {|x| n ^= 2**x}
  a.each {|x| puts x if n & 2**x != 0 }
end

not O(1) space either…

First part in J:

oddManOut=: ~:/&.#:@.0&,

Test:
oddManOut 1 4 6 7 8 1 8 11 11 4 6
7

My comments kept dying, i think the website I put in was breaking it.

think you could explain this code? I’ve always been amazed by J, but never able to understand it

Certainly!

~:/&.#:@.0&,

J reads right to left.
First we see :
,
means append

What are we appending? We are appending a zero by binding it with the ampersand. So 0&, makes sure we have a zero at the start of our list.

Next is @. which is basically saying apply the stuff to the right, to all the numbers in the list.

#: means make the number binary. Because I can’t readily xor integers.

&. is fun and useful and binds functions such that it will do the first function; here, it is #:. Then it will do ~:/ and then do the inverse of #: after that! So it’s saying “Make the number binary, do somethin’ else and then make the number an integer again”

Lots of the built in verbs (functions) have inverses and you can define your own inverses for your own functions.

Lastly is ~:/, which is really two things, XOR ~: and /, which is saying do this to everything.

Like in LISP (+ 1 2 3) is 1+2+3

in J +/ 1 2 3 is the same.

In python, we see here in previous posts:
return reduce(operator.xor, a, 0)

Cheers!

Marc, that’s technically O(1) memory usage, but I wouldn’t exactly call it efficient 🙂

Here’s my generalization for K integers: if the result of XORing all the integers is 0, add 1 to all of them, and XOR again – the result will no longer be 0 (unless all the numbers are the same!). I haven’t proven this, but I’m assuming it 🙂

Then you can find the unique numbers using the same method as for 2 integers – look at 1 bit of the xor at a time, and only consider numbers that have that bit set (and then not set). I think you have to recurse on sublists, though, since removing one of the unique numbers will completely change the xor.

Eg. starting with [0,0,2,3,4,5] (4 uniques), the xor is 0, so add 1 to each.
[1,1,3,4,5,6] has an xor of 4, so we look at the b100 bit, and split the list into two parts based on that bit:
[1,1,3] and [4,5,6].
[1,1,3] splits on the b10 bit, into [1,1] and [3], which we know how to solve (the 1 and 2 unique cases).
[4,5,6] splits on the b10 bit into [4,6] and [5], solve those.

The performance depends on the number of unique numbers – we’ll have to do a recursive split for each unique number. Partitioning can be done in-place and in O(N), so our total performance is O(KN) time, O(1) memory.
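A rough Python sketch of this recursion (illustrative only: the sublist copies and the Counter guard are this sketch’s shortcuts, so it is not O(1) space, and the +1 fallback is the same unproven heuristic):

```python
from collections import Counter
from functools import reduce
from operator import xor

def find_uniques_rec(nums):
    # Recursive-splitting sketch. NOT O(1) space: the sublist copies
    # and the Counter guard below are readability shortcuts.
    if not nums:
        return []
    if all(c % 2 == 0 for c in Counter(nums).values()):
        return []                      # everything is paired off
    if len(nums) == 1:
        return nums                    # a lone unique
    r = reduce(xor, nums, 0)
    if r:
        bit = 1
        while bit <= r:
            if bit & r:
                ones = [x for x in nums if x & bit]
                zeros = [x for x in nums if not x & bit]
                if ones and zeros:     # found a bit that really splits
                    return find_uniques_rec(ones) + find_uniques_rec(zeros)
            bit <<= 1
    # r == 0 (or no set bit of r split the list): fall back to the +1
    # trick from the comment above. Unproven, and not guaranteed to
    # make quick progress in general.
    return [x - 1 for x in find_uniques_rec([x + 1 for x in nums])]
```

On the worked example above, `find_uniques_rec([0,0,2,3,4,5])` recovers all four uniques after one +1 round.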

> Here’s my generalization for K integers: if the result of XORing all the integers is 0, add 1 to all of them

This blows the O(1) memory requirement away; you should treat the input as read only.

You can run the algorithm on the input with each element increased by one just by changing your function to take a delta, so the space remains O(1).

The first two cases are easy, the third ‘food for thought’ case proved to be tricky.
(I’m including a few fragments of unchecked Haskell code. I have no compiler handy at the moment.)

1) Let R be the xor of all the ints. R is the unmatched int.
import Data.Bits

xors :: Bits a => [a] -> a
xors = foldr xor 0

extract1 :: Bits a => [a] -> a
extract1 = xors

For the next case, we need a lemma:

Lemma 0: Given R /= 0. Let Z = R & (-R). Z will have one bit set that consists of just the rightmost set bit of R.
2) Let R be the xor of all ints. Then R = A ^ B for some ints A, B with A /= B, so at least one bit in R is set. Let Z = R & (-R). Let A = the xor of all ints with that bit set. Let B = A ^ R.

xorWhere :: Bits a => (a -> Bool) -> [a] -> a
xorWhere p = xors . filter p

-- two passes. O(1) storage required

extract2 :: Bits a => [a] -> (a, a)
extract2 xs = (a, r `xor` a)
  where
    r = xors xs
    lsb = r .&. negate r
    a = xorWhere (\i -> i .&. lsb /= 0) xs

For the third case, we need a couple of lemmata:
Lemma 1: If X0 is the LSB of an integer X of arbitrary precision, then the LSB of X + 1 is ~X0.
Proof: The LSB of (X+Y) is the sum bit obtained from applying a half adder to the LSBs of X and Y. The sum bit of a half adder is just the exclusive or of its inputs. The LSB of (X+1) = X0 ^ True = ~X0.

Corollary 1: If you add an integer Z which has exactly one bit set to another integer, then the least significant bit to change in the result will be the bit set in Z, and it will be negated.

Lemma 2: ~x ^ ~y = x ^ y
Proof: Simple truth table

Lemma 3: Let A, B, C be integers of any precision. (A = B ^ C) && ((A + 1) = (B + 1) ^ (C + 1)) is unsatisfiable.
Proof: Consider the lsbs (A0, B0, and C0 respectively) of A, B, and C. (lsb A = lsb B ^ lsb C) && (lsb (A + 1) = lsb (B + 1) ^ lsb (C + 1)) => (A0 = B0 ^ C0) && (~A0 = ~B0 ^ ~C0) by lemma 1, but ~B0 ^ ~C0 = B0 ^ C0, by lemma 2, so A0 = ~A0, contradiction.

3) Let R = the xor of all the ints. R = A ^ B ^ C
If R == 0, then without loss of generality, A = B ^ C. Apply the algorithm to each of the inputs incremented by 1, and then subtract 1 from each of the results. During the second pass, R /= 0 must hold by Lemma 3!
Therefore, R /= 0, so at least one bit of R is non-zero. Let Z = R & (-R) — the LSB of R.
R is an exclusive or of three values, so either exactly 1 or exactly 3 of the results have that bit set. Let S = the xor of all ints with that bit set.
If S = R, then we know A, B and C all have bit Z set. We can apply Corollary 1 to add Z to each of the input ints and re-run the algorithm, then subtract Z from each of the results. When re-running the algorithm you know that no bit below Z is set in the (intermediate) result.
If S /= R, then let A = S. Extract B and C from the subset of the list of ints where bit Z is not set, using the algorithm from step 2 above.

map3 f (x, y, z) = (f x, f y, f z)

extract3 :: Bits a => [a] -> (a, a, a)
extract3 = extract3' 1

extract3' :: Bits a => a -> [a] -> (a, a, a)
extract3' n xs
  | r == 0    = map3 (subtract n) $ extract3' (2 * n) $ map (+ n) xs
  | a == r    = map3 (subtract n') $ extract3' (2 * n') $ map (+ n') xs
  | otherwise = (a, b, c)
  where
    r = xors xs
    -- r /= 0 by the first guard
    n' = r .&. negate r
    a = xorWhere (\i -> i .&. n' /= 0) xs
    (b, c) = extract2 $ filter (\i -> i .&. n' == 0) xs

During each pass the candidate LSB is increased, so this algorithm will terminate for bounded integers. If you assume integers of a fixed size, as seems to be assumed by the statement that the integers can be found in O(1) storage, then the third algorithm above executes in O(1).

However the approach I defined above doesn’t work directly for n > 3, and your claimed generalization to K integers also shouldn’t work. Nothing tells you how many of the answers are ‘on each side of a split’, and answers don’t pigeonhole into the cases of exactly 1 or exactly 3 having a bit set if it is set in R, so you lose the structure that makes the problem solvable.

> Marc, that’s technically O(1) memory usage, but I wouldn’t exactly call it efficient

No, it’s not. In computer science, we often get lazy and discount the, call them s bits, required to store each integer. O(1) space really means O(s) bits so if you need 2^s bits, then it isn’t O(1) by a long shot…

Nice! I think this trick can be used to deterministically split any odd K into 2 partitions each containing less than K unique numbers. Sadly the recursion runs into trouble for even K’s…

I think this works in practice, but can’t prove it.

An easier to prove version works in O(KN) expected time: instead of adding one, apply a cryptographic hash function to each number and a randomly chosen O(1) constant (same for each pass). With very high probability, the resulting XOR will be nonzero. If it is zero, merely choose a new random number and repeat…

Update: the performance isn’t O(KN), it’s O(log(K)N), since in the degenerate case (K=N) it’s just a quicksort that partitions on the highest bit. There are log(K) levels of recursion, and all the partitioning at each level is O(N) work.

Also, now that I think about it a little more, that if-xor-is-0-add-1 trick won’t work in the general case. When the numbers all have the same number of trailing zeroes in binary, incrementing leaves the xor at 0. Some other trick will be needed. (You could partition the array on an arbitrary bit, but I’d be afraid you could get unlucky and end up sorting the array).

For example, [2,4,8,14] xor’s to 0, but so does [3,5,9,15].

You could try to fix things by incrementing multiple times, but if you multiply each of those numbers by (1<<10), you'd need a lot of increments before it made a difference to the xor.
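A quick Python check of that counterexample:

```python
from functools import reduce
from operator import xor

vals = [2, 4, 8, 14]                        # four "uniques" whose XOR is already 0
print(reduce(xor, vals))                    # 0
print(reduce(xor, [v + 1 for v in vals]))   # still 0: [3, 5, 9, 15]
print(reduce(xor, [v + 2 for v in vals]))   # 24 -- two increments happen to work here,
# but after multiplying every value by 1<<10 the low 10 bits of each shifted value
# are identical and cancel in fours, so on the order of 1<<10 increments would be
# needed before the xor changes at all.
```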

unless you have a proof that half the unique numbers end up in each partition, it’s O(NK) time.

Copy/paste from Reddit:

Solving the second part:

Let X, Y be the two numbers and let Z = X xor Y, which you can compute. Let A[i] be the i-th bit of A.

You can observe that for any reasonable function F you can find, in O(1) memory and O(n) time, this value: F(X) xor F(Y).

Now you just have to find an F so that F(X) != F(Y). You can do this by observing that since X != Y there must be a bit jj such that X[jj] != Y[jj]; in fact jj can be taken as the leftmost nonzero bit of Z.

Then this F solves the problem:
F(A) = 0 if Z[jj] = A[jj]; A otherwise.

==
On the other hand, for 3 different numbers A,B,C you have the corner case where A = B xor C that makes using the above approach fail.

Solution for 3 numbers: http://www.reddit.com/r/programming/comments/bvgyc/puzzle_finding_the_nonduplicated_number_in_an/c0ot112

not a solution for three numbers, because it does not run in O(N) time…

The time complexity is O(|w|N), but |w| is usually considered constant, e.g., |w| = 32, so it is O(N) (linear in N).

|w| is not constant, it is at least log N since you have ~N/2 distinct integers…

O(1) space does mean O(|W|) bits, but O(N) time means O(N) time, not O(|W|N) time.

If you allow yourself more space, say O(K), you can solve the problem for K unique numbers in O(N + K) expected time…
Choose a random O(1) constant r. Divide the numbers logically into 4K lists by assigning number x to bin (H(x) + r) mod 4K, where H(-) is a cryptographic hash function. In a single pass of the array, compute the XOR of the numbers in each bin and the number of 0s, using 4K+1 registers (O(N) time).

Count the number of non-zero registers, plus 1 if the # of 0s is 1 (O(K) time). If this is exactly K, output all the non-zero XOR register values, and 0 if the 0-count was 1. Otherwise, we have failed (happens with very low probability); restart and repeat.

[this and the previous solution using cryptographic hashes assume the adversary’s computational power is limited, so he can’t reverse-engineer the cryptographic hash, for example.]
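The scheme might be sketched like this in Python (the function name, the choice of SHA-256 as H, and folding the random constant in as a salt are this sketch’s assumptions, not the comment’s):

```python
import hashlib
import random

def find_k_uniques(nums, k):
    # Hash each value into one of 4K bins and XOR per bin: O(K) space,
    # one O(N) pass per attempt. Retry with a fresh salt in the (rare)
    # event that two uniques collide in the same bin.
    bins = 4 * k
    while True:
        salt = random.randrange(1 << 32)
        def h(x):
            digest = hashlib.sha256(f"{x}:{salt}".encode()).digest()
            return int.from_bytes(digest[:4], "big") % bins
        acc = [0] * bins
        zero_parity = 0                 # the value 0 is invisible to XOR,
        for x in nums:                  # so track its count separately
            if x == 0:
                zero_parity ^= 1
            else:
                acc[h(x)] ^= x
        found = [v for v in acc if v]
        if zero_parity:
            found.append(0)
        if len(found) == k:             # every unique got its own bin
            return found
```

When the count comes out exactly K, each non-zero register must hold exactly one unique (pairs always cancel, since equal values hash to the same bin), which is why the success check also certifies correctness.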

Just to end playing with this puzzle for now:

I think there is no O(1) memory O(n) time algorithm that uses only XOR and solves the case of 4 numbers. A hint that this is the case is the following situation where the different numbers are:

X= 1001, Y = 0011, Z = 0110, W = 1100

That is, X[0] = Y[0] = Y[1] = Z[1] = Z[2] = W[2] = W[3] = X[3] = 1.

By using only * = XOR in O(1) space you can compute only this set of values: 0 (= X*Y*Z*W), X*Y (= Z*W), Y*Z (= W*X), and X*Z (= Y*W). You cannot go further, as this set is closed under the XOR operation; for example (X*Y)*(Y*Z) = X*Z, etc.

That argument doesn’t work so well, adding 1 to all of the elements, X, Y, Z, W yields a new set

1010, 0100, 0111, 1101, such that their xor is 0100

Then you know that bit 3 of that is present in an odd number of members of X+1, Y+1, Z+1, W+1.

This is where you seem to get stuck. If you can come up with a way to cheat a bit and somehow know that it is safe to assume that odd number is 3 and not 1, then breaking on bit 3 you can recover:

1010 ^ (0100 ^ 0111 ^ 1101), and the remaining three can be solved as above.

However, if you ‘guess’ wrong and assume that you have a single 1 bit for bit 3, you’d obtain the wrong answer:

1110 ^ (garbage)

However, this does potentially open an avenue of attack against the 4 case: you could compute either 1010 or 1110, depending on the assumption, and then check using a linear-time scan of the input that the claimed result is actually present; otherwise it is the other one.

However, unlike the case of 3 above, (a = b ^ c ^ d) && ((a + 1) = (b + 1) ^ (c + 1) ^ (d + 1)) may be satisfiable, so you need something with a stronger cryptographic relationship with ^ than (+ 1).

So while this particular set of inputs is solvable, there are probably good counterexamples to the use of (+ 1), at least in the 4-number case.

But there are plenty of candidates for a better function to use.

> That argument doesn’t work so well, adding 1 to all of the elements, X, Y, Z, W yields a new set

You are right.

Also, for the general case, I noticed that for k numbers I could modify the algorithm I gave for 3 numbers, and it will work in O(1) memory and O(|w|**k * N) time, where |w| is the length of the integers, etc.
