Description:
Implement a quantum oracle on N qubits that computes the following function:
$$f(\vec{x}) = (\vec{b} \cdot \vec{x} + (\vec{1} - \vec{b}) \cdot (\vec{1} - \vec{x})) \mod 2 = \sum_{k=0}^{N-1} (b_k x_k + (1 - b_k) \cdot (1 - x_k)) \mod 2$$
Here $$\vec{b} \in \{0, 1\}^N$$ (a vector of N integers, each of which can be 0 or 1), and $$\vec{1}$$ is a vector of N 1s.
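As a concrete illustration (values chosen arbitrarily), take N = 3, $$\vec{b} = (1, 0, 1)$$ and $$\vec{x} = (1, 0, 0)$$:
$$f(\vec{x}) = \big((1 \cdot 1 + 0 \cdot 0) + (0 \cdot 0 + 1 \cdot 1) + (1 \cdot 0 + 0 \cdot 1)\big) \mod 2 = (1 + 1 + 0) \mod 2 = 0$$
Each term $$b_k x_k + (1 - b_k)(1 - x_k)$$ equals 1 exactly when $$x_k = b_k$$, so $$f$$ is the parity of the number of positions in which $$\vec{x}$$ agrees with $$\vec{b}$$.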
For an explanation of how this type of quantum oracle works, see Introduction to quantum oracles.
You have to implement an operation which takes the following inputs:
- an array of N qubits x in arbitrary state (input register), 1 ≤ N ≤ 8,
- a qubit y in arbitrary state (output qubit),
- an array of N integers b, representing the vector $$\vec{b}$$. Each element of b will be 0 or 1.
The operation doesn't have an output; its "output" is the state in which it leaves the qubits.
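Since each term $$b_k x_k + (1 - b_k)(1 - x_k)$$ reduces modulo 2 to $$x_k \oplus (1 \oplus b_k)$$, the oracle can be built from one CNOT per input qubit plus a constant flip of $$y$$ for every position where $$b_k = 0$$. Below is a minimal sketch of that logic, assuming a Qiskit-style Python interface; the function apply_oracle and the surrounding setup are illustrative only and are not the submission signature the contest expects.

```python
# Minimal sketch of the oracle logic, assuming a Qiskit-style interface
# (illustrative only; not the contest's required submission format).
from qiskit import QuantumCircuit

def apply_oracle(qc: QuantumCircuit, x, y, b):
    """Flip qubit y by f(x) = sum_k (b_k x_k + (1 - b_k)(1 - x_k)) mod 2.

    Each term equals x_k when b_k = 1 and (1 XOR x_k) when b_k = 0, so it
    suffices to CNOT every x_k into y and flip y once for each b_k = 0.
    """
    for k, bk in enumerate(b):
        qc.cx(x[k], y)      # add x_k to y (mod 2)
        if bk == 0:
            qc.x(y)         # add the constant 1 from the (1 - b_k)(1 - x_k) term

# Hypothetical usage: N = 3, b = [1, 0, 1]; qubits 0..N-1 are x, qubit N is y
n = 3
b = [1, 0, 1]
qc = QuantumCircuit(n + 1)
apply_oracle(qc, list(range(n)), n, b)
print(qc.draw())
```

Note that the pairs of X gates arising from consecutive zero entries of b could be merged into a single X on y when the number of zeros is odd; the per-term form above is kept for clarity.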
Your code should have the following signature:
Input Format:
None
Output Format:
None
Note:
None