pqcprep

pqcprep provides parametrised quantum circuits (PQCs) for quantum state preparation.

The aim of pqcprep is to implement the algorithm for quantum state preparation described in Background via gate-efficient parametrised quantum circuits (PQCs), as described in Approach. However, the functionality provided as part of the package is general enough to be adapted to a wide range of other applications. A description of how to use the package is given in Usage.

Usage

pqcprep provides an out-of-the-box command-line tool to construct and train PQCs for quantum state preparation, as well as visualising their performance. Additionally, the package provides a range of functions to be used for more bespoke applications.

Installation

pqcprep is published on PyPI, the Python Package Index. Thus, it can be installed using the command

            python3 -m pip install pqcprep

This requires pip, which is installed automatically with most Python distributions (see here for more information).

The source code for pqcprep can also be downloaded from GitHub by cloning the repository via

            git clone https://github.com/david-f-amorim/PQC_function_evaluation.git

Command-line Tool

pqcprep comes with a built-in command-line tool. When installing pqcprep via pip (recommended) using the tool is as simple as running the command

            pqcprep [-OPTIONS]

When using pqcprep via a GitHub clone the command-line tool can be accessed by running

            python3 -m pqcprep [-OPTIONS]

when run from the directory containing the downloaded /pqcprep directory.

The central function of the command-line tool is to construct a PQC for either function evaluation (phase encoding) or amplitude preparation, train it, and evaluate its performance. This is implemented by executing functions such as pqcprep.training_tools.train_QNN(), pqcprep.training_tools.test_QNN(), pqcprep.training_tools.ampl_train_QNN(), pqcprep.plotting_tools.benchmark_plots() and pqcprep.plotting_tools.benchmark_plots_ampl(). A collection of different directories containing data files and plots is generated in the process. As the program is running, relevant information is displayed in the terminal. The most important outputs produced are the PQC weights, which can be used to replicate the trained PQC and include it in other algorithms and applications.

Options

The following options can be passed to the command-line tool. Note that many of these options are closely linked to the arguments of pqcprep.training_tools.train_QNN(), pqcprep.training_tools.test_QNN() and pqcprep.training_tools.ampl_train_QNN(). Consulting the documentation for these functions can thus be useful. The order in which options are passed is irrelevant. Most options can be passed using a short-hand (often just one letter) beginning with - as well as a longer, more descriptive name beginning with --.

Options requiring one (or more) variable(s) to be passed alongside them, i.e. -option VAL :

  • -D, --DIR :

    Set the parent directory for the output files: -D X or --DIR X sets the parent directory to X. If not provided, the current working directory is taken as the default.

  • -n, --n :

    Set number of input qubits: -n X or --n X sets the number of input qubits to X. If not provided, a default value of 6 is taken.

  • -m, --m :

    Set number of target qubits: -m X or --m X sets the number of target qubits to X. If not provided, a default value of 3 is taken. This has no effect if --A or --ampl is passed.

  • -L, --L :

    Set number of network layers: -L X or --L X sets the number of network layers to X. Multiple options can be given and will be executed sequentially, e.g. -L X Y Z will create three PQCs with X, Y and Z network layers, respectively. If not provided, a default value of 6 is taken.

  • -f, --f :

    Set phase function to be evaluated: -f X or --f X sets the phase function to X, where X must be one of the options for mode in pqcprep.psi_tools.psi(). This has no effect if --A or --ampl is passed. If not provided, the default 'psi' function is evaluated.

  • -l, --loss :

    Set loss function to be used by the optimiser: -l X or --loss X sets the loss function to X, where X must be one of the options for loss_str in pqcprep.training_tools.set_loss_func(). If not provided, the default loss function is SAM.

  • -e, --epochs :

    Set the number of epochs (training runs): -e X or --epochs X sets the number of epochs to X. If not provided, the default number of epochs is 600.

  • -M, --meta :

    Include meta information in the file names of the output: -M X or --meta X results in the string X being included in the name string created via pqcprep.file_tools.vars_to_name_str() (or via pqcprep.file_tools.vars_to_name_str_ampl() if -A or --ampl is passed). Note that the string X may not contain spaces or the sequence '--'. If not provided, no meta information is added to the file names.

  • -ni, --nint :

    Set number of integer input qubits: -ni X or --nint X sets the number of integer input qubits to X. If not provided, all input qubits are taken to be integer qubits.

  • -mi, --mint :

    Set number of integer target qubits: -mi X or --mint X sets the number of integer target qubits to X. If not provided, all target qubits are taken to be integer qubits. This has no effect if --A or --ampl is passed.

  • -d, --delta :

    Set value of the delta parameter used to train the function evaluation network in some contexts: -d X or --delta X sets the value of delta to X. Note that X must be a float between 0 and 1. Multiple options can be given and will be executed sequentially, e.g. -d X Y Z will train three PQCs with delta equal to X, Y and Z, respectively. This has no effect unless -TS or -H is passed, and has no effect if -A or --ampl is passed. If not provided, a default value of 0 is taken.
    See pqcprep.training_tools.train_QNN() for more information.

  • -p, --WILL_p :

    Set value of the p parameter used by the WILL loss function: -p X or --WILL_p X sets the value of p to X. Multiple options can be given and will be executed sequentially, e.g. -p X Y Z will train three PQCs with p equal to X, Y and Z, respectively. This has no effect unless -l WILL is passed (i.e. the WILL loss function is used). If not provided, a default value of 1 is used.

  • -q, --WILL_q :

    Set value of the q parameter used by the WILL loss function: -q X or --WILL_q X sets the value of q to X. Multiple options can be given and will be executed sequentially, e.g. -q X Y Z will train three PQCs with q equal to X, Y and Z, respectively. This has no effect unless -l WILL is passed (i.e. the WILL loss function is used). If not provided, a default value of 1 is used.

  • -RP, --repeat_params :

    Set value of the repeat_parameters option for the function evaluation network: -RP X or --repeat_params X sets the value of repeat_parameters to X. Note that X must be a valid option for the argument repeat_parameters of pqcprep.training_tools.train_QNN(). This has no effect if -A or --ampl is passed. If not provided, the default value of None is taken.

  • -fA, --f_ampl :

    Set amplitude function to be prepared: -fA X or --f_ampl X sets the amplitude function to X, where X must be one of the options for mode in pqcprep.psi_tools.A(). This has no effect unless --A or --ampl is passed. If not provided, the default 'x76' function is evaluated.

  • --xmin :

    Set minimum of the amplitude function domain: --xmin X sets the minimum of the amplitude function domain to X. This has no effect unless --A or --ampl is passed. If not provided, the default value of 40.0 is taken.

  • --xmax :

    Set maximum of the amplitude function domain: --xmax X sets the maximum of the amplitude function domain to X. This has no effect unless --A or --ampl is passed. If not provided, the default value of 168.0 is taken.

  • --seed :

    Set the seed for random number generation: --seed X sets the seed to X. If not provided, the default value of 1680458526 is taken.

Options not requiring values to be passed alongside them, i.e. -option :

  • -h, --help :

    Show a summary of the various options and exit.

  • -A, --ampl :

    If passed, the generated PQC performs amplitude preparation, as opposed to function evaluation, corresponding to executing pqcprep.training_tools.ampl_train_QNN(). If not passed (default), the generated PQC performs function evaluation, corresponding to executing pqcprep.training_tools.train_QNN().

  • -r, --real :

    If passed, the generated PQC only involves CX and Ry gates, resulting in real amplitudes.

  • -PR, --pahse_reduce :

    If passed, reduce the function to be evaluated to a phase between 0 and 1. This has no effect if -A or --ampl is passed.

  • -TS, --train_suerpos :

    If passed, train the function evaluation network with inputs in superposition, as opposed to randomly sampled basis states. This has no effect if -A or --ampl is passed.

  • -H, --hayes :

    This is a short-hand used to train a circuit aimed at reproducing the results of Hayes 2023. Passing -H or --hayes is equivalent to passing -TS -r -n 6 -PR. This has no effect if -A or --ampl is passed. This option is recommended for most applications.

  • -gs, --gen_seed :

    If passed, generate the seed used for random number generation from the current timestamp. This overrides the value specified via --seed.

  • -RPA, --repeat_params_ampl :

    If passed, set the repeat_params option of pqcprep.training_tools.ampl_train_QNN() to True. This has no effect unless -A or --ampl is passed.

  • -I, --ignore_duplicates :

    If passed, existing outputs with the same name string will be ignored and overwritten.

  • -R, --recover :

    If passed, training will be continued from existing TEMP files (for the case of the program being interrupted). If no relevant TEMP files are found the program will execute normally.

  • -S, --show :

    If passed, output plots are shown as they are produced. The plots will be saved to file regardless of whether this option is passed or not. This has no effect if -NP or --no_plots is passed.

  • -P, --pdf :

    If passed, output plots are saved as PDFs, as opposed to PNGs. This has no effect if -NP or --no_plots is passed.

  • -NP, --no_plots :

    If passed, no output plots are produced.

Examples

The large number of options that can be passed to the command-line tool allows for a large degree of customisation, at the cost of increased usage complexity. The following examples showcase a few simple uses of the tool. Note that the commands assume that pqcprep has been installed via pip (see above), which enables the command pqcprep in the terminal. The examples also hold for installation via GitHub when replacing pqcprep with python3 -m pqcprep and running in the appropriate directory (see above).

  1. Train a PQC to evaluate a linear phase function using the WIM loss function and otherwise default settings:

            pqcprep -l WIM -f linear
    
  2. Train several PQCs, with different numbers of network layers, to evaluate a quadratic phase function with 4 target qubits, the -H flag and otherwise default settings:

            pqcprep -m 4 -f quadratic -L 3 6 9 12 -H
    
  3. Train a PQC to prepare a uniform amplitude function with repeated parameters and output plots saved as PDFs:

            pqcprep -A -fA uniform -P -RPA
    

These examples mainly serve to illustrate how options are passed to the command-line tool and do not carry any particular significance in terms of the PQCs produced.

Using Module Functions

While the built-in command-line tool is the most efficient way of generating and training PQCs for state preparation, the detailed analysis and post-processing of the circuits will typically require bespoke functions depending on the application. The package pqcprep provides a wide range of functions, from low- to high-level functionality, that can be imported and used for these purposes. A full overview of the available functions is given as part of this documentation.

To import a function <function> from the sub-module <submodule> use the command

            from pqcprep.<submodule> import <function>

when pqcprep is installed via pip (see above). A modified version of this import, taking into account local file paths, has to be used when installing via GitHub.

The sub-modules included with pqcprep are

  • pqcprep.binary_tools

  • pqcprep.file_tools

  • pqcprep.phase_tools

  • pqcprep.plotting_tools

  • pqcprep.pqc_tools

  • pqcprep.psi_tools

  • pqcprep.resource_tools

  • pqcprep.training_tools

and it is recommended to survey the functions included in each of these sub-modules when working with the package.

Approach

The key challenge tackled by pqcprep is to construct a PQC that can perform function evaluation: $\ket{j}\ket{0} \mapsto \ket{j} \ket{\Psi'(j)}$, for some analytical function $\Psi$, with $\Psi ' \equiv \Psi / 2 \pi$. Throughout this documentation, the $n$-qubit register containing the $\ket{j}$ and the $m$-qubit register containing the $\ket{\Psi'(j)}$ will be referred to as the "input register" and "target register", respectively.
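To make the target-register encoding concrete, $\Psi'(j) \in [0,1)$ can be stored as an $m$-bit binary fraction. The following is a minimal sketch of that encoding in plain Python; it is illustrative only and does not reproduce the package's exact bit conventions (such as the integer/fractional splits set via -ni and -mi):

```python
def to_fixed_point(x, m):
    """Encode x in [0, 1) as an m-bit binary fraction string.

    Values are rounded to the nearest multiple of 2**-m (modulo 1).
    """
    return format(int(round(x * 2**m)) % 2**m, f"0{m}b")

print(to_fixed_point(0.5, 3))    # '100', i.e. 0.100_2 = 1/2
print(to_fixed_point(0.625, 3))  # '101', i.e. 0.101_2 = 5/8
```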

A quantum convolutional neural network (QCNN) is used to approach the problem. A QCNN is a parametrised quantum circuit involving multiple layers. Two types of network layers are implemented here:

  • convolutional layers (CL) involve multi-qubit entanglement gates;

  • input layers (IL) (replacing the conventional QCNN pooling layers) involve controlled single-qubit operations on target qubits.

Input qubits only appear as controls throughout the QCNN.

Convolutional Layers

Each CL involves the cascaded application of a two-qubit operator on the target register. A general two-qubit operator involves 15 parameters. Hence, to reduce the parameter space, the canonical three-parameter operator

$$ \mathcal{N}(\alpha, \beta, \gamma) = \exp \left( i \left[ \alpha X \otimes X + \beta Y \otimes Y + \gamma Z \otimes Z \right] \right) $$

is applied, at the cost of restricting the search space. This can be decomposed (see Vatan 2004) into 3 CX, 3 $\text{R}_\text{z}$, and 2 $\text{R}_\text{y}$ gates. A two-parameter real version, $\mathcal{N}_\mathbb{R}(\lambda, \mu)$, can be obtained by removing the $\text{R}_\text{z}$ gates.
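Since $X \otimes X$, $Y \otimes Y$ and $Z \otimes Z$ commute pairwise, $\mathcal{N}$ factorises into three commuting two-qubit rotations. This can be checked numerically with a short NumPy sketch (illustrative only, not part of the package):

```python
import numpy as np

# Pauli matrices and their two-qubit tensor products
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
XX, YY, ZZ = np.kron(X, X), np.kron(Y, Y), np.kron(Z, Z)

def expi(H):
    """exp(iH) for Hermitian H, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T

def canonical_N(alpha, beta, gamma):
    """The three-parameter canonical gate N(alpha, beta, gamma)."""
    return expi(alpha * XX + beta * YY + gamma * ZZ)

a, b, c = 0.3, 0.7, -0.2
N = canonical_N(a, b, c)
# XX, YY and ZZ commute, so the exponential factorises
assert np.allclose(N, expi(a * XX) @ expi(b * YY) @ expi(c * ZZ))
# N is unitary
assert np.allclose(N @ N.conj().T, np.eye(4))
```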

Two convolutional layer topologies are implemented, loosely based on Sim 2019:

  • neighbour-to-neighbour/linear CLs: the $\mathcal{N}$ (or $\mathcal{N}_\mathbb{R}$) gate is applied to neighbouring target qubits;

  • all-to-all/quadratic CLs: the $\mathcal{N}$ (or $\mathcal{N}_\mathbb{R}$) gate is applied to all combinations of target qubits.

The $\mathcal{N}$-gate cost of neighbour-to-neighbour (NN) layers is $\mathcal{O}(m)$ while that of all-to-all (AA) layers is $\mathcal{O}(m^2)$. The QCNN uses alternating linear and quadratic CLs.
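The two topologies differ only in which qubit pairs receive the $\mathcal{N}$ gate. A minimal sketch of the pair bookkeeping (the exact gate ordering used by the package is not reproduced here):

```python
from itertools import combinations

def nn_pairs(m):
    """Neighbour-to-neighbour (linear) layer: m - 1 gates, O(m)."""
    return [(i, i + 1) for i in range(m - 1)]

def aa_pairs(m):
    """All-to-all (quadratic) layer: m(m-1)/2 gates, O(m^2)."""
    return list(combinations(range(m), 2))

m = 4
print(len(nn_pairs(m)))  # 3 gates
print(len(aa_pairs(m)))  # 6 gates
```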

Input Layers

ILs, replacing pooling layers, feed information about the input register into the target register.
An IL involves a sequence of controlled generic single-qubit rotations (CU3 gates) on the target qubits, with input qubits as controls. For an IL producing states with real amplitudes, the CU3 gates are replaced with $\text{CR}_\text{y}$ gates. Each input qubit controls precisely one CU3 (or $\text{CR}_\text{y}$) operation, resulting in an $\mathcal{O}(n)$ gate cost. ILs are inserted after every second convolutional layer, alternating between control states 0 and 1.

Training the QCNN

For training, the QCNN is wrapped as a SamplerQNN object and connected to PyTorch's Adam optimiser via TorchConnector. The optimiser determines improved parameter values for each training run ("epoch") based on the calculated loss between output and target state. Beyond loss, mismatch is an important metric: $$ M= 1 - |\braket{\psi_\text{target}| \psi_\text{out}}|. $$
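For reference, the mismatch between two normalised state vectors can be computed directly (a minimal NumPy sketch, not the package's implementation):

```python
import numpy as np

def mismatch(psi_target, psi_out):
    """M = 1 - |<psi_target|psi_out>| for normalised state vectors."""
    return 1.0 - abs(np.vdot(psi_target, psi_out))

# identical states have (numerically near-)zero mismatch ...
psi = np.array([1, 1j]) / np.sqrt(2)
print(mismatch(psi, psi))
# ... while orthogonal states have mismatch 1
print(mismatch(np.array([1, 0]), np.array([0, 1])))  # 1.0
```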

There are two ways to train the QCNN on input data:

  1. Training on individual states: one of the $2^n$ input states, $\ket{j},$ is randomly chosen each epoch. The network is thus taught to transform $\ket{j}\ket{0} \mapsto \ket{j}\ket{\Psi'(j)} $ for each of the states individually.
  2. Training in superposition: the network is taught to transform $$ \left(\sum^{2^n-1}_{j=0} c_j \ket{j} \right) \ket{0} \mapsto \sum^{2^n -1}_{j=0} c_j \ket{j}\ket{\Psi'(j)}, $$ where the coefficients $c_j \sim \frac{1}{\sqrt{2^n}}$ are randomly sampled each epoch.
    By linearity, this teaches the network to transform $\ket{j}\ket{0} \mapsto \ket{j}\ket{\Psi'(j)} $ for each $\ket{j}$.
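The sampling of the coefficients $c_j$ can be sketched as follows; drawing small fluctuations around the uniform value $1/\sqrt{2^n}$ and renormalising is an assumption for illustration, not the package's exact sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(1680458526)  # the tool's default seed

def random_coefficients(n, spread=0.1):
    """Sample c_j ~ 1/sqrt(2^n), perturbed and renormalised."""
    c = 1 / np.sqrt(2**n) + spread * rng.standard_normal(2**n)
    return c / np.linalg.norm(c)

c = random_coefficients(6)
assert np.isclose(np.linalg.norm(c), 1.0)  # a valid input state
```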

One can also train the QCNN to produce a target distribution independent of the input register. This is equivalent to constructing an operator $\hat{U}_A$ such that $$\hat{U}_A \ket{0}^{\otimes n} = \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \ket{j}$$ for some distribution function $\tilde{A}$.

Background

pqcprep builds on a scheme for quantum state preparation presented in Hayes 2023: a complex vector $\boldsymbol{h} =\lbrace \tilde{A}_j e^{i \Psi (j)} | 0 \leq j < N \rbrace$, where $\tilde{A}$, $\Psi$ are real functions that can be computed efficiently, is prepared as the quantum state $$ \ket{h} = \frac{1}{\vert \tilde{A} \vert} \sum^{2^n-1}_{j=0} \tilde{A}(j) e^{i \Psi (j)} \ket{j}, $$ using $n =\lceil \log_2 N \rceil$ qubits.
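As a numerical illustration, the amplitudes of $\ket{h}$ can be assembled from any two real functions; the particular $\tilde{A}$ and $\Psi$ below are placeholders, not those used in Hayes 2023:

```python
import numpy as np

def h_state(A, Psi, n):
    """Amplitudes of |h> = (1/|A|) sum_j A(j) e^{i Psi(j)} |j> on n qubits."""
    j = np.arange(2**n)
    amps = A(j) * np.exp(1j * Psi(j))
    return amps / np.linalg.norm(amps)

# placeholder amplitude and phase functions, for illustration only
h = h_state(A=lambda j: np.sqrt(j + 1), Psi=lambda j: 0.1 * j**2, n=3)
assert np.isclose(np.linalg.norm(h), 1.0)  # properly normalised
assert h.size == 8                         # 2^3 basis states
```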

This requires operators $\hat{U}_A$ and $\hat{U}_\Psi$ such that \begin{align} \hat{U}_A \ket{0}^{\otimes n} &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \ket{j}, \newline \hat{U}_\Psi \ket{j} &= e^{i \Psi (j)} \ket{j}. \end{align}

$\hat{U}_\Psi$ is constructed via an operator $\hat{Q}_\Psi$ that performs function evaluation in an ancilla register, \begin{equation} \hat{Q}_\Psi \ket{j} \ket{0}^{\otimes m}_a = \ket{j} \ket{\Psi'(j)}_a, \end{equation} with $\Psi'(j) \equiv \Psi(j) / 2 \pi$, as well as an operator $\hat{R}$ that extracts the phase, $$ \hat{R} \ket{j} \ket{\Psi'(j)}_a = \ket{j} e^{i 2 \pi \Psi' (j)} \ket{\Psi' (j)}_a.$$ Thus, $\hat{U}_\Psi = \hat{Q}_{\Psi}^\dagger \hat{R} \hat{Q}_\Psi$ with $\hat{Q}_{\Psi}^\dagger$ clearing the ancilla register: \begin{align} \hat{U}_\Psi \hat{U}_A \ket{0} \ket{0}_a &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \hat{U}_\Psi \ket{j} \ket{0}_a \newline &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \hat{Q}_\Psi^\dagger \hat{R} \hat{Q}_\Psi \ket{j} \ket{0}_a \newline &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \hat{Q}_\Psi^\dagger \hat{R} \ket{j} \ket{\Psi'(j)}_a \newline &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) \hat{Q}_\Psi^\dagger \ket{j} e^{i \Psi(j)} \ket{\Psi'(j)}_a \newline &= \frac{1}{|\tilde{A}|} \sum^{2^n-1}_{j=0} \tilde{A}(j) e^{i \Psi(j)} \ket{j} \ket{0}_a \newline &= \ket{h} \ket{0}_a \newline \end{align}

The size, $m$, of the ancilla register limits the precision to which $\Psi(j)$ can be encoded to $\sim 2^{1-m} \pi$: the register stores $\Psi'(j) \in [0,1)$ with resolution $2^{-m}$, corresponding to a phase resolution of $2\pi \cdot 2^{-m} = 2^{1-m} \pi$.

Imprint

David Amorim, 2024. Email: 2538354a@student.gla.ac.uk .

This project was funded by a Carnegie Vacation Scholarship and supervised by Prof Sarah Croke (University of Glasgow, School of Physics and Astronomy).

  1"""
  2### *pqcprep* provides parametrised quantum circuits (PQCs) for quantum state preparation. 
  3
  4The aim of *pqcprep* is to implement the algorithm for quantum state preparation described in [Background](#background) via gate-efficient parametrised quantum circuits (PQCs), as described in [Approach](#approach). 
  5However, the functionality provided as part of the package is general enough to be adapted to a wide range of other applications. A description of how to use the package is given in [Usage](#usage).
  6
  7# Usage 
  8
  9*pqcprep* provides an out-of-the-box command-line tool to construct and train PQCs for quantum state preparation as well as visualisaing their performance. Additionally, the package provides 
 10a range of functions to be used for more bespoke applications. 
 11
 12## Installation 
 13
 14*pqcprep* is published on [PYPI](https://pypi.org), the python publishing index. Thus, it can be installed using the command 
 15                
 16                python3 -m pip install pqcprep 
 17
 18This requires `pip`, which should be automatically installed when using python in most cases (see [here](https://pypi.org/project/pip/) for more information). 
 19
 20
 21The source code for *pqcprep* can also be downloaded from GitHub by cloning the repository via 
 22
 23                git clone https://github.com/david-f-amorim/PQC_function_evaluation.git 
 24
 25## Command-line Tool 
 26
 27*pqcprep* comes with a built-in command-line tool. When installing *pqcprep* via `pip` (recommended) using the tool is as simple as running the command
 28
 29                pqcprep [-OPTIONS]
 30
 31When using *pqcprep* via a GitHub clone the command-line tool can be accessed by running 
 32
 33                python3 -m pqcprep [-OPTIONS]
 34
 35when in the same working directory as the downloaded directory `/pqcprep`. 
 36
 37
 38The central function of the command-line tool is to construct a PQC for either function evaluation (phase encoding) or amplitude preparation, train it and
 39evaluate its performance. This is implemented by executing functions such as `pqcprep.training_tools.train_QNN()`, `pqcprep.training_tools.test_QNN()`, `pqcprep.training_tools.ampl_train_QNN()`,
 40`pqcprep.plotting_tools.benchmark_plots()` and `pqcprep.plotting_tools.benchmark_plots_ampl()`. A collection of different directories containing data files and plots is generated in the process. 
 41As the program is running, relevant information is displayed in the terminal. The most important output produced are the PQC weights which can be used to replicate the trained PQC and 
 42include it in various algorithms or other applications. 
 43
 44### Options
 45
 46The following options can be passed to the command-line tool. Note that many of these options are closely linked to the arguments of  `pqcprep.training_tools.train_QNN()`, `pqcprep.training_tools.test_QNN()`, `pqcprep.training_tools.ampl_train_QNN()`. 
 47Consulting the documentation for these functions can thus be useful.The order in which options are passed is irrelevant. Most options can be passed using a short-hand (often just one letter) beginning with `-` as well as a longer, more descriptive name beginning with `--`. 
 48
 49
 50Options requiring one (or more) variable(s) to be passed alongside them, i.e. `-option VAL` : 
 51
 52- **-D**, **--DIR** :
 53
 54    Set the parent directory for the output files: `-D X` or `--D` X sets the parent directory to `X`. If not provided, the current working directory is taken as the default.
 55
 56- **-n**, **--n** : 
 57
 58    Set number of input qubits: `-n X` or `--n X` sets the number of input qubits to `X`. If not provided, a default value of 6 is taken. 
 59
 60- **-m**, **--m** : 
 61
 62    Set number of target qubits: `-m X` or `--m X` sets the number of target qubits to `X`. If not provided, a default value of 3 is taken. This has no effect if `--A` or `--ampl` is passed. 
 63
 64-  **-L**, **--L** :   
 65
 66    Set number of network layers: `-L X` or `--L X` sets the number of network layers to `X`. Multiple options can be given and will be executed sequentially, e.g. `-L X Y Z` will 
 67    create three PQCs with `X`, `Y` and `Z` network layers, respectively.  If not provided, a default value of 6 is taken. 
 68
 69- **-f**, **--f** : 
 70
 71    Set phase function to be evaluated: `-f X` or `--f X` sets the phase function to `X`, where `X` must be one of the options for `mode` in `pqcprep.psi_tools.psi()`. This has no effect if `--A` or `--ampl` is passed. 
 72    If not provided, the default `'psi'` function is evaluated. 
 73
 74- **-l**, **--loss** : 
 75
 76    Set loss function to be used by the optimiser: `-l X` or `--loss X` sets the loss function to `X`, where `X` must be one of the optiosn for `loss_str` in `pqcprep.training_tools.set_loss_func()`. 
 77    If not provided, the default loss function is `SAM`. 
 78
 79- **-e**, **--epochs** : 
 80
 81    Set the number of epochs (training runs): `-e X` or `--epochs X` sets the number of epochs to `X`. If not provided, the default number of epochs is 600.
 82
 83- **-M**, **--meta** :    
 84
 85    Include a meta information in the file names of the output: `-M X` or `--meta X` results in the string `X` to be included in the name string created via 
 86    `pqcprep.file_tools.vars_to_name_str()` (or via `pqcprep.file_tools.vars_to_name_str_ampl()` if `-A` or `--ampl` is passed). Note that the string `X` may not 
 87    contain spaces or the sequence `'--'`. If not provided, no meta information is added to the file names. 
 88
 89- **-ni**, **--nint** : 
 90
 91    Set number of integer input qubits: `-ni X` or `--nint X` sets the number of integer input qubits to `X`. If not provided, all input qubits are taken to be integer qubits. 
 92
 93- **-mi**, **--mint** : 
 94
 95    Set number of integer target qubits: `-mi X` or `--mint X` sets the number of integer target qubits to `X`. If not provided, all target qubits are taken to be integer qubits. This has no effect if `--A` or `--ampl` is passed.     
 96
 97-  **-d**, **--delta** :   
 98
 99    Set value of the `delta` parameter used to train the function evaluation network in some contexts: `-d X` or `--delta X` sets the value of `delta` to `X`. Note that `X` must be a float between 0 and 1. Multiple options can be given and will be executed sequentially, e.g. `-d X Y Z` will 
100    train three PQCs with `delta` equal to `X`, `Y` and `Z`, respectively. This has no effect unless `-TS` or `-H` are passed and if `-A` or `--ampl` are passed. If not provided, a default value of 0 is taken.     
101    See `pqcprep.training_tools.train_QNN()` for more information. 
102
103- **-p**, **--WILL_p** :
104
105    Set value of the `p` parameter used by the `WILL` loss function: `-p X` or `--WILL_p X` sets the value of `p`  to `X`. Multiple options can be given and will be executed sequentially, e.g. `-p X Y Z` will 
106    train three PQCs with `p` equal to `X`, `Y` and `Z`, respectively. This has no effect unless `-l WILL` is passed (i.e. the `WILL` los function is used). If not provided, a default value of 1 is used. 
107
108- **-q**, **--WILL_q** :
109
110    Set value of the `q` parameter used by the `WILL` loss function: `-q X` or `--WILL_q X` sets the value of `q`  to `X`. Multiple options can be given and will be executed sequentially, e.g. `-q X Y Z` will 
111    train three PQCs with `q` equal to `X`, `Y` and `Z`, respectively. This has no effect unless `-l WILL` is passed (i.e. the `WILL` los function is used). If not provided, a default value of 1 is used. 
112
113- **-RP**, **--repeat_params** : 
114
115    Set value of the `repeat_parameters` option for the function evaluation network: `-RP X` or `--repeat_parameters X` sets the value of `repeat_parameters` to `X`. Note that `X` must be a valid option for the 
116    argument `repeat_parameters` of `pqcprep.training_tools.train_QNN()`. This has no effect if `-A` or `--ampl` is passed. If not provided, the default value of None is taken. 
117
118- **-fA**, **--f_ampl** :   
119
120    Set amplitude function to be prepared: `-fA X` or `--f_ampl X` sets the amplitude function to `X`, where `X` must be one of the options for `mode` in `pqcprep.psi_tools.A()`. This has no effect unless `--A` or `--ampl` is passed. 
121    If not provided, the default `'x76'` function is evaluated. 
122
123- **--xmin** :   
124
125    Set minimum of the amplitude function domain: `--xmin X` sets the mininum of the amplitude function domain to `X`. This has no effect unless `--A` or `--ampl` is passed. 
126    If not provided, the default value of 40.0 is taken.    
127
128- **--xmax** :   
129
130    Set minimum of the amplitude function domain: `--xmax X` sets the maximum of the amplitude function domain to `X`. This has no effect unless `--A` or `--ampl` is passed. 
131    If not provided, the default value of 168.0 is taken.       
132
133- **--seed** : 
134
135    Set the seed for random number generation: `--seed X` sets the seed to `X`. If not provided, the default value of 1680458526 is taken. 
136    
137
138Options not requiring values to be passed alongside them, i.e. `-option` : 
139
140- **-h**, **--help** : 
141
142    Show a summary of the various options and exit. 
143
144- **-A**, **--ampl** :
145
146    If passed, the generated PQC performs amplitude preparation, as opposed to function evaluation, corresponding to executing `pqcprep.training_tools.ampl_train_QNN()`. If not 
147    passed (default), the generted PQC performs function evaluation, corresponding to executing `pqcprep.training_tools.train_QNN()`. 
148
149- **-r**, **--real** :
150
151    If passed, the generated PQC only involves CX and Ry gates, resuling in real amplitudes. 
152
153- **-PR**, **--pahse_reduce** :
154
155    If passed, reduce the function to be evaluated to a phase between 0 and 1. This has no effect if `-A` or `--ampl` is passed. 
156
157- **-TS**, **--train_suerpos** :
158
159    If passed, train the function evaluation network with inputs in superposition, as opposed to randomly sampled basis states. This has no effect if `-A` or `--ampl` is passed.
160
161- **-H**, **--hayes** :
162
163    This is a short-hand used to train a circuit aimed at reproducing the results of Hayes 2023. Passing `-H` or `--hayes` is equivalent to passing `-TS -r -n 6 -PR`. 
164    This has no effect is `-A` or `--ampl` is passed. **This option is recommended for most applications**.
165
166- **-gs**, **--gen_seed** : 
167
168    If passed, generate the seed used for random number generation from the current timestamp. This overrides the value specified via `--seed`. 
169
170- **-RPA**, **--repeat_params_ampl** :
171
172    If passed, set the `repeat_params` option of `pqcprep.training_tools.ampl_train_QNN()` to True. This has no effect unless `-A` or `--ampl` is passed.
173
174- **-I**, **--ignore_duplicates** :
175
176    If passed, existing outputs with the same name string will be ignored and overwritten. 
177
178- **-R**, **--recover** :
179
180    If passed, training will be continued from existing TEMP files (for the case of the program being interrupted). If no relevant TEMP files are found the program 
181    will execute normally. 
182
183- **-S**, **--show** :
184
185    If passed, output plots are shown as they are produced. The plots will be saved to file regardless of whether
186    this option is passed or not. This has no effect if `-NP` or `--no_plots` is passed.  
187
188- **-P**, **--pdf** :
189
190    If passed, output plots are saved as PDFs, as opposed to PNGs. This has no effect if `-NP` or `--no_plots` is passed.  
191
192- **-NP**, **--no_plots** :     
193
194    If passed, no output plots are produced. 
195
196
197### Examples 
198
199The large number of options that can be passed to the command-line tool allow for a large degree of customisation, at the cost of increased usage complexity. 
200The following examples serve to showcase a few simple uses of the tool. Note that the prompts assume that *pqcprep* has been installed via pip (see above), which enables 
201the command `pqcprep` in the terminal. The examples also hold for installation via GitHub by replacing `pqcprep` with `python3 -m pqcprep` when run in the appropriate directory (see above). 
202
2031. Train a PQC to evaluate a linear phase function using the WIM loss function and otherwise default settings: 
204
205                pqcprep -l WIM -f linear 
206
2072. Train several PQCs, with different numbers of networks layers, to evaluate a quadratic phase function with 4 target qubits, the `-H` flag and otherwise default settings:
208
209                pqcprep -m 4 -l quadratic -L 3 6 9 12 -H
210
2113. Train a PQC to prepare a uniform amplitude function with repeated parameters and output plots saved as PDFs:
212
213                pqcprep -A -fA uniform -P -RPA 
214
215These examples mainly serve to illustrate how options are passed to the command-line tool and do not carry any particular significance in terms of the PQCs produced. 
216
## Using Module Functions 

While the built-in command-line tool is the most efficient way of generating and training PQCs for state preparation, detailed analysis and post-processing of the circuits 
will typically require bespoke functions depending on the application. The package *pqcprep* provides a wide range of functions, spanning low- to high-level functionality, that can 
be imported and used for these purposes. A full overview of the available functions is given as part of this documentation. 

To import a function `<function>` from the sub-module `<submodule>` use the command

                from pqcprep.<submodule> import <function>

when *pqcprep* is installed via pip (see above). A modified version of this import, taking into account local file paths, has to be used when installing via GitHub. 

The sub-modules included with *pqcprep* are 

- `pqcprep.binary_tools` 

- `pqcprep.file_tools`

- `pqcprep.phase_tools`

- `pqcprep.plotting_tools`

- `pqcprep.pqc_tools`

- `pqcprep.psi_tools`

- `pqcprep.resource_tools`

- `pqcprep.training_tools`

and it is recommended to survey the functions included in each of the sub-modules when working with the package. 

# Approach

The key challenge tackled by *pqcprep* is to construct a PQC that can perform function evaluation: $\ket{j}\ket{0} \mapsto \ket{j} \ket{\Psi'(j)}$, for some analytical function 
$\Psi$, with $\Psi' \equiv \Psi / 2 \pi$. Throughout this documentation, the $n$-qubit register containing $\ket{j}$ and the $m$-qubit register containing $\ket{\Psi'(j)}$ 
will be referred to as the "input register" and "target register", respectively.
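
To make the fixed-point convention concrete: a value $\Psi'(j) \in [0,1)$ is identified with an $m$-bit binary fraction labelling a computational basis state of the target register. The following minimal sketch illustrates this identification; the helper `to_fixed_point` is ours, not part of *pqcprep*:

```python
def to_fixed_point(x, m):
    """Map x in [0, 1) to the nearest m-bit binary fraction, as a bit string.

    Illustrative helper, not a pqcprep function.
    """
    k = round(x * 2**m) % 2**m  # nearest m-bit value, wrapping at 1
    return format(k, f"0{m}b")

# e.g. Psi'(j) = 0.3125 = 5/16 is exactly representable on m = 4 qubits
print(to_fixed_point(0.3125, 4))  # → '0101'
```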

A quantum convolutional neural network (QCNN) is used to approach the problem. A QCNN is a parametrised quantum circuit involving multiple layers. Two types of network layers 
are implemented here:

- convolutional layers (CL) involve multi-qubit entanglement gates; 

- input layers (IL) (replacing the conventional QCNN pooling layers) involve controlled single-qubit operations on target qubits. 

Input qubits only appear as controls throughout the QCNN. 

### Convolutional Layers 

Each CL involves the cascaded application of a two-qubit operator on the target register. A general two-qubit operator involves 15 parameters. Hence, to reduce the parameter space, 
the canonical three-parameter operator

$$
\\mathcal{N}(\\alpha, \\beta, \\gamma) =  \exp \\left( i \\left[ \\alpha X \otimes X + \\beta Y \\otimes Y + \\gamma Z \otimes Z \\right] \\right)
$$

is applied, at the cost of restricting the search space. This can be decomposed (see [Vatan 2004](https://arxiv.org/pdf/quant-ph/0308006)) into 3 CX, 3 $\\text{R}_\\text{z}$, and 2 $\\text{R}_\\text{y}$ gates.
A two-parameter real version, $\\mathcal{N}_\\mathbb{R}(\\lambda, \\mu)$, can be obtained by removing the $\\text{R}_\\text{z}$ gates. 
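
The canonical operator can be checked numerically. The sketch below (not the package's implementation) builds $\\mathcal{N}(\\alpha, \\beta, \\gamma)$ as a $4 \times 4$ matrix via an eigendecomposition of the Hermitian exponent and verifies that the result is unitary:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def canonical_N(alpha, beta, gamma):
    """exp(i [a XX + b YY + c ZZ]) via eigendecomposition of the Hermitian exponent."""
    H = alpha * np.kron(X, X) + beta * np.kron(Y, Y) + gamma * np.kron(Z, Z)
    w, V = np.linalg.eigh(H)          # H = V diag(w) V†, w real
    return (V * np.exp(1j * w)) @ V.conj().T

U = canonical_N(0.1, 0.2, 0.3)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # unitarity check: True
```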

Two convolutional layer topologies are implemented, loosely based on [Sim 2019](https://arxiv.org/pdf/1905.10876): 

- neighbour-to-neighbour/linear CLs: the $\\mathcal{N}$ (or $\\mathcal{N}_\\mathbb{R}$) gate is applied to neighbouring target qubits; 

- all-to-all/quadratic CLs: the $\\mathcal{N}$ (or $\\mathcal{N}_\\mathbb{R}$) gate is applied to all combinations of target qubits. 

The $\\mathcal{N}$-gate cost of neighbour-to-neighbour (NN) layers is $\\mathcal{O}(m)$ while that of all-to-all (AA) layers is $\\mathcal{O}(m^2)$.
The QCNN uses alternating linear and quadratic CLs.

### Input Layers 

ILs, replacing pooling layers, feed information about the input register into the target register.  
An IL involves a sequence of controlled generic single-qubit rotations (CU3 gates) on the target qubits, with input qubits as controls.
For an IL producing states with real amplitudes, the CU3 gates are replaced with $\\text{CR}_\\text{y}$ gates.
Each input qubit controls precisely one CU3 (or $\\text{CR}_\\text{y}$) operation, resulting in an $\\mathcal{O}(n)$ gate cost.
ILs are inserted after every second convolutional layer, alternating between control states 0 and 1. 

### Training the QCNN 

For training, the QCNN is wrapped as a [SamplerQNN](https://qiskit-community.github.io/qiskit-machine-learning/stubs/qiskit_machine_learning.neural_networks.SamplerQNN.html) 
object and connected to PyTorch's [Adam optimiser](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html) via the [TorchConnector](https://qiskit-community.github.io/qiskit-machine-learning/stubs/qiskit_machine_learning.connectors.TorchConnector.html). 
The optimiser determines improved parameter values for each training run ("epoch") based on the loss calculated between the output and target states. 
Beyond the loss, an important metric is the mismatch:
$$
M = 1 - |\\braket{\\psi_\\text{target} | \\psi_\\text{out}}|. 
$$
302
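As a minimal illustration (not the package's implementation), the mismatch can be computed directly from two state vectors:

```python
import numpy as np

def mismatch(psi_target, psi_out):
    """M = 1 - |<psi_target|psi_out>| for normalised state vectors (illustrative)."""
    return 1.0 - abs(np.vdot(psi_target, psi_out))

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
print(mismatch(zero, zero))  # identical states: 0.0
print(mismatch(zero, one))   # orthogonal states: 1.0
```

A vanishing mismatch means the output state coincides with the target up to a global phase of modulus one.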
There are two ways to train the QCNN on input data:

1.  Training on individual states: one of the $2^n$ input states, $\ket{j}$, is randomly chosen each epoch. 
    The network is thus taught to transform $\ket{j}\ket{0} \mapsto \ket{j}\ket{\Psi'(j)}$ for each of the states individually.
2.  Training in superposition: the network is taught to transform 
$$
\\left(\\sum^{2^n-1}_{j=0} c_j \\ket{j} \\right) \\ket{0} \\mapsto  \\sum^{2^n -1}_{j=0} c_j \\ket{j}\\ket{\\Psi'(j)},
$$
    where the coefficients $c_j \\sim \\frac{1}{\\sqrt{2^n}}$ are randomly sampled each epoch.  
    By linearity, this teaches the network to transform $\ket{j}\ket{0} \mapsto \ket{j}\ket{\Psi'(j)}$ for each $\ket{j}$. 

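The coefficient sampling for superposition training can be sketched as follows. The Gaussian perturbation about uniform amplitudes is an illustrative assumption, not necessarily the scheme *pqcprep* uses; the point is only that the $c_j$ stay close to $1/\sqrt{2^n}$ and are renormalised:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_coefficients(n, spread=0.1):
    """Sample coefficients c_j close to 1/sqrt(2^n) and renormalise.

    The Gaussian perturbation is an illustrative assumption.
    """
    c = (1.0 + spread * rng.standard_normal(2**n)) / np.sqrt(2**n)
    return c / np.linalg.norm(c)

c = sample_coefficients(3)
print(len(c), np.isclose(np.sum(np.abs(c) ** 2), 1.0))  # → 8 True
```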
One can also train the QCNN to produce a target distribution independent of the input register. This is equivalent to constructing an operator $\hat{U}_A$
such that 
$$\hat{U}_A \ket{0}^{\otimes n} =  \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j) \ket{j}$$ 
for some distribution function $\\tilde{A}$.

# Background

*pqcprep* builds on a scheme for quantum state preparation presented in [Hayes 2023](https://arxiv.org/pdf/2306.11073):
a complex vector $\\boldsymbol{h} =\\lbrace \\tilde{A}_j e^{i \Psi (j)} | 0 \leq j < N \\rbrace$, where $\\tilde{A}$, $\Psi$ are real functions 
that can be computed efficiently, is prepared as the quantum state 
$$ \ket{h} = \\frac{1}{\\vert \\tilde{A} \\vert} \sum^{2^n-1}_{j=0} \\tilde{A}(j) e^{i \Psi (j)} \ket{j}, $$
using $n =\\lceil \log_2 N \\rceil$ qubits. 

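For small $n$ the state $\ket{h}$ can be assembled classically, which is useful when checking a trained circuit against its target. In this sketch the Gaussian amplitude and linear phase are illustrative choices, and `target_state` is not a package function:

```python
import numpy as np

def target_state(A, Psi, n):
    """Normalised statevector with components A(j) e^{i Psi(j)}, j = 0 .. 2^n - 1."""
    j = np.arange(2**n)
    h = A(j) * np.exp(1j * Psi(j))
    return h / np.linalg.norm(h)  # divide by |A~|, the 2-norm of the amplitudes

# illustrative choices: Gaussian amplitude profile, linear phase
h = target_state(lambda j: np.exp(-((j - 3.5) ** 2) / 4), lambda j: 0.5 * j, n=3)
print(np.isclose(np.linalg.norm(h), 1.0))  # → True
```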
This requires operators $\hat{U}_A$ and $\hat{U}_\Psi$ such that 
\\begin{align}
\hat{U}_A \ket{0}^{\otimes n} &=  \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j) \ket{j}, \\newline
\hat{U}_\Psi \ket{j} &= e^{i \Psi (j)} \ket{j}.
\\end{align}

$\hat{U}_\Psi$ is constructed via an operator $\hat{Q}_\Psi$ that performs function evaluation in an ancilla register,
\\begin{equation}
\hat{Q}_\Psi  \ket{j} \ket{0}^{\otimes m}_a = \ket{j} \ket{\Psi'(j)}_a,
\\end{equation}
with $\Psi'(j) \equiv \Psi(j) / 2 \pi$, as well as an operator $\hat{R}$ that extracts the phase, 
$$ \hat{R} \ket{j} \ket{\Psi'(j)}_a = \ket{j} e^{i 2 \pi \Psi' (j)} \ket{\Psi' (j)}_a.$$
Thus, $\hat{U}_\Psi = \hat{Q}_{\Psi}^\dagger \hat{R} \hat{Q}_\Psi$ with $\hat{Q}_{\Psi}^\dagger$ clearing the ancilla register: 
\\begin{align} 
\hat{U}_\Psi  \hat{U}_A \ket{0} \ket{0}_a &= \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j) \hat{U}_\Psi \ket{j} \ket{0}_a  \\newline 
&= \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j)  \hat{Q}_\Psi^\dagger \hat{R} \hat{Q}_\Psi \ket{j} \ket{0}_a  \\newline 
&= \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j)  \hat{Q}_\Psi^\dagger \hat{R}  \ket{j} \ket{\Psi'(j)}_a  \\newline 
&= \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j)  \hat{Q}_\Psi^\dagger \ket{j} e^{i \Psi(j)} \ket{\Psi'(j)}_a  \\newline 
&= \\frac{1}{|\\tilde{A}|} \sum^{2^n-1}_{j=0} \\tilde{A}(j) e^{i \Psi(j)} \ket{j}  \ket{0}_a  \\newline 
&= \ket{h} \ket{0}_a. 
\\end{align}
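
The phase-extraction step can be sketched numerically. Restricted to the ancilla, $\hat{R}$ acts as a diagonal matrix, assuming the fixed-point convention $\Psi' = k / 2^m$ for the ancilla basis state $\ket{k}$; the helper `R_ancilla` is illustrative, not a package function:

```python
import numpy as np

def R_ancilla(m):
    """Diagonal operator applying |k> -> e^{i 2 pi k / 2^m} |k> on the m-qubit
    ancilla, i.e. the phase e^{i 2 pi Psi'} for Psi' = k / 2^m (illustrative)."""
    k = np.arange(2**m)
    return np.diag(np.exp(2j * np.pi * k / 2**m))

R = R_ancilla(3)
print(np.allclose(np.abs(np.diag(R)), 1.0))  # all entries are pure phases: True
```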

The size, $m$, of the ancilla register limits the precision to which $\Psi(j)$ can be encoded to $\sim 2^{1-m} \pi$. 

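This bound follows from the grid of encodable phases: with $m$ ancilla qubits the available phases $2 \pi k / 2^m$ are spaced $2\pi / 2^m = 2^{1-m} \pi$ apart. A one-line check:

```python
import numpy as np

def phase_resolution(m):
    """Spacing between adjacent encodable phases for an m-qubit ancilla."""
    return 2 * np.pi / 2**m  # equals 2^(1-m) * pi

for m in (3, 5, 8):
    print(m, phase_resolution(m))
```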
# Imprint 

David Amorim, 2024. Email: [*2538354a@student.gla.ac.uk*](mailto:2538354a@student.gla.ac.uk).

This project was funded by a Carnegie Vacation Scholarship and supervised by Prof Sarah Croke (University of Glasgow, School of Physics and Astronomy). 

360"""