diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index e1a3711..97ac076 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -1,36 +1,59 @@ -name: test +name: Testing + on: [push, pull_request, workflow_dispatch] jobs: - pytest: - name: pytest + build: runs-on: ubuntu-latest - - strategy: - matrix: - python-version: [3.7] - steps: + - name: Checkout 🛎️ + uses: actions/checkout@v4 + - name: Set up Python - uses: actions/setup-python@v1 + uses: actions/setup-python@v4 with: - python-version: 3.7 + python-version: '3.x' - - name: Checkout 🛎️ - uses: actions/checkout@v2 - - name: Install Dependencies run: | + pip install torch + pip install torchvision + pip install gymnasium + pip install pyro-ppl + pip install POT + pip install numpy + pip install scikit-learn + pip install tqdm + pip install pandas + pip install lightning + pip install pydantic pip install -U pytest pytest-cov - pip install -U -r src/requirements.txt - + - name: Testing run: | - PYTHONPATH=src/ pytest tests/ --cov=mylib --cov-report=xml - - - name: Upload to Codecov - uses: codecov/codecov-action@v2 - with: - files: ./coverage.xml, - fail_ci_if_error: true - verbose: true + PYTHONPATH=src/ pytest tests/ --cov=src --cov-report=xml + + - name: Generate coverage badge + run: | + python badge_generator.py + + - name: Check for changes in coverage badge + id: check_changes + run: | + if git diff --exit-code -- coverage-badge.svg; then + echo "No changes in coverage badge" + echo "changes=false" >> "$GITHUB_OUTPUT" + else + echo "Changes detected in coverage badge" + echo "changes=true" >> "$GITHUB_OUTPUT" + fi + + - name: Commit and push coverage badge + if: steps.check_changes.outputs.changes == 'true' + run: | + git config --global user.name 'github-actions[bot]' + git config --global user.email 'github-actions[bot]@users.noreply.github.com' + git add coverage-badge.svg + git commit -m "Update coverage badge" + git push origin develop diff 
--git a/.gitignore b/.gitignore index 66717dc..726df61 100644 --- a/.gitignore +++ b/.gitignore @@ -120,3 +120,10 @@ logs/ */mnist *.csv !.dvc +data +logs +pretrain_embeddings +wget-log +checkpoints +*.parquet +*.pt \ No newline at end of file diff --git a/BLOGPOST.md b/BLOGPOST.md new file mode 100644 index 0000000..c6347e2 --- /dev/null +++ b/BLOGPOST.md @@ -0,0 +1,224 @@ +# DataMetaMap: Why Compare Datasets? A Method-Driven Blogpost + +Understanding **dataset similarity** is a hidden key to transfer learning. If we can measure how "close" one dataset is to another, we can make smarter choices about which model to pre-train onโ€”saving time and boosting performance. + +But how do you embed entire datasets into a shared vector space? Our new library, **DataMetaMap**, implements four powerful, research-backed approaches. Below, we walk through the **key methodological insight** behind each one. + +--- + +## 1. Maximum Mean Discrepancy (MMD) โ€” A Classical Kernel View + +**Based on:** *A Kernel Two-Sample Test* (Gretton et al., 2012, following the review in arXiv:2208.11726) + +### Core Idea + +MMD answers a fundamental question: *Are two datasets sampled from the same distribution?* Unlike deep learning approaches, MMD works without trainingโ€”it directly computes a distance between distributions using kernel functions. + +### Mathematical Formulation + +Let $P$ and $Q$ be two probability distributions. Given samples $X = \{x_1, ..., x_m\} \sim P$ and $Y = \{y_1, ..., y_n\} \sim Q$, the squared MMD is: + +$$\text{MMD}^2(P, Q) = \left\| \mathbb{E}_{x \sim P}[\phi(x)] - \mathbb{E}_{y \sim Q}[\phi(y)] \right\|^2_{\mathcal{H}}$$ + +where $\phi$ maps data into a Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$. 
Using the kernel trick $k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}}$, we get: + +$$\text{MMD}^2 = \mathbb{E}_{x, x' \sim P}[k(x, x')] - 2\mathbb{E}_{x \sim P, y \sim Q}[k(x, y)] + \mathbb{E}_{y, y' \sim Q}[k(y, y')]$$ + +In practice, we use the unbiased empirical estimate: + +$$\widehat{\text{MMD}}^2 = \frac{1}{m(m-1)}\sum_{i \neq j} k(x_i, x_j) - \frac{2}{mn}\sum_{i,j} k(x_i, y_j) + \frac{1}{n(n-1)}\sum_{i \neq j} k(y_i, y_j)$$ + +### Key Observations + +- **No training required** — MMD works directly on raw features or neural network representations +- **Choice of kernel matters** — RBF (Gaussian) kernels with bandwidth selection are standard; DataMetaMap supports multiple kernels +- **Computational cost** — $O((m+n)^2)$ makes it suitable for moderate-sized datasets + +### How to Use in DataMetaMap + +Pass two datasets to the MMD embedder. The method returns a scalar distance. For embedding, we compute pairwise MMD distances to a set of reference datasets, creating a distance vector. + +**Best for:** Quick baseline comparisons, detecting dataset shift, benchmarking other methods. + +--- + +## 2. Task2Vec — Embedding Tasks via Fisher Information + +**Based on:** *Task2Vec: Task Embedding for Meta-Learning* (Achille et al., arXiv:1902.03545) + +### Core Idea + +Every dataset defines a "task" for a neural network. The Fisher Information Matrix (FIM) tells us which parameters are most important for that task. By computing the diagonal of the FIM, Task2Vec creates a vector that captures the task's geometry. + +### Mathematical Formulation + +For a model with parameters $\theta$ and a dataset $\mathcal{D} = \{(x_i, y_i)\}$ with loss $\mathcal{L}(x, y; \theta)$, the Fisher Information Matrix is: + +$$F(\theta) = \mathbb{E}_{x, y \sim \mathcal{D}}\left[ \nabla_\theta \log p(y|x; \theta) \nabla_\theta \log p(y|x; \theta)^\top \right]$$ + +Computing the full $F$ is prohibitive for modern networks. 
Task2Vec uses the **diagonal approximation**: + +$$f_k = \mathbb{E}_{x, y \sim \mathcal{D}}\left[ \left( \frac{\partial \log p(y|x; \theta)}{\partial \theta_k} \right)^2 \right]$$ + +The task embedding is then: + +$$z_{\text{task}} = \text{diag}(F) \quad \text{or} \quad z_{\text{task}} = \log \text{diag}(F)$$ + +After fine-tuning a reference network on $\mathcal{D}$ (or using a single gradient step), we compute these per-parameter importances. + +### Key Observations + +- **Reference network dependent** — Different architectures produce different similarity judgments +- **Fine-tuning is required** — Each dataset needs adaptation of the base model +- **Embedding dimensionality** — Equals the number of network parameters (typically millions), often reduced via PCA +- **Log-transform** helps stabilize high-variance Fisher entries + +### How to Use in DataMetaMap + +1. Choose a reference network (e.g., ResNet-18 pretrained on ImageNet) +2. For each dataset, fine-tune for a few epochs +3. Compute the diagonal of the Fisher Information Matrix +4. Return the flattened vector (optionally log-transformed) + +**Best for:** Comparing classification tasks when you have a good reference model. + +--- + +## 3. Dataset2Vec — Learning Dataset Representations + +**Based on:** *Dataset2Vec: Learning Dataset Meta-Features* (Jomaa et al., arXiv:1905.11063) + +### Core Idea + +Why compute Fisher matrices when we can learn to embed datasets directly? Dataset2Vec is a **meta-learning** approach: train a neural network that encodes any dataset (as a set of labeled examples) into a fixed-size vector, then optimize this encoder to predict something useful (like relative task similarity). 
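Before the formalism, the core mechanics (encode each labeled example, then pool with a permutation-invariant operation) can be sketched in a few lines. This is a NumPy toy with random, untrained weights, not the library's actual `Dataset2VecEmbedder`; every name in it is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-example encoder g: one random affine layer + ReLU.
# (Illustrative only -- a real Dataset2Vec encoder is a trained network.)
W = rng.normal(size=(6, 16))   # 5 input features + 1 label -> 16-dim feature
b = rng.normal(size=16)

def g(x_row, y_val):
    pair = np.concatenate([x_row, [y_val]])  # concatenate input and label
    return np.maximum(pair @ W + b, 0.0)     # per-example feature h_i

def embed_dataset(X, y):
    H = np.stack([g(x, t) for x, t in zip(X, y)])
    return H.mean(axis=0)                    # permutation-invariant mean pooling

X = rng.normal(size=(32, 5))
y = rng.integers(0, 2, size=32)
z = embed_dataset(X, y)
print(z.shape)  # (16,)
```

Because the pooling is a mean over examples, shuffling the rows of `X` and `y` leaves `z` unchanged, which is exactly the invariance the formulation below makes precise.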
+ +### Mathematical Formulation + +Dataset2Vec processes a dataset as an unordered set: + +$$z_{\mathcal{D}} = f_{\text{pool}} \left( \{ g(x_i, y_i) \mid (x_i, y_i) \in \mathcal{D} \} \right)$$ + +where: +- $g$ is a per-example encoder (typically a small MLP processing the concatenated input and one-hot label) +- $f_{\text{pool}}$ is a permutation-invariant pooling function (sum, mean, or max) +- The output $z_{\mathcal{D}}$ is a $d$-dimensional vector (e.g., $d=128$) + +The training objective is meta-learning: given triplets of datasets $\mathcal{D}_a, \mathcal{D}_b, \mathcal{D}_c$ where $\mathcal{D}_a$ is more similar to $\mathcal{D}_b$ than to $\mathcal{D}_c$ (in terms of downstream transfer performance), we use a ranking loss: + +$$\mathcal{L} = \max\left(0, \|z_a - z_b\|^2 - \|z_a - z_c\|^2 + \alpha\right)$$ + +### Key Observations + +- **Once trained, inference is fast** โ€” No per-dataset fine-tuning or Fisher computation +- **Meta-training requires many datasets** โ€” Typically hundreds or thousands +- **Permutation invariance** ensures the embedding doesn't depend on data order +- **Generalization potential** โ€” Can embed datasets not seen during meta-training + +### How to Use in DataMetaMap + +Our library includes: +- Pre-trained Dataset2Vec models on standard benchmarks (e.g., Meta-Dataset) +- Ability to train your own meta-encoder on custom dataset collections +- Support for various pooling strategies and per-example encoders + +**Best for:** Large-scale dataset retrieval when you have many datasets and can afford meta-training. + +--- + +## 4. Wasserstein Task Embedding โ€” Optimal Transport Between Datasets + +**Based on:** *Wasserstein Task Embedding for Meta-Learning* (Lee et al., arXiv:1605.09522) + +### Core Idea + +Instead of comparing datasets through a model, compare them directly using **optimal transport**. The Wasserstein distance measures how much "mass" you must move to transform one probability distribution into another. 
This geometric viewpoint respects the underlying feature space structure. + +### Mathematical Formulation + +For two probability distributions $\mu$ and $\nu$ on $\mathbb{R}^d$, the $p$-Wasserstein distance is: + +$$W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \|x - y\|^p d\gamma(x, y) \right)^{1/p}$$ + +where $\Gamma(\mu, \nu)$ is the set of all couplings (joint distributions) with marginals $\mu$ and $\nu$. + +For empirical distributions (our datasets), we solve: +- **1D case** (after projecting features): $W_1(\hat{\mu}, \hat{\nu}) = \frac{1}{n} \sum_{i=1}^n |X_{(i)} - Y_{(i)}|$ (sorted samples) +- **High-dimensional case**: Use the entropy-regularized Sinkhorn algorithm, which costs $O(n^2)$ per iteration + +To create an **embedding**, Wasserstein Task Embedding computes distances to $K$ reference distributions: + +$$z_{\mathcal{D}} = [W(\mathcal{D}, R_1), W(\mathcal{D}, R_2), ..., W(\mathcal{D}, R_K)]$$ + +Reference distributions can be: +- Randomly sampled subsets from a large meta-dataset +- Prototypical distributions (e.g., Gaussians with different covariances) +- Other datasets in your collection + +### Key Observations + +- **No training required** — Works directly on features (e.g., penultimate layer of a frozen network) +- **Handles different dataset sizes** — The coupling $\gamma$ reweights mass, so $n_1 \neq n_2$ poses no problem +- **Computational cost** — $O(n^3 \log n)$ for exact Wasserstein via linear programming; Sinkhorn reduces this to $O(n^2)$ per iteration +- **Choice of ground distance** — Euclidean is standard, but any metric works (e.g., cosine distance for embeddings) + +### How to Use in DataMetaMap + +1. Extract features for all examples using a frozen pre-trained network +2. Choose reference distributions (e.g., 50 random datasets from a meta-collection) +3. For each dataset, compute the Wasserstein distance to each reference +4. Return the $K$-dimensional distance vector as the embedding +5. 
Optionally apply dimensionality reduction (PCA) if $K$ is large + +**Best for:** Comparing datasets with imbalanced classes, different sizes, or when you want a geometry-aware metric. + +--- + +## What DataMetaMap Does + +Our library implements all four methods **in a unified PyTorch interface**: + +- **Unified API** โ€” Same `fit()` and `transform()` pattern across all embedders +- **Flexible feature extraction** โ€” Raw data, pre-trained features, or learned representations +- **Reference management** โ€” For MMD and Wasserstein methods, handle reference dataset selection +- **Visualization tools** โ€” PCA, t-SNE, and UMAP projections of dataset embeddings +- **Similarity search** โ€” Find nearest datasets to a target + +**No code examples hereโ€”just the methods. But the repo contains ready-to-run demos.** + +--- + +## Method Comparison at a Glance + +| Method | Training Required | Inference Speed | Dimensionality | Handles Different Sizes | Geometric Interpretation | +|--------|:----------------:|:---------------:|:--------------:|:-----------------------:|:------------------------:| +| MMD | None | Medium (quadratic) | Variable (n_refs) | Yes | RKHS distance | +| Task2Vec | Per-dataset fine-tuning | Slow (per dataset) | # Parameters | N/A (fixed network) | Fisher information | +| Dataset2Vec | Meta-training (once) | Fast | Fixed (e.g., 128) | Yes | Learned similarity | +| Wasserstein | None | Slow (quadratic) | Fixed (n_refs) | Yes | Optimal transport | + +--- + +## Key Insight Across All Methods + +Despite their different mathematical origins (kernel methods, Fisher information, learned encoders, optimal transport), **all four approaches reduce to the same operation**: mapping a dataset to a vector where Euclidean distance correlates with transfer learning performance. DataMetaMap lets you compare which method works best for your domain. 
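To make that shared pattern concrete, here is a self-contained NumPy sketch of the "distances to references" recipe, using the unbiased MMD estimator from Section 1. It is an illustration, not DataMetaMap's API; `embed_dataset` and the reference sets are assumptions for the demo:

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased squared MMD with an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))  # i != j terms only
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

def embed_dataset(D, references, gamma=1.0):
    """Map a dataset to a vector of MMD distances to K reference datasets."""
    return np.array([mmd2_unbiased(D, R, gamma) for R in references])

rng = np.random.default_rng(0)
refs = [rng.normal(loc=c, size=(64, 2)) for c in (0.0, 1.0, 3.0)]
A = rng.normal(loc=0.1, size=(64, 2))   # distributionally close to refs[0]
B = rng.normal(loc=2.9, size=(64, 2))   # distributionally close to refs[2]
zA, zB = embed_dataset(A, refs), embed_dataset(B, refs)
# Euclidean distance between embeddings now reflects distributional similarity
print(np.linalg.norm(zA - zB))
```

Swapping `mmd2_unbiased` for a Wasserstein or Task2Vec distance changes the geometry of the embedding but not the recipe.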
+ +--- + +## Practical Recommendations from Our Observations + +- **For quick baselines** → Start with MMD on pre-trained features +- **When you have a strong reference model** → Try Task2Vec with few-shot fine-tuning +- **When you have many datasets for training** → Train a Dataset2Vec meta-encoder +- **When dataset sizes vary greatly** → Wasserstein embedding is your best bet +- **When computational budget is high** → Ensemble multiple methods + +--- + +## References + +- Gretton et al. (2012) – *A Kernel Two-Sample Test.* Review: arXiv:2208.11726 +- Achille et al. (2019) – *Task2Vec: Task Embedding for Meta-Learning.* arXiv:1902.03545 +- Jomaa et al. (2019) – *Dataset2Vec: Learning Dataset Meta-Features.* arXiv:1905.11063 +- Lee et al. (2016) – *Wasserstein Task Embedding for Meta-Learning.* arXiv:1605.09522 + +**Our repo:** [DataMetaMap](https://github.com/intsystems/DataMetaMap) diff --git a/PLAN.md b/PLAN.md index 38933c3..1d0a93c 100644 --- a/PLAN.md +++ b/PLAN.md @@ -49,6 +49,12 @@ DataMetaMap aims to compare datasets within a unified vector space to identify s - **Baseline Selection** Identify and select baseline methods from literature for comparison during benchmarking. + + Description (done by Meshkov Vladislav): + - Establish baselines for each embedding method as specified in the paper + - Assess baselines from the literature and determine their appropriateness for our benchmarking framework + - Conduct a literature review to identify similar papers and gather additional straightforward baselines for meaningful comparison + - Document baseline descriptions in the benchmark specifications, along with rationale for their inclusion - **Data Collection** Gather a diverse collection of datasets for experimentation, ensuring they represent various domains and formats. 
@@ -56,8 +62,20 @@ DataMetaMap aims to compare datasets within a unified vector space to identify s - **Data Preprocessing Pipeline** Design and implement preprocessing steps to handle different dataset formats and ensure consistent input for embedding methods. + + Description (done by Minashkin Vladislav): + - Handle diverse data types: images, text, tabular, and time series with type-specific loaders + - Fill missing values and remove bad data points + - Save all settings for exact reproduction + - **Evaluation Metrics Definition** Define quantitative metrics to evaluate embedding quality and similarity measurement accuracy. + + Description (done by Stepanov Ilya): + - Define cosine similarity, Euclidean distance, and kernel-based distance as core metrics to evaluate geometric separability and structural relationships between dataset embeddings in the latent space + - Define the Maximum Mean Discrepancy (MMD) metric as described in the original paper + - Ensure that all embedding methods and baselines are evaluated using all metrics so that comparison across methods is consistent and reproducible + - **Planning and Specifications** Define technical specifications and success criteria based on research findings and data availability. @@ -89,8 +107,14 @@ DataMetaMap aims to compare datasets within a unified vector space to identify s - **Technical Report** Document the methodology, experimental setup, and findings in a comprehensive technical report. -- **User and Developer Documentation** - Create detailed documentation for users and contributors, including setup guides and API references. In this task we should create github.io page where user can find documentation for all classes and their methods. Github.io page must have headers for functions and links to their each source code. +- **User and Developer Documentation** + Build user and developer documentation. 
+ + + Description (done by Papay Ivan): + - Create detailed documentation for users and contributors, including setup guides and API references + - Create a github.io page where users can find documentation for all classes and their methods + - The github.io page must have headers for functions and links to each function's source code. - **Demo Examples and Blog Post** Prepare example notebooks or scripts demonstrating real-world use cases, and write an explanatory blog post highlighting project value and insights. diff --git a/README.md b/README.md index d604965..9c753e7 100644 --- a/README.md +++ b/README.md @@ -5,9 +5,20 @@

DataMetaMap

-

Datasetes vector representation

+

Datasets in a shared vector space

+

+ + Coverage_2 + + + Coverage + + + Docs + +

@@ -24,15 +35,27 @@

-"DataMetaMap" is Python library designed to represent various multiple datasets in the same vector space for comparision them with each other. Library is offering a suite of advanced datasete embedding techniques compatible with PyTorch. +DataMetaMap is a Python library for representing datasets in a shared vector space, so you can compare datasets (and tasks) using standard distances and similarity metrics. + +It includes multiple dataset embedding algorithms implemented on top of PyTorch: +- Dataset2Vec (tabular datasets) +- Task2Vec (supervised tasks via Fisher information) +- Wasserstein Task Embedding (Optimal Transport based) +- MMD (used as a baseline in some workflows) ## 📬 Assets 1. [Technical Meeting 1 - Presentation](https://github.com/intsystems/DataMetaMap/blob/master/assets/BMM_technical_1.pdf) +2. [Blog Post](https://github.com/intsystems/DataMetaMap/blob/meshkovvl/BLOGPOST.md) +3. [Technical Report](https://github.com/intsystems/DataMetaMap/blob/develop/report/data_meta_map.pdf) ## 💡 Motivation -We need an ability to compare information similarity between various datasets. If so, we can find the most similar dataset to our target task dataset. Choosing the best pretrain neural net on it can narrow down the choice of potential candidates for pretrain. +If you can measure similarity between datasets, you can: +- retrieve the most similar dataset(s) to a target dataset +- choose better pretraining sources +- cluster tasks and datasets, and visualize the dataset landscape +- track dataset drift over time ## 🗃 Algorithms - [x] Maximum Mean Discrepancy, also see [📝 review](https://arxiv.org/abs/1605.09522) @@ -43,19 +66,89 @@ We need an ability to compare information similarity between various datasets. I ## 🛠️ Install -TODO +Requires Python 3.10+. + +### Virtual Environment (venv) + +Recommended: install into an isolated virtual environment. 
+ +macOS / Linux: + +```bash +python3 -m venv .venv +source .venv/bin/activate +python -m pip install -U pip +``` + +Windows (PowerShell): + +```powershell +py -m venv .venv +.\.venv\Scripts\Activate.ps1 +py -m pip install -U pip +``` + +### Install from source + +```bash +git clone https://github.com/intsystems/DataMetaMap.git +cd DataMetaMap +python -m pip install . +``` + +### Development install (editable + dev dependencies) + +```bash +python -m pip install -e ".[dev,viz]" +``` -## ๐Ÿš€ Quickstart -TODO +## ๐Ÿš€ Quickstart + +### Dataset2Vec (tabular) + +`Dataset2VecEmbedder` trains on a collection of tabular datasets, then embeds a single dataset as a vector. + +```python +import numpy as np +import torch + +from data_meta_map.models import get_model +from data_meta_map.dataset2vec_embedder import Dataset2VecEmbedder + +# Model for tabular embedding +model = get_model("dataset2vec") +embedder = Dataset2VecEmbedder(model, max_epochs=1, batch_size=8, n_batches=5) + +# Each training dataset: last column is the target +train_ds1 = np.random.randn(64, 6).astype(np.float32) +train_ds2 = np.random.randn(64, 6).astype(np.float32) +embedder.fit([train_ds1, train_ds2]) + +X = torch.randn(32, 5) +y = torch.randint(0, 2, (32,)).float() +z = embedder.embed(X, y) +print(z.shape) # (output_size,) +``` + +### Wasserstein Task Embedding (PyTorch Dataset / DataLoader) + +`WassersteinEmbedder` can compute class statistics from a dataset and build embeddings via a distance matrix. +See [demo/wasserstein/simple_example1 (1).ipynb](demo/wasserstein/simple_example1%20(1).ipynb) for an end-to-end notebook. + +### Task2Vec (supervised tasks) + +Task2Vec computes a task embedding based on the Fisher information of a probe network. +See [demo/task2vec/simple_example.ipynb](demo/task2vec/simple_example.ipynb) for an example workflow. 
## ๐ŸŽฎ Demo -TODO +Notebooks are in: +- [demo/dataset2vec/simple_example.ipynb](demo/dataset2vec/simple_example.ipynb) +- [demo/task2vec/simple_example.ipynb](demo/task2vec/simple_example.ipynb) +- [demo/wasserstein/simple_example1 (1).ipynb](demo/wasserstein/simple_example1%20(1).ipynb) + +## ๐Ÿ“ˆ Benchmarks -## ๐Ÿ“š Stack -TODO - -## ๐Ÿงฉ Some details -TODO +Benchmark notebooks and scripts live in [benchmarks/](benchmarks). In particular, see [benchmarks/pretrain_benchmark/](benchmarks/pretrain_benchmark) for experiments comparing transfer performance between pretraining sources and target tasks. ## ๐Ÿ‘ฅ Contributors - [Vladislav Minashkin](https://github.com/minashkinvladislav) (Project planning, Benchmarking, Algorithms) @@ -65,4 +158,14 @@ TODO - You are welcome to contribute to our project! ## ๐Ÿ”— Useful links -ะŸะพะบะฐ ั‡ั‚ะพ ั‚ัƒั‚ ะฝะธั‡ะตะณะพ ะฝะตั‚ +- Docs: https://intsystems.github.io/DataMetaMap +- Report: [report/data_meta_map.pdf](report/data_meta_map.pdf) + +## ๐Ÿงช Development + +Run tests: + +```bash +pytest -q +pytest -q --cov=src/data_meta_map --cov-report=term-missing +``` diff --git a/badge_generator.py b/badge_generator.py new file mode 100644 index 0000000..1cf3d76 --- /dev/null +++ b/badge_generator.py @@ -0,0 +1,34 @@ +# this script was generated by ChatGPT with minor fixes +import xml.etree.ElementTree as ET + + +def get_coverage(): + tree = ET.parse("coverage.xml") + root = tree.getroot() + line_rate = float(root.get("line-rate", 0)) * 100 + return round(line_rate, 2) + + +def generate_badge(coverage): + color = "red" + if coverage >= 90: + color = "lightgreen" + elif coverage >= 75: + color = "yellowgreen" + elif coverage >= 50: + color = "olive" + badge = f""" + + + + Coverage: {coverage}% + + + """ + with open("coverage-badge.svg", "w") as badge_file: + badge_file.write(badge) + + +if __name__ == "__main__": + coverage = get_coverage() + generate_badge(coverage) diff --git a/benchmarks/pretrain_benchmark/apply_benchmark.ipynb 
b/benchmarks/pretrain_benchmark/apply_benchmark.ipynb new file mode 100644 index 0000000..75db554 --- /dev/null +++ b/benchmarks/pretrain_benchmark/apply_benchmark.ipynb @@ -0,0 +1,492 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "id": "345ea8f5-7653-4e0f-92ce-2b5d713d24d7", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/machenike/bmml/DataMetaMap/src/data_meta_map/wasserstein_embedder.py:8: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n", + " from tqdm.autonotebook import tqdm\n" + ] + } + ], + "source": [ + "from data_meta_map.task2vec import task2vec\n", + "from data_meta_map.models import get_model\n", + "from data_meta_map import datasets\n", + "from data_meta_map.task2vec import plot_distance_matrix\n", + "from data_meta_map.task2vec import Task2Vec\n", + "from data_meta_map.task2vec.task_similarity import cosine\n", + "\n", + "import numpy as np" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "ef2aa314-ea65-440b-8db2-45c0ae674e14", + "metadata": {}, + "outputs": [], + "source": [ + "import yaml\n", + "from collections import defaultdict\n", + "\n", + "def get_pretrained_results(dataset_names, path_to_logs):\n", + " pretrain2downstream_results = defaultdict(dict)\n", + " for pretrain_name in dataset_names:\n", + " for downstream_name in dataset_names:\n", + " if downstream_name == 'imagenet':\n", + " continue\n", + " if downstream_name == pretrain_name:\n", + " continue\n", + " with open(f'{path_to_logs}/{pretrain_name}-{downstream_name}.yaml', 'r') as f:\n", + " test_acc = yaml.safe_load(f)['task']['test_acc']\n", + " pretrain2downstream_results[pretrain_name][downstream_name] = test_acc\n", + " return pretrain2downstream_results\n", + "\n", + "def get_embedder_results(dataset_names, path_to_pretrained_logs, embedder_func, similarity_func):\n", + " 
embeddings = []\n", "\n", " dataset_list = [datasets.__dict__[name](root='../../data')[0] for name in dataset_names]\n", " for name, dataset in zip(dataset_names, dataset_list):\n", " embeddings.append(embedder_func(dataset))\n", "\n", " def find_closest(dataset_names, embeddings, similarity_metric):\n", " res = dict()\n", " for dataset_name, embed in zip(dataset_names, embeddings):\n", " dists = {name:similarity_metric(embed, other_embed) for name, other_embed in zip(dataset_names, embeddings) if name != dataset_name}\n", " argmax = max(dists.items(), key = lambda x: x[1])[0]\n", " res[dataset_name] = argmax\n", " return res\n", " \n", " dataset2closest = find_closest(dataset_names, embeddings, similarity_func)\n", " pretrain2downstream_results = get_pretrained_results(dataset_names, path_to_pretrained_logs)\n", " method_performance = {name: {'accuracy' : pretrain2downstream_results[dataset2closest[name]][name], \n", " 'pretrain': dataset2closest[name]} for name in dataset_names}\n", " return method_performance\n", "\n", "def get_random_baseline(dataset_names, path_to_pretrained_logs):\n", " pretrain2downstream_results = get_pretrained_results(dataset_names + ['imagenet'], path_to_pretrained_logs)\n", " random_performance = {}\n", " for name in dataset_names:\n", " if name == 'imagenet':\n", " continue\n", " choice = name\n", " while choice == name:\n", " choice = np.random.choice(dataset_names)\n", " random_performance[name] = {\n", " 'accuracy': pretrain2downstream_results[choice][name],\n", " 'pretrain': choice\n", " }\n", " return random_performance\n", " \n", "def get_big_pretrain_baseline(dataset_names, path_to_pretrained_logs):\n", " pretrain2downstream_results = get_pretrained_results(dataset_names + ['imagenet'], path_to_pretrained_logs)\n", " big_baseline_performance = {}\n", " for name in dataset_names:\n", " if name == 'imagenet':\n", " continue\n", " 
big_baseline_performance[name] = {\n", + " 'accuracy': pretrain2downstream_results['imagenet'][name],\n", + " 'pretrain': 'imagenet'\n", + " }\n", + " return big_baseline_performance " + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "b8c01eb9-3a90-4c8b-bfea-d2253b8785ff", + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Caching features: 0%| | 0/14 [00:00\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
mnistcifar10cifar100letterskmnist
task2vec0.99600.87980.49600.9325960.9957
random0.98150.63900.35120.9335100.9752
big_pretrain0.98480.88340.67890.9197120.9851
\n", + "" + ], + "text/plain": [ + " mnist cifar10 cifar100 letters kmnist\n", + "task2vec 0.9960 0.8798 0.4960 0.932596 0.9957\n", + "random 0.9815 0.6390 0.3512 0.933510 0.9752\n", + "big_pretrain 0.9848 0.8834 0.6789 0.919712 0.9851" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import pandas as pd\n", + "\n", + "pd.DataFrame(res).T" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/benchmarks/pretrain_benchmark/get_pretrained_to_task.py b/benchmarks/pretrain_benchmark/get_pretrained_to_task.py new file mode 100644 index 0000000..717c028 --- /dev/null +++ b/benchmarks/pretrain_benchmark/get_pretrained_to_task.py @@ -0,0 +1,291 @@ +from data_meta_map.task2vec import task2vec +from data_meta_map.models import get_model +from data_meta_map import datasets +from data_meta_map.task2vec import plot_distance_matrix + +import torch +from torch.utils.data import DataLoader, random_split +import torch.nn as nn +import pandas as pd +from tqdm import tqdm + +import pandas as pd +import torch +from torch.utils.data import TensorDataset, DataLoader, random_split + +import hydra +import yaml +from omegaconf import DictConfig, OmegaConf + + +@torch.inference_mode() +def evaluate(model, loader, device): + + model.eval() + correct = 0 + total = 0 + + for x, y in loader: + + x = x.to(device) + y = y.to(device) + + logits = model(x) + pred = logits.argmax(1) + + correct += (pred == y).sum().item() + total += y.size(0) + + return correct / total + + +def train(model: nn.Module, + train_loader: DataLoader, + val_loader: 
DataLoader, + optimizer, + criterion, + best_path: str, + num_epochs: int, + patience: int, + device: str): + + best_acc = 0 + best_epoch = 0 + epochs_without_improvement = 0 + logs = {} + + for epoch in range(num_epochs): + + model.train() + pbar = tqdm(train_loader, desc=f"Epoch {epoch+1}/{num_epochs}") + for x, y in pbar: + x = x.to(device) + y = y.to(device) + + logits = model(x) + loss = criterion(logits, y) + + loss.backward() + optimizer.step() + optimizer.zero_grad() + + val_acc = evaluate(model, val_loader, device=device) + + # improvement + if val_acc > best_acc: + + best_acc = val_acc + epochs_without_improvement = 0 + best_epoch = epoch + + torch.save(model.state_dict(), best_path) + else: + epochs_without_improvement += 1 + + # early stopping condition + if epochs_without_improvement >= patience: + + break + + model.load_state_dict(torch.load(best_path)) + model.eval() + logs = { + 'best_val_accuracy': best_acc, + 'best_val_epoch': best_epoch + } + return model, logs + + +@torch.inference_mode() +def extract_embeddings(model, loader): + + embeddings = [] + labels = [] + + model.eval() + + for x, y in tqdm(loader): + + x = x.cuda() + + feat = model(x) + feat = feat.view(feat.size(0), -1) + + embeddings.append(feat.cpu()) + labels.append(y) + + embeddings = torch.cat(embeddings).numpy() + labels = torch.cat(labels).numpy() + + return embeddings, labels + + +def save_parquet(embeds, labels, path): + + df = pd.DataFrame({ + "hidden_state": embeds.tolist(), + "target": labels + }) + + df.to_parquet(path, index=False) + + +class MLP(nn.Module): + + def __init__(self, input_dim=512, hidden_dim=256, num_classes=10): + super().__init__() + + self.net = nn.Sequential( + nn.Linear(input_dim, hidden_dim), + + nn.BatchNorm1d(hidden_dim), + nn.ReLU(), + nn.Dropout(0.15), + + nn.Linear(hidden_dim, num_classes) + ) + + def forward(self, x): + return self.net(x) + + +def estimate_task_performance(config): + train_df = pd.read_parquet(config['paths']['train_parquet']) + 
test_df = pd.read_parquet(config['paths']['test_parquet'])
+
+    X_train = torch.tensor(train_df.hidden_state.tolist(), dtype=torch.float32)
+    y_train = torch.tensor(train_df.target.values, dtype=torch.long)
+
+    X_test = torch.tensor(test_df.hidden_state.tolist(), dtype=torch.float32)
+    y_test = torch.tensor(test_df.target.values, dtype=torch.long)
+
+    VAL_RATIO = config['training_task']['val_split']
+
+    dataset = TensorDataset(X_train, y_train)
+
+    train_size = int((1 - VAL_RATIO) * len(dataset))
+    val_size = len(dataset) - train_size
+
+    train_ds, val_ds = random_split(dataset, [train_size, val_size])
+
+    BATCH_SIZE = config['training_task']['batch_size']
+    train_loader = DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)
+    val_loader = DataLoader(val_ds, batch_size=BATCH_SIZE)
+    test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=BATCH_SIZE)
+
+    criterion = nn.CrossEntropyLoss()
+
+    device = torch.device(config.device)
+    model = MLP(input_dim=512,
+                hidden_dim=config['estimate_network_params']['hidden_dim'],
+                num_classes=int(y_test.max()) + 1).to(device)
+    optimizer = torch.optim.Adam(
+        model.parameters(), lr=config['training']['lr'])
+
+    model, logs = train(
+        model,
+        train_loader,
+        val_loader,
+        optimizer,
+        criterion,
+        best_path=config['paths']['checkpoint_task'],
+        num_epochs=config['training_task']['epochs'],
+        patience=config['training_task']['early_stopping_epochs'],
+        device=device
+    )
+    test_acc = evaluate(model, test_loader, device)
+    logs['test_acc'] = test_acc
+    return logs
+
+
+def apply_pretrain(config):
+    device = torch.device(config.device)
+
+    model = get_model(config['model']['name'],
+                      pretrained=config['model']['pretrained']).to(device)
+    train_dataset, test_dataset = datasets.__dict__[
+        config['pretrain_dataset_name']](root=config['paths']['data_dir'])
+
+    VAL_RATIO = config['training']['val_split']
+    BATCH_SIZE = config['training']['batch_size']
+
+    train_size = int((1 - VAL_RATIO) * len(train_dataset))
+    val_size = len(train_dataset) - train_size
+
+    train_ds, val_ds = random_split(train_dataset, [train_size, val_size])
+
+    train_loader = DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)
+    val_loader = DataLoader(val_ds, batch_size=BATCH_SIZE, shuffle=False)
+    test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False)
+
+    criterion = nn.CrossEntropyLoss()
+    optimizer = torch.optim.Adam(
+        model.parameters(), lr=config['training']['lr'])
+
+    model, logs = train(
+        model,
+        train_loader,
+        val_loader,
+        optimizer,
+        criterion,
+        best_path=config['paths']['checkpoint'],
+        num_epochs=config['training']['epochs'],
+        patience=config['training']['early_stopping_epochs'],
+        device=device
+    )
+    return logs
+
+
+def inference_task(config):
+    device = torch.device(config.device)
+
+    model = get_model(config['model']['name'],
+                      pretrained=config['model']['pretrained']).to(device)
+
+    if not config['use_basic_model']:
+        best_path = config['paths']['checkpoint']
+        model.load_state_dict(torch.load(best_path))
+    model.eval()
+
+    embedder = torch.nn.Sequential(*list(model.children())[:-1]).to(device)
+
+    train_dataset, test_dataset = datasets.__dict__[
+        config['task_dataset_name']](root=config['paths']['data_dir'])
+
+    BATCH_SIZE = config['training']['batch_size']
+    test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False)
+    train_full_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=False)
+
+    train_embeds, train_labels = extract_embeddings(embedder, train_full_loader)
+    test_embeds, test_labels = extract_embeddings(embedder, test_loader)
+
+    save_parquet(train_embeds, train_labels, config['paths']['train_parquet'])
+    save_parquet(test_embeds, test_labels, config['paths']['test_parquet'])
+
+
+@hydra.main(config_path=".", config_name="config", version_base=None)
+def main(config: DictConfig):
+    pretrain_logs = apply_pretrain(config) if config['need_pretrain'] else {}
task_logs = {}
+    if not config['pretrain_only']:
+        inference_task(config)
+        task_logs = estimate_task_performance(config)
+    logs = {
+        'pretrain': pretrain_logs,
+        'task': task_logs,
+        'pretrain_to_task_config': OmegaConf.to_container(config, resolve=True)
+    }
+    with open(config['paths']['save_logs'], 'w') as f:
+        yaml.safe_dump(logs, f)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/benchmarks/pretrain_benchmark/wasserstein/__init__py b/benchmarks/pretrain_benchmark/wasserstein/__init__py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/benchmarks/pretrain_benchmark/wasserstein/__init__py
@@ -0,0 +1 @@
+
diff --git a/benchmarks/pretrain_benchmark/wasserstein/benchmark_wasserstein.ipynb b/benchmarks/pretrain_benchmark/wasserstein/benchmark_wasserstein.ipynb
new file mode 100644
index 0000000..c9b7e70
--- /dev/null
+++ b/benchmarks/pretrain_benchmark/wasserstein/benchmark_wasserstein.ipynb
@@ -0,0 +1,757 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "b2c3d4e5-0002-0002-0002-000000000001",
+   "metadata": {
+    "execution": {
+     "iopub.execute_input": "2026-04-05T14:14:45.714150Z",
+     "iopub.status.busy": "2026-04-05T14:14:45.713984Z",
+     "iopub.status.idle": "2026-04-05T14:14:48.807218Z",
+     "shell.execute_reply": "2026-04-05T14:14:48.805875Z"
+    }
+   },
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "/home/papayiv/misc/DataMetaMap/src/data_meta_map/wasserstein_embedder.py:8: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+      "  from tqdm.autonotebook import tqdm\n"
+     ]
+    }
+   ],
+   "source": [
+    "from data_meta_map.wasserstein_embedder import WassersteinEmbedder\n",
+    "from data_meta_map import datasets\n",
+    "import torch\n",
+    "import numpy as np\n",
+    "import yaml\n",
+    "import pandas as pd\n",
+    "from collections import defaultdict\n",
+    "from pathlib import Path\n",
+    "\n",
+    "# Path to the pretrain-to-task logs (created by the get_pretrained_to_task.py script)\n",
+    "LOGS_PATH = Path(__file__).parent.parent.parent / \"logs\" / \"pretrain_to_task_logs\" \\\n",
+    "    if \"__file__\" in dir() else Path(\"../../logs/pretrain_to_task_logs\")\n",
+    "DATA_ROOT = \"../../data\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "b2c3d4e5-0002-0002-0002-000000000002",
+   "metadata": {
+    "execution": {
+     "iopub.execute_input": "2026-04-05T14:14:48.810271Z",
+     "iopub.status.busy": "2026-04-05T14:14:48.809843Z",
+     "iopub.status.idle": "2026-04-05T14:14:48.816069Z",
+     "shell.execute_reply": "2026-04-05T14:14:48.815096Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "def get_pretrained_results(dataset_names, path_to_logs=LOGS_PATH):\n",
+    "    pretrain2downstream_results = defaultdict(dict)\n",
+    "    for pretrain_name in dataset_names:\n",
+    "        for downstream_name in dataset_names:\n",
+    "            if downstream_name == 'imagenet':\n",
+    "                continue\n",
+    "            if downstream_name == pretrain_name:\n",
+    "                continue\n",
+    "            with open(f'{path_to_logs}/{pretrain_name}-{downstream_name}.yaml', 'r') as f:\n",
+    "                test_acc = yaml.safe_load(f)['task']['test_acc']\n",
+    "            pretrain2downstream_results[pretrain_name][downstream_name] = test_acc\n",
+    "    return pretrain2downstream_results\n",
+    "\n",
+    "\n",
+    "def get_random_baseline(dataset_names, path_to_pretrained_logs=LOGS_PATH):\n",
+    "    pretrain2downstream_results = get_pretrained_results(dataset_names + ['imagenet'], path_to_pretrained_logs)\n",
+    "    random_performance = 
{}\n", + " for name in dataset_names:\n", + " if name == 'imagenet':\n", + " continue\n", + " choice = name\n", + " while choice == name:\n", + " choice = np.random.choice(dataset_names)\n", + " random_performance[name] = {\n", + " 'accuracy': pretrain2downstream_results[choice][name],\n", + " 'pretrain': choice\n", + " }\n", + " return random_performance\n", + "\n", + "\n", + "def get_big_pretrain_baseline(dataset_names, path_to_pretrained_logs=LOGS_PATH):\n", + " pretrain2downstream_results = get_pretrained_results(dataset_names + ['imagenet'], path_to_pretrained_logs)\n", + " big_baseline_performance = {}\n", + " for name in dataset_names:\n", + " if name == 'imagenet':\n", + " continue\n", + " big_baseline_performance[name] = {\n", + " 'accuracy': pretrain2downstream_results['imagenet'][name],\n", + " 'pretrain': 'imagenet'\n", + " }\n", + " return big_baseline_performance" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "b2c3d4e5-0002-0002-0002-000000000003", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T14:14:48.818127Z", + "iopub.status.busy": "2026-04-05T14:14:48.817949Z", + "iopub.status.idle": "2026-04-05T14:15:58.118851Z", + "shell.execute_reply": "2026-04-05T14:15:58.117478Z" + } + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\r", + "Preprocessing dataset: 0%| | 0/4 [00:00" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import matplotlib.pyplot as plt\n", + "import seaborn as sns\n", + "\n", + "# ะขะตะฟะปะพะฒะฐั ะบะฐั€ั‚ะฐ ะฟะพะฟะฐั€ะฝั‹ั… ั€ะฐััั‚ะพัะฝะธะน ะผะตะถะดัƒ ะดะฐั‚ะฐัะตั‚ะฐะผะธ\n", + "fig, ax = plt.subplots(figsize=(6, 5))\n", + "sns.heatmap(D_dataset, annot=True, fmt='.3f',\n", + " xticklabels=dataset_names, yticklabels=dataset_names,\n", + " cmap='viridis', ax=ax)\n", + "ax.set_title('Wasserstein Dataset Distance Matrix\\n(mean Bures-Wโ‚‚ over cross-class pairs)')\n", + "plt.tight_layout()\n", + "plt.show()" + ] + }, 
+ { + "cell_type": "code", + "execution_count": 5, + "id": "b2c3d4e5-0002-0002-0002-000000000005", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T14:15:58.570495Z", + "iopub.status.busy": "2026-04-05T14:15:58.569878Z", + "iopub.status.idle": "2026-04-05T14:15:58.583630Z", + "shell.execute_reply": "2026-04-05T14:15:58.582516Z" + } + }, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
recommended_pretrainwasserstein_distance
downstream
mnistletters109.4255
cifar10kmnist300.8243
cifar100cifar10402.7262
lettersmnist109.4255
kmnistmnist112.1396
\n", + "
" + ], + "text/plain": [ + " recommended_pretrain wasserstein_distance\n", + "downstream \n", + "mnist letters 109.4255\n", + "cifar10 kmnist 300.8243\n", + "cifar100 cifar10 402.7262\n", + "letters mnist 109.4255\n", + "kmnist mnist 112.1396" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# DataFrame ั ั€ะตะบะพะผะตะฝะดะฐั†ะธัะผะธ ะฟะพ ะฟั€ะตะดะพะฑัƒั‡ะตะฝะธัŽ ะฝะฐ ะพัะฝะพะฒะต Wasserstein\n", + "n = len(dataset_names)\n", + "rows = []\n", + "for i, name in enumerate(dataset_names):\n", + " closest = dataset2closest[name]\n", + " j = dataset_names.index(closest)\n", + " rows.append({\n", + " 'downstream': name,\n", + " 'recommended_pretrain': closest,\n", + " 'wasserstein_distance': round(D_dataset[i, j], 4),\n", + " })\n", + "\n", + "recommendations_df = pd.DataFrame(rows).set_index('downstream')\n", + "recommendations_df" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "b2c3d4e5-0002-0002-0002-000000000006", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T14:15:58.586406Z", + "iopub.status.busy": "2026-04-05T14:15:58.586065Z", + "iopub.status.idle": "2026-04-05T14:16:02.393785Z", + "shell.execute_reply": "2026-04-05T14:16:02.392721Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Closest pretraining dataset per task:\n", + " mnist โ†’ letters\n", + " cifar10 โ†’ kmnist\n", + " cifar100 โ†’ cifar10\n", + " letters โ†’ mnist\n", + " kmnist โ†’ mnist\n" + ] + }, + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
recommended_pretrainwasserstein_distanceaccuracy
downstream
mnistletters109.42550.98970
cifar10kmnist300.82430.61880
cifar100cifar10402.72620.49600
lettersmnist109.42550.93351
kmnistmnist112.13960.99570
\n", + "
" + ], + "text/plain": [ + " recommended_pretrain wasserstein_distance accuracy\n", + "downstream \n", + "mnist letters 109.4255 0.98970\n", + "cifar10 kmnist 300.8243 0.61880\n", + "cifar100 cifar10 402.7262 0.49600\n", + "letters mnist 109.4255 0.93351\n", + "kmnist mnist 112.1396 0.99570" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "from IPython.display import display\n", + "\n", + "logs_available = LOGS_PATH.exists() and any(LOGS_PATH.glob(\"*.yaml\"))\n", + "\n", + "if logs_available:\n", + " wasserstein_res = get_wasserstein_results(dataset_names, embedder)\n", + " rows = []\n", + " for name in dataset_names:\n", + " closest = dataset2closest[name]\n", + " j = dataset_names.index(closest)\n", + " rows.append({\n", + " 'downstream': name,\n", + " 'recommended_pretrain': closest,\n", + " 'wasserstein_distance': round(D_dataset[dataset_names.index(name), j], 4),\n", + " 'accuracy': wasserstein_res[name]['accuracy'],\n", + " })\n", + " display(pd.DataFrame(rows).set_index('downstream'))\n", + "else:\n", + " print(f\"ะ›ะพะณะธ ะฝะต ะฝะฐะนะดะตะฝั‹ ะฒ {LOGS_PATH}\")\n", + " print(\"ะ—ะฐะฟัƒัั‚ะธั‚ะต get_pretrained_to_task.py ั‡ั‚ะพะฑั‹ ะฟะพะปัƒั‡ะธั‚ัŒ accuracy.\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python (dmm)", + "language": "python", + "name": "dmm" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.20" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/code/main.ipynb b/code/main.ipynb deleted file mode 100644 index 10b40a0..0000000 --- a/code/main.ipynb +++ /dev/null @@ -1,264 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "Fg5GvKa0qXkT" - }, - "source": [ - "# ะฃัั‚ะฐะฝะพะฒะบะฐ ะฝัƒะถะฝั‹ั… ะฑะธะฑะปะธะพั‚ะตะบ" - ] - }, - { - "cell_type": "code", 
- "execution_count": 1, - "metadata": { - "id": "e51DLLWEqXkW", - "outputId": "27094984-95e4-4301-937e-2f3d1bd7f9b7", - "colab": { - "base_uri": "https://localhost:8080/" - } - }, - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "\u001b[33m DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.\n", - " pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.\u001b[0m\n", - " Building wheel for mylib (setup.py) ... \u001b[?25l\u001b[?25hdone\n" - ] - } - ], - "source": [ - "import warnings\n", - "warnings.filterwarnings(\"ignore\")\n", - "\n", - "try:\n", - " import google.colab\n", - " IN_COLAB = True\n", - "except:\n", - " IN_COLAB = False\n", - " \n", - "if IN_COLAB:\n", - " !git clone -qq https://github.com/Intelligent-Systems-Phystech/ProjectTemplate.git /tmp/repo\n", - " !python3 -m pip install -qq /tmp/repo/src/ && rm -rf /tmp/repo" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7P4TWOOmqXkY" - }, - "source": [ - "# ะ˜ะผะฟะพั€ั‚ ะฑะธะฑะปะธะพั‚ะตะบ" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "id": "4EVJmkwOqXkY" - }, - "outputs": [], - "source": [ - "import os\n", - "\n", - "from sklearn.linear_model import LogisticRegression\n", - "import matplotlib.pyplot as plt\n", - "\n", - "from mylib.train import cv_parameters, Trainer, SyntheticBernuliDataset" - ] - }, - { - "cell_type": "markdown", - "source": [ - "# ะะฐัั‚ั€ะพะนะบะฐ ะพะบั€ัƒะถะตะฝะธั" - ], - "metadata": { - "id": "stLbGQHDq6lS" - } - }, - { - "cell_type": "code", - "source": [ - "if IN_COLAB:\n", - " figures = '.'\n", - "else:\n", - " figures = '../figures'" - ], - "metadata": { - "id": "0TbwjK9Qq5Pg" - }, - "execution_count": 3, - "outputs": 
[] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "f2HeCQ89qXkZ" - }, - "source": [ - "# ะ ะฐะฑะพั‚ะฐ ั ะดะฐะฝะฝั‹ะผะธ" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "dJJn3rfVqXka" - }, - "source": [ - "## ะ“ะตะฝะตั€ะฐั†ะธั ัะธะฝั‚ะตั‚ะธั‡ะตัะบะพะน ะฒั‹ะฑะพั€ะบะธ" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "id": "OSQfsmRrqXka" - }, - "outputs": [], - "source": [ - "dataset = SyntheticBernuliDataset(n=10, m=100, seed=42)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "KBgjk1tvqXkb" - }, - "source": [ - "# ะญะบัะฟะตั€ะธะผะตะฝั‚ ั ะปะพะณะธัั‚ะธั‡ะตัะบะพะน ั€ะตะณั€ะตััะธะตะน" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "19nb_usNqXkc" - }, - "source": [ - "## ะžะฑัƒั‡ะตะฝะธะต ะพะดะฝะพะน ะผะพะดะตะปะธ" - ] - }, - { - "cell_type": "code", - "source": [ - "trainer = Trainer(\n", - " LogisticRegression(penalty='l1', solver='saga', C=1.0),\n", - " dataset.X, dataset.y,\n", - ")\n", - "\n", - "trainer.train()\n", - "print(trainer.eval())" - ], - "metadata": { - "id": "ZMK7mqNQZPXJ", - "outputId": "a95524d6-db85-4a34-9c36-fa2befec2f34", - "colab": { - "base_uri": "https://localhost:8080/" - } - }, - "execution_count": 5, - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - " precision recall f1-score support\n", - "\n", - " 0 1.00 0.91 0.95 11\n", - " 1 0.93 1.00 0.97 14\n", - "\n", - " accuracy 0.96 25\n", - " macro avg 0.97 0.95 0.96 25\n", - "weighted avg 0.96 0.96 0.96 25\n", - "\n" - ] - } - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "g1Mq2ylfqXkd" - }, - "source": [ - "## ะ—ะฐะฒะธัะธะผะพัั‚ัŒ ะฒะตัะพะฒ ะฟะฐั€ะฐะผะตั‚ั€ะพะฒ ะพั‚ ะฟะฐั€ะฐะผะตั‚ั€ะพะฒ ั€ะตะณัƒะปัั€ะธะทะฐั†ะธะธ" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": { - "id": "HvHCcNPwqXkd" - }, - "outputs": [], - "source": [ - "Cs, accuracy, parameters = cv_parameters(dataset.X, dataset.y)" - ] - }, - { - "cell_type": "code", - 
"execution_count": 7, - "metadata": { - "id": "LQL1mX1VqXke", - "outputId": "0868006d-d6c1-4504-9e66-d66af0fb56e9", - "colab": { - "base_uri": "https://localhost:8080/", - "height": 283 - } - }, - "outputs": [ - { - "output_type": "display_data", - "data": { - "text/plain": [ - "
" - ], - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYoAAAEKCAYAAAAMzhLIAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nOzdd3hUZdrH8e8zM6kz6b0QQidAqIHQiyAgIujacMXuuuoWd9d1++u6Vd1mXVddda1rFxBsVOkt9FBCaAkJ6X0yk8mU5/0jkQ0hCRCTTMD7c125TGbOnHMnXsxvzlOV1hohhBCiNQZvFyCEEKJ7k6AQQgjRJgkKIYQQbZKgEEII0SYJCiGEEG0yebuAjhYZGamTk5O9XYYQQlxUduzYUaq1jmrpuUsuKJKTk8nIyPB2GUIIcVFRSuW09pw0PQkhhGiTBIUQQog2SVAIIYRokwSFEEKINklQCCGEaJMEhRBCiDZJUAghhGiT14JCKdVDKbVGKXVAKbVfKfVAC8copdTTSqkjSqm9SqmRnVVPlaOK5/c8z/6y/Z11CSGEuCh5c8KdC3hQa71TKRUE7FBKrdBaH2hyzBVAv8avdOBfjf/tcAZl4LndzwEwOGJwZ1xCCCEuSl67o9BaF2itdzZ+XwMcBBKaHTYfeF032AKEKqXiOqOeIN8geoX0Yl/pvs44vRBCXLS6RR+FUioZGAFsbfZUAnCyyc95nB0mKKXuUUplKKUySkpK2l1HamQqmaWZyK5/QgjxP14PCqWUBfgQ+JHWuro959Bav6i1TtNap0VFtbim1XlJjUylvK6cfGt+u88hhBCXGq8GhVLKh4aQeEtr/VELh+QDPZr8nNj4WKdIjUoFkOYnIYRowpujnhTwMnBQa/2PVg77GLi1cfTTWKBKa13QWTX1C+uHn9FPgkIIIZrw5qinCcAtwD6l1O7Gx34FJAForZ8HPgXmAEcAG3BHZxbkY/AhJTyFfSUSFEII8RWvBYXWegOgznGMBr7XNRU1SI1K5b2s93B6nPgYfLry0kII0S15vTO7u0mNTMXhdpBdke3tUoQQoluQoGgmNbKhQzuzNNPLlQghRPcgQdFMgiWBML8w9pbs9XYpQgjRLUhQNKOUIjUqVUY+CSFEIwmKFqRGpnK86jg19TXeLkUIIbxOgqIFqZGpaLSsJCuEEEhQtGhI5BAA9hTv8XIlQgjhfRIULQjxCyElPIVNpzZ5uxQhhPA6CYpG2qNxFttwW+sBmJgwkT0le6iub9c6hUIIccmQoGjkrnZQ9I8d2PeVAjApcRJu7Wbzqc1erkwIIbxLgqKRMcQPg9mH+jwr0NChHeQbxIb8DV6uTAghvEuCopFSCp8EC878hiGxJoOJ8fHj2ZC/AY/2eLk6IYTwHgmKJnwTLDiLbWinG2jopyi1l5JVnuXlyoQQwnskKJrwTbCAB+oLaoGGoACk+UkI8Y0mQdGET6IFAGd+Qz9FZEAkKeEpEhRCiG80CYomGjq0Tac7tOF/w2SrHFVerEwIIbxHgqKJhg7toNN3FNBkmGyBDJMVQnwzSVA009ChXXu6Qzs1MpUQvxDW5K7xcmVCCOEdEhTNNO/QNhlMzOw5kzUn12Bz2rxcnRBCdD0Jimaad2gDzO09F7vLzqrcVd4qSwghvEaCopmWOrSHRw8nwZLAsmPLvFiZEEJ4hwRFMy11aBuUgTm95rClYAslthIvVieEEF1PgqIFzTu0Aeb2mYtHe/js+GderEwIIbqeBEULmndoA/QO6c3giMHS/CSE+MaRoGhBSx3a0NCpfbD8IEcrj3qjLCGE8AqvBoVS6hWlVLFSKrOV56cqpaqUUrsbvx7uirqMIX4Yg31xHDtzNvbsXrMxKiNLjy7tijKEEKJb8PYdxavA7HMcs15rPbzx6/ddUBNKKfz6hVF3pBLt0acfjwyIZFLiJBYdWYTD7eiKUoQQwuu8G
hRa63VAuTdraI1//1C03XVW89PNKTdTXlcundpCiG8Mb99RnI9xSqk9SqnPlFKDWzpAKXWPUipDKZVRUtIxw1f9+oaBgrrDFWc8nh6bTt/Qvrx18C201q28WgghLh3dPSh2Aj211sOAZ4DFLR2ktX5Ra52mtU6LiorqkAsbzT74xFuoyz4zKJRSfDvl2xwqP8SOoh0dci0hhOjOunVQaK2rtdbWxu8/BXyUUpFddX3/fmHU51bjqXOd8fjc3nMJ8QvhrYNvdVUpQgjhNd06KJRSsUop1fj9GBrqLeuq6/v1CwUPOI6eOfopwBTAtf2uZfXJ1eRb87uqHCGE8ApvD499G9gMDFBK5Sml7lJK3auUurfxkOuATKXUHuBpYIHuwo4Bv57BKF/DWc1PADcNvAmF4u2Db3dVOUII4RUmb15ca33TOZ5/Fni2i8qhzu1BAwHGhvxUJgN+vUNxtBAUseZYZibP5L3D73HHkDuICIjoqjKFEKJLdeump66UY3eQsmEfi4vPDAW/fqG4yupwldnPes19w+7D4XbwcubLXVWmEEJ0OQmKRkn+vgSZjHxZXnPG4/79wgCoy6486zW9Qnoxr8883j30LoW1hV1SpxBCdDUJiq84nUx0O/iytAp3k24QU1QAxnB/7Ada7kO/d9i9ePDw4t4Xu6pSIYToUhIUjVxlZQx+8Z9UeTS7q/+35alSisChUTiOVOC21p/1ugRLAtf2u5ZF2Ys4WXOyK0sWQoguIUHRyCcujnH2GpTWrGnW/BQ4PAo8YN9X2uJr7xl6D0aDkX/t/ldXlCqEEF1KgqKJ+JEjGJh7jDWlZ86b8Ik14xMbiG13y8uDRAdGszBlIUuPLWV38e6uKFUIIbqMBEUT5gnjGZ25m11WGxXOM2djBwyPpj6nGld5XYuvvWfoPcQExvCnrX/C7XG3eIwQQlyMJCiaMI8ezZis/XhQrKto1vw0tGENKduelu8qAn0C+enon3Ko/BDvHX6v02sVQoiuIkHRhMFsZkRYEEF1dtaUnRkUpnB/fHsGY9td3OrrZ/WcRXpcOs/seoYye5etNCKEEJ1KgqKZkPHjGbl/D2tKq85aRjxweBSuIhvOwtoWX6uU4ldjfoXdaeeJHU90RblCCNHpJCiaMU+YwOgDeyhyuTlUe2Z/REBqJBigdlfrdxW9Q3tz2+DbWHJ0CZvyN3V2uUII0ekkKJrxH5TC2JPHAVhRVn3Gc0aLL/4DI7BlFKGdnlbPcd/w++gV0otHNj+Ctd7a6nFCCHExkKBoRhmNJA9JYUjuMd4rLD+r+ckyPh5PrRPbntbvKvyMfvxhwh8oshXxjx3/6OyShRCiU0lQtMAyYQJXfLmCIzYHGU1maQP49QnBFBOIdeOpNrdCHRY1jFsH3cr7h99n86nNnV2yEEJ0GgmKFpjHj2fqzi0EeNy8XXDm6CWlFJYJ8TgLaqk/XtXKGRp8b/j3SA5O5rebfkt1fXWbxwohRHclQdECn/h4wvv0ZvqhfSwprqTWdeYEusDh0RgCTVg3nmrzPP4mf/488c+U2Er43abftXkHIoQQ3ZUERSuC513FrE8+otbtYUnJmUuMG3yNmMfEYj9Q1upM7a+kRqXyvRHfY3nOchYdWdSZJQshRKeQoGhF8Jw5DD5xlOQ6G+8UlJ/1vHlsPCiwbm77rgLgziF3kh6bzmPbHuNY1bHOKFcIITqNBEUrfKKjsYwbx5yNq9lWVUt2szkVplA/AlKjqN1a0OLy400ZlIE/T/ozfkY/fr7u5zjcjs4sXQghOpQERRtC5l3F9OXLMKF549TZS3IEz0hCOz3UrM0757miA6P544Q/cqj8EH/d/tfOKFcIITqFBEUbgmbMIMJZz6yiPN44VUZJvfOM532iAgkcEY11cwHu6nPfJUzpMYU7Bt/Bu1nv8umxTzurbCGE6FASFG0wmM0EXXYZN73xEg6Ph+dPnr1ybPD0JPB4qF5zfrvb/WDkDxgRPYLfbf4dx6uOd
3TJQgjR4SQoziFk3lUkHD3MldrBf/JLKas/c58KU0QA5lGx1G4rxFXZ9ggoAB+DD3+Z/Bf8jH48uPZBbE7bOV8jhBDeJEFxDubx4zFGRrLws4+wuz28cPLspTuCpvcAoGb1+d1VxJpjeWzSYxytPMpvNv4Gj2593SghhPA2CYpzUD4+hC1YQPTSj5kbaOLl/FLKm+1+Zwr1xzI2jtrthdTnn98igOMTxvOTUT9hRc4KXtjzQmeULoQQHUKC4jyEffsmlK8vt679glq3h+dzz76rCJ7RE0OgD5UfH0V7zm8G9q2DbmVen3k8t+c5VuSs6OiyhRCiQ3g1KJRSryilipVSma08r5RSTyuljiil9iqlRnZ1jQCm8HBC5s8n6q03uCY0kBfySjhuO3OUkyHARMgVydTnVGNrY7+KppRSPDzuYYZGDeXXG35NZmmLfwYhhPAqb99RvArMbuP5K4B+jV/3AP/qgppaFH7brWiHgx9sX4uPUvwqO+/sHfBGxuDbI4iqz47jqXO1cqYz+Rn9eGraU4T7h/O9Vd8jpzqnM8oXQoh282pQaK3XAWevj/E/84HXdYMtQKhSKq5rqjuTX9++mCdPwvTaazzUI4o15TV8Wnrm6rHKoAid3wdPrZPqFef/hh8ZEMnzM55Ha813V3yXUntpR5cvhBDt5u07inNJAJoOJcprfOwMSql7lFIZSqmMkpKz5zp0lIg77sBdVsa1OzcxyOzPw9n51LrPXFnWNzEI85hYrJtO4cg5/6XFk0OSeXb6s5TXlXP/yvupqa/p6PKFEKJduntQnBet9Yta6zStdVpUVFSnXSdw7Fj8UlKofOFF/tw7lnyHkydOFJ11XMgVvTCG+FHx/mE89e4WztSyoVFD+duUv5Fdkc39K++XORZCiG6huwdFPtCjyc+JjY95hVKKqAd+iDM3lwErP2dBbDjPnyzmoNV+xnEGfxNh1/XHVWqn+osTF3SNyYmTeXzy4+wr3cf3V38fu8t+7hcJIUQn6u5B8TFwa+Pop7FAlda6wJsFWaZMIWDUKEr++U9+Ex9GsMnIQ1kn8TTr2PbvG4p5XFxDE9SxtnfCa25m8kz+NPFPZBRm8MPVP6TOde4Z30II0Vm8PTz2bWAzMEAplaeUukspda9S6t7GQz4FjgFHgH8D93up1NOUUkQ/+BPcJaXwzts80jeBjGpbi6vLhszuhTHMn/IPDuOxn98oqK9c2ftK/jDhD2wt2MoPV/9Q7iyEEF6jLrXtOdPS0nRGRkanX+fkffdjy8igz/IvWHCijL1WG+vHpBDj53PGcY6cakpe2Iv/wHAibklBKXVB11lyZAn/t/H/GB07mmcue4ZAn8CO/DWEEAIApdQOrXVaS89196anbivqRz/CY7VS9u+X+MuAHjg8ml+3MLfCr2cwIXN6UXegDOv6C+9emd93Pn+e9GcyijK4b+V91DprO+pXEEKI8yJB0U7+A/oTcs01lL/+OvEnT/DT5FiWlVTxYVHFWcdaJsQTMCSCqs+P4zh+Yf0VAHN7z+XxyY+zp2QP31n+HaocF34OIYRoLwmKryH6oZ9iDA6m4Df/x30JEaSHmPnl4TxO1p25NapSirDr+mMKD6Dsv4dwVV74Vqizk2fzxNQnyCrP4vbPb6fE1nnzRYQQoikJiq/BFBZGzK9+Rd2+fVS/9RZPpyShgR8cyMHdrAnK4G8iYmEKut5N6X8yz3uJj6amJU3juRnPkW/N59bPbuVkzfktay6EEF+HBEUjm83G6tWrKSi4sNG3wVfOwTxlMsVPPU1cWSl/6pfIlqpanmthhVmfWDMRt6TgKrFT9uZBtOvC96FIj0vnpZkvUV1fzcJPF7K/dP8Fn0MIIS6EBEUjg8HAhg0b2Ldv3wW9TilF3G9/C0Dhww9zfXQIc6NCeOx4AZsqzt6bwr9vGGHX9sNxpJKKj7LP6vw+H0OjhvLGnDcIMAVwxxd3sPbk2gs+hxBCnC8Jikb+/v707t2bgwcPX
vCbt098PDE/e4jaTZuo+M9/+MfAJJL9/bhn/wlONeuvADCPiiF4RhK2ncVUfXq8XWHRO6Q3b855k14hvfjhmh/y34P/bdd5hBDiXCQomkhJSaGiooKiorPXbzqX0BtvJGjmTIqffAqfzH38J7UXdo+HuzJPUOc+u4kpaHpSw8zt9fnnvYVqc5EBkfxn1n+YnDCZR7c9yu+3/B6n29mucwkhRGskKJoYMGAAAAcPHrzg1yqliPvjH/CJiSH/wZ/Sx13PMylJ7Kqx8csW5lcopQi9qg+BI6OpXpFDzcb2LWEV6BPIk9Oe5O7Uu/ng8AfcvfxuyuxnzxIXQoj2kqBowmKxkJSUxKFDh9r1emNwMAl//xvOoiIKfv0brogM4cc9Y3i7oJync87u3FYGRdi1/fEfHEHV0mNYN59q33UNRh4Y+QCPT3qc/WX7uXHZjewu3t2ucwkhRHMSFI201hw4VU1S734UFRVRVta+T+UBw4cT/ZOfULNiBWXPP89DvWK5NiaMR48X8F7h2Xs0KaMi4qaB+A+KoHLJUWrW5bX7d5jTew5vXPEGPgYf7vj8Dt488Kb0WwghvjYJikZ5FXbmPL2eY85QgHbfVQCE33E7IfPnUfLU01hXruSJgT2YGGrhJ4dyWVt+9oZEymQg4uaBBAyNpOrT41Svym33tVMiUnj3qneZlDiJx7c/zoNrH5SZ3EKIr0WColGP8ED6RJlZn2sjNja2Xf0UX1FKEfv73+M/dCinfv4LPNnZvJLai36B/tyZeZyd1Wev16SMBsJvHEjgiIY+i8plx9Ce9t0NBPsG89S0p3hw1IOsyV3DdUuvY0fRjnb/PkKIbzYJiiamDYhm67Fy+vYfQF5eHtXV57+VaXMGPz8Sn30Go8XCyfvuI6C0hP8O602kj4mb9hwjs+bs3euUURF2ff+G0VAb8ql4L6tdk/KgIaxuH3I7b855E1+DL3d+cSdP73xaRkUJIS6YBEUT0wZGU+/2UBsQC7Rv9FNTPtHRJP7rOTxV1eTedTdRtlreH94Hs9HADXuOklV79oZEyqAIndeH4Fk9se0uofS1/XgcF77cx1cGRw7mvave46reV/Hvff/mpk9uIqs86+v8WkKIbxgJiiZGJ4dj9jWytcBJXFwc27Ztw+Np3yf6rwQMHkziv57DmZfHybu/Q4LbyQfD+2JSiht2H+FwS2GhFMHTkhpmcB+tpPi5PbjK2r9xkdnHzB8n/pGnpz1Nqb2UBZ8s4IU9L8jdhRDivEhQNOFrMjCxXyRfZpUwbtw4ysrKyM7O/trnNY8ZQ8KTT1CXlUXevffR0+PiveF98ADX7DrCfmvLIWAeHUvkHUNwV9dT/M/d1B2t/Fp1TEuaxqL5i5iRNINndz/LDctuYG/J3q91TiHEpU+CoplpA6I5VVWHT0QSISEhbNq0qUPOGzRtGvGPP4Zt1y5yb7+dvg47i0f0xc+guHbXEXZVn91nAeDfL4zo7w3HYPGh9OV91Kw/e/LehQjzD+OvU/7KM5c9Q019DQs/XcijWx+lpv7s0VhCCAESFGeZOiAagC+zyxg7diw5OTnk5bV/bkNTIVdeSeIzz+DIzibn5oUkVZazaERfgk1Grt99hA0VLb9Z+0QGEH3/cPwHRlD1yXHKXj+Ax/b1mo2m9pjK4vmLWTBwAW8fept5i+ex7NgymXchhDjLOYNCKeXfFYV0F7Eh/gyKC2ZNVjEjR47Ez8+PzZs3d9j5gy6bRtLLL+EqLeXETd8mJvcEi0f0JcHfl5v2HOODFiblQeN+FrekEDK3N3WHKyh6eheOE19vfoTF18Kv0n/F23PfJs4cxy/X/5I7v7iTQ+Xtn0MihLj0nM8dxTal1N+VUn07vZpuYtrAKHbkVFDnMTBq1CgOHDhARcXZW5y2V2BaGj3ffAOAnG/fTPD2rXw8oi+jQ8x8/2AuT+cUtfjJXilF0MQEou8dBgZFyQt7qVp+At3CooMXYnDEYN6c8yYPj3uYI5VHu
GHpDTyy6RFK7aVf67xCiEvD+QTFcOBL4Aml1CdKqblKKdW5ZXnXtAHRuD2a9dklpKeno5Riw4YNHXoN/wEDSH73HXySkjh57314PvyAt4f15proUP58rIAHDuW2uOosgG+PIGJ+OILAkTHUrD5J8b/24CxuuY/jfBmUgev7X88n3/qEhYMWsuTIEuYumsuLe1/E7mr/iCshxMXvfIIiFNgP/A74CPgLcKwzi/K2EUlhRFr8WLzrFCEhIYwePZqdO3de8O535+ITG0vPN97AMnEihY/8jvLf/Z5n+sbxYHIM7xVW8K3dRyh0tNwXYfA3EX59f8JvTsFdXkfR07uoWZfX7tncXwn2DeZno3/GR/M/Ij02nWd2PcPcj+by4eEPcXnaP59DCHHxOp+gKAXeAG4A4oEXgT90ZlHeZjQobkhLZPWhIk5V2pk6dSoBAQF89tlnHd7Za7SYSfzns0R85ztUvvsuJxfewo/8NC8PSeZQbR2zMrLYWnn2TnlfCUyNJObHo/DvH0bVp8cpeWEvzpKvd3cB0CukF09d9hSvzX6NWEssj2x+hGuWXMNnxz/Do79eU5cQ4uJyPkGRBhwGUoEDwNNa61c6tapu4KYxSWjgne0nCQgIYMaMGeTm5l7wVqnnQ5lMRD/4ExKeeZr6Y8c4fu11TDmwh09G9iPAaOCaXUf4x4lC3K2ElDHIl4hbUgi7cQDOYhtFT+2kelVuu5f/aGpkzEjevOJNnpr2FCaDiZ+t+xnXL72elTkrJTCE+IY4Z1BorXdqre8AFgJ9gXVKqV91emVe1iM8kMn9onh3ey4ut4fhw4cTHx/P8uXLcTgcnXLN4MsvJ/mD9zFFRXHyu/cS/sxTfDG0F1fHhPGX44Vcv/so+S1srQoNHd3mEdHE/mQUAYMiqF6RQ9HTO3Ec//orxyqluCzpMj6c9yGPT3och9vBj7/8MdctvY4vTnwhgSHEJe58hseuVUplAOuB22jos7iuIy6ulJqtlMpSSh1RSv2ihedvV0qVKKV2N37d3RHXPV83pydRVO1g1aFiDAYDc+bMwWq1smbNmk67pl+vXiS//x5hN99M+WuvUb5wIf/wqefJgT3YXWNjyrZDvJZfiqetu4tvpxBx+2B0vYeSF/ZS/l4W7pqWA+ZCGJSBOb3nsGT+Eh6d9ChOt5Ofrv0pVy+5msVHFuP0yJIgQlyK1Lna3JVSPYFKoFp3YAO9UspIQ5PW5UAesB24SWt9oMkxtwNpWuvvn+9509LSdEZGRofU6HJ7mPj4GvrHBvH6nWMAWLZsGRkZGdxyyy306dOnQ67TmppVqyj49W/w1NYS+f3vU3vzQn565BTrK6yMCzXz9wFJ9A70a/X1nno3NatPUrM+D+VjIPjynljGxqGMHTPP0u1xsyJnBS/te4msiizizHHcOOBGru57NREBER1yDSFE11BK7dBap7X03Pk0PeVoras6MiQajQGOaK2Paa3rgXeA+R18ja/FZDRw4+gerM8uIbesoYN45syZREVFsWjRImprz95XoiMFTZ9O72VLsUybRskTT+C57VZeM9XxjwE92G+1M3XbIf5yvAB7K8NoDb5GQmYnE/OjkfgmBlG19BhFT+2k7nDHzAkxGozM7jWb9696n39O/yeJQYk8ufNJZnwwg4fWPsSWgi3SLCXEJcCbS3gkACeb/JzX+Fhz1yql9iqlPlBK9WjpREqpe5RSGUqpjJKSkg4tcsGYHijg9c0nAPD19eW6667DbrezePHiTl/ywhQZSeLTT5Hw5JM4CwvJuf4Gpv37WdYOjGdudCj/OFHEtO2HWF5a1WotPlGBRN41hIhbBqHdmtJXMil9dT/Ooo4JOqUUkxMn88qsV1gyfwkLBixg46mNfGf5d5jz0Rye3/M8BdaOHVoshOg652x66rQLK3UdMFtrfXfjz7cA6U2bmZRSEYBVa+1QSn0XuFFrfVlb5+3Ipqev/Pjd3XyWWcDah6YRE9ywosnWrVv57LPPm
DVrFuPGjevQ67XGXV1NyTPPUvHWWxiDg4n60QNkzpjNr44WkG1zMDUsiEf6xTPQHNDqObTLg3VjPtWrT6Lr3ZhHxxJ8eU+MQb4dWmudq45VuatYlL2IrYVbUSjGxI1hfp/5TE+aTqBPYIdeTwjx9bTV9OTNoBgHPKK1ntX48y8BtNaPtnK8ESjXWoe0dd7OCIqcslqm/30tC8b04I9Xp9JYJ++++y5ZWVksWLCAAQMGdOg121KXlUXRH/6ILSMDv359CXvo53zQawB/O1GI1e3m5rgIHkyOJcbPp9VzuGud1KzKxbqlAGVSWCYkEDQlEYO/qcPrzavJY+nRpSw5uoR8az4BpgBmJM1gbu+5pMelYzQYO/yaQogL012DwkRDZ/Z0IJ+Gzuxva633NzkmTmtd0Pj9NcDPtdZj2zpvZwQFwK8X7ePd7SdZ/eBUkiIaPg07HA5effVVSktLuf3220lIaKnlrHNoralZuZLiv/4NZ24u5vHjMf34xzzrF8rrp0rxUQbu6RHF/T2iCPFp/c3fWWqnekUO9j0lqAATwVMTMY+Lx+Db8W/eHu1hZ9FOlh1bxvITy6lx1hAZEMms5FnM6TWH1MhULvHVYYTotrplUAAopeYATwJG4BWt9Z+UUr8HMrTWHyulHgXmAS6gHLhPa93m0qadFRRF1XVM/ssarkyN4x83Dj/9eE1NDS+99BIul4u7776bsLCwDr92Wzz19VT897+U/et53FVVBM+ZQ+393+MJh4FFxZWEmIx8t0cUdydGEWxq/c2/Pt9K9fIT1GVVYLD4EDSlB5axsSifzvm073A7WJe3jk+Pfcq6vHXUe+pJsCQwK3kWs5NnMzB8oISGEF2o2wZFZ+isoAB49NODvLj+GJ8/MJkBsUGnHy8uLuaVV14hMDCQ22+/neDg4E65flvcNTWUvfwy5a+9jq6vJ2T+fArvvJun7ZrPS6sJNRn5TmIUdyZGEtbGHYbjRBXVK3NxHKnEEORD0OQemNNjO+UO4ys19TWsyl3F58c/Z0vBFtzaTc/gnlze83Iu73k5KeEpEhpCdDIJig5SUVvP5L+sYXhSKK/fOeaMN6/c3FzefPNNLBaL18ICwFVaStm/X6LinXfQbjch8+dRcNudPO1QfLV8lkUAACAASURBVFFajdlo4Jb4CO7tEU1sG30YjmOVDYFxrKrhDmNSAuaxcRj8Or4Po6mKugpW5a5i+YnlbCvchlu7SbQkMqPnDKYnTWdo1FAMSvbbEqKjSVB0oNc2neC3H+/nHzcM41sjE894rruEBYCzqJiyf/+byvffRzudBM+ZQ8kdd/GiCmBxUQVGpbg2Jox7k6LaHCXlOFFF9apcHNmVqAATlnFxWCYkYDS3HjIdpbKuktUnV7MiZwVbCrbg8riICohiao+pXJZ0GWNix+Br7NjRWkJ8U0lQdCCPR3P9C5s5WmJl5U+mEGk5c2Z007BYuHAh4eHhnVbL+XCVlFD26qtUvP0O2mbDMmUKtXfexethsbxdUI7do7ksPIjv9ohmcpil1Sae+pM1VK85Sd2BMpSPAfPoWCwTEzCFd80GiDX1NazNW8vq3NVsyN+A3WXH7GNmQvwEpvaYyuTEyYT4tTkgTgjRBgmKDnakuIY5T21g5uAYnv32yLOeP3nyJG+99RZGo5GFCxcSFxfXqfWcD1dFBRVvv03FG2/irqjAf+hQ1G238+HAofynoIJSp4uBZn/uSYzimpgwAlpZ5sNZVEvN2jxse0pAawJSowialIBvYlCLx3cGh9vB1oKtrM5dzdq8tZTaSzEoA8OjhjOlxxQmJ0ymT2gf6dcQ4gJIUHSCZ1Zl8/cVh3nxllHMHBx71vMlJSW88cYb1NXVsWDBAnr37t3pNZ0PT10dVYsWUf7qa9Tn5GCKjcWycCFrps7kpfJaDtTWEWYyclNcBLclRNAzoOW1pFxVDqwb86ndWoh2uPFNDiZoYgL+gyJQhq57g/ZoD/tL97Pm5BrW568/v
-       [... base64 PNG payload of the deleted figure output elided ...]
-      },
-      "metadata": {
-       "needs_background": "light"
-      }
-     }
-    ],
-    "source": [
-     "plt.plot(Cs, parameters)\n",
-     "\n",
-     "plt.xlabel('Regularization parameter $C$')\n",
-     "plt.ylabel('$w$')\n",
-     "\n",
-     "plt.savefig(\n",
-     "    os.path.join(figures, 'log_reg_cs_exp.eps'),\n",
-     "    bbox_inches='tight')\n",
-     "\n",
-     "plt.show()"
-    ]
-   }
-  ],
-  "metadata": {
-   "kernelspec": {
-    "display_name": "Python 3",
-    "language": "python",
-    "name": "python3"
-   },
-   "language_info": {
-    "codemirror_mode": {
-     "name": "ipython",
-     "version": 3
-    },
-    "file_extension": ".py",
-    "mimetype": "text/x-python",
-    "name": "python",
-    "nbconvert_exporter": "python",
-    "pygments_lexer": "ipython3",
-    "version": "3.7.3"
-   },
-   "colab": {
-    "name": "main.ipynb",
-    "provenance": []
-   }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 0
-}
\ No newline at end of file
diff --git a/code/requirements.txt b/code/requirements.txt
deleted file mode 100644
index 3ff5802..0000000
--- a/code/requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-numpy==1.21.5
-scipy==1.4.1
-scikit-learn==1.0.2
\ No newline at end of file
diff --git a/configs/pretrain_to_task.yaml b/configs/pretrain_to_task.yaml
new file mode 100644
index 0000000..4b37156
--- /dev/null
+++ b/configs/pretrain_to_task.yaml
@@ -0,0 +1,39 @@
+seed: 42
+device: cuda
+
+model:
+  name: resnet18 # resnet18 | resnet34
+  pretrained: true
+
+pretrain_only: false
+need_pretrain: true
+use_basic_model: false
+
+pretrain_dataset_name: stl10 # mnist | cifar10 | cifar100 | letters | kmnist
+task_dataset_name: mnist # mnist | cifar10 | cifar100 | letters | kmnist
+
+training:
+  batch_size: 128
+  epochs: 20
+  lr: 0.0003
+  val_split: 0.2
+  early_stopping_epochs: 5
+
+training_task:
+  batch_size: 128
+  epochs: 20
+  lr: 0.0003
+  val_split: 0.2
+  early_stopping_epochs: 10
+
+estimate_network_params:
+  hidden_dim: 256
+  num_classes: 10
+
+paths:
+  data_dir: /home/machenike/bmml/DataMetaMap/data
+  train_parquet: /home/machenike/bmml/DataMetaMap/pretrain_embeddings/train_embeds.parquet
+  test_parquet: /home/machenike/bmml/DataMetaMap/pretrain_embeddings/test_embeds.parquet
+  checkpoint: /home/machenike/bmml/DataMetaMap/checkpoints/${pretrain_dataset_name}_pretrain.pt
+  checkpoint_task: /home/machenike/bmml/DataMetaMap/checkpoints/best_model.pt
+  save_logs: /home/machenike/bmml/DataMetaMap/logs/pretrain_to_task_logs/${pretrain_dataset_name}-${task_dataset_name}.yaml
\ No newline at end of file
diff --git a/coverage-badge.svg b/coverage-badge.svg
new file mode 100644
index 0000000..4fbc8fa
--- /dev/null
+++ b/coverage-badge.svg
@@ -0,0 +1,8 @@
+
+
+
+
+  Coverage: 60.77%
+
+
+
\ No newline at end of file
diff --git a/demo/dataset2vec/lightning_logs/version_0/events.out.tfevents.1775665875.Ilias-MacBook-Air.local.75349.0 b/demo/dataset2vec/lightning_logs/version_0/events.out.tfevents.1775665875.Ilias-MacBook-Air.local.75349.0
new file mode 100644
index 0000000..d42039c
Binary files /dev/null and b/demo/dataset2vec/lightning_logs/version_0/events.out.tfevents.1775665875.Ilias-MacBook-Air.local.75349.0 differ
diff --git a/demo/dataset2vec/lightning_logs/version_0/hparams.yaml b/demo/dataset2vec/lightning_logs/version_0/hparams.yaml
new file mode 100644
index 0000000..28733dd
--- /dev/null
+++ b/demo/dataset2vec/lightning_logs/version_0/hparams.yaml
@@ -0,0 +1,29 @@
+config: !!python/object:data_meta_map.dataset2vec.config.Dataset2VecConfig
+  __dict__:
+    activation_cls: !!python/name:torch.nn.modules.activation.ReLU ''
+    f_block_repetitions: 7
+    f_dense_hidden_size: 32
+    f_out_size: 32
+    f_res_hidden_size: 32
+    f_res_n_layers: 3
+    g_layers_sizes:
+    - 32
+    - 16
+    - 8
+    h_block_repetitions: 3
+    h_dense_hidden_size: 16
+    h_res_hidden_size: 16
+    h_res_n_layers: 3
+    output_size: 16
+  __pydantic_extra__: null
+  __pydantic_fields_set__: !!set {}
+  __pydantic_private__: null
+optimizer_config: !!python/object:data_meta_map.dataset2vec.config.OptimizerConfig
+  __dict__:
+    gamma: 1
+    learning_rate: 0.0001
+    optimizer_cls: !!python/name:torch.optim.adam.Adam ''
+    weight_decay: 0.0001
+  __pydantic_extra__: null
+  __pydantic_fields_set__: !!set {}
+  __pydantic_private__: null
diff --git a/demo/dataset2vec/simple_example.ipynb b/demo/dataset2vec/simple_example.ipynb
new file mode 100644
index 0000000..ca93c58
--- /dev/null
+++ b/demo/dataset2vec/simple_example.ipynb
@@ -0,0 +1,569 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "5f60e0f4",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Seed set to 42\n"
+     ]
+    }
+   ],
+   "source": [
+    "import warnings\n",
+    "warnings.filterwarnings(\"ignore\")\n",
+    "\n",
+    "import os\n",
+    "import sys\n",
+    "\n",
+    "ROOT = os.path.abspath(os.path.join(os.getcwd(), '..', '..'))\n",
+    "sys.path.insert(0, os.path.join(ROOT, 'src'))\n",
+    "sys.path.insert(0, os.path.join(ROOT, 'code'))\n",
+    "\n",
+    "import numpy as np\n",
+    "import pandas as pd\n",
+    "import torch\n",
+    "import pytorch_lightning as pl\n",
+    "import matplotlib.pyplot as plt\n",
+    "from sklearn.datasets import make_blobs, make_circles, make_moons\n",
+    "from sklearn.metrics import ConfusionMatrixDisplay, classification_report, confusion_matrix\n",
+    "from sklearn.preprocessing import StandardScaler\n",
+    "\n",
+    "from visualization.plots import EmbeddingVisualizer\n",
+    "\n",
+    "from data_meta_map.dataset2vec.config import Dataset2VecConfig, OptimizerConfig\n",
+    "from data_meta_map.dataset2vec.model import Dataset2Vec\n",
+    "from data_meta_map.dataset2vec_embedder import Dataset2VecEmbedder\n",
+    "\n",
+    "pl.seed_everything(42, workers=True)\n",
+    "DATASET_TYPES = (\"circles\", \"moons\", \"blobs\")\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "1721315a",
+   "metadata": {},
+   "source": [
+    "## 1. Generate a balanced meta-dataset\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "ca139d59",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def apply_noise(X: np.ndarray, rng: np.random.RandomState) -> np.ndarray:\n",
+    "    noise_scale = rng.uniform(0.05, 0.3)\n",
+    "    return X + rng.randn(*X.shape) * noise_scale\n",
+    "\n",
+    "\n",
+    "def make_dataframe(X: np.ndarray, y: np.ndarray) -> pd.DataFrame:\n",
+    "    df = pd.DataFrame(X, columns=[f\"x{i}\" for i in range(X.shape[1])])\n",
+    "    df[\"target\"] = y.astype(np.float32)\n",
+    "    return df\n",
+    "\n",
+    "\n",
+    "def generate_dataset_by_type(\n",
+    "    ds_type: str,\n",
+    "    rng: np.random.RandomState,\n",
+    "    n_features: int = 2,\n",
+    "    subset_size: int = 200,\n",
+    ") -> pd.DataFrame:\n",
+    "    seed = int(rng.randint(0, 1_000_000))\n",
+    "    n_samples = 2 ** int(rng.choice([7, 8, 9]))\n",
+    "\n",
+    "    if ds_type == \"circles\":\n",
+    "        X, y = make_circles(\n",
+    "            n_samples=n_samples,\n",
+    "            noise=0.05,\n",
+    "            factor=rng.uniform(0.3, 0.7),\n",
+    "            random_state=seed,\n",
+    "        )\n",
+    "    elif ds_type == \"moons\":\n",
+    "        X, y = make_moons(\n",
+    "            n_samples=n_samples,\n",
+    "            noise=0.05,\n",
+    "            random_state=seed,\n",
+    "        )\n",
+    "    elif ds_type == \"blobs\":\n",
+    "        X, y = make_blobs(\n",
+    "            n_samples=n_samples,\n",
+    "            centers=3,\n",
+    "            n_features=n_features,\n",
+    "            random_state=seed,\n",
+    "        )\n",
+    "    else:\n",
+    "        raise ValueError(f\"Unknown dataset type: {ds_type}\")\n",
+    "\n",
+    "    if rng.uniform(0, 1) < 0.5:\n",
+    "        X = apply_noise(X, rng)\n",
+    "\n",
+    "    X = StandardScaler().fit_transform(X)\n",
+    "\n",
+    "    if len(X) > subset_size:\n",
+    "        idx = rng.choice(len(X), size=subset_size, replace=False)\n",
+    "        X, y = X[idx], y[idx]\n",
+    "\n",
+    "    return make_dataframe(X, y)\n",
+    "\n",
+    "\n",
+    "def build_balanced_split(\n",
+    "    samples_per_type: int,\n",
+    "    random_state: int,\n",
+    "    subset_size: int = 200,\n",
+    ") -> tuple[list[pd.DataFrame], list[str]]:\n",
+    "    rng = np.random.RandomState(random_state)\n",
+    "    datasets: list[pd.DataFrame] = []\n",
+    "    labels: list[str] = []\n",
+    "\n",
+    "    for ds_type in DATASET_TYPES:\n",
+    "        for _ in range(samples_per_type):\n",
+    "            datasets.append(\n",
+    "                generate_dataset_by_type(\n",
+    "                    ds_type=ds_type,\n",
+    "                    rng=rng,\n",
+    "                    subset_size=subset_size,\n",
+    "                )\n",
+    "            )\n",
+    "            labels.append(ds_type)\n",
+    "\n",
+    "    order = rng.permutation(len(datasets))\n",
+    "    datasets = [datasets[i] for i in order]\n",
+    "    labels = [labels[i] for i in order]\n",
+    "    return datasets, labels\n",
+    "\n",
+    "\n",
+    "def describe_split(name: str, labels: list[str]) -> None:\n",
+    "    counts = pd.Series(labels).value_counts().reindex(DATASET_TYPES, fill_value=0)\n",
+    "    print(f\"{name}: {len(labels)} datasets\")\n",
+    "    print(counts.to_string())\n",
+    "    print()\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "ee2e28bd",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Train: 600 datasets\n",
+      "circles    200\n",
+      "moons      200\n",
+      "blobs      200\n",
+      "\n",
+      "Validation: 180 datasets\n",
+      "circles    60\n",
+      "moons      60\n",
+      "blobs      60\n",
+      "\n",
+      "Test: 180 datasets\n",
+      "circles    60\n",
+      "moons      60\n",
+      "blobs      60\n",
+      "\n",
+      "Example dataset shape: (200, 3)\n"
+     ]
+    }
+   ],
+   "source": [
+    "TRAIN_PER_TYPE = 200\n",
+    "VAL_PER_TYPE = 60\n",
+    "TEST_PER_TYPE = 60\n",
+    "\n",
+    "train_dfs, train_labels = build_balanced_split(TRAIN_PER_TYPE, random_state=42)\n",
+    "val_dfs, val_labels = build_balanced_split(VAL_PER_TYPE, random_state=123)\n",
+    "test_dfs, test_labels = build_balanced_split(TEST_PER_TYPE, random_state=321)\n",
+    "\n",
+    "describe_split(\"Train\", train_labels)\n",
+    "describe_split(\"Validation\", val_labels)\n",
+    "describe_split(\"Test\", test_labels)\n",
+    "\n",
+    "print(\"Example dataset shape:\", train_dfs[0].shape)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6940da07",
+   "metadata": {},
+   "source": [
+    "## 2. Create and train Dataset2Vec"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "810fcc38",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "GPU available: True (mps), used: True\n",
+      "TPU available: False, using: 0 TPU cores\n",
+      "💡 Tip: For seamless cloud logging and experiment tracking, try installing [litlogger](https://pypi.org/project/litlogger/) to enable LitLogger, which logs metrics and artifacts automatically to the Lightning Experiments platform.\n",
+      "\n",
+      "  | Name | Type       | Params | Mode  | FLOPs\n",
+      "----------------------------------------------------\n",
+      "0 | f    | Sequential | 3.0 K  | train | 0    \n",
+      "1 | g    | Sequential | 736    | train | 0    \n",
+      "2 | h    | Sequential | 1.9 K  | train | 0    \n",
+      "----------------------------------------------------\n",
+      "5.6 K     Trainable params\n",
+      "0         Non-trainable params\n",
+      "5.6 K     Total params\n",
+      "0.023     Total estimated model params size (MB)\n",
+      "63        Modules in train mode\n",
+      "0         Modules in eval mode\n",
+      "0         Total Flops\n"
+     ]
+    },
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "0e8307c5a9174ca29015f2605ddeb032",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Sanity Checking: | | 0/? [00:00"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "text/plain": [
+       ""
+      ]
+     },
+     "execution_count": 6,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "model_config = Dataset2VecConfig(\n",
+    "    f_dense_hidden_size=16,\n",
+    "    f_res_hidden_size=16,\n",
+    "    f_res_n_layers=3,\n",
+    "    f_block_repetitions=3,\n",
+    "    f_out_size=32,\n",
+    "    g_layers_sizes=[16, 8, 8],\n",
+    "    h_dense_hidden_size=8,\n",
+    "    h_res_hidden_size=16,\n",
+    "    h_res_n_layers=3,\n",
+    "    h_block_repetitions=3,\n",
+    "    output_size=16,\n",
+    ")\n",
+    "\n",
+    "optimizer_config = OptimizerConfig(\n",
+    "    gamma=1.0,\n",
+    "    learning_rate=3e-4,\n",
+    "    weight_decay=1e-4,\n",
+    ")\n",
+    "\n",
+    "model = Dataset2Vec(config=model_config, optimizer_config=optimizer_config)\n",
+    "embedder = Dataset2VecEmbedder(model, max_epochs=3, batch_size=16, n_batches=100)\n",
+    "embedder.fit(\n",
+    "    train_dfs,\n",
+    "    val_data=val_dfs,\n",
+    "    trainer_kwargs=dict(\n",
+    "        enable_checkpointing=False,\n",
+    "        logger=False,\n",
+    "        enable_progress_bar=True,\n",
+    "        log_every_n_steps=1,\n",
+    "    ),\n",
+    ")\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "db59e914",
+   "metadata": {},
+   "source": [
+    "## 3. Embed train, validation and test datasets\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "id": "4a18bfba",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Train embeddings: (600, 16)\n",
+      "Validation embeddings: (180, 16)\n",
+      "Test embeddings: (180, 16)\n"
+     ]
+    }
+   ],
+   "source": [
+    "def dataframe_to_tensors(df: pd.DataFrame) -> tuple[torch.Tensor, torch.Tensor]:\n",
+    "    X = torch.tensor(df.iloc[:, :-1].to_numpy(dtype=np.float32), dtype=torch.float32)\n",
+    "    y = torch.tensor(df.iloc[:, -1].to_numpy(dtype=np.float32), dtype=torch.float32).reshape(-1, 1)\n",
+    "    return X, y\n",
+    "\n",
+    "\n",
+    "def embed_datasets(embedder: Dataset2VecEmbedder, datasets: list[pd.DataFrame]) -> np.ndarray:\n",
+    "    device = next(embedder.model.parameters()).device\n",
+    "    embeddings = []\n",
+    "    embedder.model.eval()\n",
+    "\n",
+    "    with torch.no_grad():\n",
+    "        for df in datasets:\n",
+    "            X, y = dataframe_to_tensors(df)\n",
+    "            embedding = (\n",
+    "                embedder.model(X.to(device), y.to(device))\n",
+    "                .detach()\n",
+    "                .cpu()\n",
+    "                .numpy()\n",
+    "                .reshape(-1)\n",
+    "            )\n",
+    "            embeddings.append(embedding)\n",
+    "\n",
+    "    return np.vstack(embeddings)\n",
+    "\n",
+    "\n",
+    "def compute_centroids(embeddings: np.ndarray, labels: list[str]) -> dict[str, np.ndarray]:\n",
+    "    labels_arr = np.asarray(labels)\n",
+    "    return {\n",
+    "        label: embeddings[labels_arr == label].mean(axis=0)\n",
+    "        for label in DATASET_TYPES\n",
+    "    }\n",
+    "\n",
+    "\n",
+    "train_embeddings = embed_datasets(embedder, train_dfs)\n",
+    "val_embeddings = embed_datasets(embedder, val_dfs)\n",
+    "test_embeddings = embed_datasets(embedder, test_dfs)\n",
+    "train_centroids = compute_centroids(train_embeddings, train_labels)\n",
+    "\n",
+    "print(\"Train embeddings:\", train_embeddings.shape)\n",
+    "print(\"Validation embeddings:\", val_embeddings.shape)\n",
+    "print(\"Test embeddings:\", test_embeddings.shape)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a81af3a2",
+   "metadata": {},
+   "source": [
+    "## 4. PCA analysis\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "id": "2c0f6c85",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "image/png": "[... base64 PNG payload of the PCA figure elided ...]
vAFR/2WmlHnEZElS15FdKYgBDiidB8CX6vXNJrmo1ctciDF7a3B+i44w2C/OKdUEEr2zOhglUDtS9KKVBizRx9/PHHZffdd6/aNdIMRx4cUg82YTY8YugYGFziSDzbm1pUXQa/YQuUKgmV4GkgOdNvmWKtkodqaUyw8ZIUSAjELgu1pcLTPp5pIQ/Fxh2yAHkgT0dFq2yhMM1VQeMDgrwpkQn2mIULF4Yu9d4U4MiDQ01oN2Cwcbvutttu/W2SbairNSnPA9dlUyVUsssuu/Q3RApzrXK/lzZUykh7NSbsNth258paUL9MM3lQ6FohHwKCwMtL4iAX6hGyyQQePyUTAxF8rlJrPL/+QBNH35taxcCnkA41Cza3pUuXykMPPWR6AaDdUIg4KOImD5rzQAXBww8/bDZUwhRBiYPem19jl2aDU0loG2wUOjkd8yK/RXNeiOPjoaBUkWcT57PfFMiDzkevJ0FJ3HbbbWd6sRx88MFGpRWDivfvgQceMC+qnKjsIASiScxRCV13tluWr1teMWLIPf/whz80n48yZHQ1vve975l5xbPjM4J7773XfH/rrbeaRnHNzc3y4IMPFv37YuAQdOyxxxqPGnP7wx/+8AbKs3//+99N8jdkDZJGYz72nzRi06VNDqnXbiBLnNeUKVNk8uTJZTfiILoMfsD7ERcmvwGjRZgirMu2VsMWaTJ+uNJpbMaLsXz00UdNMiabK5LObOQat0+DxkTan7eSrXJjhGeCiiZe6hFSzwTEje9VhZT1of049Np+n8Etr90id751pyEQoweNlg/u8EHZafROkiS+8pWvyG9/+1u54oor5KCDDjKhCjyMxXDRRRfJZZddZvYj5liQv2fMDj/8cPnYxz5mfp8S7y9/+ctyyimnyN13323+9rTTTjNk5KSTTjKVSpC0tM4jRx4cUgWV5sVgE67Ye++9+xO9yoENK67TJy5JNkPctsVCJZsCeUgrGE+N3XOC82pM6MnRbjRVqvX4pup5CFNZgUeI9aBrgvWKhxDgFVISwfPRCg5bQrvQ+81eNFtuef2W/u/xPvzmqd/Itw/8tgxt6lM3jRsY55/+9Kfy85//XM4880zzMw4qkADmTyF8+9vflqOOOqrs3xcCv4dk/ve///3+n/3+9783+RVUHjGOHJre9773meokgBcirXDkwSE1wGDDvtFNYLMnRMCpxy/iCluwiHFXci0SM6MSh1onD2m+b9sg2RoTtgojbmE0JjBk3rLQJI0745bm5MK4NFG0YRdue8aU67KWlYDrOCiZ8GpL8HXO4jkbXbcn1yNPLn1SDtqysDGOCkJfkJ0jjjjC999Mnz499N8/+eSTcs899xQUkWN+Hn300eZaEIZjjjnGfP/+97/f9+Gp0nDkwaHqYHPB9Ym7j5M+MW7knYNu7HGELcg2f+6550zskhNsEPISlTyw2TIGvG+xCgOHd1BqPAupMGpZKASVjqsYO5tM8P2mQrqS9Iww9rx07SiZ4FRdlEzkM5KXvvHKyDv3VJdJjnyRVxAUdln2oIB/D5l997vfLZdeeulG/0YojvG44447ZObMmfLf//5XrrzySvnqV78qjzzyiCn5TBvcruRQVeimrid97UIZBlE8D2xsnCSoeacmm/guMfW4wiDlyAMhGsaA3+PkzJhohUG1Gk/pfQ8EFBJOsisKIIyENWwyEZU4pj1swdyulAJmOTIxbfg0eXJJn36LcodBDYNktzG7JXZfJIRCAO666y6Th5D03++5557yj3/8w5DZYgcCPj+J4by+8Y1vmPDFDTfcIJ/73OckbXDkwaEq0Oxs2g2zcQfVTYjT86D9MXC/smhV5TDOUEOpa6l2BEmZbEhsqpxEAElVGsfHK8Kmq/0KMHC4QJM2AGk+QYf97MwzuzyRah47CZDnUUpjYiCQh2qFVQqRiT2a95DVPavlznl3SltPm2w5ZEt5/7bvl4Z8g5nz3jBHHGCdk7D4pS99qX/tk7tBkz0/oY
iWEn9/9tlnb/T7559/vkmuJCmSv2ENkxD+17/+Va6++mqZM2eOISKEKwiV4nHgeqjophGOPDhUZdNik8Z1zFdifCS9RUXQhEnugyx9QgWcBkh2sjfTOKs3CpEH7pUxgBSodoT3/os1ntJuivaJOmqviFpDnKSmUEVBKY0JPEDliG7ayUOlPA/lwBzG+B495Wg5avJRsq5nnTRIH2mwwxyafFkoZyIsvv71r5vrccpHwwXSfu655yby91tssYUpO4dwQBDIl8Cz8K53vas/zEZ/np/85CemDJl/u/zyy01pZxrhyINDxTcsyh9nz57d76KLq59BkLCFtvHmXsiALqQkF6duhJc84FHA28HPbJntctfwJgVqN0XtFWHH8TnZlJN3diitMaGklueFZ4KxxlOmrcftstBCp/g0GOdiiLOJXFxgvFqbNlwLEGY9GEDqgApUFerHEWTMuS55Bby8sNfrjBkzCpLVuhJ/z4HE+zd4Fv/5z38WvBc8DLfddpvUCkKRh1/84hfyox/9yLhbKWMjsQMxkWKASf3qV78yLmo2aTJIf/CDHwyoJjgO/rQbEJXByLHhsujjnAN+PQWwegw37w15KZYol1TYQjtx4mkgOTRsqMbbTdGO42s4iAQvu1fEQEu+rJRxVg+QakyonLPmTGDY7NbjELy0ex7SXg2iYAyZtxrm0EoO9pO4yYSDfwTeSa6//nqTvHHVVVfJvvvua4gBZSW4XwuVtF133XVGWIN6Vk5YGI6zzjrLPNAf//jHQd/eoQah7vannnrKGG7KnTgx28pqcaBc2IL7YKNnrmJsCVOU2ljiDltomALD7qcTZ9Q4vor5cFoOm3yZ5o23WrkYjAmkjBdVQapNomSCcJI+bzxC7IuV1pjwg7STm2L3qURBvWrFyIQSCUcmUkIeMPjnnHOOfOQjHzHfQyJuvvlmQw4gCV5QdsLp7kMf+lC/K4eEEZJBHAY+WNRsoBAHjBVzgUUPmYhbTrhUmIGEOJLgMKbIy2ozpnLXi8tA6aZGAhQyy4VqveOGV8zHTr4kPstma4soFUu+THPCZBrAmBG24EWJr7Yef+yxxwxxQ9LZqzERpkxwUwhbhIFNJjQRm7nNi7wCFRSzX84zUWHywAY4d+5cI8mpYPKhvz1r1qyCf4O34c9//rMpeyO0gdv6lltuMZreDgNfuwFPE2ptVFKwsapxilMNspynQEtBOSkqefGDuHIe8LAgEAMoRa1W6KBY8qUqMmrLbA1zpMHAlUMaT84aTgIkwvK8IROV1JiopYTJcghCXm2ioH9bikyolLYjE8HREHQT5CTpzYzn+2J63ngc+DskOzXuTTbqxRdfXPR9eMC8FCw8h9qBJvJhsDnxYzBxl9tIgjx4jT3zjVMfLntCFIQqgiZTRTl187eUYmGYSZRijaTlpOc3+ZLnh9Hj99KWfJlmj4jemxoqW2OCPVAFq1RjAmJrl4XGJU5W7h7jnI9JPo+wJKcQmbDDHAOBTOSrtA4SPwLRjQwt71/+8pcmR4LN9IILLpDvfOc7psylEEimvOSSS5K+NYeYoSyf8kdCBJBKMogLnbTjbmLlvSZeD+5BcyzCSLxGCVuwKRGqIc8A8oQhhjyk1eAVS76kIoUET0iYGjg8EwMx+TIp8uAF4+bVmFAPEDLFJGN6y0KTGOu4PA9qmAmLJeGtirvZnapaeskEz6EQmdDPl1Yy0W3leVQSgWYklRLcICcTG3xfrE0xBIEQhSpwUdOPu/TjH/+4KW8pxHwJi9iKWhgATkcO6QWLEEOJSiOu2alTp5ZMCEwqbME12YQJE+DtIGwW9sQcNmzB6Z33Z+OnDJQNiI0JpJU8FEu+ZOzwmjCWhXQPNMRRaeVLRdo2cj/kwQu8DHZuCgZMx5oQB98noTIal+eBuQKx1OZYcWuNYBy1uVaS0PdgzbNemeNKsOxuoTaZyFR5/nF/jDsJuZUm84HejY2EZDNUsE488cT+m+f7T33qUwX/Bhbtfe
i2C6kQOKVVIwboEA5qsAlT8Gz96BYkQR6Aiidh8BBZibK4uUdOJH7BfOa9OT2S4wHh9Z5WaoU8lDNwbKyMddDkyziR5rEMQh68YO/jMKYHskKJrn40JiqZ86AJyHip4gZzjTGplqFmnOyXkok666Woxj3y/nY+WaUQmKrgEaD9KK5gEiAp1cSToNUXZ5xxhpHZJfQAaARChQYnMA1b4I3g55V2szjEC81hwaWNS54JjNH2s5HFTR44nSD4xP0wLzVhLQqChC14f7QbKNkr9P61Th68QCMDz1KQ5Mtqn9IqiTifszfR1daYoOyXn9ljjRfAz1jHmfPA++Gp4v1Zg3EB8o6gHH0h0pBzg0eCNb56fc4KzwJig2eI++MZqGeoEt4SwPtWw+sXmDyceuqpxk2CHCciUTQRQhVLkyiZzPYH+drXvmYmFl+R4UX+FeLwve99L95P4lBR6CaGwWQRFVNprAR50DABixURnziIQxDyQG4AXhfet1gb8VolD36MkJ/kS91Y46wuSLNOQRTPQ1iNCcabaja/kuVJVFuo1HScxhoyAllNQ44Nn42xH7fe3mlPFMae5FcOUnhd1QsHoWL8NYciLQnTcSDU0yBEUSxMQYLkBm/Q0CDf/OY3zcthYIBNB/ckBhuDUUqlMUnywMbJZsmLMAFkJk5vRrmcB7uao1yYpFbJQ9zKl97qgoGafJkUefCjMVGIuNlkQlVda0FhUtdfWu+z0eqJQm4eXnfmsoZPSThmj9QwE2SC8bd7dNQqBtaKdahImIINCYONwUT0K8wCiEoetJqBeDDhMNyGGPE4XaalKkJUdAr35d57720MoB/UInmIes+FOlimMfmyFsmDH+KmLnY8v4QXMV6MM2so7blldo5B2pHL5cx4Ms/t5FclzuRCabdWOz9IvSoqXpUk8IwSHQCo3BJBCNt4y5EHB9+bIScavA1s9sT1/RrMUuQhjOuZ3Abug0Wq1Qz2NeNCsbCFik6xCfit5tDMbD+GuJZPI5VMvkx72CIN94ZBYgw1oZGxVWPGSRkix3yutMbEQFTBzGazG+XxQSbsBmvMdR1/Jc7sZXgwCbni0f3Nb36T2D1+61vfMv2omJ/XXnutvPe975XHH3/cEImgcOTBoSS0DppTCy448hpIXorqZtYNIcgmy0YCeychj4ZSxHztv42zC2ah69m9McKITuk1asn4VOIeyiVfcg9KJGol+TItz88L1i1rmJcaO7wUxTQmIBPVTGyvdfJQaK57K2kgE7Rr+Nvf/mbyVyh3P+yww8wLKfs4mwfSClwF+8g7pGHlww8/7MiDQzIS00iSkySL1C4GOw7ohuB3c1CGzv0gusQGV+iacYYF7OtpC29OyH57Y4S9v7QankogSPKlqgSmEbXwDNXN7tWYUC8QIQ7Wm10WiuGppDGvNfJQF/BeIcLve9/7zOvXv/61/Otf/zLVjPfcc49cffXV/eGmMPtNuXuFrEDUIShh4DwPDkUXLYwYg41rkwziuIiDlzyUA8SF/AY2OAx3Ma9HEp4HDddoC2/CFFHixJtizkNSyZf8N/MircmXaScPhRImmdu4z7X1uK0xgXIsY263HofEJ/k5wxjkaqrr1kfw0uD1YY9DUJGXJoTHSRw4AB111FHmMAZBv+GGG4ygXxikY5U5pG4REIMjKx7CQCYxky5O+CEP/BtxQRJ8mOBkMpe7ZtxStmyeuPVIDN12220ji07VInlIE+zkS0rFkT/HwKQt+bJWPA+l7pF/49DAi7XnDSmxRwCbTPjVmAhyj7VAHnLr97Go5IHxUzCOhEfjBEnuHITIc/n73/9uvBz33XdfKALhyINDP/SkQUYwzcxIrIEJc9JLoolVKU8B94HXA7e03xbWcXoeNM+DBY23I4iGRSk48hAvSO6zs9tLJV9W4qRcS+QhaKmmN6SkXjnGmiRmcibsJmBx5KfUCnnIrleijUIeIGZ+9rkoIOTHIQiwryHA9dOf/tSETILCkQeH/k
WqraNxz6PdoIk6SUlJF7suGceIT2ljLb8LMq6TPUlLsHOuhbGJizj4uT82YZIEiS1zeg4rPRwX0mwAC42lN/kS8qdkopLJl7VAHqKKRPG3zFNeaJxwPW09XkhjgjEPGvKLGgqoNHnIRBhP9p249pog42t3sA4CRx42cah2A6cGbV09efLkDRYBizdIj4ew5IH/ppKB2CrJmcRdo1wvDDitEqJBcIdNEYMTJ4qRB1vwChcxpxDGwZYe5lULlQZpga3GaCdfeo1b3MqXtUIe4haJUklyXl6NCeYyVQSEQOxKjnJlzrXkeaiPqNOglS5J4qGHHjIHMtbBddddZ0Qdb7/99lDXcuRhE4ZK25J0xlf6lRRKzomiyeDX2LNwOO0DkhLt2F+Y64VZ/GSXE0fXcA0ekEroRnj7YtilWfyMkzNJo/SFwU2vxo6vldD7T3OoJch8tJMvyWFR46bSwrbypb7CJl/WAnlIQp66mMYEhxKVcoZMoL6obnqbTHjHu1bIQy4GDwl7YLmGglFx7rnnmj2ONTBt2jRDHEigDANHHjZRsHEyiSAOLFzCFMXEYcJoMviB5ihwH+RZ4GpGZjrsIgzSyMqGEhf+HuLC6T7K9YLcnwpOceLgvdk8IRNe6WHcwraxI4kUY8fmq2Si2jX5tah8aQso2cqXeOLIu1HNA7vh0UAhD5WWp7alnAHzvJDSqN16vFbIQ9aHxkM5cFBIOueBQ4rqPESFIw+bqHYD4QEMkLd1dCHoooi7bIr35ATC5kGYQoVTwiKM5wH3NQuKUAFj4W2vmxR54CuuXLwdtuBUqffzGjvdfCETPE9il7YyYxzJgWk3gEkrX+r4EsrSVthK1kqNby2Qh6Q9D+WA18yrvqhkghAH85swEoQaj0WlNSaCIBvD3ognJumwRZxw5GETApsFJ1eSIpnsxcSWomgyBFkoGDs97cfhrgtSbaH5FVRUFCMuSelGMPYYIxJUUevUng9RNl+tlNHkQNWvVyKh+RIDDUkaP8JHtuZBseRLHWM7H6UWyEPaGmN5x5v5TF6Kdu9lLdqVM8Vky2vV89BRgbBFnHDkYRPSbiCui9FicSLv7Hey6wKNy5BqUiInCsqG4lowfqst2JQIFfB5KAMtll+RpG4EYw9pikt61q7J11bNXmVGTnFKJNh8/fYwSGvOQyXvq1TyJbkxuN21soDxTaI6aaB5HvzMZx1z9glbYwKPpU3eePH71fo82YjkQTU0nOfBITVgUuIOJD5ObgENUIJWMbAg46i44O9xR2LMSEqksiBO+AlbqFolnoZyBCrusAVub8IUVHJsv/32iZ767DI6TQ7E9autgskx8YopbUr5ElFRLvkSYsHvQNqiJl9uKp6HQmA9a+vqQrLlEAlNJub3vBoTlbzP+ojrpxI6D3EiXbPZIfYJzWbGKZtTJkmRYRdU1DJI1U5ggWtSIq7fJBtZFVOrhECRnBnlekGg7w2Jw9BAWioNb1tsQkYaz4fQkSxYqJNlWk+mirTcnzcfhXAYcw0DHTX5MinUQjJisXv0kjcNyTKnFy5caEKSeNpsMpFk+/FsTGELRx4cUqHdwAmTRaTSylE2iijkQWu8OXEjj6r3Ebf4VLEwgzbVwkD6Vassdb0gsN+b943SxjxOsJFqd79C8Xw+OxsuSWta/ZE2pDWcAjAkhDFIwo2afJkUaiEvwy/B0fnKCzC+Sia0DNfWmAgStqtEwmQulzOehzAl6tWC8zwMMGjsjAQjFg8SpGET8mywMIKGLVjALFoSA3fffff+Ei37mkl7HlBsxHij3FaqqVax60UxUBgK3pvx573nzJmTylh4oXi+KgUyfnhNOEnbzafi3HgHIryGOUry5abseQhrlFnntqfNqzFB2E41JtQTFCWslI3oeWDPBi7nwaEqYALjaaBhDYuBMEVcIkIsjCCGTztRcsItlhiYRAttoNfEXcxGgaJamI6gYcMWvD8GgTisXQqb9lNeIaVADBzeCZ4j/2274K
vdfAqkdUxLneqDJl8m5XIfSJ6HoBoTGrbjpWXO5AfZrceDkIHc+tyMqOTBhS0cKgo2AZg1yVnawpUSwKTUIMvdi+oXlOtEmUTYQl3EnCwwcn7LUePyPPAceG+8PqhFYljt66XR81AObKJ4blR333bBUzljt2lm7sXdWbEWwxZBDHMllS/t+6sFz0NS92iH7YDdelwbqvE87IZqpe4jm81GInh4oiCMlVCMjQsubFHjYHFpMiLsGYGbJEqW/Bh6FhyGkwXoR78gibAFoBSSBb/HHntE2mSDKmtyenz88cfN+ONt8W4Efp9J3FUeUVDonr0ueE5N6oKHvGrWuyYQJpmolmaEXYNBlS+DnpKBzq9a8DxUogqIMBEvu6GajrkmvpbSmMhGzHlgD69mqWkYOPJQo9CTA7FoDDbuOOLquMqTON2WK9VU8SkWIIbTj8GIkzwwHoRrAFLOqtgYBUEEf3gOnBB5XxQjC/1+rXoeShEZu4SOhFg7650xwQOliWqaLxFnyWJaN9s4QwLFlC95hU2+1Ge6qXoeSsEOK6lmCsbd1pioWx/aUzLBM4hCcrh+LSVLAkceali7gY0ZFxsxfeSVARM4bIvVMIaee4GZEzKhG6e3I2eYawaF3VgKcHqIY+O2xbGKbWC2dkWhpNBC1xvIsLPemQuaqIZnQstVtd141JbjafHOVDqfoFDypYaRlECXS77UdZf2OZmG0IrdY0YJ8lqPxoQe5jTxNWhJvGo8pP152HDkocbABGXScsovJO0cpioirKHX+D7GAa9HoY6c5a7JNaKA9yZkg0FiLO66667YTvfeBEy/DbWKoVY9D3EmqmlsGUNHbgzjYScG1prrttrJiN5TciHDxjPwJl/Wiuch7n46caBQjsqjjz5qwpQc5kjAhODZOSrlchlqTZoaOPJQY9oNnPI56ZKlXUilMGhVhF94wxYYbQgMm1bYqo4ongcNU3CaRTuCUAUbaZwVHHbYophSpUp9+9ngatEoxn3P3thyoZbjdr5EuXmV1jGtViVDseRLW++ANatJxGk0zmnzPPjZGzOZjJnT5HmxT2tZKHsUoSU74bVQqXMlOmrGDUceagDqmuSUz0Zbyj0eh4x0KUOvZYhqtNmgwm6SYcmDXdEwffr0fmGYuE/3hXp68PkxcoyBX6VK+3ppdrUXQ1L3XKrluBo6uxbfbjme9nFMSxmknXxJLo6GkSgJBQ8++GDk5MtNnTx4dR7I6bGrk7wJr+zlOuaEmDkIJi0Qdfnll5uvhLc1L+3SSy/tFzELA0ceUg4WDwudUy5uLU75pZIRkwxbsAgee+wx4xbde++9NzDaYa8Z1NAjXkSooFhFQ5KeB3Ir8LYQtw9TAlqr5KFSKNRyXPMltBbfTgwEaTDQhZDW56xhJNYP3h7mcdTkyyRRi+ShXMKrrTHx3e9+V26//XZDNCDH999/v+y7776xVyg99NBD5uudd95pQioXX3yxHH300f2eqDBw5CHlYQpO+LBV9BL8VBAkFbZgI2ezwS1XyGgnTR4YD06jGJFSiZlxVnCosBPvrXkmLPCwJaCOPAQDc8zedO3EQG05TtIwG68mqaWFTKTF81CuKVaQ5MtK56RwP7zS5AmJo6S02dKYuO6668ye9qUvfcmM96mnnmq8bwcddJAcccQR5udxjPc///lPQwpJrse7dM0115h1NXfuXDnkkENCXdORhxRCY8EYKxZzkFN+3GEL7oXafaoJmHQYzrg2D7+GHhLFqYgNrZx+RBJJiST2sbDt3IowcOQhGrTlOK5X5vh9991nTk12y3E7MbCagjtpJw+F2nEXSr4slpOi45ykhoeu47R7HvLrKy3CkBzGnJypqVOnmtfPf/5zk9N29913m/FOag5BUEDQJHcbjjykcBLSFY7SQx4sp/wgfQTiDFvgXiNcAoFhM+G6lVat9Mpcl9us4gxbQFqUPHhzK5ImD2kxPmm4h1L3RekcJMFuOa5Jat4ulpU8wabl+UVpx8
2/e9u6a/KlNruDzClZi7vteK2Qh+z6/TbKfZLzMG7cODNnlEgkBcb1s5/9rAmB77LLLqGv48hDihYzoQHcsLjnVbsh6AYUV9iChkgQBzYEjDabhbLVSpEHFV4qJ3Md5Jp+wYkLtUgwbdq0yMTBL3kgp4PPjEHEw8LGXKgvyKYev/fek7flOGtJVS+15bjG8u2W45sqeSjkeQibfOnteaJEIiphqzXyUB+xMValqi3OP/98k3BOsmwUOPKQAqgqHydsNp0gbaPjDlvw/tpQym7qlEQVRzFDz/tgQEkUJUyiWcuVCg3g+WFxEaKAyMXVQbLcvSlZwsvD73IfxEOJ5eumHbdC40AF5Mvbclxj+XbLcT01BxX1GQjkIapR9mp42MqXzGO7P4QKggUZE/V0pnkc7fuMMp7Mz0ooTH7hC1+QW2+91SRmhmkWaMPtQikIU5D8xWKj7A+DHYXBRglbsPjxNvCVjF/clfZ1484lKJSfoH06MJC41YKeuqPcJ3+nqp277babSSjCoMeZgFmIPOj7QhZ4XwgCPyMplA1YjZ4qNNon6KSz4dO+cfu5v2JCSoypEjTmme1+j0oY/YQFqokk7q9U8qUmuAZJvqyVSotcDP03ktZ50H3nP//5j8kVIvk+Khx5qBJ4mLj5iM2SjLTrrruamFdUMIk1SznIxr9s2TJDHHD7kpToPd0mQR6819QTP3FskhPDbBxhEyZ5Fur5sVU74y799F4LMsD7QvjwOHECtlU3eQ5ehUY2Y3tDVqOnUsSbAqI8E1tIiU1URX0YUzxuzEFtz6z5EmHnYlqRtGckjuTLWiEP2RiEtghbhO3+6wef//znzderr77avM+iRYvM98ztsHuGIw9VAIuCCgZOkqrQGFdcWxkwE9qPe5t7YSGTZFYqzyIJ/QglD96Tt5bmhb1mUMMCcaKyBfLGGNiniDgrJLzEho0U4kBYBsEpJX6lwELnGfHid8mR4DpsBlQcaIgjzgS2NOY8xAmvqA+JwpovodoHeIPU2+On5XiYnIJKotKGOWjypXrfaqFMM1tC48Evkpan/t3vfme+Hn/88Rv8/A9/+IOcddZZoa7pyEMVtBsgDrhKmSz77LNP7BUMwM/pm1MvRpOTbjnRoyT0I5SQ0EKbMeDkHXUBBdWO4FnwgjQUigHGrVipXiGV1rbzSsJcz3uCVjexJrDZTaj47zQbtDBI4vNwAva2HNdxxTPBWrDzJQoR/1rIeajm/flJvoQIM6f5WaWrZSpJHvLr51iSngeIGmPIVzscHQWOPFQIOkEIDRBvxTUP24x7ASt5KOclIBmRclBO+RjOcifUJMIWbMhsDmzAGNE4TkJ+jb124uSZePM7bMQdtuC5QNj47HGUf9ooFeIg8RPYfSP8uCvTagAr5Q2xW45D8phb6u3RluPFvD1pHbs05mR4ky/x/hCWwxtpV8uETb5Mu+ehvYLVFnHBkYcKTS6EbCAOLABi6sT92ITiRrnKCDY/3NsYE2qJtZV3OcRJHux7AJCXuODH2MO+KcOEMODtKJUcF2fYgg0Q0sbGV0yzIs7scm+IA9JKCa6GODRJUA2fq+LwN79wqfPyJrTa3h5+rmsmTUY6LZ6HcmBt6EmZjr1Rky+TRC6GZ8znc+TBYSPtBjZqysM4XeNxSKr0sVx+AhOUUy+TPWg5aFz3y+bKPXAtNgVa2cbp4i3lebAlrnGV+pH7jos8QBqI7WLQUQyttEHhc2jM2ZskWCjEAcEp15I8Dai2AfR6ewgFaniDAwMvO18iLS3H0+Z5KJWI6E2+VCJcLeXLuD0P2WzWrL9KlGrGCed5SAjq3rRzCmzXeFI9KIpdm00MN722kA462ePwPGgba01MVAXHOMlDMc8DC5TkN5Ijy0lcx5nzYHfh1GqaNGza3iRBNi892dkhDoyhXf2RFqSV0ODNoeQaD4+62RlXvoekMe62t6eSRq6WPA+lTvQ2EfYmXxJKKpR8GZdWSxLkoa2tzXxNMuchCT
jykJB2A6dMSr4QqfFm8AO+V+MZN2wvAffCSZtFRUY/5CEMouonaEWH3cZarxena7fQfRJPJEzBBkK4IEhlS5ScB7xOKu8NeYS4xK3SGRfwiPDi2dgnOzZkiA/k04U4gp/steU4HkfWpOZLaMtxTpu2katU6KgWPA9+qy283Vg1+dJuga2lt3EoX8ZNHtrb281XF7bYhMGC5KTGpkBcuZSxTjpswcJj0VAKCGztgjDQMsKght6u6PCGSoJUhoQlDzwHSBzuzu233z7whhk2bKGKoRgOzavg9OnnWtU+EdonO0gEXzFyfkIc1bjXNKLQc9YqDV5aYaDeHkKb2nLcTgpMalxr2fMQJvmykFR5XMmXuYglpezTeKBqLeeotu42xWACMTkxGEykcsY6SfLAtbU3BclycVQy2Ibe77W4B4gDrnHyGwoJT+k144KGGTQpEw8QzV/wAEW5XhBo3bq3dXitdtXkOdkhDo3rR6niiIq0j6OfUBxGzm45rtUx7CM6rna+RJwtx2vF8xDHPXpLbzX5klccyZfZbDZSWISwhR/tkLTBkYeYtBtw7RIeoB8CTZzKTXrNS4i7HlyTb3CPIrgUh2plUENv98coJTyllQVxex4IF8yePbvf2xElESlI2ILPAWnA21GoJ4df8pA2w+i9H43re0Mc2hrbVXGEy+MpVB2jSYFogtCvww4dRWk5ntYqkKTvMYnky2w2G0nkT8lDrcGRhwhQFksiIiw2SBOnoEqQfiehVjJw4o2LOATRj8Bwcw8QmFL6CfZ14yQPnIoxYngaCnk7gsKvwed9yatQeetCp+9a9TyUgjd5rVgVBxsxSapxu+LTelqLeigolBSocXxtOU4I0M6XCOI6rwXPQxyyz+XgN/lSiUSh5MtsDDkPSXd5TQKOPIQEEwamiqHkwR900EGBTgJxkweaObGhIGTDhh23Gpt2jStl6DVsw0KDSPn5XHGRBzZDvD+c+oln0iskjsXox+BreIYYK9oZxca+FslD0DH0VnHYIQ5tMmb34gibh5P2cYzbo1io5bjmSyBUFTSOz3NIe4yde0yySiJs8iVjO2r9HGbMo5KHpKWpk0K6Z09KNwUmFG5EZI1p4ARjDbpR+D3JlwMnPdgxOgLaF4JchyTyKYoZejXcuPxISlQtiyjXDAKeB94fQjWQJ76Ps/SzlG6Efm7KX7WVdjHUInmIimIhDuYrawiXsO2KD2os0npaS1qemoMKnkVe2mTP2zCtVB7KQE6YjBN+ki8zmYz5PQ6RYZIvk+6omRQceQgA7QzHKZMTFX0pcGOFQRxCUWzE3AsT126ulVQyZiH9CJV5ZlwYD5h4EEQlDxAGvB0wd8IFnG65p7hQzOBD2qji4FSC6JOfeVCr5CFOeW6vi1gT18iPwXOmpzrtxVHMeKR9HCvZ24L3Yf7z8rYc1zwUL0lLe+8NkMbGWIWSL5944on+7rh2kqvf5EvCFi7nYYBCSxQJDWAoYaGF2lYHRVgjz/1oPK5QgmZSAlRe5UoMJwuGTR7DHcbFGIU8aFUDiomUvvkJrQRFIYMPUSK/gY2Ez+03XFWr5CEpME8LhTggE6wznqP39Jx2g6eopnEu1nJcSRqkl70Lks3axdhV+4SfVs+Dn+TLuro6k2PGXPUmX6ooWKnky1qUpgbO8+BTu4FKCoxVkH4QSZAHNoK5Tz4jLy9YLgfssatM3mp8weviXosbapgZE1yjnGggLmHCNt5rBgFjBmngVOVNUk2iesO+nip1Eh4hZBVkc/NLHmrFQCYd4oCk2dUG9um5FohEWu6vUMtxTaxGk8ZuOY6RS0vyXtrJg0JzHoImX+JlY84nXW1x//33yw9+8APz3xDKG264QU488cTI13XkoQRUKZKNi0nMKTPOhxyUPOCi/9Odj8msxSKDhoyQh+5fJDO275VT9tqwFDKJDph6XWJ8eBtYFHF0hQx6r+om5PMWqmqIswumt402ZAnSRDJmGN2IWvQ8VOueeV9VZ8
S7ptUG2jdCVfnIOyoX4qgG0hwWgIThLcODClFjTWm+BGPLONoenyhliGmvtkgyvFJfJvnyq1/9qtnT+T0OYRxSkxhr1gpaN7fddlus13XkocjCZ+JiKLRt9e677x77RPZLHrSh08NPvSgPL2+VwcMGC9sSe/o9Ly6TSWMGyz7bvGPEk8p54D5gzxqmiFJnHoY8aBtx7c9R6HkkEbbgZDZnzhyzuIM2FKt18pAWeKsN2IQfe+wxkyiYxhBHmsmDXapp6x54W47TDhuPK2OpY1vJ7qu15Hmo83Gf3uTL3//+93LrrbfKlVdeKffcc4/x/pC7duSRR8rRRx9tSs3jwLHHHmuue9lll0mccOTBA81cJi5ILwLcTBirJCaxHyMPWyWRDLaaH72NrHpzmby2crU01NfJlsNbZNigRnli3uoNyEOxrppRxgQPDO41srshUpWoZrDfH++PtzdGIcQdtiAuTKgCly/EIcrG6Zc88Mx5VevEVwuAuDJ3mA/lQhxhqjgGOnkoVm1htxwH2nJcT8vsjd5SxSQltNNOHnLrw7hhEjvxqJ177rnyyCOPmEq5U045Re666y7zQuTun//8p6QZjjx4wAY0d+5cw7Zha5xqkpSRLnVt7Y+glQTnXPe0vL6845177eiRHccNlkGNGy6wOBMm2TyIi0Kk8DhgROPcFMuRBxWd8nvqjzNsAWHi9MXnjoMw+SEPqh3CuNuVB0lu0rUIexzLhTg4CNi9OCoR4kg7efArElWo5bjqS6huh50vEafMci2Qh+z6/TuqSBRzkjYCvM477zypBTjyUACUO2lPgkp1v/QubE7ZnKCoIiBj+tWl7fL84tWSzWdF2De5t3xG3lqxTg7ebkNVy7jCFmQOQ144xUGkIBFxE6lS5KFaolN2QiZeJ5XRjopS5IGfE7/nhaQ3mwkGEAEqjJ/tlsd1X4n+EXrPtR7i0Np8XpUKcaSdPITVecAbZpcqYvgYV7vluD22UVqOp7FU04vc+r0maktuV6o5AKDZxgoWQyU9D7bgkZ2QeNcrr0hnb6fU12ckm60TyWcklxFpbmySbUZvqE4WR9iCUwVkgaxhknmSKIMEha5pkycqGjhN+t3oooYtcMtShqkJmZTnsriT1ozgmeNpQiuD+cc8ICmTl7rl2aBVXEn7R2Agk27lnNY8Db9zwlubb4c4tJeBjmVcIY60k4c45Kn5fMxVXgjDse6Yw+qVgIBjFG1p5yDztFY8D3Xrc0fCQlUraw3O81AGSXe/LKSbwETyJiS21z8jmcwI43VoqH/nbzYbtrGhjBK2sE/duOrVXQkqQR5UfAmvQ5hqjihhC4wz409eB6d/xjHOMEgh8sDJDbLCs9ZnzhgUc8t7+0dAJHAlE9ZQA5iWUrskEfaZ+AlxxBEuSjt5SEJhUqs0dM1qy3FeOk+DtHKvJfIQBewBTp56AMC7oJIOW6ihQOZYT9oFdRPqOmVQS4esaRsuGeHUgEuvV47bIxMb4WESa0txW7Ey6nX9kgcVX+J9MaRhXJ5hPA+MP4YDtyukgbCVfb2kyIPmN/B+yHr73YS8dfsqTQz5wWPDdXSDhkzEURUzUFEqxKHhIrsttt92zWn11lSyMVahluOaL0E+kR0+KqTGWCvkoT5CyEJDP0l6HthXaVmgYK9jn2fc8RiFhfM8lBughgYz6ZMAkw6XFSVn5BcUk7vu6M7K0y9NkcbGBTJk8Frp7m6WurqsTJv6lJy0y6cKXjeoASUxUBtrFRM/StLzoI29OA3y/lFEp4Js3N6QgVdeOwnyYOc3lKseCdrKWUvtIBKqwKndFzGQAynxMomTfaEQBwaPhGHIpYY49FUsxFELnodKzwPmKa9CImB2+Ei7hNZK58/6iHkZSctTU2Z+2GGH9X//uc99znw988wz5Zprrgl9XUceqhi24JTDwiE0wEm/2EY0582V0t3dQI6k5OtXyaDWRmmqb5Kdhhwmo1r6BEjC5jywidCVD/Iwbdq0/lNCKZ
GouMHGTIxUG3tFQRCC4/V0FDqhx1n6qddSkS0/LcuDwi61I9lWuy9CJiBnzAs9SWviZS0286rEPdkhDk5ohdpiFwtxpJ08VPv+CoWPNF8CTRvyrQAeWTxsQVuOVwq5GJI6k855mDFjhhlb5idf49pzHHnwwLug8DzEHbbQkycnQ1x15coAH1v0vLzV1tcpr7m+RXL5nIxqGSmjGicU/H31PJTbIFStEVAGWS7uFrd+BB4dWmhzr37e3w/8GjveF49DOU9HnDkPxHyZS7ziEtkK2n1Rs+P1JM2/28mCaW/TnMYQB2TCS8ySUHiNE2kLCXjVGJmn6B9wnwhVMdaa1+On5Xit5Dz09PSYz+Z6WwxAxO15YKIQf8JwcjLkRFhuEczrnkkBqfnvOqoeMvWyonOFbD+upWy772LGQHs04EIsptboRZz6ERgv4v2cfHHZxZUwVM7zwL+RW8LpBk8LRrUU4jp5UyXB5+V6KMdVY+P2ZsfbyYIqAKQJbRjItGeAV9t4eEMcdtkiwKvFOKorPk25J9X2PJSDnubZm7z5Et6W43ytlqJoNmLYQiu5HHkYILANRpyeBzYViIPqFmBAMSql0JvrlVzjItl8XJ0sXLw5d2d+PnLEctlxy8KLRSdzISPKz+jRgNcDvfMgPRriyHlgXDFUJO2QnIhbnRh9XFBPQaHNURsC8Z777befrwUbNWxhf15KXrVPStpO0nhftKsl8xRypTonkDvGLkrNftxIWyjFJmbkDSE3zPPGOBQKcTS2NsrTK5+WrmyX7DxqZ9ls0DtVTZXqEpyWeVgIen+6hrXlOHk93H+xluNKKCqlKJqNSB60R4vTeRiAiMPzwGQnIYj4HUya7HrdmMtdu6GuQSYOnSgib8qYUcuko2OwtDR3yrjhjbLZoA3FoQp5HmzA3rWbHmGCoBM2KnnAaEOeCJdovJ8xibuRFfCSBy2Dxa0cpJ16lLAFpJPPy0bH5+U92ej8foZqdrXkOXPfuIwxgDNnzjSbt514mcYYdBqg8wVDpuScua9VHLfNfVD++vaLsq5nqLQO6pQtx98uZ+x6ouw7bt+K3l+aPQ+lyE2h7pXqQbOJmhKJJOdqNiJ5YC/Ea1KLa8kFOBMmD5zmMCCc3Djt2q5gvwJUJ005SX7x5G+kS3plxPCV0lBXL+/f9jSprys84VTQyb42iZnch61hUMmxIFEHNy4LHuKiJ4O4Kzh0w9HNR/tykBQapn142LCFnYzJ58VlzUaRthNzMTB2bLo8L54VYQ5NvKSCg1itnXjpt4QxTqTV+BUyzjx/iMTQkWPkm3NelTXraEKVl86uobJy9TD5bft1MmjbQTJuzLjEQxx6f2n2PATJJSiUi6IhDp2rdr5EnDoouYgJk6oumda5XAqOPCQYtsBgk1dAtnCh064fY8x9PPZao7S9fbIsW7dahg/KyycP2kl2HUMIo3x+Ai+8HjDyqVOnGrdfWIQx9NoRlBNsIeOdRBdMwDUZW7K2eQ6Mv24uQa8X9P4IRUHUcF+j36D3VCuZ+OVq9rl/beMMmSD5NwmVxlKoBRJW6Bk/8sYqWbmua73nkX9nTjTI2p6tZFV+lXS+2VmyiiMO6HxO8xyMElYhhGGrs+pchVDg6bRbjmu+RLUSJtsTLtNMEo48lIEa+CAbviblkdjjFR0KWr3w0Ksr5PZnyYuok1HNI0VyIn+ctUx2HDdWWpvqS943YQpO3DDxKK2k7fsNYkj5bGyE5HaQJKiZ1FGu6eceARsGxEFlpsN2qAziebDDU+STkEhX7PfKlUemBYXuxdvGWcvsIBLkdtjGTxMv03zKrWRYYGVHjzTVNUlntnPDf8gNkT2230OGNg3dIMTBWHJ4sY1d1JNqLXge4uprUajluOZLaMtxlXpXdcwgxDebzUbKBVLykKY17xeOPJSBTuBSlQuF8gpY8OUMNtcrV1L56Bsrzdeu3qxkc2I6aHb2ZOXp+atl30kbG2MF18TrwUkxSIw/rlJNlV
1mIZYy3kl5HhBGwXhD3qJskn5zHnCN4m3ADekNT9nXAn6uV0ubibfMTksYIRPod/B57aZecbUaT+sYlSIP248dbJIj57W9zW/2/3y/rScY4mCHOPTkXKj5lC1UFTTEMdA9D37CcbxoOGhLvdsdWO18iVL3kY0hYdJ5HgYQ7EWlRtcPecBdjcH2m1fgh5hkc3l5bVm7rO7sEvakloYGmTiq1ZRsltKQYAMnTs19xLVJ+C3VREOBRehHdjlO8qCfHdAVlVdU+AlbQBhQCSXur/kN5e6zlhD0fr0ljJz0MHzMCxJGcRPbTb3CbL5pHsNS5GGXLYbKkdtPkHterpdV3auMZsueE0bIZ/bZx3d5rYop4dnEu6YKon6TA/WwsimSh3JS77Y8ue31UTLh9RLkYsh5qMUyTeA8D2Wg5UKlTtxa/khsP4jcsB/ysK6nW5a0r5Vsru/91/WK5Fb0ytTNh5SsZmBCMuHj3CD8aChoGeiuu+5aVkPBzzX9QruRYqi4ZlSlSr9hC0rFGHM/stpBPA8DBXZmvJ70NJlNxX/sxMtadeH6JQ/87MP7bCUzthst81auk82Ht8gkT1fcIF4eO8QBkWB87V4chcYziaZYcaNapaTFtDuYs3gmGH87hJR1OQ8OpVAqaRJDTZiCyY57PogLStl/KWLy/LK3pLGpXXJdzZLP15lmWLmGpfLa8tWy6xbvlGoyuSlFZEJzH5yEk2xiVaiqhHHAiAcpA42DPEAYCJHoyf+BBx6IzZtRLGxh5zdAlILoZWxK5KHQWkKOnRfjYDf1Yixt41jOJZ9WA+jn+U4YOci8oqJQiEPJGV64QiGOWukZUe17LNZynPGdv77luB4uCc+GUWjFfriwxQBGsaoIdc/jadhhhx0Cu6/8aD20ZVdKS3Nempu6jEBUJtO3MS1aN092lTH9HTkxZIQImOR63bglcovdKxsVxAXXH220g4xDVPJA0hPPgCoOqjmUkCXZRlvzG9ioi+U3DBTPQ5IGmmur+A8hLt2cIRLqki/VOyKtqFZYwCtUxXhqLw7tF6H/zj3G0dQpKaRRxMpuOT558mSzDzz66KNm3Nl/teW4eib473KfgT3EhS02of4WLDpcrnSCDKrS6EU58rDd5t3y6MuNsq5zkHR2cWqok2GD18ikUcP7XfUoNHo7QibVPttWb7RbWdviV0GvGYY8qOZ9oYZagZpjdfVKe1dWxg5tKupmtq/lzW8IU5KYZsOXls3ZdslDJrQ9tm7Mae7BsbJztcxa+6qsejkvB281TcYUEXNLGnZrdrtJGoce9gY8dOVCHNVCGsmDF42NjWa8OLAxL20JbfYlna86ZwtpobichwEO2xDDFHHPA8IDUXsylDPyH91nusx9bY60d+BlYDHlZdWaUXLx/y2W83ecZ1hroY6cSbXPBqofUYy4BL1m0PskTo6no1iIxM81SUT96+y3ZeZrK0zLc6pZdhw/VPafPEoO2bZPD2L+6k5preurhrGbadlejrDql7WEat2v1yXPRguRIDEZDwWAQGriZRoIxQ0vPCxX3Pu0dHbVy83zFsg1I16Wi4/YW/Ydv2e1b62/SRpfGUsa8mm+RLEQR7VQC+TBmzBZrOW43c5diQS/x56JPdFkzaTwi1/8Qn74wx+a/z788MPll7/8pdmzo6L6q60GwOTA86Auck7YhCnimNzlyMM2Q3aUXCc6D+r5wABl5MkFbfL4VqPlgkP2KGjEkvA86CLBFUomMsQpanfIoORBcztYgGhHFDIYfsIWdzy/RB54ZbkhDa8sbTdkYt7KTnl7xTr5z9OLTBVdV29OJJ+T8bm8bPPSS0Zoy08zrYFGHtIAu4Uz5A3hL4gDP0dTBZexqghCJuJUEfSLRe1L5LJ7npbunnrJS066c92yaEWz/Hzmo7LXidOM1HyaDJ5X/8DbEjtoFUcS95h2ZIvkZhRrOa4hpI9//OPmZ/o5kyrZvP766+Vzn/ucXHHFFXL++ecbT/kxxxxj1k
7UpPJ0zOaUwbvp8HCpIOABe13kUVHOyPdkc7Kuq0+Jjv/H7KzP5ZY/Pblass3Py+E7DZXdt9gwZBB3+2y9Jpg7d65xgxL3i7pBl2pkVUypkqoGFmSpNtqFCAnXeHFxmyEFj7zep5+xtK3bEAeQzeVkRUe3LFjYKZNGD5ahLQ1m/J9amZGtnl8gHzwiutBWOWLDvxHvxzjqxl0N6WdFWtzYhdYNxJE8H2AnXkLybJc9ZKISp+h/vjBLuns3NHg5ycn85U2yvHO5jGsNRzrjRqGESTtkZIc4bInnSoY4asHzkA+QN2InAjO+t912m9x6663y/e9/X26++eb+RPcjjzxSTjjhBOMVigM//vGP5ZxzzpH/+Z//MeThJz/5idxxxx3y+9//Xi666KJI13bkoQxwPbGI2Kh4uFGkTMOQh5GtTTKkpUHaurKWpEwf2ru75bczX5Zr5qyR4UPb5OtHHyDHbLdb/3VZ8HGBe2QTAZSjRpG5LhYKKbYI/ShVltNmWNHeLT+9+1VZvIbEU5E3lnfI6CFNfd4FC+3dfc8Cj8SgXpEFy1bJmu6M3LtssOy8pFv2j1iSXYo8cM98TogDrk8VBcLw2a2d0+CeTxtYl8xJXowj4TTGD9LPvLVP0RjBJAxTZ36N1EmdIQw26uqyMqJ5hKQFftRyNcTBy5Z4VjElNYbqho+746pfUb5qIrd+jwnjIWFczzrrLOMZ+OhHPyozZswwRv3OO+809iYO8gAB5KD3la98pf9nzHsIyqxZsyJfP91Pp8og6QX3HRsTpWVxEwc7JFIMLNwPTxshlz2wSPLr23Gv/xfJ53NSl2GhDZK2znb55m2zZL8JU2R4yxBzXVy5cYCNg1CBNtwiizgulCMPvDdlmPybX5npQuWVf39sQT9xAHgVqLMfOahR2rv6xr+pvs5Ifq9e1yMNkpM3Fq+RNb31RmNj5boeuXbWW7K8vVtO2HV87OSBZ8XnBFRw8BnUe0SYSIkEJ2xcyNoIqBJJbmkMs5S6J8YNgsDLPkUzhqxnxtTWlmBdxzGGe2+xjdw25DlZuXbD/KNDtx0tzfXpaWce9FRfSOJZQxw2OVMiEVb4K8o91hp5UODNJrSBF/cTn/iEecUFDlvMdW+Yle9pWxAVjjwUAANOKR4nQBggm07cyYeKUiWVkAo2u0l1y+SMvcfL355cJut6stJQl5HuXK9k6vqMXma9T6KzW+TmFx+XD+12cL9Ho7s3J/e/skxeWtxmVCkRpdl+7BDZcby/mLA2eeIkTEXFvffem1gXTC+0EyiCLby3382Ez9XVk5X/PrdEnlmwRoa1NMjDr68w5EAxenCTCVfgfVjb1WvGdMvhLaZZUee6Tule1y4d+Uapb8hIrjtrPEDgzueXytE7jZWmhnAbW6ExhxxAHDBkeHX4HQxeoY6BkAfmIxu36iKoV4JX0g2p0gS/Bt97ilbhH01k49/tpl5hT7z7jd9P3r3Hi/LfZ1bLslVDpLGpW/afPES+fuj7JE2IKhLlDXHg4VSvBEYpji6WtUAesus9xmHvU+di1KT7asGRhyLZ/CwAqhg46bK587MkUCxsQbiE0z4bG/dxeHOznDR9rVxy8wuyprNH3li5RnKm+kKkrqHdut47TW+47pX3vCYvL2mT+avXyfK2bqmvy8i2YwbLLlsOl08fNtkYQdMHY/4aWdbeJQdMGiUtTQ0bdOO0mzwl2QVTwf2wqeMeDaLYqeAer3t8mbzV9s7PXl/WIVuOaJEhze9M+bHDmuV775kqQ5rr5ekFa2TRqg7JrXxbesZm5PmezeW255eb8Rlel5fWxr6xpq8IBC4KebA/q3q37DyOUuPLKZmEXa8uAkRCdRGUbPDfac1ZqBYKyT1rbwP17OBZs5t6+R3DxrpG+fTuH5ODRj8uT7zyhBy777tlfGt4L1VSiFskCsJaLMShBFe9En5DHLVCHurWi0RF9TwkAao4GHtUcNlHFXwfRV5A4chDAeCeI7beP0gNDe
YhJ4FCYQv0I4h9k1UOs9dFRCnhladOk7/NXSB/fnyRrGzrlfrGdqlv6AtPDB7UI8dvv2f/dV9f2WOIA3F8iAPgtE2SID/HI7HX1iPkU399Sl5f1i7renKSy+dlaHO9dHX3mGbB240fLke0dknL22/LtpsNls6syF8fWySvrKSxT8b8bLtxg2WzIc2y6xaIomSkN5uTzp6cydUoBw2FqMG0G0ztu+++oUIkS9fl5cWl6zYIM40Z0iRL1nZtQB7oKYDnAWw/sl6efeolWZtvkX13nSonThwt+cyL8vaqdbJ0aUd/vgmem+GDGmMhS6oVsscee4Qq17JPgJSOQnDVK0FyKe8VNWkwTsGtOBHXPdmeHQgc4SNNvGQMgT2GfgzfxMETpbOlM5XEIWl5aj8hDv7Nzj8p5PavFfJQHzE8k2RjLNY7duyuu+4yJZo6rnz/qU99KvL1HXnwWW0Rd+WCfW11UWtSIsyQcAl5Fl6MGdIsnzx0kpx5wDg57z+/lqffwsPQJGOGd8i3jz5KhjQN6b/uso7efiEkG5okSCjjv88uMcQBrYPeXL6PXPTkDHFoaaqXpxa0yfOL2w0xuK+xXuYvycmwIStlcXveJCHe/txiExbYcdxQ2W7cENl9q+Fy5wtLTR7BuGHNctreW8mUzVrk2RXPmjbE2wzZQR54sV2efHuNbDlykOw5Ybi8tLpOul9aJrn8Mulc/IZMGTvE5DeEdcGv6TYZIRv8jDAFr8mbDTYhjd0njJBjpvZVzbw9f4FcftuzsizbKq2DB8kzM9+W+15ZKafstaVc9cAb66+Ql0FN9XL6PoXbq/uFCal0dZkeINoqPS63JYYNLw0vbT3sTRrUEEe5boE20kgeQBIGEE9joTGE5EH2eFblDF9S9xYXKilPXSjEofknpUIctVCqmY2BPGgfoqRAmeaZZ57Z73m48MILDWH5yEc+Uh3ygOjEj370IyOaQ+nilVdeWVJ0ArfgV7/6VfnnP/9pGCjuWUpGjjvuOKkFlEtqjHptFgoPlDCFJgaWS85sbWyVa066UDp6OmRd7zoZ1TJqo1LN8a0iT7aRCLjhRkZSIMCY3vjkQqFSEeLApoLnoU8Gu49kkE+Rz2cMURg1uEkWdeRkVW+3dGcz0k2PcENOsib5sLM3K7PfWCljh/adzkhQvOLuF2TM1rdLR265dHY1y1PP7S496zYzNe/rurMmD4O7ybzyqrTWZWWrka1yzKjNZK8ixGFVR49pU8577bHVcJkwamPDu9WwBqnPbEz29tpmkDSOekheXPmiPNs9XMYuO0xGrB4h9z4zT1bVDZPBLe+cKl9Z0i6L13bJD06cKlf/c75M2Wmc5DIN8tyitSZksXWB9/UDxhivEoYHj0NSGeV262GSsewGSqrWaJ+o42qTPZDgHcNChs9OvNSSWj/VDNVENRtjcSCg1J2X3dtEQxxaYguxTurAFhdyEQkOa5I5lFTYApx66qkmd4ySUIDIHWWiYbVqbDSEFZ246qqrjFsZElBKdIIBOuqoo8y//f3vfzelVMTRWXRphu2uZYNP0vMAcaB0xk8L60Ikgleh645uzssh242Re19aKkvWdpvyw+aGetlsSJO0NNbLYTtsJn959G1Zk+9dr7XAX76zqZhNEIZNrXr7YlmTzUtvvlXau+s2aAnO37V390qd5hisJw9g3pqF0rakXsaOEVmwaAtpax8s+XyXNOXrpCcHWckJ7GVQQ1bWNTZKvqFZHnxlhRwwZbRMHtPnzmvvaZcXVr4gK9Y0yE1zRbp7+57LLU8vllOmbymH77ChhwZPyFFTGuX+BVkVxTDy02/LjbJm6SLzfUd3h1z16FUyo3WGtG5+sDR3rtloDF9b1mHGb2RLndzy7FLpyWX63/f9e20hR+4YTO8Dss164FRLM61KbuC2WmNPtkcWr1os3Wu6+9tk64kaI1hpQaAwqKQ35MFXl8tP735dlrb3GMJ9+t5byEkH7Ngf24dMoNCIYayFpN
W0NMYq1NuEElvGFIMHQSN05MfTU4vNu9ra+jbMpHtbEKI444wzzLq+++67Y6uWawgrOqFuD0gEIhfFRCf4OZNh5syZ/YuKWH4tIamwBYuFagYWDGGKONigQhMmP7TPVrLPNiPkqbfXyGvL26U3m5ctRrTIUTuNNR6Cw7cdIX9+9G0TpsjjLjSeBha2GIJAmSI/bWzskl7pFTIhmqReMpl3pk5jfQZfheEddkWDGn6liWvbh0ou1yexjfgShp1Uh8z6MSZfghALeQnkZEAenln+jPzx+T9KT65HXnxlB+lcN0q2GbqNNNX3xe9vfGKh7LvNSCMzjUeCqonN67pl/22Gy1F7bS3PLVxryjKbWxfK1c/1EYee3h5Zs3qNIYVrRq+R3YYMk4de35g8EHYBjy7NyLqmrDTU931mQjvXz35b9p44QoYPKp9HYHfgxIhDoKt18nt08aNy61u3SltPmwxrGibHbX2c7DRoJ+lu65b2Ve39gkCa4JaUxy0OVKJE9bt3zpW/z24zZdL1mYzxlv30ntfN2jloyqj+2L4qCLLXsabJnZgzZ84GiZdpMNhpbsltl9hCaslB4RkUat8etoojTWGLtra2fgJVi2hIWnTi3//+t4nrom71r3/9y8TxP/ShD8mXv/zlogPPJLGrGzCuaW3JHRbaypsxZRHESRy8JaDbjh1iXl4Qx905M09O2HGEzJzfLcvaumVIQ53xBnC6r6/LybrerLS2dEhjQ9/nHz3ibentmiD57HDp7u0LbzRTkdDa2NdcynMgbG5skFHDV5j/5jp1dfTFQEpHpMe4d0Uy+Xc6EDau32DZnHtzvfLXl/5qiANoXzdYcrkeWdSxWLYeOsH8jFLU3898U55dsLb/PZ9au056pF7O3q5ZDl3vBXliaV/uAps6cWxz4hncKl35LkM+7n5xmSxa/Y4uBiGag6f0lUcuXZeR1vWHSUIxS9u6THjnS/98Tj524ETZe5u+Rk6FwLwhAZT3RL+B0FRSZb+FsLB9odw9/25Zsm6JDGkcIs+teK5fJnlRxyL53tzvyfjB46Ur2yVjW8bKMROPkWlDp0nb6jZTyoibnrnE80lTD4lK4Iu3/FfueKpRcvm+farXeOdI5MvIP55YaMhDIQVBTnYQRUgiXgkqajCCSsiqHSZKi+ehFFgjEG1Oy+VCHEGqONJEHjo6Osw+lPZnUQwNSYtO4M7DVXL66afLLbfcYk5g5513njndfPOb3yz4Nz/4wQ/kkksukbSELeL2PHAyIfZE+SMbCiWJcaPUPbMweV706iBn5aj14SY+LxUJDXV1srarR+55c448uuwOWbVmpHR2tcjQIWulPvOGTBq6VsbUnSJPzFstq9f1mjABG+m7dhknT81fI7c9u3h9p8pmOWr3ifLg8pnm+luMXyDLVo6Rzo7hkiGbOpORhvo6qRdOQngw6mREa6NMGDVIpm05XOa1vWk8F4qW5nXSsW6wOTUr6B9wx8svSU+uWwY1tMrollHGbfLwvA75qBV73m74dtLV3iVrO9aazV03ml1H7yrNjfXyhaO2lftfXiZvLl9nPDOHbT+mv1pkZItIp+SNeNTitX0EA08LVSUQl4mjW/vzPHK5vKnQGNRYL611vUa/gfeCQLMZVvKktHTdUvn5Mz+X7mx3P1lY27NWpgybYr6f1zbPELTXVr8m9XX15t8XrVsku43eTc7f9XxTykjfCMJqdg8JO85frU6MSYYtmHMvLl8gD7/Ks94wBJHN5433rrM7V9Y4s7558b0mXmqYiJwmNXrsAZXuG5F2g+W9x1IhDsiZ3yqOuO+xPqLnIU2dTIMi8SMEAwxz/M1vfmMGmtIRHjYJl8XIA54N8ioUTBJcg9VOmIyaCMVYsAETxyP7lY0FQpaEa1g9D957hr1z+uXn3kx/fm/csL4TESWMJw3ZTZ559N/S0rK4/3dwAu04dLKcMn1ywfc9alif4aU8FFLBNaeuGCoPLXhIOoZ1yLSGbnlzXo
O0NYyQMcNbZVhzg7z45nxpaBks24wdIdO2Gm7+Hj0Kr6TvluPnyytvbGfq6ZU4tDfOkZWr+jp6QirWdK+RcfVjpaOHviB5aWroE1x67snn5MDmA+WpIU9Jd77PmG4xeAtZ1rlMbnz1Rtln3D5y3C6F9ST2HVcnD67KyOp1fX+n4wPZwYY99tYqedfO4+TVpWvlsruflEVr2qU+Xy9b1efkI/tOkF2n7mDG4YVFa+WVVTmZsF4CO0l09HbI9a9cL8vWLTMeh7pM30aMF2d192rjfcjms/1enXqTtkqlyhqZ3z5fnlz2pEwfO91s4JyStYeEHeeH9OKF0FJHjGAl4/1xb7qMxd9e/Zs8sewJeXtZvXTldpe6ulbJ5TY0ELztIdsVl0j3rjn+G8LKa9KkSWa9a98IrbqpJCGrFc9DKcNshzjsZFY7xJF0o7RsxJyHJMs0U0cebNEJG6VEJzCQbCj2RNhpp536k8cK1Z5zUqu0C6oU1E0bhWlyYsNoM+HsNtJJ5VMUkn2GqBAq4Vmh2Fjus2C8T9v+NPnHK/8wZZZgl6G7GLd2KeBNGD7onUW106idZGLLRHMKbxjaIOd8aPcNnu8jj6yUCRO22kgMivffa+xeMnfJXPP98GFrZKftnpNJjcfJ+EGjZE3mCXmt62VZvmYX6erqIz1d2U5ZlVslowZ3yr0L7pTtB20vbzz7htlITj3kVDklc4q83fa2PLbkMXlw4YOyoH2B+bsHFzwoH9v5Y7LjqB1NKedj81YbMa5dthgmWw+rkwumbSW/eXSJqcKAFNlaD5CInmxWvnDTnbK0vUNy2az09mZlRfNgebpje5nYmTV9NRau7pQVK3tl1oq35LzDB8mUsfVy57w7pbG+UQ7Z4hCTx6EyxlE2uhdXvSh/fPGPxqOwpmeNNGQaZOLQiTK8abis6FphvA1KwOjD0FT3zho0uSus6XUbrnGF9/SnAksQCapIwgospQG3vnmrPL60TyJ8xOBeydS3ST4zxHhlsusJRCaTl/0njZTTphfv7VLukMF+QviWl+2OV7EvOwSSRGvstJdBmqqvgN6RUlUc2ijNFqqKI2yUjRi2UPJQS2skNHmwRSdOPPFEX6ITqCNed911G0wG2Dakopr94svBfqA6QcJOFpVZJrwDcbKvkRR5sO+ZcVfFxqlTpwZqajV93HTZdcyuxuCOah4li17va1cdBCpxzfsWamVeSrXyg9t/UDYfvLk8vexpY1gPmHqA7Damr/nX1c/+VzLdIpMmvC4vv76dZLN0weyR5Zk3ZeSwF+WGF3qlo61DztjuDNl60lT5z7NLZVV7t2wzpkVmLX7YuKfJ7xjc2Efkbn7jZtmsabL86I6X5I1l62R1Z4/xgEwfnpdP79oon5oxWS6/85UNPj9lmyRO/vfVJ2RpW8d6DxUJpo3SIz1y50uvSU9vxhCHPmSkqzcvP7p7jqwa9jPpyncYYvaD/A9kaNNQY7wRFzp080PlPZPe039vfoE34W+v/M2EKvhbyENvvteEKMYOGivjBo2Tsa1jpTfbazwSJE3ibVCot2frIVuXFYmyY86IVHkFllSkSrUl4lzvSYQtHl/WRxzA4BaRzTabJ4sXN0pddpRke5uEaXvugRPk7P13Kntvfg1CIXe8Jl7SXVVVQ+3W2FG9BmkPW+izDXuPpUIcqtcRR9goGxN5qFU0hBWdmD59utF2oFTTFp2gJAQjQd4C+OQnPyk///nP5YILLpBPf/rTxm1PzelnPvMZqRWoBCmGIcgGaGfZFzPaSXse2NAx3LibSdgLU1OM0Z4yvC9OvqRuie/7tT9/KZnpUuQB9/oRE44wLy8mD5sszy5/VoYMbpdpU5+S1WuGmDyJLYbmpC4rsnZtToYNHyYPrHlGbr51rBHCAv99ca282T1EWsc8I709Q6SpebFMHLa5ifnf9PQieW5Bm5HqVty8KisHzFsjR+8+Sc4+YKL8+6lFsnRtl2wzulU+sNeWMmxQoyxqm2
QoTdAkQsYK46idCBkvxq3QmOtnYqMLapDSCs3j0ZNsoc/kbSuuPUrU21VtklrsM3FvaFOEMUaZRU9L8z/P6FOWVDQN7uukmclI1/uuLdjXopAiaNBKl2KfCeLA3oCRrUXiUOgzUZmm4ma2ZLZqSgTZN5inF1xwgSENeBwIgQwErHEtudMFJikLnQnnnaCa8YsRwfAFzV0IU3FBbgNeDkiDhkaCVlSo0AsxQ06T6q4n5KKnbF5+kpeiQLUuqqF3EDaJ0Cu2pJu9bmDV1HBIclPCyJb7THZbcU682qOEcB5zNk1txVm7fKaoEtqND/9MMtmePoKgB5/udpH2pZLbfLeiDbFswSXWoV3pQtkrnio7vOGHgGJkyQfgOZFPMBDmnnpR7LmnJJXxYs/Cg2NrSpQ6APG3X/ziF+Wuu+4aUMQhKbiEyQiwExtt8kBmLrEyFn7QUsIw5AEC8/LLL5uNGJexJvZEPdXigtaSKvuUTWIc/6YeibgrN7Q7IUYmjgY2lQDPS4kVz0ObUukGhiHAKGmX1IEAW246SEkcc4WTNC/Wh93hknlczbbiSvA06S7Ke2fWzu8jCPXNRtehH91tkt35/aFIKgZOK11ItvXjwdEKBA4UYfejNBIHyJB6WPUz2SQV74qGGdmTSVDl0KPEy55bjOvFF18s//nPfwxxiCPhcqDDJUwGBAZdyzMxEuQmHHLIIWbD07I7XoitaL+IMHjooYfMoiinyKfNtDgBauJd0oqRespWrwTvp4YzSuUG4wcxIbuZ8bOFs2oVOieUcGEo2bSUeFX7lB0WbMZUVsQtN21LQOvc8paBJk2GMLBxnDqbbv601L9+b983JE/ihYA7HPE9yU47NdbYPy/IqTY809JGJQ58noFCWslFgTiwhoKEXyBaqinBC/JFFcWxxx5ryr9pcEW4gkPLQMMaF7ZIF5i0Wq7JCyPOiZNchahCIn40JDShi3ug1I+vlZCatk/ZdiybumzGgc2efwuiYa9liyzugSIqZJeXog0AsfKestPSPCgItP00BC9uuWlvyazXg2OfsuNMnlV1TwxHXGSo+8AvSMuCxyXTtVqkrsG8slvtFwtx8HpwIAYqeoZhhLAS3sBgcoiJQywpTcSBORA0b4PxYCx4MbdYb+xZX//61/v37VtuucXsoXF1Sh7IcJ6HCJ4HgIuLxDcUCpl0nP7j2NRIAiLmWsxtry1m2USVKVe7R4VduWFketetMyEUPWUXS4pjgyPMw1cqKgZC2aKt4VCsvFRPQppwmXbtDVtumufkt3Iorvcu1lbcr4BQOS8K6ziKt7Ag1q00bbnpa5GbeLBkd3xP0VyHOAGh52DBvINUsG9FVW6sNrTElPsPm8TqnVOXX365/PSnP5U//vGPxjuoYQvIlipvDgSsScDz4MhDCKPApq9govEzNvw4s+c1e73QiYGcADY7Ep84JWlSZNpOrdpHolTlhrYi53tOspXqtpckwmg4FNLesIWpqj0uttw0PR2q7RnythWHsNs5OH7XoTbtYu7xtwMBerDQ8Itqleh4EcpIU0+XIKJWcWlTMCY/+9nP5Ec/+pHR1MHbqWB8Zs2aJUcddZRUGv/7v/8rX/nKV0zFx09+8pPYruvIQ8rIw4IFC4yRwIBTURGn8Sb2Cuu1BU/4/Hg4yAng/WDgII3EwQt112McMZJ8Nhgwpz5Oe3GcJNIAjBobNy5SNBzCGH1btZHx0o6NutlX2jOjWiUYJYhD2vI07EoXXn7bipNgzHrCizJQ1AO1VBu3e7EW1LoWCW9AvJhPOl5xNaZKO3G46qqr5Dvf+Y5pckWbgDRg9uzZcsopp5h9EYFARx4GIHmg8oBTGLFsFh7xxriFRPAs4Fokrgc0pwK3E5sdJ/VqhynCAvKluQDcv1Zu8KrVnhtJajhAHtQjod0t9ZSddMks8w4iCymCOFRbpCtIW3FehYiXHX7hM/G8BgIgAijeBsnbsIkXZMLbp6TaAmbMO4iDds+Ngzj8/v
e/l6997WumrfZBBx0kaUDb+i6tv/zlL+W73/2u2eMdeRhg5AHiwGTGUPCwcXmy0OKuCSZeDmDaqt7HwqEUs1KJkUmAZ4jnhIQu9Z54Kzcq0ZAqbnAqZ+Nm006yjp75p4aRccMY6njFTbwgeTrv2MyqbUjiaivOnGKzJol1oHRIZC5A8vDghT3I2B4vlWKvpv6GdvxUhc84iMOf/vQno+Vw0003yYwZMyQtOPPMM81+d8UVV5j7qgXyUPsB5gqDCc3ms//++5vNNIn22YDrYih42OQ/aJIQqFXiwKmGkig2JmKMOontyg2MMEQCN3nYyo1Kg/vFU1TKVRwX8EZBUHjZJbMQF2ATryhljRBW5h0Gg3yAtKl7hmkrztpV7x3A2JZqtBQZvV1SN/9RyXSultzYXSQ/MpmKB60U4aBBknVcjanIR1JxKlt/gzVZTEE1LrD3xU0crr/+evnCF74gN954Y6qIw1//+lez1ghb1BJcwmSIScjEVrBoOf3FXdqDW19rtwmLYJTSmhgZNIlQwy5B4v5+KzcqDUIvbKx4UeIuWwwCu6yRF+NsJ1wGGS8/ctO1BjtvA48D42EnqPLvdhloZC9Lx3Jpuu97/d01Qe+O75XszidLnOD+2YOYf0k2b7L1NyAUfvNKonh3VZY+jv0ODYdPfvKT8n//939y3HHHSVowb948c5C64447jFcZ1IrnwZGHiOSB8AKTW70Ccb0HrJvFyoRSsaRaJQ4adiFermGXsJUbWqanQku8Kt1zw9ZwQPOAhc5JLC2wxYMYM/7bb48SVVhMOvxSSUAMMLDkPxTK2yjkrldJ47A9XRoe+4PUv37Phu8jGel+1+Uig+MRP9MS0yT0NoLmlUQdL6/HAW8RxCEO4vrvf//btNa+7rrr5L3vfa+kCTfeeKOcdNJJG3j28Ciy7vjsHALi8Po58pCCnAfAA1WQOMmEZ6LHASYOGwLsHo8G4k86kWoRTFqMEZsKBCuOz1GockPd9WGbBoU5xfLeGKO0l7rhtdGNnnsu1iZb4+Z4ugaKqJAmfJK/wbPy41EoNF5Bhbwa7/iK1K2eJ7JuhWR6OiXfOEhk0Ejp2fd8yU3YP/LnomSWOZiGElPIg4Y3dLwKSUD7JQ4aKotjr0D0iXyCa6+9Vt7/fv+S4JXC2rVrTQ6YjY985CNmr/zyl78cm11x5CEl5IFJrvdC4h8TgLK8uMr8uDba6rj5IRC4I9ns7Y2+FqCxWIzRxIkTE7l3bfsMkeCr1vsnVbmh4RfmQFyCYJWE3SabrxhTNnm8QVQfkNE+EFoQ62clF0QTPsN4vEq1Fee0XexU2PDQ5dLw9PWS6V3X/7N802Dpet+fJD8mWogTnRc8nrZXMi1Q4TOdX7a8OGGwYs+Av4M4qN5LHMSBzsQf+tCH5Le//a2cdtppUiuYUSNhi3RmoNUQ4kqYtLtwai3zoYce2p8Qx7+x8NQwJp2wFBVaQ0+yU5KxWIwfSWK8vAmEjE+clRu2hgNxyrQmcPodL22yhLIeuQDMZTWQaU5Q9QPIHXkbGioL6/r1jpdKsZdtSjV4M8l0r32nIVZdo0jzMMm0L4lEHlTUisOK6rykCV4JaA1vkBeEx9AOb6heiRIHvo+LONx3331y+umnyy9+8Qv54Ac/GMMnc/DC5TxE9DwQ80Ysap999pGwwNhpd0JKPgslRupGr4qNcRvGuJUIOR1VMxfArtxg88JjwGbLeIVRbNQkQjY/TudpGe8osBuRYYz4TOqu1wRV3ehrycOilSKEsOKKm/vJK7FVG0fe/zWpf/l2kSz5UXnT1yI/ZLz0TP+4ZHc7PVJPkVoVtdKyWTwSrE2eD59DQ7R8rjieFU0FTz75ZCM9/bGPfSzVh6xKwYUtUhK2gCljnDT2yAZM6WZQqFiNntA1dlkuMdI2jLw4cSuRSLrzYCmokBUxUDaCtCgRqjyvjhf3pydGxqxcJYJqONBnZKC0NGZMOMGSdFdIblo3esbL7iPBeKU5x0
P7HyjJq9SzshueQfIPeuV/ZWjXAqnL1EmmLiMZTZg85keS2+6YUFn57BOEytKUnBtlD2Xu8Zkg9ho+0/BG2D3skUcekRNPPFG+//3vy3nnnTcg1moccOQhheSBzYJNOKhSGX9PRzdYN6c+3byDMm+7GRWLUUv09IRdKWEfdelzoufzpFlQKEjlRiU1HCoFnXu4lClbLJcd7+0joYqgQRPiKvFcIQ7cGyqL1bqv3p5uqb/+dKlf+bJketa9s67JeTj1b1I3elKg66mo2kAhDgDCoB2BCVXYvTeYb+r1Yg/zK8fOs3/Pe94j3/zmN01viLTMyzTAkYcUkgdioIQcgoiOEPbA0HJSJx7LyTcO4SftPAiJwOjx36qN4OeEHRYsfE7mtejS91ZuaCUCL77nZFRtDYckqg/CJnza9f62ImjUE2McmyPGKC3eocYHfih1i540eg/Z7g7pydfLysbN5fEJH5VR671efsJBeCbxbA4kGW3mEPsfcwcPpT1n7O6pHKxUFVSTLosljTOnjz/+eLnooouMgmS1n3/a4MhDCskDDwVlsCOOOMLX3xIbVeU0dasmpRipPRHU9ay1/rziaq6EIaH6gFwNqipqedHalRtsXsw3kj3xOKQ9QTWI3DTzLQ7vkDevRBMIK+310i6S5AylpcQ0s3aR/P/2zgNKqir7+qc6gRiZTzEPKupfR8U4oKAiYQB1nKAogoM5DSqOKEFBQMeAiIiigpjQUcA8I2EQBTGAKI46ZkQUFbOO2iCh0/vW7+JpH0VVd1X3q6r3qs5e6y26mw6vbt13777n7LNP6ZzLpejb98QrKhXv/+0qFe0ulpUbbZ9yW3GiDQiPIQ75YqMNeYXkJSIOiQDJ9Ve7EKlgrFTIypgRRTvyyCNdtIGeFVF/TjMBIw8hIQ8wZ62wgCW/8MIL0q1b/XlMHgAYMhstCx2vJ1tW0/EnbG2u1JgcNgIuUjaQoMbY4oYJ6uFAeJ73SM2DVKCqjnpRiq5kw27a3/ZZo15BGQfVBfWmCFtaqWTRRCl56VaJ1VQwOOJt9CtZ86e7RFrsWfs9hOf9GyNRCB0v5h/PF2mlXLc/DwqsmZA8QNQr3Tnor3ahB8QjjzziooJY3iOMHDNmjBGHJDDyEELywALwzDPPSNeuXZNuKNoMinIlyjC1dDFXjpHK5klvsGixoUAiuK9UTJa0NThlY5xg2STyAbyvaijkD+knOmH7dSVhL2nMhd20Gi0xZqqsD9rIS/Uoje3pEDhWfy9NJ3WRmKu0+AXVW7eWiuMfqLe7JSJs7W6J50bUy2b19ZHaZO1oCHGIB+Pz+OOPO8tpyBXjhqHeMccc466gbK3zBeVmTx0+8sDHmJF06tQpoaaASU5NOBs1G62GH8NycuX+/SZLvIa6TJZ43YQJmYwsAmFW3mfCQjtR5Ya/50bYShrDYDcN2dJQvX+OMV6kgxryLFAKrA6LYdOjFC+eIWWzB4hUrhGpqRKJFYkUl4nXdHNZc27y5kfMLQ4YvDYEnxrJSdRWPIrEgbWQNSMIIoQOpHv37q4kk4gDzyIttumWSTkra1Q25vr48ePdhTYFUDU3bNgwl0YJE4w8hIQ88DCw6QLu6cknn3SGTvGhWU74PDQsnhCHoISRmXxdfi8J7lWJBNEFXrN2byRfGZbmVI0FOhQ22IYIPuNLGrXWnzHLdamqhvQREOLwGaY5pmSC58ffOTWVE6n6HYTVKCn25ZvS9IFjRLx1uiiFt9l2subshXWWzjImpCr8c4c5pgQfMphtO/YgBLqsHWg3giAO6EAgDjS4uuWWWzZ4XiEp2TqcTZs2zc1Z0ma8h9hgX3/99W49gUiEBUYeQkgeAB3R8HngQY4vG+NkziRigkWpsZXmF5VI8DnjT4hQiVA+gNcIISJf3liVPiksTW2wQforN7K9yBP65vRFSD+sdtPaYEnHjOhPfZ1TtfoA8hrWdFnRF69J2SMnS2ztutbf6xCT6h3bSkXPhxKOA3l7CA
LOpXXpQ+IFhH5/hIy0FW8EWDN4toIkDhjyoS8j0nv77beH6vUqmMMQCJpxhQVmTx1S8FD4yQQPNQ8NoWI2JBaHKBEHwEPJqU7V85we2ADZIBGIRinnnwykkthgd999d1fi11iQsoCEcPkrN9C7sBHqaTHTlRvaJpz0S66bJtUFxoCx4OLkplEctDSk+vyOjZzEqT7gtXEyD3X1QU2VeNu0FvnfUomt+V4kViw1m24n3la/iCUVrA3aZK0+4gCYR5BBLg4xKiBkHvN5oG3FG0kc/A3JglgjIMSUY+KpM2HChNARh+rqann44YfdPG6IaWDUEM1VP8eIX/j9/S0IqRF+pCsa/u4gzKmKVJg+CzmvBzKktrxsiizmLFr1nRbD3HsjUx4O8T03NB3Eguqv3AjSG4H3xl/eFyVDIcYEcspFlQtRCCVfvE9sPmxIRFLCXn1Qs/U+4jXZTGTL3TGm/uXrcd00eb/UrAvikK6WgXkDSeDi+VQtDtEZfm82ql0SgfdJm8el2sm0PkCQEELy++66666c+YkkwptvvunIAnOW+YuQk/RnvsN6WzTw4YBR+73UEaOxQbDZcuLThTts7LghGxGvJ1luWd0auVi8WLCUSIRR2KWiNN6nXPTe0MoN1UkEVblRn910VKEbLESCSIQ28PKTrzA+Y7FvF0vJf+6UopVfiVdUItU7d1zX0wLxpM/lk2eGSErQQtsg2oo3BLwuNlP+Pq8rCOJAJJeIA9GpqVOnhs69tqKiwq2TkEDKR++8807XmCtMBMI0DyElDwsXLnRpC+6PDYmFIMrRBvU6YNFBHe3XcqRTnscipTn/XIsH422Zw1Ap4q/cYNz8jqDpVG7wuihZ5HdBHLJ5yswk9HUR6eJ1QUb9tf6MG8+dug/mOlS/AVivfvpGpGwTkbJmG7wu3m822ExH6xK1yc5E5CsTxIF15Pe//71LK7IxRyGy2aVLF5euRpMRFhh5CAl54D5gm4AyKiIPLNiEHnkwo0wcWGi0rEqJUEPA+GhEwi8exEsi3kkvW6+LUGq8h0OYEO8ImkrlBpunP0QchcU1FZDq4XWhsUn2uhKRr7BHvvwbbC7eLz/58ruCJmwrnubvDZoQ8QzQq4L7IxUQxvczERBzYgQ4adIkCQuMPISMPLApUpIDw+bBg21qVUUUwebF62FzD9KF0C8e1Pa7GpHIZAg1XQ+HMEGbUcWTL7+/v5YCq9VvFF5XKtCSYPUFSPUEy/zVMdOeCP5OoFl9LqsrXOpCSpqI96vdEHXUigh5b4M6mQfVVpwLIuYXqaYamcsEceC+/vznP7tDGeWQYY2mXXrppc7TAbLA+E2ePFmuu+46V77/u9/9TsICIw8hIg/oARASavc+6s75V0/XDTW/yRW07TQCP6oPMrXQqpOenhYhKEokMjFm6uGQTXfFoKHkS8PObDq8HkgFG2Qm7KZz3X+D15NK74P6ShqZZ37r52xUu8S+eU9KF94ssYqV7vOazXeUNQdfJP99/xNHjNIhRNmEX6TK3CLapUQiWfdU1kJNmRF5DYI4QAIxfwIYP6WaNs0FzjjjDJkzZ44z9mKMOJwMGjQoVMQBGHkICXkg5IggBv8Gf6253xeBe+Whg0iEVdTlL4FC45Dt/gAaQkXkp4ZBusAH0T9CPRzyoWmXn3wh9kT0yXipeJAxy2VXyyDAiRwbbU6ZQRIiv/UzF/D3KQl0zGqqpWxWf4mt/r72S7xPnxbtIJ/seGxgDouZhr97KoQi0ZipmJWNKSjRJ2vrCSec4IjMrFmz8kb4m2sYeQgJeYBl8mZwSk+UpuA+tRcCGyOLl39TDMsCz32q6Q6LdS49AfxjplUI/jFLd8Fl3DkRERkKwsMhbHbTvCYIkZos+Ss3QikeTGHTgDiQxoKUZ4ps++cZGyOExd8JtLEn59j3H0nZ3OG1n9d4npT/+KPUFJVJkz5TI0EcEpF85pmmhBgziCrzjQgPPSWCIA783l69ejnSP3v27LxpQR
4GGHkICXnAy7xfv34uTIeg509/+pM73SYL6/HGsZnx4PGwsUgRkeDfXBEJ7bnB6YLTUJgYvl8Ix7hxCtEFXtvxpuLhkGtClCm7acqCmW9++P03tKtlmHtuxIepcWPlPSa1lK0IEWOmpcb+nL+OWYMqhH76VspmXSwx8WqJg8RENt1mF6k6aqxEHTpmqnFgHYnXSTTk/WNd7NOnjzMIo1cQc9cQHIw8hIQ8cB9EH1AAP/bYY/Lcc8+5XBck4o9//GNSm+NEm6LW+PPgZetUopUHPLAQh7CrmP2bIh8n2xT9Hg68rnw6uajddKrtz7Vyg02RU2PYymYVvJ8QB14TabNcppbi29arNwJjpiLVVFCy4EYpev/fUln+tXOXLG2+vVS3PU+qd+smUQfPGIcONBEcnhiTZG3FU9UwsR6ddtppjvDPnTvXrYmGYGHkISTkwQ/uiYdHiQSTH7c3iASXCioT/RzMXSMSfMzJi4hEKqfrxoSHCXtrXjlqYVTuX4kEm6KeFFlwli5d6h4SSuDCtEEGaTfdkIU1UeVGQzbFTKVgiKLgKhkmTUq8SJXnRAlrfT0kYvNvlKrXH5LSqp+krElT8TbeUio7DJWaX7eTKEN7cEASEjlixjc907bidaUe0VacffbZ7jAzb9680HVIzReUW0vu8JGH+IeLE8u//vUvRyRomEVemmgEZUd1dWzU8Clkwn+65gqqFpzFGgEhvxNSE2YRZzqbIqdyxp3Xw0bEKTbMCu2GuHwSSQnCDTO+BTsktbHtsRsjZuX5CEvHz2RgE/RvimySfm2Jf1Os/OFzqXjkXCkuiq1Xhlyz2Q5S+btrJKrwd/1MpQeHNj3TMfO3FWf8dtppJ/dv37595aWXXnIC9FQiaoaGwchDyMlDPHh4qFGGSFD3SzMbiAQRCcrQki3UnK41IsGbzqKuEYmGphj4XeQpSakk02dEEerhAMFijNgQORmxuCn5yuXpurGnPBZeIimZIENsiqqoZ36AbFRu8B5x0oTA0i8lSlANk46ZboqMGVGwD1+eJXt88sAGc84rbSYVf5gghUAc6vLg4Pf06NHDRZpIcbBGYrIXr+ExBAsjDxEjD34QTZg5c6Y8+uij7l9OLCq2RK2cjEiwOWqYXi2f2SRZrFJ5iBkzTq6E9DPVBCqXY4pCXxsD6RhqmZnm/PV0nY0a/yCQC7vp+CoE9DD+KoSg0mhaBUNFhTaOizK0EyiviwW6rDgmh3x1n2xUVCUl1aucWRT21NUtD5Oqgy+QqIF5gYcN84JyzCDSgejFTjnlFPfs8sySBmIt5GB1xBFH5I1Laphg5CHC5CGehROJgEioCQod4yASdGdLduLTMD0LFWFfTjdqSpXooWYTgunz4BPpyCcBIWFkKg/q83DQ07VuitrRkjGrL3edC0B8/K2Mc7GQJqvc0KhEQys3ELISTcm3KhgIPqJPnmMI19rFT8vOb42V0po1UhQrklhpU6lq109q9j9VogQVIJMWJOIQBHHgebzsssucRuyZZ55xvjJoHZ544gl3XXXVVY5YZAPXXnutiwozJyHo7dq1c+6QRMTyDeWmecgP8hC/8FCaxCRGK8EJDyKBRqJ9+/ZJT3ycDDV0yuaICE4jEixibELqoU++PKz2ro2pPNA24el2tNRNMWz+G7ynpGDIoe+7776hEbPG2z5r5UY6FsZaPguJzacyPJ4viANEFE1TrHK1lLxyuxR99KxUrV0tFdU1srqm1JHWLw4cKFv8+jeRMPOCOPB+ESUg4hBEEzmevxEjRjgLZ4hD/CbN3+SZzNa87969u5x44oku8st6CakhKoZhXq6b5gUNIw95SB784LTJQ0VE4p///Kd7kOgoB5Ho0KFD0lOoKsOJSGi5FD+LPgLikE9hwI8//tilYBp7etXctYpUier4W2Nn22BJTZIgfry2sEVE4hueaWkep9H6tCWYkGFGRiQln6JfvGevvPJKbdqs+JMFUvLa3VL09TsSW1vuvscr3V
i8sk2kosmv5PMdj5EPYrus14wqk5VVjXk2eMbwXCDiEMRGyu+85pprXLtqKtJIW4UNzGnmMeLNww8/XPIJ5RZ5yG/y4AdM+Pnnn3dtaCESnP7oaU9esHPnzkmFk5ys9fTKIqVCwmw1ocp07pWTUNAeDonKZjNR7ZKKdiObJkmZqNyI7x/B6ZV0BcQhTEZkQRlb8XpdOfbacin7998kVlMtsf8tlVj58nUtuUuaiJQ0Fa+4TCqOuU2qt29TmxJis+JjxkojOWGIEEIcli9fHihxGD16tIwbN84RB0qOwwjmKl4jRGzRh+UTyo08FA558IMowoIFC1xEglwhBIGQG0Sia9eutbnIhx56yJ0MSXdQChWf79dadchEskY3YRYQZsvDIb41ti7uXEEbavntppOZi0UBWs6oc40NAzKh2g2Ia74AcglxQPCpxlZFnyyQ0kU/V1N8/5EU/fjJOvJQVOwEk17pRlLR9Tqp2bnjBtELLWdEx0TkSQkYH2d7PihxIFURRIUP8+Dmm2+W66+/3pWu83vDOn8RbbK2vvDCC5JvKDfyUJjkIX6SL1q0yEUkIBLk/+ngRg51+vTpjt2fdNJJKQkHo9ABlM0HASEEKhcpmHjXQRWpapvnIEoWE9lNRxm8V7xnECMIq1qy5yollAlHTMqued90c499+YaUzb9epLrSRR1iq/63rtKiqES8zXcQb+MWUvWb46R6r3XdIutrXc+zGh/JyfQzqp4iRByCIg4TJkxwIkiaXLVt21bCir/+9a/y73//2xGHfOqFozDyYORhA0LAQnb++ee73CsEolu3bi4icdRRRyWNLmg3S38HUN0Qw9QBlI1buywS6sy1yCyRSFXHLd1TIukXhFmp2k1HBcwtwr5Eb7RaJD5Mj7hQw/Rht0b3g9JZnjcqBOIrfIo+f1XKZv5NYmu+F6mpct01SVnU/KqVSJN1UZeKDkPE2zI1JX8it0Z/V8ugRYXoUtATERkIIr3EmnLXXXfJ5Zdf7krTiYaGFayfiNVpM4D/RD6i3CIPFnnwgwX6L3/5i6s8IOqA6I+IBJUblGh27NjRlX+ilYAUJLPJJlSn+X6tQNBW4rnasFmoCefHeziENd/PJqlEor6UkFYeNNRuOuwRB01VJIowxNuLk87Q03WYFe5KHLQ0eD2s/n5dM6zK1RJb+aXI2nKJVaySms22Fdls3Sm2ulUXqdrv5Ab9bb9bI+MGqdbS2SCaniFm5QqSOPzjH/+QgQMHuvJLvBvCCO7zggsucBFcykVJQeUryo08GHnwA094apSZ/JxG4sWFaCS4CI2jHiYiQRkoC3UyIqEtniET/rbY2ewAqh4O2BaHredBXadE3RQhOkok/F4SajdNrwpKFoOwmw4LmCvYTQPSS6mcjOMjOVq5wZwLk7iXZ4IIGHMRLVE8ij+YLSX/vX/dJ9WVItx3UYnUNN9JqnfpLF7zXcTbfMfA7sffCZRNoSGlswqiDcxJiEMQuhTm+NSpU+XCCy90Qu8uXbpIWIE1NmWjRB38ZaOQ/zAIV4OEkQcjD+uBEy8nhbpOHrphQSKISHB6wogKIsFFyLyuDqAakchWB9CGejiEBYlSQnpC5P3iypTddK4ACWBzJfqCP0VDSKbfFVQbUWUz318fcairB0fR0jlS+spEif3wscQqfxKPHtxNt5Cq3xwrVe3+ltH7U+M4f+mszrdUImAIJIMiDoB1Bv0A4m1Sp2FGsrG555575NRTo2XoVR+MPBh5aBTUqhoSwbVw4UJnkKI22eRyU+kAqn7+QXYA5W/oKShfwvkayWHcqJnXhkoQtviGSlGF6lI48QblT5GocsPfnTFbETAte0YYybORFGvLpem93SRWsXK9L1ftfaJUHjEk8zcaR8CUTPBe+HuV+N8bJQ5Bem+QojjjjDNkypQpbk0xhAehIQ+33nqrK73hlMhJA4V/mzZtEn7vHXfcIffdd58rtQOwXMxCkn1/XS/cyENwYCypv9dW4nhK8F
5CIohI1GX5HHQHUE2zMJ8IeedTWZ/fbnr33Xev3RTJ/SsBY2OMopGX310Rf4pMRAf8qTQuNfPKtMES7xNpGN6z+tT3VFeUTe+3rsqiarV4RaXibbK1eFu0zFkzLI2AKZFg/um4MYYIJIMkDogisZVmrT/uuOQVJYYCJg8PPvignHzyya4Eh9KbsWPHysMPP+wEeomaLlE2iNIW33CU1XiHs2ERmk41LG3kIbNgCrAwk6OESOByyWagRMKZ4CQhEsk6gKbaA4HTOMQSEgJxyLSHQ1jsppWAcZEeiloFgnodaHv3bOgTNAKm45apyg1O7xC+VLt+xlZ8LmWzB3OHIiu+WFemKZ4jEGt7PSZSmts5rSlISAQRMMgDGwgRsCC8S7DX7927tzso9urVK7D7NuQZeYAwEOq+5ZZbahkuIT1Uq4MH8wDVv1nwwPPzkJBUYOQhe2A6cGJBRET+koUB8yJtJU5pYX0dQCETTFImq0YkEgmQVGTH30RAGMXTd31202hScKur61SuRkH+zqk6bmEkUzyPvLZcG1vFj1sQHhzqvQF5TqeEtvSZK6Tok/lStOKL2q85b4fdj5aq9v0lDMD8iQgfz7BqJRg39DdKwNItOcbK+fjjj5fbbrtN+vTpExqRqyFk5IGTFIsZ5YBsJArCVUxCNpz6AANmohKtoG9DKjDykDvwvk6bNs1FJOgEyoahRILTdLJNkcVJT4hqrqQRCeYQCz+ncj4mV55rD4dM2E2zGCP8TGdB1d4RXITO1UuCsePjXC/OqgNIVnmQK/grNxg3yKrm+1Ot3ODn8ahgc027Xfjq76XJ1B5S9NPXIrEi8Tb6lXibbe8+Xtt9jMjGudXwEHEgOkx0j8Obf9wgTLx2/iVaqCmh+oSqGCqRorjxxhud1iHXc9MQYvJAjpwwHlbJKPYV1PPCQF966aWUymPYhEhbJAuXsfFw+V840Q3TPOQWED9ym0QkcGMjh4owisZduNIlW2h0YScioQu75q7rO5VHDWyuRFMSGQmlC7/jIP/yvOjJOheljBrOpx6+TgFhjhFfuQEx1ZN1sjbsfK/2NICoNQRl/zxDYlU/r1uxX/5GRZerAy3VbGgr9Po6mvpLjnleQTKhKms9Bwj0a6zpRhwKjzxkVe49cuRIVwOMIUddeTb6rF9xxRXZvDVDCiB60LNnT3dRcYHlLESCRYT/06qNgw8+eL2FhnQEpJMLAoHGAQLBAkXFh56sc+HlHySCtptGDEjonIuFXTdEohq6IWarlJG/y/tGJAVr5jADbQnziUsrN5hr3L86Nfo3RJ2TRMAS6bZSRc22B0rx8oXrfw3h5M9GUbkATqYQB6KE9bVCZyw06uAXqpLqIJqGUB4rfESkRBquvPJKIw4FjKylLeiqhsc5OXROqXXBIg/RAloHmt6Q2mAOEPrEjIqIBGJZFQree++9jmRALtiA/C6NLO78XFQ7gKrdNK2G0w55B1DK6HcFDZpI6Gtr7OaaaySq3CAVxMbI+9Zom/C15VL64k1S9N2SdX+v2VZSeUg/V3WRC/C+vfvuu444+E3kGjJuECwaXFGOSaUGaSsiDuwDkGVDuJHztIUKJimzpDxTFzJOWfiDJxNMjho1Sq6++mqXrmDjSBemeYgOIJhUaxCRoHqD6YU9NikPUh3Y1h555JEb/JyerFmk1CQoKh1A1W66sYt0Q6D24rohQsj8TagaqyVRkV2+eG/4x41NEF8RoqAQYFIaGpVoTAVCrPwzkaq14jXfeZ3bZA5A2TOEL8j3jVQzz+5pp53mUnIcFFzIy4AAAC8QSURBVGixTSSCaDEHhmyBPhTYBVDxA0migs9/oDWEtFSTSMPtt9/uSASlmriJERpjoaeCgvA0kwlQmjls2DBnA+pvjkKIOlWXPSMP0QQbGSmq/v37u4WHqJWWf3bq1CnpAh3fATTe7jksRIJHR1sYI0QLqma+MfcDSdOKFzZEyIzm+9P1RKDfARssuXK/yC4foAJC1Q
Ekq9zQCoSE8DzXirvos5dFSppK9c4dxdtqD8k1NA0DmQ2KOLC+QxywxCddoc8gazOHAspaGctsgb85f/585xt07LHHGnmIAnkAlFmqSRQThnCWtlulCQoq7EmTJrnP+RjnwHgMHz5cRowYkdLfM/IQTbAga+OuGTNmuAWbiASnBCZx9+7dHZkgj5qsJFHNblgQNUSvEYlk4rdsgPsgJBxmu2ntZqmeCGrmVV8zJX8PDl5bPpl2AV7XkiVLNqg8iBf4quVzMqFq8RuTpWTJrNqfw5a6qm1fqdkhd62nVfhJxIH3OQgwVhAHSjE5FIZN4Mz7YZGHiJCHbMPIQzTBYsMmRJ7UH86HELz88stOO8NDDzHo2rWrIxK0FE/W2U89KHRDJNWhizq/P1uLmradxrCIzTUKpk7x3SyTeXD43T451YWRFDUGassMcUilMVmiyg2nL9mimWwz72IpWvmFSHWFSNkm4m2yjdRs0VIqu46UXBKHILUpRJ4g+ZRkjhkzJnTEARh5qB9GHsLPcwxxYW8iBHV1qGMjxjdAW4mzuNOJDyJBY51kwslcdQBVu2n+ZQOKorFVvAeHmgRxMf5slhCHMBpUNQbaO6Whtsz+pmc1i2dL6w9vleKYJ7GiEokVl7jURU2LvaTi2HVR12yCKAmVPkESB+YCZB7NEtHmMBIHYOShfhh5MPKQ1+D9JVerRIITMNoINBIYiiXTOyTrAKp9I4JqQOXvHklYOB8aW0G61IODkzXjixEY1TBEgMKiLwniBA15CCQN43lS9ujJUrTsuXU21J4nNbES8YqbypodD5OaP9ySVVKpxKExHhWJvCEgDjx/6NvCShyAkYf6YeTByEPBgPcaQRsaCS5IxeGHH+6IBKpuogzJiITm+tkQCdc3RjTYELvpqIH0D5sPpIvKKfVFYKz8XhJRJRKkKdA5EE1JlhJLB8VvPyJlTw0RqVxFPOJnQ6iYVJVuLEta9pEPNmnjxkvHLpNpLQgfkbAgS4RJWaFxoDLu7rvvDr37q5GH+mHkwchDQUKrGiARRCTYxGm0BpHAmIr6/Lo6gGpEQkWD2ko81dOh2k1nswlUtqBpGAgEaRglV/EVL7xm9ZLIpVC1IfMGoW5g+o0fPpWmU4+TGDbUXrXrheVAymLzHWRtz4dlTfEmtWkhf+8I7bkR1PxRx88G2WknAfdMuhAhPB0yoxBdM/JQP4w8GHkoeDAHyMUqkcChkpJhiAQXtsnJFmdcMXVR52HSjox1dQDVXg6cyBtrNx3GlAWvjZNlfNdPPyASfi8JiEa8S2MY5wneG4TfMaVraKOs9VC5Spo8froUff4fkco1Ih5RB3avYpGSJlJ1wGlSefhl6/2I9o5g3PyVG4xfY/xLlDik28Crvt+JvgHfhilTpmSs3XkQgNDz/gJIL2LOjh07usNBEO6u+YZyq7YwwaRh/Q2CzQESwUWjHk5M6iWBC16yxTlRB1B1t9QwszZKCnsvh4aATQ2DHcSs6TQmY8xZiHTstEeJClXDsOFoxQj3R8QhEOJAumLJLCmZP3pd58yKlU774K6SMqnZYmdZ2+tRkabJhZiQLm1CxaWVG+pfkmo0h7QS/VOCtAqHHKIrYp7TtDDsQmD8YyAL8cCDSG0C0gUkWd8D5lA+HRTKjTwYeTDUbaGLqyVEgsWFcK52AOU0lWwxiK8+QFAHgeBz9A2ZtpvONiBOEAdeJ7nyhqYg/PoSLlJE6iXBlYsNSLUybM5BVozEvlvirKdj37wjsZVfS4yoQ/VakZpq8ZpsJhVHjpGaVl0aVLnBvUIsUnEGzQRxYGMh/UcUieenLg+QfIWfODB/SE/mE4EoN/JQOOQBO2+MlVgoWIQ5GcSD8P1f//pXZwdNXhXWjYlLFPKUmQQPPYss9rmkN+inQvRAO4AS6q2rAyhuehARFg7NV2tL7KiD1A3EgY2CcQhycYxPC2VLNBhv3MV7D3Goq0Q4HRS/ereULJ4uUvGTFK34XLyadSJJhs
6LFUnVvn+RqkMHNOq+NZqj1UK8P9qkSkkYZIM0ExsbLr5BAPIHuYZkTZs2LbAxiyoQY7Pu4mBJtYmmyqNOIsqNPBQOecCBk8UX6+O77rprA/LASYUQPadi3D7xd8ca/KyzznJtcg3r+0FgVEVEYvbs2S40q0SCkkslEpw+5syZ40Lv5FFZUNWymHwwC6umNqLYAZSNAuJAjhwylcn7ZwPUsWPTU7tnFQ1m4n2mlwN/C41DIGSlplpKFt4sJa/eIzFKMl1VhYeznngbNRdv4xZSs9VeUtnxcpGikkDfJx07SpBZBxg/1gKIA6W0QYBIEeZPzH82zHwgx40BnZwfeOABF7GkdwbpD9YJEPUoRLmRh8IhDwom8N/+9rcNyAPMmBwlOX+t7Z4wYYIMGjTILTxhz1nmCizGLJREJBhDNjMWCE4cGOGwub700ksbmAhpB1D1Q1DhG2MfBT8ECJQKP+vSgmQCavesJIxNSolEECQM0gdx4DUScQgqylH0wVNSuvBmKfph2fp/b9PtREqbSWX7S6RmhzYixaUZJWH4U1BqygYWT8IaOnaUHp9wwgkuZcdzEEQJa5QBMbvsssvc/MHtlvbjNPHj4NajR4/IRyDKM0AeCju+HWG8+OKLTujmN4UhzEYag14SnJwNG4JF8sQTT3QXJ69Zs2a5xm5Y8LJA8HXGj14t/rwzqSCiPFwqfGMzfOWVV2r9EMLaAZTTOOkvqkVatsx+e2iILGF2Ln8b9kWLFrn/082wIWMHccADhNM6EYcg8/XFn7/iSjA3QFGxVO98hNS0/KXRX6bA5s4BgYgDc09JGKZXkCQVXKYzdvzOk046yY0ZkbhCJw6AaA5rJ+SaeXrxxRe7iMzpp5/uyG/v3r3Xq1IqDYEwONcw8hBRYOQS7yann/N/hvrByY2mXJwyIFv9+vVz+ghOZCzMRCPIB9MN1q8jgVQw1lx+PwQtewxTB1A2agygEIwGFe5uDOJJmH/s4run1ifk1B4jaC0gDkFH27zSZiKlG4nXtLnE1nz/y9eLm0r1HuvC2ZkEp0T8RVq1alVb7YNIkquhY8dGSHoTEvLUU0/lvBNsGKApiUMOOaT2a0TnLrnkEjdfzz33XEd6GTfcb7Hdv+CCCwpSWOqHkYcsYvDgwa5FeV1A8IWS2pB5IKyDPHB6mzt3rguh0wWUBZbPSW2wYLCwUP+ORgKXS/8mxSKtwjYEiNoBlA0718ZK2po5SPfBIOEnWv7qA+6ZBV1P1VRwxFcf8P3qikmoORNpuupdOkvR56+KNG8p3upNJbZmhXhNN5fKo8aKt8WvMx5mhjgQLUrkW5Bo7CAERM38lRsIL5X4cmLmJI3QGm0P42pInopg3Pv37+/Gj9Qx0V6suhGzNylw4gCsq2YWoS1+6wKLhX8hTKZ5GDZsmBMBEo5WEMrk51l0LG1RPzhNoHPo27dv0s2H70E8Re07ZWyEfCESRCSoM0+WX4/vAMoC7zdWyjSRINRN1QipraBaM2cLiZqe+csYWewhDpA8elWkHUJe86MUvz9Dir5bIt6m20r17keLt1ni6oWi5S9L8eInJPbTt1Kz1Z5SvXdP8TbdJuO6HLQ3O+20k7saWrnBekPXWtYI7KbpZIvjJtVZQTXPKgQwB1kjGMuRI0fKwIEDJWowwaQJJjcQTFJloQvBxIkTZcCAAW7hMGYcPDjRzZ8/v7aVOA8kizJEgk6gyTwFdDNUm2wIiTbuyoRDI+K6JUuWuGqcqJ8utemZkjCEfowXp0GIQ8o+Dt9/IiVvPyix7z+Som/eXadlaLYlx07xSjaSis5XimwSTFOpIIgD2hRC540FkQgqCDiEMAfpV9GzZ08XRcuF/iWKePbZZ91B4corr5ShQ4dKFFFu1RaFU21BaJGwOtEFSjGff/559/Vdd93Vhde1VJP856hRo5zOoU+fPnLmmWdaqWYWQCSBqgwlEpzyUGlDJBCuJuujkMih0d9KvLEeHUSfyMkSeaLEL5/AnG
djZcyINiD4U4txxjBhFGjlV1L21KVS/PHzItUV66ykWUfKNnZRBy5QtWs3qd73pOy/KP+trlzpBLhBEQedp2h5iJ5hOY1IlfmKiRpRKQTD2Y5CoDFiTWPNwhZ93LhxzmI+22ZQdX3N/56cd955Ti9EuiKqKDfyUDjk4dRTT5V77713g68TcjziiCPcx5RwoRBmIUD8h0kUYbVCN4nKNlh8SBVpK3HKvohE4G5JkyFKozLdAdTfy4ETeb4p6InWIAxkHCFGRB8YL20nriVoqgNwEQlaZ8/oJ0XL5klsbfnPdtI1635haTPxypqJ12JvZ/hUvd2BUnXIhTknDggjEUgGNS8R/T355JNu3fCnQEipUWmBODibot4HH3zQ6YgoK6eiaezYsS4liKtjpkkM5FOjfPw9PuYwFv9/8WB+BdXqPFcw8lBA5MEQTWjpoBIJ0gedO3d2XhKkmeqqwPBbPafTAVQtmfm5IHs5hIk4QM5Y3Im2JVrkiUZoGSMRO8Zgh43WyC6vj5TiFcsltnbFL99cU7UubVG2sdRs3dqVXlbud3Ja9tKZMO/idBskcbj00kudTofDRVC/t7GAMPz2t791WiO9TwgT1QsIyrMByrHpg0Nql/mEMFrvJQrdYhsCIw9GHgwRAvMV0aISCfLPVGuQ2qAMVMV/dVk9c+ohD56sA6gaJCGoDdKSOSxALAlxIApDiDsVfQg/Q4lq+Yf/kV+/fr1sUvGVlFavkZj8HHX4uZmV12wr8bbcXWpa7CWV7S4SKc6+sRpeI0Qc8BZggw8iCsCcGDFihEyePNkRB8p0wwAErkSEeB54BhRETJm/2MlnGrfddpvceeedcsMNNzjxKClf0ntEZiCc+Uogyi1tYZEHQzShaQVtJU4Ivl27dm4RJSpBKWWyjYPwvEYk/B1AIR9ENiAapCryTSQLCeBEzuuCOKS9qHuelPz7Yol99KzrgllUtdrZS3vFZVK59X5S1PoEqdlmX/G2yk1ptBIHdEuEz4MgDswz7OnZINkQsVoOC0ipQZIWLFiwnqcC1QuIEtEQBY34dATiUcYZ0yct98UwizkG0SL1lY8EojwD5CG/RshgCClYsOgnQWiWRZJNH9LAKQz3QMSWhHLVhtgPogmI6Aj3HnbYYY5oQCSo/KD0V2v98wmcUtlYee0NIg4gFpPqQy8W2b2bFG2xvcQ221YqW7SWZftcLPO26ytzvtta3v6mxkUpsj1+2qCMPiNBEofRo0c7LwIMoMJEHHIBxkOJA1owrPvpX4EmCTCnSFtMnTrVfY4ZHM9TvhGHTMF8HgyBADEWAk4/6PCZrTxmlBe4zz77zEUjuCAECAKJSCC4ZFzjNxZO5NTus+Gx+bD5keenwkMbd0VZ94B+gY2V10NL9EAW8zU/iBQ3cY6ROu6EyjWio+Wz9bXEDoo4QIx4r+pqFZ8OeD0333yzq2KAOJDCChuymbbwRw+wmr777rtdyhBHUqpyqGJTsaSa82FRj5jz73//u+Qbyi1tYWmLsIJN7owzznBdPRUo/qO8iWUbbABoHCilg0gQRmXzhESw2BK54P8pyWVBxB1TNzkIhVYeQCRYpP2txHNtk50ucWDu4IyZjVNgspbYDal6qQ+koCAO/O4gicP48eNdKSGll4gSwwrujbJMyjN1k8fJ8fzzz8/IQQNdAw2v6FLM80N5KHojxowUop9A8H9hdGINAkYejDyEmjzghMllaDyY64RQOY2xyGEnzBizubEIYhKWTBwZ1Q6gbNoQBwRshNxzcZ+MO1qE+KoXJRKN0ZUoceD3kKoKijjQ+fHyyy+XmTNnutB7mEGpJpEGUiuQCEo1aUyHsLix5ZD8Hkqk1RiNNCBOvLju8ne14oT3lRJqiKqmDfMd5RZ5sMhDWMHGxuLPCZiTBIKkiy66yDwnAgAbBOFWIg2cgiEEaCCISOAUiNlPshO6vwMokQl+XlMbYeoAysYKcWDhp0dImO4rXqyqVS/pVLbo6yOiQe+aoI
gDbaMRHBKGV/+XsINNXU2i0ByQbmlstASB6P333+9SNhopYsxJVTDukHAIg0bqIOZojqiAorQ6DE3jMgkjD0YeQosxY8Y4xT+LP2pqasxPO+0093VD48CpjBMVROGmm25yp+EZM2a4iIQ6BGpqg1x3XUSClAYRCYiENleCTHDaz9WGreJBtAZBbayZACdVJRKYLKWqMYFUE3EIkhixFiL0I9KHlwNeIoUO5hHpOgTJvCe4dBKFI8JBNAmLbkiKPh8QCEo2qU7Jd5Rb5MEiD1HpAopA6ZxzznEbXb6VEGYbRBxYAK+66qoNNh4WRVIYEAnC1pAATlSQCRbKZMI/7cSo/Tb4vem0ww6zeDAbUI0JY8cmRBRCx8+fGsoEcQCE22nWhDsj/VUKFWg9SAP16NHDfU7nS54XNEEcXoiIQrSoVCJydM8997gy0UwKYsMIIw9GHkLfBVRBOBCxH6fmQsgpZhIQhFSEp4RpsRxGbDlt2jSndUAcRsQCT4lktuUQCa08gEz422FnsgOo+hxQMYKOIyrEIR6cbnlOVGNC2FxJ2Pvvv+/+DVLDQYoCcTK9KiCKhQqqu3BtJU0KkaLbLaAcE5KAtoKLNQpAGljTMIoikldIJZnlFnmwyENUgBkLZU8spiyehuyXxT399NOOSJDvZeNioYVIkAdOVkGQqAOon0gEdWJTS+YgnRXDAE0N0e2WMWSDghwRWQkiokN0SfveHHfccVLoYA6h+SDyw3pDrw6AcyRaCsgDEQitqiD6g0YIcWUhodzIg5GHMIJQIXlG2tYSsuVzxJKEUxM19zJkP8ROV0VC3OTH+ZxTGhoJ3rNkaaX4DqAQEn8r8YY2YNO20/Q04FSYL8Qh3uCKZwH3SE1vQCwaQ8QQA+KGiDiQ/gyFDMaSeQMZw0kTO27mMY0CIcjgxhtvdJoGSptPP/10F93SeZ1vc64+GHkw8hBK0HuAsCEpCkRlCJV4YPv37296hxAuujQFImcOkWAjh+RBJAjlJqsg0A6gGpHQDqBqk52qFwKLGPMlyLbTYSUOpO10k0rUil1NqSAU9RExPD969uzp2lnzbBXa5pcMlKkyn5lT6K+IKhCJYKz8lR3dunWTIUOGuHlXiCi3yINFHgyGoIDWYeHChbVEghMyiyxiS/6lmiAZ/K3E0S74TZWSdQAlHcIiD2nwt4fOJ+JARAV9Sl3OmP5W7Fz1jR+bIykKPBE4QRtx+CUSQyqOEk3GG/Hteeed57Q+5557bm10hggEbcmnT59esKXj5UYejDwYDJkiEmx8EAkcLvH/R7UOkSAygb9BMrBoa0RCO4BqK3FNiSDIpBkY+gYEbvlKHCgVrMt3IxEgD5raYJHnIorH6ZmvE4bH6p2QvBGHX4CNNJVGlIYrGLdevXo5rQ7OknwM+LxQiQMw8mDkwWDIOLTboHYAxeK3U6dOjkiglYAcpNMBlBA+/TsoxUTnkG/Q7p+kfNIlDvGgtJMTNSXS//3vf93XMDcidx+W1tphAZGY++67z+l5iJIpQSDFQ0QCkSSpiuOPP14KHeXWVdNgSB3khwmPE8bE8+Dll1+24UsB2m2Qkx3ue6QaDj74YFdTj8ARfQSlcJyK6+oAeuihhzriQKdQCAkVCMuWLXMEI9+IA3OsscQBaHktc5exY6wZY343Fz0atCtkrkEvDUqAibbgL5JtMCchWMxF4I8s8LxzkX4zZAaFU+hqKCjgZY9gk8WWzY+2ziwknIgNqYMIAx4F9Agg7YB/B1EI3Po42RGJmDhxorMajicSRBs+//xz1+CqQ4cOrvKAMkY6h6K1+Oijj1zIPsrEgblFaqZ169aB+QYwxhCIAQMGuDQSuXqIGqdoRIH1ea9kM1XDqZ50Si4AeaAFOc85pZn4PpAeI2KGcR2lmptttllO7q0QYC25DXkJTh2cflFbA06+hMwvuOACaxMeACAKRBE0tUFUh8Wc1AYXeeh+/fo50yo2Vj/8HUDZCBEYRq0DKCFyiANVJh
DToIgDOXs0JrizYnYUhbGASGKTzcadbfBcQ14pDcdPg/cBMSrzT82hDGKaB9M8GFI9ERFK5dRG2FeBYQwLHKZJhuDAc0koHRLBRXUAX8P9kNA2qaNkmyCbsIoFtQOo9osIawdQJQ6EySEOQRlnLVmyxBEHzI7otxAVB8RckgfF4sWLa0vFSZcR5TJkVvNQuPJTQ96CTQg/g/gWv3zOAmMIFmzwRHUuvPBC151w0aJFzsyItATaCXL1GpGIt6FmA+bEyKUdQIlI4JVAyaJGJFjwwkAkIA6kbyAMQRIHxgqRHxUWUSIOYQEW+GaDn13YDDUYDIHpTDg10+3xjjvucFUDiCSpvUfjQCqJ1Aab4zvvvLNBJJGNGKJAmgONBJsBUSRO+c8//7w7XdLMK1cRSMgNxEEFpUERh08++cRVVBCpoaoil8SBZniQtLouI+AGYJEHQ94B5z4Wdk6wfvD5Nttsk7P7yndQIocFNhshYKPhvaCJE+ZGhLVp6oROgk2SqgxtJR5fqcD7h08EF3lthJakNlDX56IDqBIH/naQxAFBKeOFmHfcuHE5jzjQjZLeGXXBtAQGYIJJQ16CUy5trFmQARsQ5kTnn3++CSZDkoOligAiMWvWLEfqlEgccMABSTdR7QCqplTaAZSIBW2vM7H5Qhxef/1197e5t6CIAxUq3bt3d+WO2CxHtU10GDQPhrphmgeDIUVQvoVA8qCDDnIkAkMZygLpsGfIPdAw9O7d212o43EKhEiQ9yeaQAgfIsF7599UIQeQBC7K8bSVOCWMmegA6icO+++/f2AbPPfMa6UiiEZXUSQOpFuICPGvjhOghLcua3NDfsAiD4a8hTbF4YRHqJm6byIShvACAyn6EFC1MW3aNFc1g+cBRIITejKL4bo6gGoaK12wIZImgZQQcQjK3pjyVPwxcIycMmVKyk3FwgZtDR4PulweccQRObknQ2KYPbXZUxsMBQOsmufMmeOIBOW1EABO6/R6OOyww5Juuok6gKbTwRIQaYA44EkRJHFA8AkZojoFfUiyJmIGQ5Aw8mDkwWAoSLCJP/vss7UdQPkcIoFOomPHjnW2fvcTCe1gqY27EhEQJQ5ELyAOQUUGWMBJx0BkaD5W1z0bDEHCyIORB4Oh4EEaASMqJRKQAyoWIBJdunRx/TWSAfKgjbvoAIp2Qis3iAJoUzCiHgceeGBgxIF7JPWCgyYVJ3Xdo8EQNIw8GHkwGAxxugQ8JJRIYDJF2SMbNf+yWSeDdgAlKqGLK8SEdQYhY1DEAcJy3HHHObHnjBkz6rwngyETMPJg5MGQRxgxYoTrX+AHxkhmwtMwEDXAmRIiQVoADwUiERAJbJ/rsuVdtWqVS1VAKCAkEAm1yW5MlIDfd8IJJzjbZCpKsNw2GLINIw9GHgx5Rh7Y6J5++unaryHMIyduaBxUt6CNuz788EPp3LmzS21Q6UALabW7RtuAgyWVHaQqgKY2KEVkw9fURjpRAwhDr169XDkpFSQQEoMhFzDyYOTBkGfkgVC71scbMgPWDeywIWoQCTwhKCUkItG1a1fp27evaxjEBh9f/VBXB9C6vAwgJH369HHRD8gh3hUGQz6RB+ttYQj1ok8ImTw0QCTHCVJPlvkAOinSARDLX5pJYbhjCBZEGPbaay8ZPny4I2pvvfWWIw+4OuK1QNSBtAZRhvgDCroH3h8MovgZOoQifnzppZdc2+cPPvjACS/9PwfhwIyM95KW5EYcDPkIM4kyRAbU9qOOZ9EntI82AJfBqIIcOBsROgcaSKF/+Oyzz9zmZrnxzAJSSr+N+fPnO5dLmnjRDfSQQw5x5ZSkN7bffvuknTz9HUD5F40FhPbYY4+Vf/zjH/L22287sySiFAZDrmGRB0NBgEgD5kD45dOdEXDCI4d86KGHOuJAWBgF+4033hjZKASn3eOPP951kaQyYObMmS4//tBDD+X61vIazJezzjrLVWkQdbjyyitdFIGoFnOKnhu/+c1vpF
OnTnLTTTfJsmXL6u0AitcETqaQB4hE+/bt5f333w/V3OR10KRs5513diLQVq1auWgMz5LBkC4sbWEIHS688EKXLyY3PWbMGEciOM01bdrUhY8BuWlOd/SvyHUnwqCAiI8wOkTJkNlTGCRt7ty5su2227qvEWGgcRpzje6gpBxoL45eYd9993VRr9GjR7s0UyIigX6C38XvuPvuux1p4Gs77LCDa0lOuWauQaSO+7r99tvdswPxnjBhglx22WW5vjVDFOFFAD/++CNPa65vw5AFPPnkk15xcbE3ZcoUr7Ky0lu1apXXo0cPb6eddvJOOukkb/ny5e775s6d6/3000/u46qqKvdvdXW1V1NTE9n3acWKFV7z5s29m266Kde3YvgZzKevv/7amzhxotetWzevtLTU22effbyhQ4d6ixYt8lauXOnet3POOcfN0WXLltWOHfP36aef9gYNGhTaeTlq1Chv5513zvVtGLK0h/JvUMiPI5shL8CJbvz48S6cf+KJJ7qyRcKr5KQ//fRT2XvvvV0emjAr1sREJRCncfKjZwARCM1Rc8IKe3XOJZdc4iyXCScTNqdnA6+F8j5DOMB8wsaaNAcaFVITdGxFeEkKjdJO0hs08SJK0bJly9qfZf5SHjpy5Mik2olcA/U9OiKDIV0YeTCEBiywbKY9evRwn2sultwxi7LW4LOIQyK07wD/T7+C5557zpXiITr0EwmgzoFhwvLlyx1RQDCJkRCvgTw8m5UhfGA+sdHSTRKygFhy8ODBjlCgU0FDECWQHhs3bpycc845ub4VQwRh5MEQGmDKAxkg9w+05h5RG0Rhzz33dJ9Tr7/rrru6CyBwA7TcnjRpkiMaF1100Xq/m1Ng2E5/U6dOdT4ACEEhEnwetQ2okEHdPEQCfQQCyVwBAsPcruuKdy2FYHfv3t0JdomqGAzpwko1DaEBVr6I1IgSoFgnJUG5G8KzIUOGuIt0BKJChJKE/UlrcHJHrHbVVVe5UDKpD8o5EaqxMfMxbZD79etXS0zqA39HF16DIczAxAoDq7qAj4iScQgrnhUHH3ywI9v5Ijg2ZLdUM5gm9QZDAIAIUGXBSYjGRCxuRBl23HFHOeigg2pTFpCL/fbbz30/WggU8KNGjXLEAZCDHjhwoAvJQhi4UJTjnRAfkUgGW1ANUQFprlRTXUQcKCslBXjPPffYPDc0GEYeDKECBj2QhilTpjhdg9bKE20ARCSowdfPOTlRt64lnICfIWpBBKJt27buazgCTp48Wc4888wNDJj4/ZAFQrvYRaOb4HsQxtEHAa2ERSAMUQfEgYgDaT3KTolYKLbZZpuc3psherB4lSF0wJUPrweEk2zkRBUgCIBIBFbD1M8DhGrkbv1OfhAP6vJVYAk5QIyIjTCkIF44CXGgqyKixfvvv9/ZRPOz1Pwjwiw04sBrJs2DLTOvHULlB+M3bNgwlyoi+kPnSqI/hnADF01Ekhiw8fzw/ullMKQLIw+GUINGREQVIABcRBpwAeTriAwxu4E8YCAFIAE0OIJ4IJIEkIYZM2a4Jkgg3vUPrQVlnwjfqPaAuFBeh3gREWahOfBhaIQx0q233prw/yFzjAsGQ0R0eC9wyFyzZk3W79WQOhB3QvwSXQZDurC0hSESUA0C4kn/Jkd/AiyCFWz+iCSVKABOW9Tl62aIlwLQdAQdF6nYOPvss12EgpQHVR+kUK677roNOi3mO/DZ4EoExmzs2LEydOhQ1/8B3Hfffc6qmQgF/hwGgyH/YZEHQ2RBlcWdd965njHPo48+6lIciCx1s8OGmLSG6h8Umo6gERX5X9IW2vQIvPHGG85uWD83iHz00UfO14BUhQIVN2P74osv2hAZDAUCizwY8gqQiY8//rg2ZUGJEl/Tk7SKIxVUbtAQqUmTJrUVHZoC0VbNpC3I7RvEEQdApMEPPtf/MxgM+Q+LPBjyDv5IBCkKDHzUCMcvfoRIQDIQ++22227ua6Q8NOrAZtimTR
sjDgaDwRAHizwY8hqbbLKJ63Ko8JMHjUBAINBPcCH+AzfccIMz1oE8GGSDkj6smf0qfT7He8NgMBQGLPJgKHhgIoXZFGWg/EuJJtqJyy+/XHbaaaeCHx8/0JNAICj3U5AaourikEMOsbEyGAoEFnkwFDwoSyStgfskegcIA6WaVFsUIlauXOkqVPwiSapVaAqFgBRyhRU4qR7IBCQLTwh/JYzBYMhvWG8Lg8GHxYsXO+/3QjbOmTdvnrMwjgf9RPDZoIJl+PDhMnHiRPnhhx+cLfhtt91W6/ppMBjyv7eFkQeDwWAwGPIY5RkgD6Z5MBgMhhCD9BnpIlJqRMRoHkdnTIMhlzDyYDAYDCEGKSR6uJBSQ8i7dOlSZ79uMOQSlrYwGAyGCOGJJ55w4lQ8SbBRNxjqg6UtDAaDIcXunzSC4uv+iyZqUcb//vc/eeCBB6Rdu3ZGHAw5haUtDAZDXnb/BJCFL774ovaiXXsUMWjQIGdgRuM2ur/+61//yvUtGQocRh4MBkMkQd8R/Cb+/Oc/J/0eepZgaqVX8+bNJQwYPHjwBlGR+Ou9996r/f4BAwbIa6+9JrNnz3aW6yeffLK10jbkFGYSZTAY8tqzgo6qkIZOnTo5ssHpPde4+OKLXVqlLmCPrthyyy3dhZfGnnvu6brGLly40Fw9DTmDkQeDwZCXIGVx7LHHOhdMKhQuu+wyF62gdTin91xiq622cldDQEM3fxM3gyEXMPJgMBjyEieeeGLtx/vss4+0bt1aWrVq5aIRnTt3liiAniGLFi1yLp5ETyBB2IHzOqyXiCFymgcESvj/Y1rStm1befnll+v8/ocfflj22GMP9/08xDNnzmzo/RoMBkODQBqA0L+/b0fY0axZM3nssccc2fm///s/OeOMMxwJevbZZ52ew2CIDHl48MEHpX///s7b/tVXX3Vq527dusnXX3+d8PsXLFggvXr1cpMewQ/1yVxvvfVWEPdvMBgMKWH58uXy3XffRapvCYetuXPnuvtes2aNa1I2fvx42X777XN9a4YCR9omUUQafvvb38ott9xSm39DvHPBBRc4BXE8evbs6Uqqpk+fXvu1gw8+WPbbbz+ZMGFCWgYXad6qwWAokO6f+++/v4wZM8a5MdL9k+uKK66Q4447zlVZEO4fOHCgrFixQt588007tRsKCuW57m1RUVEh//nPf6RLly6//IKiIvc5IqRE4Ov+7wdEKpJ9vwqBeLF68YINBoPBj1deecWRBi5ARJSPhw0b5gSRb7zxhusLQYUCkc8DDzxQnn/+eSMOhoJC+c/7KAj0AO6lgc8++4y/7C1YsGC9rw8YMMBr06ZNwp8pLS31Jk+evN7Xbr31Vq9FixZJ/87w4cPd37HLxsDmgM0BmwM2B2wOSCBjsHTpUi8ohLLa4tJLL3WnCMUPP/wgLVu2dM5qhF4MmQdMlXTUp59+GliYy2BjHjbYPLcxLwT8+OOPrjMr6bygkBZ5QKlMOPCrr75a7+t8Tl4xEfh6Ot8PUBEnUhJDHGwjyy4YbxtzG/N8h81zG/NCQFFRcKbSaf2msrIylzecM2dO7dcQTPJ5sppjvu7/fvDUU09ZjbLBYDAYDBFF2mkL0gmnnHKKHHTQQdKmTRsZO3asq6Y47bTT3P/juU4Z0bXXXus+v/DCC6VDhw5yww03yNFHHy1Tp051QqeJEycG/2oMBoPBYDCEjzxQevnNN984RfOXX37pSi5nzZolW2+9tft/dAn+0AitYydPnixDhw519rC77baba5279957p/w3SWHgK2GmKNmDjXn2YWNuY14IsHmeH2Oets+DwWAwGAyGwoa15DYYDAaDwZAWjDwYDAaDwWBIC0YeDAaDwWAwpAUjDwaDwWAwGKJJHqzNd7jH/I477pDDDjtMmjdv7i76ldTXit3QuDH3gxLnWCzmOtIaMjvmONqed955rvsm6nR6Y8ycOdOGPYNjTsk/Lcc32mgj52x70UUXuS6ihvrx3H
PPyTHHHCPbbbedWyOoZqwP8+bNkwMOOMDN71133VUmTZokacMLAaZOneqVlZV5d999t/f22297Z511lrfFFlt4X331VcLvnz9/vldcXOyNGjXKe+edd7yhQ4e6Hhpvvvlm1u89qkh3zHv37u16krz22mveu+++65166qne5ptv7i1fvjzr914oY6746KOPvO2339477LDDvD/+8Y9Zu99CHPO1a9d6Bx10kHfUUUd5L7zwghv7efPmea+//nrW771QxvyBBx7wmjRp4v5lvJ988klv22239S666KKs33sUMXPmTG/IkCHeY4895vpXPP7443V+/4cffug1a9bM69+/v9s/x40b5/bTWbNmpfV3Q0EeaKp13nnn1X5eXV3tbbfddt61116b8PtPOOEE7+ijj17va23btvXOOeecjN9rviDdMY9HVVWVt+mmm3r33ntvBu8yv9CQMWec27Vr5915553eKaecYuQhw2M+fvx4b5dddvEqKirS/VOGBo4539upU6f1vsbG1r59exvTNJEKeRg4cKC31157rfe1nj17et26dUvrb+U8bZGtNt+Gxo15PFatWiWVlZWBNlrJZzR0zK+88kpp0aKFayltyPyYP/HEE846n7QFxneY2V1zzTVSXV1tw5+hMcdIkJ/R1MaHH37o0kRHHXWUjXkGENT+mfOumt9++617MNWhUsHn7733XsKfwdky0ffzdUNmxjwegwYNcjm2+EloCG7MX3jhBbnrrrvk9ddft2HN0pizcc2dO1dOOukkt4F98MEH0rdvX0eUcegzBD/mvXv3dj936KGHEgmXqqoqOffcc50jsSF4JNs/6TC7evVqpztJBTmPPBiih5EjRzoB3+OPP+4EUYbgsWLFCunTp48TqtLN1pAd0OiPSA+9d2gCiB3/kCFDZMKECfYWZAiI94ju3HbbbfLqq6/KY489JjNmzJC///3vNuYhRs4jD9lq821o3JgrRo8e7cjD008/La1bt7ZhzdCYL126VJYtW+ZU1P6NDZSUlMjixYulVatWNv4BjjmgwqK0tNT9nGLPPfd0pzVC8nQWNgQ75pdffrkjymeeeab7fJ999nHNFs8++2xH3IJsI22QpPsnbelTjTqAnL8r1uY7GmMORo0a5U4DNEKjq6ohc2O+xx57yJtvvulSFnr94Q9/kI4dO7qPKWczBD/P27dv71IVStTA+++/70iFEYfg57nqp+IJgpI3a70UPHgf/O8PeOqpp+pc+xPCC0lpD6U6kyZNcqUjZ599tivt+fLLL93/9+nTxxs8ePB6pZolJSXe6NGjXdng8OHDrVQzw2M+cuRIV371yCOPeF988UXttWLFimAmQQEg3TGPh1VbZH7MP/nkE1dFdP7553uLFy/2pk+f7rVo0cK76qqrGvDXCxPpjjnrN2M+ZcoUV0Y4e/Zsr1WrVq6qzlA/WIMpoediSx8zZoz7+OOPP3b/z1gz5vGlmgMGDHD7JyX4kS3VBNSa/vrXv3YbFKU+CxcurP2/Dh06uIXTj4ceesjbfffd3fdTdjJjxowc3HW0kc6Yt2zZ0k3M+IsH35CZMY+HkYfsjPmCBQtc6TcbIGWbV199tSuZNWRmzCsrK70RI0Y4wtC0aVNvxx139Pr27et9//33NuQp4Jlnnkm4NusY8y9jHv8z++23n3t/mOP33HOPly6sJbfBYDAYDIa0kHPNg8FgMBgMhmjByIPBYDAYDIa0YOTBYDAYDAZDWjDyYDAYDAaDIS0YeTAYDAaDwZAWjDwYDAaDwWBIC0YeDAaDwWAwpAUjDwaDwWAwGNKCkQeDwWAwGAxpwciDwWAwGAyGtGDkwWAwGAwGQ1ow8mAwGAwGg0HSwf8HyxHmZ6FOXmsAAAAASUVORK5CYII=", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "vis = EmbeddingVisualizer(random_state=42)\n", + "fig, ax = vis.plot(\n", + " test_embeddings,\n", + " labels=test_labels,\n", + " method=\"tsne\",\n", + " title=\"t-SNE\",\n", + " n_components=3,\n", + " show=True,\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "id": "c750ce78", + "metadata": {}, + "source": [ + "## 5. Confusion matrices in embedding space\n" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "016971b5", + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": "iVBORw0KGgoAAAANSUhEUgAABI0AAAHqCAYAAACJAb5xAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjksIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvJkbTWQAAAAlwSFlzAAAPYQAAD2EBqD+naQAAX0xJREFUeJzt3Qd4FNXXx/ETAgmkEkKH0HvvXbqgIkUsqAgIglIFFQQsFBH0RRQUVBAUBQRRqoCiSIl06b1I7y0hCaGm7Pucy3+XbCaBRBJI+X6eZyWZvTszuztmz/7m3jsuNpvNJgAAAAAAAEAMGWL+AgAAAAAAABAaAQAAAAAAIE70NAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERsD/HDt2TFxcXOT77793vCbDhg0zyxJC22n7pNSwYUNzS0+WLl0qlSpVksyZM5vXNCQkJEnXr++vrlffb9z9+AcAAHhQqAEfDmpA3AuhEVKlVq1aiYeHh1y5ciXeNu3btxc3NzcJCgqSlGzv3r0mbCLEEPNePffcc5IlSxb58ssvZfr06eLp6fmw36JU57fffkvyABMAACQNPUmTkNuqVavue1vXrl0zNUFSrCs5UQMmDWpAJAcXm81mS5Y1A8lo9uzZ8vzzz8sPP/wgHTt2jPMDMmfOnNK4cWP59ddfE7RODW0KFy4sU6dOlZdfftksi4yMNDft9XIv+uE+dOjQRH9ZnzNnjjz77LOycuVKS6+iW7dumX81/EovZ5gef/xxWbZsmTRt2jRZthEVFSURERHi7u6e4F5kqU3v3r1N6JaYP+/a9ubNm5IpUyZxdXVN1v0DACA9mzFjhtPv06ZNM7WPniyL6dFHH5VcuXLd17YuXbokOXLk+E816oNEDZg0qAGRHDImy1qBB9DTyNvbW2bOnBlnaLRw4UK5evWq6W10PzJmzGhuD0t6CYvsLly4YP7NmjVrsm1DAxFCkTs0FI2OjjbHWkLCUQAAcH9eeuklp983bNhgQqPYy9MTasAHjxoQCcXwNKRKOnypbdu2snz5cseHTEwaJmmopOFScHCw9O/fX8qXLy9eXl7i4+NjerPs2LHjntuJa04j7Y3xxhtvmLM29m2cOnXK8tjjx49Lz549pWTJkmZ//f39TY+imMPQdP4YXaYaNWpk6Y4c15xG+nxfeeUVc+ZJv+RXrFjR9LiKa2zymDFj5JtvvpGiRYuanjXVq1eXTZs2SULoXEL6PAsVKmQemz9/fhPQ6Rmr5NgXfZ6dOnUyP+t9+hh7jy/dB/vPMcX1+owfP17Kli1rhi/6+flJtWrVzPFwrzmNvvrqK/M43be8efNKr169LPMp6
bbKlStnhhTq+6XbyJcvn4wePTpBr6luV88A/fLLL1KmTBlzXNSuXVt27dpl7p80aZIUK1bMvJa6rdj7uHr1anO8FChQwOxnQECAeY+uX7/uaKOvk/Yysm/Pfov9XowbN87xXujziT2eXd9bPcZ1P2L2WDp06JAZMtiuXbsEPWcAAJB4ekJHP6u1NtG6QGut1157TS5fvuzUbvPmzdK8eXPJnj27qSu013yXLl0cn/v6Wa6GDx/uqAnu1eOIGpAakBoQMdHTCKmW9iLSgOLnn382X8TtNCT6448/5IUXXjAfnnv27JEFCxaYL9v6QXr+/Hnz5bxBgwbmy7IGBInRtWtX0634xRdflDp16siKFSukRYsWlnYaiKxbt84Mo9PART+4v/76a/MlXLergUP9+vXl9ddfly+++ELeeecdKV26tHms/d/YNBzQx+sXd33O+nw0gNCgQD/g+/bt69RewxKd90mLDC0SNNzQsO3IkSNmGFJ8wsPD5ZFHHpF9+/aZwqNKlSomLNKhfhqQaWGS1Pvy7rvvmoBNg6UPPvjArE9DjcSYPHmyeT2feeYZs/0bN27Izp07ZePGjeb9io8WT1pM6ZC4Hj16yIEDB8x7pe/h2rVrnV4rLdYee+wxs+86/5IOLxw4cKAJJTWMvBcNfvR11FBKffTRR/Lkk0/K22+/bYIrDRp1G/r66Guvx5edvr469FL3UUPIf/75x4Rk+p7ofUpf3zNnzsTZzd1Oh2Dqa/Pqq6+a0ChbtmymOI1Jh3fqa6D/3+g29HXVNvr+aliq+woAAJKHfp7riZzOnTubz+CjR4/KhAkTZNu2bY7aRE/wNGvWzHzBHzRokOmprfXmvHnzzDp0uX6Wa93w1FNPmdpFVahQId7tUgNSA1IDwkLnNAJSo8jISFuePHlstWvXdlo+ceJE7RZh++OPP8zvN27csEVFRTm1OXr0qM3d3d32wQcfOC3Tx02dOtWxbOjQoWaZ3fbt283vPXv2dFrfiy++aJZre7tr165Z9nn9+vWm3bRp0xzLfvnlF7Ns5cqVlvYNGjQwN7tx48aZtjNmzHAsu3XrlnkNvLy8bGFhYU7Pxd/f3xYcHOxou3DhQrN80aJFtrsZMmSIaTdv3jzLfdHR0cm2L/ra67JNmzY5bbNgwYK2Tp063fP1ad26ta1s2bJ3fW72beh+qQsXLtjc3NxszZo1czpOJkyYYNp99913TtuL/f7dvHnTljt3btvTTz9tuxd9rB539m2rSZMmmeW6DvtrpgYPHuy0n/EdUx999JHNxcXFdvz4cceyXr16OR23dvb3wsfHxzzvuO6LefyrF154webh4WE7ePCg7ZNPPjFtFixYcM/nCgAAEib25/bq1avN7z/++KNTu6VLlzotnz9/fpx1U0wXL1601Kh3Qw1IDWhHDQg7hqch1dJ5abQXz/r1652G8WiPFu3C26RJE/O79qTIkCGDYxJkvTqDDlPTXi1bt25N9BUJlJ7xialfv36WttrLyU4nXtbt6tAjPQuU2O3G3H7u3LlNLyo7PdOk+6NnhgIDA53a6xAiHaJlp72HlPbuuZu5c+eaoWZ6Vio2+1CnB7UviaGvrfa6SegQPPXXX3+ZCcf1PbQfJ6pbt25mKOOSJUuc2uuxE3POAZ0LqEaNGgl+Hnpc6nA7u5o1a5p/n376adODJ/bymOuNeUzpnF3a+0t7u2kepWceE0q3Ze+ufi96VtPX19f03nr//felQ4cO0rp16wRvCwAAJI72HtbPXp0IWz/r7beqVauaOkQvnhJzDsjFixebWjMpUANSA9pRA8KO0Aipmn2ia/ucNRoY6PAfDZPskx3rkJqxY8dK8eLFTYCkQ6v0C7MOWwoNDU3U9nSeIg0WYg+b0gAqNh2+NWTIEDPvTMzt6tCtxG435vb1ecQMN2IOZ9P7Y9K5b2Kyhzaxx8PHdvjwYTN3T0rYl8TQYWJaTGmIo/umQ8C0C/fd2Pcz9nuoYVCRIkUsz0OHG
sae50qfS0KfR+zXQYtCpcdJXMtjrvfEiRNmeJgOJ9PnqceTDrNUiTmmdOhfQum2dPik/v+i+6Q/AwCA5PPvv/+az3UdJqSf9TFvemLOPp+n1gB6IkiH2GudqSd1dAi6zr/5X1EDUgPaUQPCjjmNkKrpGZdSpUrJrFmzzJxA+q/2uoh51bRRo0aZHhI6P8yIESPMH0ANOrRnSex5XJJSnz59zAe3bkcnO9Yv3Bo2aKCVnNuNKb6rhCXmUuwpYV9ihzR22nMs5no1sNL5iPSMm166Vc+W6dw7Gt5pQZUSXtP4Hn+v9epz1TOOOmeXhmN63OuE1KdPnzZBUmKOqZg9lhJC5wizB1gazCbn1e0AAEjv9DNdA6Mff/wxzvvtvYW1PtK5FfXqa4sWLTKf11rvfvrpp2aZnmBKCagB7/46UAMipSM0QqqnAZGGQtoTQnscaQ8TvfqWnX6Y6pWuvv32W6fHaY8fPSuTGAULFjQf5HoWJmbPFA0qYtPt6tXA9IPbTicfjn1FrvgCkfi2r89T9yFmD5/9+/c77k8K2pNq9+7dKWJf7D15Yr9uSnsBaW+gmOxX9tKbDjvTSR9HjhwpgwcPjvOS8vb91Pcw5rr0sTrppE6OnRLoFdYOHjxoJn/Xq9jZ6YTXsSXmmLoXDd+mTJliJurW4lWPaZ1YPGNGPj4AAEgOWofp8Pm6desm6ERPrVq1zE3rHa2FtTb+6aefzMVbElsTUANSA9pRA8KO4WlI9ey9irQ3yfbt2516GdnT+9i9QHSsuPbQSCz71bFiD9HRS6LGFtd29SpU2mMkdsih4gpFYnviiSfk3LlzMnv2bMeyyMhIs149m2QfqnS/tKvzjh07ZP78+Zb77M/pQe2LvYDRM2Ya5Nhpb6KTJ086tdN5o2IPMdNL2+s+xzfWX0Mhbafvacz3S0NG7Roe15XxHgb7WaiY+6g/f/7555a2iTmm7kYfrwWnDvfTHnsaHul8XPozAABIHnp1Vq0XtYd8bFpr2T/ftQdw7FqzUqVK5l/7EDW9Wm9iagJqQGpA+/FCDQg7ThUj1dP5WXQy4IULF5rfY4dGejlzvYS7XrJU22mPDe0xEbuHSkLoB7FO/KxDnjRQ0PUtX77cXHY+Nt2uXvJch6VpcKETdutZI71Ueux1aiDwf//3f2adOv9R48aNTbfk2PQS6ZMmTTLDkbZs2WImVNYeTTpvjwZXMSdSvh8DBgww69XLrWs3Zx0GqMOi9FLxEydONJNkP6h9UfqhpevWS91rIaU9vWbMmGGZW0ovO6uTc+uZOZ0Mfd++fWYSPw1+4tsf7eKtvZB0+Jquv1WrVqbXkb7H2mMt5qTXD5MOR9Pn279/fxN46iTdOvwurrmU9P1SOil58+bNHZPGJ1bfvn1NEKfHra5DXx99Lz788EMzb4IeBwAAIGnpibfXXntNPvroI3NCVOsbvdiIznWkJz71hJFeoEJ7H2u9ohcu0RrhypUrMnnyZFMj6Mk9pT2VtA7Vk3wlSpQw0zTovJXxzV1JDUgNSA0IC8d11IBU7MsvvzSXE61Ro4blvhs3btjeeustW548eWxZsmSx1a1b17Z+/XrL5drjuuS4Xp409v8m169ft73++uvmEvKenp62li1b2k6ePGm5nOnly5dtnTt3tmXPnt1cgr558+a2/fv3x3n5+MmTJ9uKFClic3V1NetZuXKlWR57H9X58+cd69VLxZcvX95ymXT7c9FLpMeW0MuuBgUF2Xr37m3Lly+f2U7+/PnNfl+6dCnZ9kUfG9+lYz/99FOzL3rJen0PN2/ebHl99PL19evXN++NtitatKhtwIABttDQUMs2Yl7KXk2YMMFWqlQpW6ZMmWy5cuWy9ejRw7yHMem2ypYta9k3fV30fb0X3a5eVjchr48eA7r8l19+cSzbu3evrWnTpuZ40te8W7duth07dliO28jISFufPn1sOXLksLm4uDiO4bu9F7GP/4ULF5rf9XWPKSwszDzXihUr2
m7dunXP5wwAAO5Oa4O4vpZ98803tqpVq5r61dvb29RZb7/9tu3MmTPm/q1bt5rLohcoUMDUPTlz5rQ9+eSTpkaKad26dWY9WqslpA6kBqQGpAZETC76H2uUBAAAAAAAgPSMOY0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALDIaF2E+xUdHS1nzpwRb29vcXFx4QUFAPwnNptNrly5Innz5pUMGTjPA6R31JgAgAddYxIaJQMNjAICApJj1QCAdOjkyZOSP3/+h70bAB4yakwAwIOuMQmNkoH2MFKFXpsmGdw9kmMTSMXGd6z6sHcBKVS1wtke9i4ghbkSFibFCgc4PlcApG/2vwU5X/pGMrhRY8LZT/0a8JIgTqXz+fDK4D/XmIRGycA+JE0DI1d3z+TYBFIxTy/+aCNuPj4cG4gbQ50BxPxboIERoRFi8/KmjgA1JpK+xmSCBAAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWGSWFO3bsmBQuXFi2bdsmlSpVemjrwB3tahWQlx8pLNm93OTguSvy0aJ9svtUaJwv0bdda0j1Itksy//ef0F6T9tqfu7RpJg8ViG35PbNLBFRNtl7OlTG//mv7IpnnUiZfv1jo8xZtFYuh4ZLkQK5pGfnFlKyWP442/6+fLP89fd2OX7qgvm9WOG80vn5pk7tH3t+SJyPfaV9M3m2Zb1kehZIDpN/DpTxM5bLhaAwKVc8n/zfgGelatlC8bZf8NdWGTVxiZw4GyRFAnLIsD5tpFndso77bTabfDRpiUxbsE5Cw69LzQpF5NNB7aRogZy8gQASjBoz5XnpkcLSrUlxyeGTWfadDpXhc3bKzuOX42z74+v1pFbxHJblK/eck64T15ufPdxcZUDrsvJo+bzi5+kmJ4Ouyg+Bh2XW2mPJ/lyQtOb+tl5mLVgtwSHhUrRQbnmja0spUyIgzra//rlJlq7aKkdOnDe/lyyaT15r38zRPjIySr6ZuUw2bDkgZ84Hi6dHZqlWsZj06NBcsmfz4a1LZagzk1+K72kUEBAgZ8+elXLlyj3sXYGINC+fWwY8UUomLj8k7b5cJwfOXpGJnatJNk+3OF+fN37cJo1GrXDcnhq3RiKjouXP3bf/iKvjl67KqF/3SdvP10qnSRvlzOXrMrFLNfHzzMRrnkoErtslk6cvlZeeaSgTPuouRQrmlnc/miYhoeFxtt+595g0rFtB/u/9zjL2g26Sw99X3hk1TS4FhznazJw4wOn2Zvc24
uLiIvVqlHmAzwz3a96fW+S9cfNlYNfHZdX0gSY0errPl3Ix+Eqc7TfuOCJd3/teXmpdWwJnDJIWDSrKS/2/kb2HzjjafD7tL5k0O1A+G/y8LJvaXzyyuJl13rgZwRsGIMGoMVOWFlXyyTtPlZcvft8vrUavlP2nQ+X7nnXE3yvuGrPnlI1S853fHLfHRv5laszft512tHm3bXlpUDqXvDVtszQb+Zd8v+qwDHu2ojQpl/sBPjPcr+VrdsqEqb9J53ZN5NtPe0mxQnnkzQ+myuWQuOvMbXuOSNNHKsr4EV1l0sfdJVd2X3lz+FS5GHT7hLTWCwePnJFOzzWS7z7tLSMHtpcTpy/KwFHTebNSGerMByPFh0aurq6SO3duyZgx7k5ResY5MjLyge9XetWxXiGZu+mkLNx6Wo5cuCojFu6R67eipE3VfHG2D7seIUHhtxy32sX85UZEtCzbdc7R5rcdZ2Xj4SA5ffm6HL4QLp/8tl+8M2eSErm9H+Azw/2Yt2SdPNa4qjRrWEUK5s8pfbq2FHe3TPLHqtu9yWIb2OcZadmshhQtlEcC8uWQfq+1Nv8vb999xNEmW1Zvp9v6zfulYplCkieXtecaUq6vZq6Qjm3qSPtWtaVUkTwm6PHI7CYzfr19Fji2ST+tkia1S8vrHZpKycK55d0eT0rFUgEy+ZdAc78eJxNnrZT+XZrLEw0qmBDq6+Ed5dylUFkSuOMBPzsAqRk1ZsrSpVExmb3+mMzdeEIOnbsi783ebmrMZ2rH3TM19FqEXLpy03GrWyqnaf9bjNCoSmF/mbfxhGw8dElOB1+Tn9YdM2FUxYJ+D/CZ4X799OsaaflodWnRpKoUDsglA7q3lszubrJ4+ZY42w99o520fbyWFC+c19SlA3u2lWibTTbvPGzu9/LMLOOGdZEmdStIgXw5pFzJAvJmt1Zy4PBpOXcxhDcsFaHOTGehUXR0tIwePVqKFSsm7u7uUqBAARk5cqTpOqy9C7Zv327arVq1yvz++++/S9WqVU3bNWvWxPv4+OzevVsef/xx8fLykly5ckmHDh3k0qVLjvvnzJkj5cuXlyxZsoi/v780bdpUrl69KulZRlcXKZ3XRzYcCnIss9nEBD4VC2RN0DqeqpZflu48K9cjouLdxjPVA0zYpL2YkPJFREbKv0fPSuXyRR3LMmTIYH7fd/BUgtZx82aE6Srs7Zklzvv1TNI/2w5K80ZVk2y/kfxuRUTK9v0npWGNkk7HRoMaJWXTrqNxPuafXUelYfVSTssa1yotm3bdHkpw/HSQnA8Kk4Y17rTx9cpihrtt2slwAwBW1JgpXyZXFykXkFXWHbjoVGPq75ULJexk0XO1C8qSradMcGS39WiQNCmfR3L5Zja/1yqeXQrl9JLV+28Pj0fKFxERKQcPnzHDx2LWEtUqFJU9B04kaB03b0VIZFSU+Hh5xNsm/NoN8x3T2/P2sYKUjzozHc5pNHjwYJk8ebKMHTtW6tWrZ4ak7d+/P972gwYNkjFjxkiRIkXEz88vUY8PCQmRxo0bS9euXU3769evy8CBA+W5556TFStWmMe+8MILJoR66qmn5MqVK7J69Wpzhjs98/Nwk4yuGUyPoZiCwm9K4Rye93x8ufy+Ujy3twydt9tyX/2SOWT08xUlcyZXuXjlprz23SYJucZQk9QgLOyaKciz+jofA/r7ydN3ir+7+W7mn+Lv5y2VyxeJ8/6//t4mWTK7S90apZNkn/FgBIWES1RUtOTI5txrMEc2H/n32J0hqjHpvEc5/GO39zbLlQZGZlmsNjn977QBgJioMVM+P093U2NeCrvptPzSlRtSJJfXPR9foaCflMzrK4NmbnNarnMijXy+sqz78HGJiIqW6GibvPvTNtl0+M4JUKRsoVeuSVR0tGTzdT4OsmX1kuMJrDO/mrZUsvv5SLWKd05wxg6Vvp62VJo+UsHMb4TUgToznYVGGsp8/vnnMmHCBOnUqZNZVrRoURP+a
E+juHzwwQfy6KOP3vPxcdF2lStXllGjRjmWfffdd2Zs+8GDByU8PNwMeWvbtq0ULFjQ3K+9juJz8+ZNc7MLC+OLS3y9jA6evRLnpNmbjgTLs+PXmXmM2lYPkDEvVJL2X2+Q4KvOARXSntkL/5ZV63bL6CGdxc0t7nms/li1TRrXqxDv/QAAxIUaM314rlZBM+ws9qTZHesXkUqF/KTbpPVmeFqNYtnNnEbnQ2849WpC2jV9bqCZE0nnN9KpE2LTnu5DxswyP/d/rfVD2EMg5UsRw9P27dtnQpcmTZok+DHVqlX7z4/fsWOHrFy50gxNs99Klbo91OHw4cNSsWJFsy4Nip599lnTg+ny5biv3KA++ugj8fX1ddw0fEqLLl+7ZSYYjD0hob+XuxlLfjdZMrmaK6TN3xL3cCUdrnYy+JrsPBkqw+btlshomwmZkPL5+HiYbsIhoc7DN/V3v6x3n5dqzqI18vPCNTLqnY5m8uy47N53TE6duWTmTELq4p/VS1xdM1gmvb4YHCY5/eO+OokuvxgUu/0VR/tc//s3dpsLQXfaAIAdNWbqcPnqTVNjZvdxd1qe3TuzXIzV+yi2LG6u8mTV/PLL+uNOy90zZZC3WpaVkfN3yYrd5+TAmTCZ/vcRWbL1tHRrXDxZngeSnq+3h7hmyCDBsS6uoldR879HnTlzwWr5cV6gjB3a2UyeHVdg9P6YWWYeo7FDu9DLKJWhzkxnoZHOG5RYnp6e//nx2pOoZcuWZp6kmLd///1X6tevbyZGXLZsmZk3qUyZMjJ+/HgpWbKkHD16NN5uz6GhoY7byZMnJS2KjLLJvjNhUrOYv2OZi4tIzaL+suPE3SeNe7R8bnFzzSCLt925AtLdZHBxEbeMKeLwxD1kyphRihfO4zSJtQ5X099Ll4g/+Pvl19Uyc16gfDi4g5QoGvdE6mrpyq1SvEjeeEMlpFxumTJKpVIBErjpgNOx8femg1K9fOE4H1OjfGGn9mrlxv1SvfztiVAL5vM3wVHMNmHh12XLnmNSvULck6UCSL+oMVOHiCib7D4ZInVK5HCqMWuXyCHbjgXf9bFPVM5nasYFm5zr70yuGczy2LNL6BA1XTdSh0yZMkqJonlly85DTrXEll2HpWzJAvE+7sf5f8sPv6yQMUNellLF8scbGOmJSZ0U29cn/vmOkDJRZz44KeJbefHixc2H+vLlyx/I46tUqSJ79uyRQoUKmYmzY97sYZROhFa3bl0ZPny4bNu2Tdzc3GT+/Plxrk8n3vbx8XG6pVXT1hyTp6vll1aV85p5jN5rXdac4Vmw9faVKkY+U15eb1bC8ri21fLJin0XJPR6hKUH0uvNikuFAF/JkzWzmWh7eNtyktPHXf6McYU1pGxtW9SR31dskWWB28wlS8d/u1hu3LwlzRpUMfd/8uVc+W7WMkf7nxeulmk/r5A3u7eRXDmySnDIFXO7fsP5bOLVazdk9cY98hgTYKdaPV9sLNMWrJNZizfIgaPn5M2PZ8vV6zelfcta5v7uQ6fJ8AkLHe1fe76hLF+/VybMWC4Hj52Tj79ZItv3nZBuzzZw/G3u/kIjGfPdUvktcKfsOXRaegybLrmz+0qLBhUf2vMEkDJRY6Ye3608JO3qFJK2NQpI0VzeMuK5SuLh7ipzNtzuQTSmQ1Xp37KM5XHP1i4oy3aelZBrzlMahN+IlA3/XpRBrctJzWLZJb+/hzxds4A8VaOA/Lnz7AN7Xrh/z7eqJ4uWbZbfV2yVYycvyJhJC+X6jVvSosntOnPE57/IxOl/ONrPmBcoU2Yuk8G9n5Y8Of0k6PIVc7t2/aYjMHpv9Ew5cOi0DHmjnQkS7W104m2kHtSZ6WhOo8yZM5uJqN9++20TzmhYc/HiRRPsJGTI2d0e/8orr1ja9+rVyww508mu9THZsmWTQ4cOyU8//SRTpkyRzZs3mwCqWbNmkjNnTtm4caNZX+nSTML7x65z4ufpJj2bFpfs3u5y4GyY9
Ji6WYL/Nzl27qxZJDrWGZ1C2T2lSqFs8up3myzvRZTNJoVyeMqnlSub9eoH/p5TofLyNxvl8AXnbqhIuRrUKS+hYddk+i8rzJXOtFfQh4M6iF/W25MWXrgUar7s2y1etkkiIqPkw7GzndbT/umG0uHZxo7fA9ftFrGJNKwb/5xiSNnaNqsql0LCZdSkJWYIWfkS+WTOF70cQ8lOnQs2PQvtalYsIpM/fFlGfr1YRny1SIoE5JAZY16VMsXyOtr07djUFH5vjJoloeHXpVbFojLni56S2Z05rwA4o8ZMPXTYWDYvd+nXorSpMfedDpXOX62ToP9NgZDHT2tM5yKzcE4vqV40u3ScsCbOdfadukkGtCorn3WqJlk93OT05Wvy6eK9MnNN3KMHkDI1qVdBQsKuypSf/pLgy1ekWOE88umQzpLtf8PTzl8McaolFizdaOpMDYZi6tyusbzyfFMzTH7Npn23l7053qnNFyO6SpVycV+YBSkPdeaD4WJLIZcE026GOjeQhjlnzpyRPHnySPfu3U2wU7hwYdPbp1KlSrJq1Spp1KiRmWMoa9as93y8Dh3TybRjrkPpUDQNmnRuI50PSSe8fuyxx+Szzz4zV1174403ZOvWrWZSa72vT58+0rt37wQ9F32Mzm1U5PU54up+76uKIX2Z/EqNh70LSKFqFk3YZYWRfujnSS5/XzP0OS33YgWSU1qsMXN3mSEZ3BhOA2e/Dkz4/LBIX8rmp4bAf68xU0xolJYQGuFuCI0QH0IjxPV5QmgEIHaNSWiEuBAaIT6ERrifGjNFzGkEAAAAAACAlIXQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsMloXIal82K6CeHh584LCyZs/7+AVQZzWDm7EKwMAuKef+jUQL28fXik4afV/y3lFEKfD45/ilcF/Rk8jAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAA
AAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwCKjJMCvv/4qCdWqVasEtwUAAED6RY0JAEAaCI3atGmToJW5uLhIVFTU/e4TAAAA0gFqTAAA0kBoFB0dnfx7AgAAgHSFGhMAgDQ8p9GNGzeSbk8AAAAAakwAAFJvaKTDz0aMGCH58uUTLy8vOXLkiFn+/vvvy7fffpsc+wgAAIA0jhoTAIA0EBqNHDlSvv/+exk9erS4ubk5lpcrV06mTJmS1PsHAACAdIAaEwCANBAaTZs2Tb755htp3769uLq6OpZXrFhR9u/fn9T7BwAAgHSAGhMAgDQQGp0+fVqKFSsW50SGERERSbVfAAAASEeoMQEASAOhUZkyZWT16tWW5XPmzJHKlSsn1X4BAAAgHaHGBAAg5cmY2AcMGTJEOnXqZM4Gae+iefPmyYEDB0yX4sWLFyfPXgIAACBNo8YEACAN9DRq3bq1LFq0SP766y/x9PQ0H/D79u0zyx599NHk2UsAAACkadSYAACkgZ5G6pFHHpFly5Yl/d4AAAAg3aLGBAAgDYRGavPmzaaHkX0MetWqVZNyvwAAAJAOUWMCAJCKQ6NTp07JCy+8IGvXrpWsWbOaZSEhIVKnTh356aefJH/+/MmxnwAAAEjDqDEBAEgDcxp17dpVIiIiTC+j4OBgc9OfdVJsvQ8AAACgxgQAIB32NAoMDJR169ZJyZIlHcv05/Hjx5tx6AAAAAA1JgAA6bCnUUBAgOlpFFtUVJTkzZs3qfYLAAAA6Qg1JgAAaSA0+uSTT6RPnz5mkkI7/blv374yZsyYpN4/AAAApAPUmAAApNLhaX5+fuLi4uL4/erVq1KzZk3JmPH2wyMjI83PXbp0kTZt2iTf3gIAACDNoMYEACANhEbjxo1L/j0BAABAukKNCQBAGgiNOnXqlPx7AgAAgHSFGhMAgDR29bSYbty4Ibdu3XJa5uPjc7/7BAAAgHSMGhMAgFQ6EbbOZ9S7d2/JmTOneHp6mrHoMW8AAAAANSYAAOkwNHr77bdlxYoV8vXXX4u7u7tMmTJFhg8fLnnz5pVp06Ylz14CAAAgTaPGBAAgDQxPW7RokQmHGjZsKJ07d5ZHHnlEihUrJgULFpQff/xR2rdvnzx7CgAAgDSLGhMAgDTQ0yg4OFiKFCnimL9If1f16tWTv//+O+n3EAAAAGkeNSYAAGkgNNLA6OjRo+bnUqVKyc8//+w4O5Q1a9ak30MAAACkedSYAACkgeFpOiRtx44d0qBBAxk0aJC0bNlSJkyYIBEREfLZZ58lz14iRfnjr82y6Pf1EhoaLgUCcknnl5pLsaL54mz7z+b9smDRWjl3IViiIqMld+5s0uKxmlK/bgVzf2RklMyeu0q27zwkFy6EiIeHu5QrU1heeK6xZPPzfsDPDPejbZV88mLNAMnm5SaHLlyVsX8elH1nr8Tb3ss9o7zaoLA0KJlDfDJnknNhN+SLv/6V9Ydv917M4CLyyiOFpVnZXOLv6SaXwm/Jb7vOyvdrj/NGpTKTfw6U8TOWy4WgMClXPJ/834BnpWrZQvG2X/DXVhk1cYmcOBskRQJyyLA+baRZ3bKO+202m3w0aYlMW7BOQsOvS80KReTTQe2kaIGcD+gZAUgO1JiY+9t6mbVgtQSHhEvRQrnlja4tpUyJgDhfmF//3CRLV22VIyfOm99LFs0nr7Vv5mivNeY3M5fJhi0H5Mz5YPH0yCzVKhaTHh2aS/ZsXO05tXnpkcLSrUlxyeGTWfadDpXhc3bKzuOX42z74+v1pFbxHJblK/eck64T15ufPdxcZUDrsvJo+bzi5+kmJ4Ouyg+Bh2XW2mPJ/lyQtKgzk5+LTavv+3D8+HHZsmWLmdeoQoXbQUB6FxYWJr6+vjJz7UHx8Epbwce6jXvkq29+la6dHjdB0W9//CMbN+2Tz/6vh/j6eFra79l3TK5evSH58mYXV9cMs
nXHIZkxa5kMfPN5qVi+qFy7dkPGTpgrjRtUloIFcsnVq9fl+x//FFu0TUYNf0XSog8W7pO0pknpnPLek6Xlk6UHZO+ZMHmueoA0KpVDXvhmo4Rci7C0z5jBRSZ2rCKXr0bItHXH5WL4Tcntk1nCb0aYwEl1rF1Q2tXILx8u3i9HL12VUrm95d0WpWTS30dkzubTkhatHdxI0pp5f26RHsOmy2eD2knVcoVk4qyVsmD5Ntk0Z4jkyGb9+7hxxxFp8do4GdKrlTSvV07mLN0sn09bJqumD5QyxfKaNuN+WCZjv/9Tvh7WQQrk9ZdRExfL3kNnZMPP70lm90yS1j5Pcvn7SmhoqBkSDqQn1Jjx15irdp4UL++09Tdh+Zqd8uHnv0j/7m2kTIn88vOidbJy3S6ZNeFN8cvqZWk/fOxsKV+qoJQvVUDcMmWUH+f/LX9v2CvTv+grOfx9JfzqDXnvk5nS8tFqUrxQHgkLvy6ff7tYoqNt8u2YXpIWtfq/5ZIWtaiSTz55qaq8P3u77Dh+WTo3LCqPV84nj45YJkHhtyztfT0ySSbXOwNqNBRaPKixvDNrm8zdeMIsG/l8JaldIocMnrlNTgVfk0dK5ZThz1WUnlM2yvLd5yStOTz+KUmLqDMfTI2Z6OFpsekE2G3btiUwSieWLN1oAp6G9StJ/nw5pOvLT4ibWyZZ9ff2ONuXLV1IalQrZUKj3LmyyRPNapjeSfsPnjT3e3hklnffbi+1a5aRvHn8pXix/NKlw2Ny5NhZuRQU+oCfHf6rdjUCZNGOM/LbrnNyLOiaCY9uRkbLkxXyxNn+yYp5TO+iQXN3ya7ToXIu9IZsPxniCIxUufw+svrfS7L+cJC5f9WBi/LP0WApkydtFclp3VczV0jHNnWkfavaUqpIHvls8PPikdlNZvx6+0xfbJN+WiVNapeW1zs0lZKFc8u7PZ6UiqUCZPIvgeZ+Pc+hwVP/Ls3liQYVTM+lr4d3lHOXQmVJ4I4H/OwAJCdqzPTlp1/XSMtHq0uLJlWlcEAuGdC9tWR2d5PFy7fE2X7oG+2k7eO1pHjhvFIwf04Z2LOtRNtssnnnYXO/l2dmGTesizSpW0EK5Msh5UoWkDe7tZIDh0/LuYshD/jZ4X50aVRMZq8/ZgKfQ+euyHuzt8v1W1HyTO24ey2HXouQS1duOm51S+U07X/bduekY5XC/jJv4wnZeOiSnA6+Jj+tOyb7T4dKxYJ+vFmpCHXmg5Gg0OiLL75I8C0x9Apsffr0kX79+omfn5/kypVLJk+eLFevXjVdlL29vU0Ppt9//93xmMDAQKlRo4a4u7tLnjx5zBC5yMhIx/03b96U119/XXLmzCmZM2c2E3Rv2rTJcf+qVavExcVFli9fLtWqVRMPDw+pU6eOHDhwwNFGh981atTIbF9Tt6pVq8rmzZslvdNuvkePnZXyZQs7lmXI4CLlyxaSg4fu3fNDv+zt2nNUzp4NktIlC8Tb7tr1G+LicjtQQsqnvYZK5vaSTUfvdBHW7oubjwVLuXxxBzz1imeX3adD5a1mJWTR63VletfqpmeRDkmz230qTKoV9JOAbFnM78VyekqFgKyy4cjt4WtI+W5FRMr2/SelYY2SjmUZMmSQBjVKyqZdt+fGi+2fXUelYfVSTssa1yotm3bd7i5+/HSQnA8Kk4Y17rTx9cpihrtt2kmXciC1ocakxlQREZFy8PAZM3ws5udFtQpFZc+B2z1D7uXmrQiJjIoSHy+PeNuEX9Ma00W8PakxU4tMri5SLiCrrDtw0bFMx8no75ULZUvQOp6rXVCWbD1lgiO7rUeDpEn5PJLL9/axUKt4dimU00tW77+QDM8CyYE6M4XNaTR27NgErUz/CGtgkxg//PCDvP322/LPP//I7NmzpUePHjJ//nx56qmn5J133jHb7tChg5w4cUIuX74sTzzxhLz88ssybdo02b9/v3Tr1s2EQ8OGDTPr03XNnTvXrFfPUI0ePVqaN
28uhw4dkmzZ7vxheffdd+XTTz+VHDlySPfu3aVLly6ydu1ac1/79u2lcuXK8vXXX4urq6ts375dMmVKW0Me/ouwK9dMl15fX+dhaL6+XnL6bFC8j9MhaD36fW5CJw2ZunR8XCqUu30Fvthu3YqUmbNXSJ1aZcUji3uSPwckvawemSRjhgwSfM25e3Dw1Qgp4G8dsqjyZs0sVQpmlT/3nJf+P++Q/H4e8lbzEuLq6iJT19z+4j99/XHxcHeVma/WNMedHjvfBB4xj0HqEBQSLlFR0ZZhaDmy+ci/x+J+H3Xeoxz+sdt7m+VKAyOzLFabnP532gBIPagxqTFV6JVrEhUdLdl8nYehZcvqJcdP3wkL7uaraUslu5+PVKtYNN5Q6etpS6XpIxXM/EZIHfw83SWjawa5FHbTafmlKzekSC7rsMXYKhT0k5J5fWXQzG1Oy3VOpJHPV5Z1Hz4uEVHRptZ896dtsulw/N9pkLJQZ6aw0Mh+tbTkULFiRXnvvffMz4MHD5aPP/5YsmfPbsIgNWTIEBPe7Ny501yhLSAgwEy8rQGVXr3tzJkzMnDgQNPu+vXrpu33338vjz/+uHm89lxatmyZfPvttzJgwADHdkeOHGkm81baW6lFixZy48YNE0BpQKVtdf2qePHid30O2rtJbzHHB+KOzJnd5f9GdJMbN27J7r3HZPqsZZIzR1YzdC0mDZU+/3Ku6aXySqcneAnTMP3/V+czGv37AYm2iRw4Fy7ZvdzlxVoBjtCocemcZhLsYQv3mjmNiufykr5Ni5sJsX/flfbGmgNAekSNSY2ZFKbPDTRzIo0f0VXc3awnerXGHDJmlvm5/2utk2SbSB2eq1XQDDuLPWl2x/pFpFIhP+k2ab0ZnlajWHYZ9mxFOR96w6lXE4AkmNPofsWcPFt79fj7+0v58uUdy3TImrpw4YLs27dPateubb5w2tWtW1fCw8Pl1KlTcvjwYXMVN11mpz2EdDibPja+7eowN/s21Jtvvildu3aVpk2bmhBL13s3H330kZmU0H7TYCst8vH2ML09QkPvzDuj9CpqWWOdGYpJH6PzGRUqmFuefLyW1KxWWhYuXhdHYDRPLgaFyrtvv0gvo1REJ7qO1LODHm5Oy7N5ZpLgcOezQnY6aeHJ4GsmMLI7HnTVBEc63E31alxUZqw/Icv3XZAjF6/KH7vPy+x/TkqH2vEPbUTK4p/Vy0yAfzHY+Sp6F4PDJKd/3EMXdfnFoNjtrzja5/rfv7HbXAi60wYAFDVm6uHr7SGu2ms5NNxpuV5FzT/r3S8qM3PBavlxXqCMHdpZihWyzqWoNeb7Y2aZeYzGDu1CL6NU5vLVmxIZFS3ZfZxHIGT3ziwXY/U+ii2Lm6s8WTW//LLe+cq77pkyyFsty8rI+btkxe5zcuBMmEz/+4gs2XpaujW+e5CLlIM6Mx2FRrGHfWkgFHOZPSCKjo5Otu3G3oYOdduzZ4/pfbRixQopU6aMGTIXH+0hpbOO228nT96e5DmtyZjRVQoXyiO7997peaZdObX3UIli+RK8Hp3bKCLGPFT2wOjs+WB57+324n2XsehIeSKjbaanULVCdyYO1P+jqhb0k92n4+51t+tUqOT3y2La2QVk8zCTFer6VOZMrmZCy5j095ihMVI2vZpNpVIBErjpzpxx+nf2700HpXr5O3OjxVSjfGGn9mrlxv1SvfztnokF8/mb4ChmG70izpY9x6R6hbgnxASQPlFjph6ZMmWUEkXzypadh5w+L7bsOixl7zIPpl4x7YdfVsiYIS9LqWL54w2MTp25ZCbF9vWhxkxtIqJssvtkiNQpkcOxTEtBvfLZtmN3n+fyicr5xC1jBlmwyfm7mV5ZTZfHvoa4fq+hzEw9qDPTUWiUGKVLl5b169eb0MFO5yHSCavz588vRYsWFTc3N8fcREp7HulE2Br8JEaJEiXkjTfekD///NNcHW7q1KnxttVJuXXC7Ji3tKrFYzVlReA2CVyzQ
06fuSTf/vCb3LwZIQ0eqWju/3LSQpn18wpH+wWL1srO3Ufk/IXLpv3i3zfI6nW75JHa5R0f5mMnzJXDx85In+5tzB/rkJBwc9P7kDpoD6CWlfLI4+VzS0F/D+n/WAkT+izZedbc/96TpaV7gzvzWM3felp8smSSfo8WNxNd1y7qLx3rFJS5W+9MqL7230vSqU5Bc19u38xSv0R2c5W2v+kynKr0fLGxTFuwTmYt3iAHjp6TNz+eLVev35T2LWuZ+7sPnSbDJyx0tH/t+YayfP1emTBjuRw8dk4+/maJbN93Qro9e3s4sYaG3V9oJGO+Wyq/Be6UPYdOS49h0yV3dl9p0eD23yEASCxqzIfv+Vb1ZNGyzfL7iq1y7OQFGTNpoVy/cUtaNKli7h/x+S8ycfofjvYz5gXKlJnLZHDvpyVPTj8JunzF3K5dv937ROvI90bPlAOHTsuQN9qZGtPeRifeRurx3cpD0q5OIWlbo4AUzeUtI56rZOa9nLPhdg+iMR2qSv+W1u96z9YuKMt2npWQWPNuht+IlA3/XpRBrctJzWLZJb+/hzxds4A8VaOA/Pm/2hWpA3VmCprTKKXo2bOnjBs3zlxxrXfv3uaKZ0OHDjXDyfQKC56enmYibZ2PSCe9LlCggJkI+9q1a/LKK68kaBs6L5I+/plnnpHChQubYW8aOj399NPJ/vxSgzo1y0pY2DX5ZV6ghIRelYIFcsmg/i84hqddCg4VlxiXwLp585Z8N+13CQq+Im5uGSVvnuzS67XWZj0q+PIV2bLtoPl54PuTnbb1/qCXLPMeIWXSIWQ6IXbXRwpLNk83+fdCuLz18065fC3C3J/Lx90p7L1w5aa8MXuH9G1STH54pbpcunJLftl0Smb878NfjV32r3SrX1j6Ny8hfh6ZzFxGC7edccx5hNShbbOqcikkXEZNWmKGkJUvkU/mfNHLMZTs1LlgyRDjtF7NikVk8ocvy8ivF8uIrxZJkYAcMmPMq1KmWF5Hm74dm5ovBW+MmiWh4delVsWiMueLnpLZnQsWAPhvqDEfvib1KkhI2FWZ8tNfpj4sVjiPfDqks2T73/C08xdDnD4vFizdKBH/C4Zi6tyusbzyfFMzFHrNptvTU3R+c7xTmy9GdJUq8VyUBSmPDhvL5uUu/VqUluze7rLvdKh0/mqdBF25HRDm8cti6Z1eOKeXVC+aXTpOWBPnOvtO3SQDWpWVzzpVk6webnL68jX5dPFembkm+ebyRdKjznwwXGwxv8k9YA0bNpRKlSqZIMiuUKFC0q9fP3Oz0zPLOjysTZs2EhgYaEKdHTt2mGCoU6dO8uGHH0rGjLfzL53MWq+gNmvWLLly5YpUq1bNXJmjevXq5v5Vq1ZJo0aNzJXYsmbNapbp1dH0amk6GWPevHnNOrW30vnz582k3NrT6JNPPjGTZCeEToStcxvNXHtQPLzuPg4b6c8HC53n1wLs1g5uxIsBy+dJLn9fM/Q5LfdiBZJaWq8xV+08KV7e/E2As1b/t5yXBHE6PP4pXhn85xrzP4VGq1evlkmTJpkJoufMmSP58uWT6dOnm5459erVk/SO0Ah3Q2iE+BAaIa7PE0IjpCfUmHdHaIS7ITRCfAiNcD81ZqLnNJo7d640b95csmTJItu2bXNcal43NmrUqMSuDgAAAKDGBAAgBUp0aKTddCdOnCiTJ092uiqFXuZ+69atSb1/AAAASAeoMQEASAOhkU4+Xb9+fctyHV8dEhKSVPsFAACAdIQaEwCANBAa5c6dWw4dOmRZvmbNGilShKsQAAAAIPGoMQEASAOhUbdu3aRv376yceNGc8WJM2fOyI8//ij9+/c3l7sHAAAAqDEBAEj9bl9DNBEGDRok0dHR0qRJE7l27ZoZqubu7m5Coz59+iTPXgIAACBNo8YEACANhEbau+jdd9+VAQMGmGFq4eHhUqZMGfHy8kqePQQAAECaR40JAEAaCI3s3NzcTFgEAAAAJBVqTAAAU
nFo1KhRI3MmKD4rVqy4330CAABAOkONCQBAGgiNKlWq5PR7RESEbN++XXbv3i2dOnVKyn0DAABAOkGNCQBAGgiNxo4dG+fyYcOGmfmNAAAAAGpMAABSvwxJtaKXXnpJvvvuu6RaHQAAAECNCQBAWgiN1q9fL5kzZ06q1QEAAADUmAAApKbhaW3btnX63WazydmzZ2Xz5s3y/vvvJ+W+AQAAIJ2gxgQAIA2ERr6+vk6/Z8iQQUqWLCkffPCBNGvWLCn3DQAAAOkENSYAAKk8NIqKipLOnTtL+fLlxc/PL/n2CgAAAOkGNSYAAGlgTiNXV1fTmygkJCT59ggAAADpCjUmAABpZCLscuXKyZEjR5JnbwAAAJAuUWMCAJAGQqMPP/xQ+vfvL4sXLzYTYIeFhTndAAAAAGpMAADS0ZxGOtH1W2+9JU888YT5vVWrVuLi4uJ0FTX9XcekAwAAANSYAACkk9Bo+PDh0r17d1m5cmXy7hEAAADSDWpMAADSQGikPYlUgwYNknN/AAAAkI5QYwIAkEbmNIo5HA0AAABICtSYAACk8p5GqkSJEvf8UA8ODr7ffQIAAEA6Qo0JAEAaCI10zLmvr2/y7Q0AAADSHWpMAADSQGj0/PPPS86cOZNvbwAAAJDuUGMCAJDK5zRirDkAAACSGjUmAABpIDSyX9kCAAAASCrUmAAApIHhadHR0cm7JwAAAEh3qDEBAEgDPY0AAAAAAACQfhAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAi4zWRUgqTUrlEh8fH15QOGleJjevCOLkV703rwyc2KJu8YoAsCidz4caExaHxz/FqwJqTCR5jUlPIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDRCok3+OVAqtBoiuev2k6YvfyJb9hy7a/sFf22VGs+MMO3rPD9S/
ly7x+l+m80moyYullKPvSN56r0hbXqOl8MnLvDOpDIcF4hLncpFZdZnr8ne30bK5U0T5IkGFe75QtWtUlxWTR8o59aOlS3zhsoLT9a0tOn6bH3ZsXC4nF0zVpZN7S9VyhTkDQCAVI5aAhwbSChqzAcnxYdGDRs2lH79+sV7f6FChWTcuHEJXt+wYcOkUqVKSbR36c+8P7fIe+Pmy8Cuj5svdeWK55On+3wpF4OvxNl+444j0vW97+Wl1rUlcMYgadGgorzU/xvZe+iMo83n0/6SSbMD5bPBz5svfx5Z3Mw6b9yMeIDPDPeD4wLx8cjiLrsPnpYBo2cn6EUqkNdfZo/rLqu3HJT67T+WibNWyhfvviiNa5V2tHnq0SryYb+n5P+m/C4NO/yf7P73tMwd30uy+3nxRgBIMGrMlIVaAhwbSAxqzAcnxYdGSFm+mrlCOrapI+1b1ZZSRfKYoMcjs5vM+HV9nO0n/bRKmtQuLa93aColC+eWd3s8KRVLBcjkXwIdvYz0S2H/Ls1NDwQNob4e3lHOXQqVJYE7HvCzw3/FcYH4/LVur4ycuFiWrNqZoBepS9t6cuJMkLw/br4cPHZeJv/yt/y6Yrv0eLGRo03PFxvLtAXrZOaiDXLg6Dl586Of5NqNW/JSq9q8EQCQSlFLgGMDiUGN+eAQGiHBbkVEyvb9J6VhjZJ3DqAMGaRBjZKyadfROB/zz66j0rB6Kadl2mNg067bQ9qOnw6S80Fh0rDGnTa+XlmkatlCsmnn3Ye9IWXguEBSql6+sKz654DTsuUb9kmN8oXNz5kyukqlUgFObTR8DvzngHksACD1oZYAxwaSGzVmGg+NIiMjpXfv3uLr6yvZs2eX999/33xJiMuJEyekdevW4uXlJT4+PvLcc8/J+fPnLe0mTZokAQEB4uHhYdqEhoY67lu1apXUqFFDPD09JWvWrFK3bl05fvy4pHdBIeESFRUtObJ5Oy3Pkc1HLgSFxfkYXZ7DP3Z7b0d7DYzMslhtcvrfaYOUjeMCSSmnv49luOvFoDDx8coimd0ziX9WL8mY0dXaJjjMPBYAEoMaM2WglgDHBpIbNWYaD41++OEHyZgxo/zzzz/y+eefy2effSZTpkyxtIuOjjaBUXBwsAQGBsqyZcvkyJEj0q5dO6d2hw4dkp9//lkWLVokS5culW3btknPnj0dxUObNm2kQYMGsnPnTlm/fr28+uqr4uLiEu/+3bx5U8LCwpxuAAAASNmoMQEAuLuMkgpoj6CxY8ea4KZkyZKya9cu83u3bt2c2i1fvtzcd/ToUfMYNW3aNClbtqxs2rRJqlevbpbduHHDLM+XL5/5ffz48dKiRQv59NNPxc3NzfQ6evLJJ6Vo0aLm/tKl70zAGpePPvpIhg8fLmmdnuF3dc2QqDP8JtENit3+iqN9rv/9q21yZ/d1tLkQdEXKl8ifDM8CSY3jAknJ9E6M3ZvR30fCwq+byfGDosIlMjIqUT0eASA+1JgpA7UEODaQ3Kgx03hPo1q1ajn19Kldu7b8+++/EhUV5dRu37595sPfHhipMmXKmCFmep9dgQIFHIGRfX3aS+nAgQOSLVs2efnll6V58+bSsmVL07Pp7Nmzd92/wYMHm6DJfjt58qSkRW6ZMpq5RAI33ZlLRF+3vzcdjHcuEZ2HJGZ7tXLjfqlevpD5uWA+fxMcxWyjXw637Dkm1SvcboOUjeMCSUnnR2tQ/c68aapRjVJmfjQVERll5laL2UY/H+pXLxHv3GoAEB9qzJSBWgIcG0hu1JhpPDR60KZOnWqGpdWpU0dmz54tJUqUkA0bNsTb3t3d3cyfFPOWVtmvWjRr8f+uWvTxbLl6/aa0b1nL3N996DQZPmGho/1rzzeU5ev3yoQZy+XgsXPy8TdLZPu+E9Lt2QaOL3vdX2gkY75bKr8F7pQ9h05Lj2HTTa+jFg0qPrTnicThuEB8PLO4SbkS+cxNFczrb37On8vP/D6kVyv5elgHR
/vv5q0xYfLwPq2leMFc8sozj0ibppXl65krLVfYeb5FTSlRKJd8NqideGZxlx8Xxf93GgBSAmrM+FFLgGMDiUGN+eCkiuFpGzdudPpdA5zixYuLq6ur03IdRqa9fPRm7220d+9eCQkJMT2OYk6WfebMGcmbN69jfXoVMB36Zle5cmVz015E2hNp5syZ5mxUete2WVW5FBIuoyYt+d8Qsnwy54tejuFmp84FS4YYvcJqViwikz98WUZ+vVhGfLVIigTkkBljXpUyxW6/9qpvx6Zy7fpNeWPULAkNvy61KhaVOV/0NJPeInXguEB8KpUuKIsn9XX8PurNp82/MxdvkF7DZ0iu7D6SP3c2x/0nzgRJu34TZdSbbU3ofOZCiLw+cqas2HCnt+j8ZVsle1Yveee1FmbS/F0HT8szr39pGToLAPdCjZlyUEuAYwOJQY354LjY4rsMWQrRsGFD2bJli5m/6LXXXpOtW7ean3X+If29UKFC0q9fP3PTp1KlShXx9vaWcePGmUmtdYJrvZKaXhFNDRs2TMaMGWOCIP1XJ63u2rWredysWbPMfEjffPONtGrVyoRKOmTtxRdflBEjRkiPHj0StM+6Tr3S2/mg0DTd6whA0vKr3puXFE5sUbfk5q7JZugznydA0qLGBJBeUGPifmrMVNHTqGPHjnL9+nWpUaOG6V3Ut29fc0Wz2HSo08KFC6VPnz5Sv35903voscceMxNdx1SsWDFp27atPPHEE+ZKazrp9VdffWXu8/DwkP3795uraQQFBUmePHmkV69eJqACAABA2kGNCQBAKu9plBrR0wjAf8FZIMRGTyMA1JgA7hc1Ju6nxmQibAAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwyGhdhPtls9nMv1fCwngxAST8b0fULV4txHlM2D9XAKRv1JgA/tPfDmpM3EeNSWiUDK5cuWL+LVY4IDlWDwBIh58rvr6+D3s3ADxk1JgAgAddY7rYOH2Z5KKjo+XMmTPi7e0tLi4ukp6FhYVJQECAnDx5Unx8fB727iAF4dgAx8a96Ue0fpjnzZtXMmRgRDmQ3lFjOqOWQFw4LhAfjo3/VmPS0ygZ6IueP3/+5Fh1qqWBEaERODbA343Eo4cRADtqTOpM8P0D94/vpomrMTltCQAAAAAAAAtCIwAAAAAAAFgQGiFZubu7y9ChQ82/AMcG+LsBAKDOBN8/8DDw3fS/YSJsAAAAAAAAWNDTCAAAAAAAABaERgAAAAAAALAgNEKCHTt2TFxcXGT79u0PdR0AU
q6GDRtKv3794r2/UKFCMm7cuASvb9iwYVKpUqUk2jsAQEpEjQkgIagzH46MD2m7SIUCAgLk7Nmzkj179oe9KwAAAEgjqDEBIOUiNEKCubq6Su7cueO932azSVRUlGTMyGEFAAAAakwASO0YngaL6OhoGT16tBQrVsxclrBAgQIycuRIS9fhVatWmd9///13qVq1qmm7Zs2aeB8fn927d8vjjz8uXl5ekitXLunQoYNcunTJcf+cOXOkfPnykiVLFvH395emTZvK1atXeececFfQPn36mGFHfn5+5n2aPHmyeR86d+4s3t7e5v3WY8EuMDBQatSoYY6BPHnyyKBBgyQyMtJx/82bN+X111+XnDlzSubMmaVevXqyadMmx/3242v58uVSrVo18fDwkDp16siBAwccbXbs2CGNGjUy2/fx8THH4ebNmx/gK4O46Pvcu3dv8fX1NT0T33//fRMqx+XEiRPSunVr8/+/vofPPfecnD9/3tJu0qRJ5ky0HgfaJjQ01OlY0WPN09NTsmbNKnXr1pXjx4/z5gBACkONidioMZFY1JkPHqERLAYPHiwff/yx+aK3d+9emTlzpgkJ4qNhgLbft2+fVKhQIVGPDwkJkcaNG0vlypXNl/2lS5eaL4z6pVDpcLgXXnhBunTpYtavXw7btm0b7xdQJJ8ffvjBBAD//POPCZB69Oghzz77rAlytm7dKs2aNTOB37Vr1+T06dPyxBNPSPXq1U2w8/XXX8u3334rH374oWN9b7/9tsydO9esVx+voVPz5s0lODjYabvvvvuufPrpp+b40F5seizYtW/fXvLnz2/Cpi1btphjMVOmTBwGD5m+p/pe6bHy+eefy2effSZTpkyJ88uDBkb6nmvIuGzZMjly5Ii0a9fOqd2hQ4fk559/lkWLFpm/Edu2bZOePXs6Coc2bdpIgwYNZOfOnbJ+/Xp59dVXTeAIAEhZqDERF2pMJAZ15kNgA2IICwuzubu72yZPnmx5XY4ePapJjW3btm3m95UrV5rfFyxYkKDHx7WOESNG2Jo1a+bU5uTJk6bNgQMHbFu2bDE/Hzt2jPfpIWrQoIGtXr16jt8jIyNtnp6etg4dOjiWnT171rxX69evt73zzju2kiVL2qKjox33f/nllzYvLy9bVFSULTw83JYpUybbjz/+6Lj/1q1btrx589pGjx7tdHz99ddfjjZLliwxy65fv25+9/b2tn3//ffJ/vyRuGOldOnSTu/9wIEDzTJVsGBB29ixY83Pf/75p83V1dV24sQJR9s9e/aY9/iff/4xvw8dOtS0OXXqlKPN77//bsuQIYM55oKCgkz7VatW8TYBQApGjYm4UGMiMagzHw56GsGJ9ubRYUNNmjRJ8CujQ4f+6+O1F8rKlSvN0BT7rVSpUua+w4cPS8WKFc26dHia9mrRIVGXL1/mXXsItBdZzPmtdKigvi929t5kFy5cMMdB7dq1nXp76JCh8PBwOXXqlHlvIyIizDI77SGkQ4z0sfFtV4e52beh3nzzTenatasZsqi923S9ePhq1arl9N7rsfDvv/+aOc9i0vdah5zpza5MmTJmiFnM40CHuObLl89pfdpLSYcqZsuWTV5++WXTS61ly5amZ5P2UAQApCzUmIgPNSYSgzrzwSM0ghOdNyixdB6R//p4DRH0i57OkxTzpl8w69evb8IJHbKic+Xol8nx48dLyZIl5ejRo7xzD1jsYV8aCsRcZg8J9Mt8cm039jb0cux79uyRFi1ayIoVK8wxMn/+/CTdPlK+qVOnmmFpOlRy9uzZUqJECdmwYcPD3i0AQAzUmIgPNSZSsqnUmYRGcFa8eHHzoa6TDz+Ix1epUsV86S9UqJCZ0ybmzR5GaVCgPVKGDx9u5jJxc3MjGEjhSpcubb7Ex5x7au3atWbCap2DqGjRouZ91GV22vNI5ybS4CcxNCB444035M8//zTzXekfdjxcGzdudPpdAxz926AhcOzj5OTJk+Zmp
/Og6VxnMY8DnSz7zJkzTuvLkCGDCZDtdF40nStj3bp1Uq5cOTOXGgAg5aDGRFKgxgR15oNHTyM40atYDRw40ExSPG3aNDPcR7+g6STGyfH4Xr16mUlwdbJrDQy0/R9//GGuyKVDWfSPwqhRo8wkyPrFcd68eXLx4kXzgYGUSycp1iBAJ8zev3+/LFy4UIYOHWqGk+mXfQ0EdSLtAQMGmImNNSjo1q2bmUT7lVdeSdA2rl+/bq7QpZOj65WyNIDSY4hj4+HT/1f1vdbhY7NmzTI9BPv27Wtpp8MKdYijTmiuk6HrxNkdO3Y0k1rHHPaqf1c6depkhrOuXr3aXHVPJ8vPnTu36XWoYZGGlHocaHioPRU5DgAgZaHGRFKgxgR15oOX8SFsEymcXvVMr3w0ZMgQc3Zf55Hp3r17sjw+b9685su+Bk169S2dD6lgwYLy2GOPmXBBL8H9999/y7hx4yQsLMzcp1fSevzxx5PwGSOp6fwzv/32mwmFdF4qnXdGw6D33nvP0UbnINJhZnrFtStXrpiQQANDPz+/BG1De60EBQWZkEGvuKdXdtOeRtojDQ+Xvica6ukcVfo+aWCkVzSLTXsRaqCo4aIOR9X/5/X/fQ2ZYtKeh/re6hX5NGR+8skn5auvvjL3eXh4mGBSr6Shx4P+vdEw+rXXXntgzxcAkDDUmLhf1JigznzwXHQ27IewXQAAAAAAAKRgDE8DAAAAAACABaERAAAAAAAALAiNAAAAAAAAYEFoBAAAAAAAAAtCIwAAAAAAAFgQGgEAAAAAAMCC0AgAAAAAAAAWhEYAAAAAAACwIDQCEK+XX35Z2rRp4/i9YcOG0q9fvwf+iq1atUpcXFwkJCQk3jZ6/4IFCxK8zmHDhkmlSpXua7+OHTtmtrt9+/b7Wg8AAEB6Q515d9SZSCkIjYBU+AGrQYXe3NzcpFixYvLBBx9IZGRksm973rx5MmLEiCQLegAAAJByUGcCiC2jZQmAFO+xxx6TqVOnys2bN+W3336TXr16SaZMmWTw4MGWtrdu3TLhUlLIli1bkqwHAAAAKRN1JoCY6GkEpELu7u6SO3duKViwoPTo0UOaNm0qv/76q1NX35EjR0revHmlZMmSZvnJkyflueeek6xZs5rwp3Xr1qbbq11UVJS8+eab5n5/f395++23xWazOW039vA0Da0GDhwoAQEBZp+019O3335r1tuoUSPTxs/Pz/Q40v1S0dHR8tFHH0nhwoUlS5YsUrFiRZkzZ47TdjQIK1GihLlf1xNzPxNK90vX4eHhIUWKFJH3339fIiIiLO0mTZpk9l/b6esTGhrqdP+UKVOkdOnSkjlzZilVqpR89dVXid4XAACA1II6896oM5GeEBoBaYCGK9qjyG758uVy4MABWbZsmSxevNiEJc2bNxdvb29ZvXq1rF27Vry8vMyZJPvjPv30U/n+++/lu+++kzVr1khwcLDMnz//rtvt2LGjzJo1S7744gvZt2+fCWB0vRrCzJ0717TR/Th79qx8/vnn5ncNjKZNmyYTJ06UPXv2yBtvvCEvvfSSBAYGOsKttm3bSsuWLc1cQV27dpVBgwYl+jXR56rPZ+/evWbbkydPlrFjxzq1OXTokPz888+yaNEiWbp0qWzbtk169uzpuP/HH3+UIUOGmABOn9+oUaNM+PTDDz8ken8AAABSI+pMK+pMpCs2AKlKp06dbK1btzY/R0dH25YtW2Zzd3e39e/f33F/rly5bDdv3nQ8Zvr06baSJUua9nZ6f5YsWWx//PGH+T1Pnjy20aNHO+6PiIiw5c+f37Et1aBBA1vfvn3NzwcOHNBuSGb7cVm5cqW5//Lly45lN27csHl4eNjWrVvn1PaVV16xvfDCC+bnwYMH28qUKeN0/8CBAy3rik3vnz9/frz3f/LJJ7aqVas6fh86dKjN1dXVdurUKcey33//3ZYhQwbb2bNnze9Fixa1zZw502k9I0aMsNWuXdv8fPToUbPdb
du2xbtdAACA1II6M27UmUjPmNMISIW095D26NEeRDrc68UXXzRXA7MrX7680zxGO3bsML1q9KxITDdu3JDDhw+bIVnaG6hmzZqO+zJmzCjVqlWzDFGz015Arq6u0qBBgwTvt+7DtWvX5NFHH3Varr2dKleubH7WHj0x90PVrl1bEmv27NmmB5Q+v/DwcDNRuI+Pj1ObAgUKSL58+Zy2o6+n9o7S10of+8orr0i3bt0cbXQ9vr6+id4fAACA1IA6896oM5GeEBoBqZDO8/P111+bYEjnLdKAJyZPT0+n3zU0qVq1qhluFVuOHDn+c1flxNL9UEuWLHEKa+zj55PK+vXrpX379jJ8+HAzLE9Dnp9++skMwUvsvuqwttghloZlAAAAaRF15t1RZyK9ITQCUiENhXTS6YSqUqWKOSOSM2dOS28buzx58sjGjRulfv36jh41W7ZsMY+Ni/Zm0l45OheRTsQdm72nk06wbVemTBkTDp04cSLeHko66bR9Um+7DRs2SGKsW7fOTBL+7rvvOpYdP37c0k7348yZMyZ4s28nQ4YMZvLwXLlymeVHjhwxARQAAEB6QJ15d9SZSG+YCBtIBzT0yJ49u7limk6EffToUVm1apW8/vrrcurUKdOmb9++8vHHH8uCBQtk//79ZkLokJCQeNdZqFAh6dSpk3Tp0sU8xr5OnVhaaWijV03TLs4XL140PXd0yFf//v3N5Nc6mbQO/9q6dauMHz/eMbl09+7d5d9//5UBAwaYYWIzZ840E1onRvHixU0gpL2LdBs6TC2uSb31imj6HHT4nr4u+nroFdT0ynRKeyrpxN36+IMHD8quXbtk6tSp8tlnnyVqfwAAANIq6kzqTKRthEZAOqCXk//777/NHD56ZTLtzaNz9eicRvaeR2+99ZZ06NDBhCg6t48GPE899dRd16tD5J555hkTMOnl6HXun6tXr5r7dPiZhi565TPttdO7d2+zfMSIEeYKZBrG6H7oFdx0uFrhwoXN/bqPeuU1DaIqVqxorrKmVy1LjFatWplgSrdZqVIlc0ZItxmb9tbS1+OJJ56QZs2aSYUKFeSrr75y3K9XbpsyZYoJirRnlfaO0gDLvq8AAADpHXUmdSbSNhedDfth7wQAAAAAAABSFnoaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIAFoREAAAAAAAAsCI0AAAAAAABgQWgEAAAAAAAAC0IjAAAAAAAAWBAaAQAAAAAAwILQCAAAAAAAABaERgAAAAAAALAgNAIAAAAAAIDE9v8VK1/EodHC2AAAAABJRU5ErkJggg==", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Validation accuracy: 0.806\n", + " precision recall f1-score support\n", + "\n", + " circles 0.698 0.733 0.715 60\n", + " moons 0.719 0.683 0.701 60\n", + " blobs 1.000 1.000 1.000 60\n", + "\n", + " accuracy 0.806 180\n", + " macro avg 0.806 0.806 0.805 180\n", + "weighted avg 0.806 0.806 0.805 180\n", + "\n", + "Test accuracy: 0.856\n", + " precision recall f1-score support\n", + "\n", + " circles 0.783 0.783 0.783 60\n", + " moons 0.783 0.783 0.783 60\n", + " blobs 1.000 1.000 1.000 60\n", + "\n", + " accuracy 0.856 180\n", + " macro avg 0.856 0.856 0.856 180\n", + "weighted avg 0.856 0.856 0.856 180\n", + "\n" + ] + } + ], + "source": [ + "def predict_by_nearest_centroid(\n", + " embeddings: np.ndarray,\n", + " centroids: dict[str, np.ndarray],\n", + ") -> np.ndarray:\n", + " centroid_matrix = np.vstack([centroids[label] for label in DATASET_TYPES])\n", + " distances = np.linalg.norm(embeddings[:, None, :] - centroid_matrix[None, :, :], axis=2)\n", + " return np.asarray(DATASET_TYPES)[distances.argmin(axis=1)]\n", + "\n", + "\n", + "val_predictions = predict_by_nearest_centroid(val_embeddings, train_centroids)\n", + "test_predictions = predict_by_nearest_centroid(test_embeddings, train_centroids)\n", + "\n", + "fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n", + "for ax, split_name, y_true, y_pred in [\n", + " (axes[0], \"Validation\", val_labels, val_predictions),\n", + " (axes[1], \"Test\", test_labels, test_predictions),\n", + "]:\n", + " cm = confusion_matrix(y_true, y_pred, labels=DATASET_TYPES, normalize=\"true\")\n", + " disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=DATASET_TYPES)\n", + " disp.plot(ax=ax, cmap=\"Blues\", colorbar=False, values_format=\".2f\")\n", + " ax.set_title(f\"{split_name} confusion matrix\")\n", + "\n", + "plt.tight_layout()\n", + "plt.show()\n", + "\n", + "print(f\"Validation 
accuracy: {(np.asarray(val_labels) == val_predictions).mean():.3f}\")\n",
+    "print(classification_report(val_labels, val_predictions, labels=list(DATASET_TYPES), digits=3, zero_division=0))\n",
+    "\n",
+    "print(f\"Test accuracy: {(np.asarray(test_labels) == test_predictions).mean():.3f}\")\n",
+    "print(classification_report(test_labels, test_predictions, labels=list(DATASET_TYPES), digits=3, zero_division=0))\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": ".venv (3.10.5)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.5"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/demo/dataset2vec/visualization/__init__.py b/demo/dataset2vec/visualization/__init__.py
new file mode 100644
index 0000000..078ed55
--- /dev/null
+++ b/demo/dataset2vec/visualization/__init__.py
@@ -0,0 +1,3 @@
+from .plots import EmbeddingVisualizer, plot_embeddings
+
+__all__ = ["EmbeddingVisualizer", "plot_embeddings"]
\ No newline at end of file
diff --git a/demo/dataset2vec/visualization/plots.py b/demo/dataset2vec/visualization/plots.py
new file mode 100644
index 0000000..d58fe96
--- /dev/null
+++ b/demo/dataset2vec/visualization/plots.py
@@ -0,0 +1,182 @@
+from __future__ import annotations
+
+import matplotlib.pyplot as plt
+import numpy as np
+from numpy.typing import NDArray
+from sklearn.decomposition import PCA
+from sklearn.manifold import TSNE
+
+
+class EmbeddingVisualizer:
+    """General-purpose embedding visualizer.
+
+    Supports dimensionality reduction via PCA/t-SNE and plotting
+    2D/3D scatter plots.
+
+    Example:
+        vis = EmbeddingVisualizer(random_state=42)
+        fig, ax = vis.plot(embeddings, labels=labels, method="tsne")
+    """
+
+    def __init__(self, random_state: int | None = None):
+        self.random_state = random_state
+
+    def reduce(
+        self,
+        embeddings: NDArray[np.floating],
+        method: str = "tsne",
+        n_components: int = 2,
+        **kwargs,
+    ) -> NDArray[np.floating]:
+        """Reduce the dimensionality of the embeddings (PCA/t-SNE)."""
+        embeddings = np.asarray(embeddings, dtype=float)
+        if embeddings.ndim != 2:
+            raise ValueError("Embeddings must be a 2D array (n_samples, n_features)")
+        if n_components not in (2, 3):
+            raise ValueError("n_components must be 2 or 3")
+
+        # If the embeddings already have the target dimensionality, do nothing.
+        if embeddings.shape[1] == n_components:
+            return embeddings
+
+        method = method.lower()
+        if method == "pca":
+            reducer = PCA(n_components=n_components, random_state=self.random_state, **kwargs)
+        elif method == "tsne":
+            reducer = TSNE(n_components=n_components, random_state=self.random_state, **kwargs)
+        else:
+            raise ValueError("method must be 'tsne' or 'pca'")
+
+        return reducer.fit_transform(embeddings)
+
+    def plot(
+        self,
+        embeddings: NDArray[np.floating],
+        labels: list[str] | None = None,
+        title: str | None = None,
+        method: str = "tsne",
+        n_components: int = 2,
+        annotate: bool = False,
+        figsize: tuple[int, int] = (6, 6),
+        show: bool = True,
+        **kwargs,
+    ) -> tuple[plt.Figure, plt.Axes]:
+        """Draw a scatter plot of the embeddings.
+
+        Args:
+            embeddings: Array of shape (n_samples, n_features).
+            labels: Labels used to annotate the points (optional).
+            method: "tsne" or "pca".
+            n_components: 2 or 3.
+            annotate: draw the labels next to the points.
+            show: call plt.show() (True by default).
+            **kwargs: extra keyword arguments forwarded to TSNE/PCA.
+
+        Returns:
+            The matplotlib (fig, ax) pair for further customization.
+        """
+        transformed = self.reduce(
+            embeddings, method=method, n_components=n_components, **kwargs
+        )
+
+        fig, ax = plt.subplots(figsize=figsize)
+
+        # Color the points by label (if given) or use a single color.
+        if labels is not None:
+            labels_arr = np.asarray(labels, dtype=str)
+            unique_labels = list(dict.fromkeys(labels_arr.tolist()))
+            cmap = plt.get_cmap("tab10")
+            colors = {lbl: cmap(i % cmap.N) for i, lbl in enumerate(unique_labels)}
+
+        if n_components == 2:
+            x, y = transformed[:, 0], transformed[:, 1]
+
+            if labels is not None:
+                for lbl in unique_labels:
+                    mask = labels_arr == lbl
+                    ax.scatter(
+                        x[mask], y[mask],
+                        c=[colors[lbl]],
+                        alpha=0.7,
+                        label=lbl,
+                        edgecolors="none",
+                    )
+                if not annotate:
+                    ax.legend(title="label")
+            else:
+                ax.scatter(x, y, c="tab:blue", alpha=0.7)
+
+            if labels is not None and annotate:
+                for i, label in enumerate(labels_arr):
+                    ax.text(x[i], y[i], label, fontsize=8)
+
+            ax.set_xlabel("dim 0")
+            ax.set_ylabel("dim 1")
+        else:
+            # Swap the default 2D axes for a 3D one instead of leaving both on the figure.
+            ax.remove()
+            ax = fig.add_subplot(111, projection="3d")
+            x, y, z = transformed[:, 0], transformed[:, 1], transformed[:, 2]
+
+            if labels is not None:
+                for lbl in unique_labels:
+                    mask = labels_arr == lbl
+                    ax.scatter(
+                        x[mask], y[mask], z[mask],
+                        c=[colors[lbl]],
+                        alpha=0.7,
+                        label=lbl,
+                        edgecolors="none",
+                    )
+                if not annotate:
+                    ax.legend(title="label")
+            else:
+                ax.scatter(x, y, z, c="tab:blue", alpha=0.7)
+
+            ax.set_xlabel("dim 0")
+            ax.set_ylabel("dim 1")
+            ax.set_zlabel("dim 2")
+
+        if title is not None:
+            ax.set_title(title)
+        ax.grid(True)
+        if show:
+            plt.show()
+        return fig, ax
+
+
+def plot_embeddings(
+    embeddings: NDArray[np.floating],
+    labels: list[str] | None = None,
+    title: str | None = None,
+) -> None:
+    """Minimal scatter plot 
of embeddings."""
+    if embeddings.shape[1] < 2:
+        raise ValueError("Embeddings must have at least 2 dimensions for plotting")
+
+    x = embeddings[:, 0]
+    y = embeddings[:, 1]
+
+    plt.figure(figsize=(6, 6))
+
+    if labels is not None:
+        labels_arr = np.asarray(labels, dtype=str)
+        unique_labels = list(dict.fromkeys(labels_arr.tolist()))
+        cmap = plt.get_cmap("tab10")
+        colors = {lbl: cmap(i % cmap.N) for i, lbl in enumerate(unique_labels)}
+        for lbl in unique_labels:
+            mask = labels_arr == lbl
+            plt.scatter(x[mask], y[mask], c=[colors[lbl]], alpha=0.7, label=lbl, edgecolors="none")
+        plt.legend(title="label")
+    else:
+        plt.scatter(x, y, c="tab:blue", alpha=0.7)
+
+    if labels is not None:
+        for i, label in enumerate(labels):
+            plt.text(x[i], y[i], label, fontsize=8)
+    if title is not None:
+        plt.title(title)
+    plt.xlabel("dim 0")
+    plt.ylabel("dim 1")
+    plt.grid(True)
+    plt.show()
\ No newline at end of file
diff --git a/demo/task2vec/simple_example.ipynb b/demo/task2vec/simple_example.ipynb
new file mode 100644
index 0000000..ab5be76
--- /dev/null
+++ b/demo/task2vec/simple_example.ipynb
@@ -0,0 +1,571 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "7c7f2bed-6353-4b29-aeed-b08cc9835a1b",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "/home/machenike/bmml/DataMetaMap/src/data_meta_map/wasserstein_embedder.py:8: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. 
in jupyter console)\n", + " from tqdm.autonotebook import tqdm\n" + ] + } + ], + "source": [ + "from data_meta_map.task2vec import task2vec\n", + "from data_meta_map.models import get_model\n", + "from data_meta_map import datasets\n", + "from data_meta_map.task2vec import plot_distance_matrix" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "2f3376cd-a564-4f99-8966-ad6d89b9ecaa", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Embedding mnist\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Caching features: 0%| | 0/14 [00:00" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "plot_distance_matrix(embeddings=embeddings, labels=dataset_names)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "705192a9-5ee8-4f92-aa41-a2c19197d017", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "69f44bbb-354f-4471-9d16-b812909af972", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Files already downloaded and verified\n", + "Files already downloaded and verified\n" + ] + } + ], + "source": [ + "dataset_names = ('mnist', 'cifar10', 'cifar100', 'letters')\n", + "dataset_list = [datasets.__dict__[name](root='../../data')[0] for name in dataset_names] " + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "1a8c4af8-0fdf-44da-9d6c-1b2d86be9567", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Embedding mnist\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Caching features: 0%| | 0/14 [00:00" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], 
+ "source": [ + "plot_distance_matrix(embeddings=embeddings, labels=dataset_names)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/demo/wasserstein/__init__.py b/demo/wasserstein/__init__.py new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/demo/wasserstein/__init__.py @@ -0,0 +1 @@ + diff --git a/demo/wasserstein/simple_example1 (1).ipynb b/demo/wasserstein/simple_example1 (1).ipynb new file mode 100644 index 0000000..43ce608 --- /dev/null +++ b/demo/wasserstein/simple_example1 (1).ipynb @@ -0,0 +1,435 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "a1b2c3d4-0001-0001-0001-000000000001", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T09:58:09.222067Z", + "iopub.status.busy": "2026-04-05T09:58:09.221871Z", + "iopub.status.idle": "2026-04-05T09:58:13.085679Z", + "shell.execute_reply": "2026-04-05T09:58:13.084191Z" + }, + "id": "a1b2c3d4-0001-0001-0001-000000000001", + "outputId": "2e2d4358-aee1-401f-a361-31482ea55081" + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/home/papayiv/misc/DataMetaMap/src/data_meta_map/wasserstein_embedder.py:8: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", + " from tqdm.autonotebook import tqdm\n" + ] + } + ], + "source": [ + "from data_meta_map.wasserstein_embedder import WassersteinEmbedder\n", + "from data_meta_map import datasets\n", + "import torch\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import seaborn as sns" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a1b2c3d4-0001-0001-0001-000000000002", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T09:58:13.089682Z", + "iopub.status.busy": "2026-04-05T09:58:13.089255Z", + "iopub.status.idle": "2026-04-05T09:58:14.550612Z", + "shell.execute_reply": "2026-04-05T09:58:14.549277Z" + }, + "id": "a1b2c3d4-0001-0001-0001-000000000002" + }, + "outputs": [], + "source": [ + "dataset_names = ('mnist', 'cifar10')\n", + "dataset_list = [datasets.__dict__[name](root='../../data')[0] for name in dataset_names]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a1b2c3d4-0001-0001-0001-000000000003", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T09:58:14.554439Z", + "iopub.status.busy": "2026-04-05T09:58:14.554225Z", + "iopub.status.idle": "2026-04-05T09:58:57.378007Z", + "shell.execute_reply": "2026-04-05T09:58:57.377261Z" + }, + "id": "a1b2c3d4-0001-0001-0001-000000000003", + "outputId": "d2481808-0f40-43d4-f57a-409cad45ea1d" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Computing pairwise Bures-Wasserstein distances between classes...\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\r", + "Preprocessing dataset: 0%| | 0/4 [00:00" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "class_offsets = [0]\n", + "for idx, dataset in enumerate(dataset_list):\n", + " _, Y = embedder.preprocess_dataset(dataset, dataset_id=idx)\n", + " num_classes = len(torch.unique(Y))\n", + " 
class_offsets.append(class_offsets[-1] + num_classes)\n",
+    "\n",
+    "\n",
+    "n = len(dataset_names)\n",
+    "D_dataset = np.zeros((n, n))\n",
+    "for i in range(n):\n",
+    "    for j in range(n):\n",
+    "        block = D[class_offsets[i]:class_offsets[i + 1],\n",
+    "                  class_offsets[j]:class_offsets[j + 1]]\n",
+    "        D_dataset[i, j] = block.mean().item()\n",
+    "\n",
+    "fig, ax = plt.subplots(figsize=(6, 5))\n",
+    "sns.heatmap(D_dataset, annot=True, fmt='.3f',\n",
+    "            xticklabels=dataset_names, yticklabels=dataset_names,\n",
+    "            cmap='viridis', ax=ax)\n",
+    "ax.set_title('Wasserstein Dataset Distance Matrix\\n(mean Bures-W₂ over cross-class pairs)')\n",
+    "plt.tight_layout()\n",
+    "plt.show()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "a1b2c3d4-0001-0001-0001-000000000005",
+   "metadata": {
+    "execution": {
+     "iopub.execute_input": "2026-04-05T09:58:57.501816Z",
+     "iopub.status.busy": "2026-04-05T09:58:57.501633Z",
+     "iopub.status.idle": "2026-04-05T09:58:57.785038Z",
+     "shell.execute_reply": "2026-04-05T09:58:57.784137Z"
+    },
+    "id": "a1b2c3d4-0001-0001-0001-000000000005",
+    "outputId": "888e07b1-93c2-4a03-ce38-6728ee07a0a7"
+   },
+   "outputs": [
+    {
+     "data": {
+      "image/png": 
"iVBORw0KGgo(base64 PNG data truncated)
4Lt2qB2pFVBP1Dw2WzeJ6zmaR+ocGa0VUE3Wrzki40sDCKwAAAACuW2SAv+a2qK8ZmZe0MSVDGVkOBXhZFR0cUOxz8JA/Qh4AAACAYhPi460BoVXdXUaFxnBNAAAAAE6sy+hexfH9J+QBAAAAkM12edmUzMxMN1dSsZ07d06S5O1d9CGuDNcEAAAAIC8vL1WqVEnJycny9vaW1Up/UGkyxujcuXNKSkpScHCwM3QXBSEPAAAAgCwWi2rWrKmEhAQdPnzY3eVUWMHBwQoPD7+u5yDkAQAAAJAk+fj4qHHjxgzZdBNvb+/r6sHLRsgDAAAA4GS1WuXn5+fuMnAdGGgLAAAAAB6EkAcAAAAAHoSQBwAAAAAehJAHAAAAAB6k3IS8mTNn6o9//KOqVKmi0NBQDRw4UHv37nVpc+HCBcXHx6t69eoKCAjQoEGDlJiY6NLmyJEj6tOnjypVqqTQ0FA9+uijysrKKs1bKXY7duxQp06d1LlzZ/33v/91dzkAAAAA3KjchLxvvvlG8fHx2rx5s1avXq1Lly6pZ8+eOnv2rLPNI488omXLlmnRokX65ptvdOzYMf3pT39ynrfb7erTp48yMzO1ceNGvfPOO1q4cKGmTp3qjlsqNlOmTNGCBQu0Zs0azZ07VxcuXHB3SQAAAADcxGKMMe4uoiiSk5MVGhqqb775Rp07d1ZqaqpCQkL0wQcfaPDgwZKkPXv2KDIyUps2bVKHDh20fPly9e3bV8eOHVNYWJgkad68eXr88ceVnJwsHx+fa75uWlqagoKClJqaqsDAwBK9x7wYYzR27Fht375dCgrWoUpB+vsrs1TZZtX7f31cjz8wWu3atXNLbQAAAABKRkGzSLndJy81NVWSVK1aNUnS1q1bdenSJfXo0cPZplmzZqpbt64z5G3atEmtWrVyBjxJio2N1YMPPqhdu3apbdu2OV7n4sWLunjxovNxWlpaSd1SgS1btkzpwdXVYt6/tSw5RXYjTdr72+WTQ8foUuolzcw4r8gAf/cWCgAAAKDUlZvhmldyOByaMGGCYmJi1LJlS0nSiRMn5OPjo+DgYJe2YWFhOnHihLPNlQEv+3z2udzMnDlTQUFBzq86deoU890U3vJjSfr6loFalnQ54Lmw2fSD1U+9tv6qNafcH0gBAAAAlK5yGfLi4+O1c+dOffjhhyX+WlOmTFFqaqrz67fffivx18zP7ozzWtYkSlmyyJ5HG7ukTIdR3M4E7c44X5rlATns2bNHMTEx6ty5s4YPH65yOkIcAACg3Ch3IW/s2LH6/PPPtWbNGtWuXdt5PDw8XJmZmUpJSXFpn5iYqPDwcGebq1fbzH6c3eZqvr6+CgwMdPlyp1mHE2UsFsliybedkZTlMHr1cGK+7YCSNmfOHE2dOlXr1q2Tl5eXNm3a5O6SAAAAPFq5CXnZi418+umn+vrrrxUREeFyPioqSt7e3vrqq6+cx/bu3asjR46oY8eOkqSOHTtqx44dSkpKcrZZvXq1AgMD1bx589K5keuQnHnp8hy8Ara3S1qanKLkzEslWRbgwhij+Ph4derUSZ369lNKZGstv2C0JPGMTl6yO+fRAgAAoGSUm4VX4uPj9cEHH+izzz5TlSpVnHPogoKC5O/vr6CgII0cOVITJ05UtWrVFBgYqIcfflgdO3ZUhw4dJEk9e/ZU8+bNde+99+qFF17QiRMn9OSTTyo+Pl6+vr7uvL0C2XAmI+ccvGuwG2ljSoYGhFYtmaKAq1y9MNC+//vMfvjLYVmGj9Mrdj+NZ2EgAACAElNuQt7cuXMlSV27dnU5vmDBAsXFxUmSXn75ZVmtVg0aNEgXL15UbGys5syZ42xrs9n0+eef68EHH1THjh1VuXJlDR8+XDNmzCit27guZ+2OIl2XkVW064CiyF4YyCTl7HU2VquWJp7R8pOpWtgyQt2qu3f4MwAAgCcqt/vkuYs798lbknhGD/xyuNDXvdGiHj15KBW7M87r1u/3KEv
Kd96oRZKP1aIVUU3o0QMAACiggmaRcjMnD1JM1QDZ8l9vJQebRYoODiiZgoCrsDAQAACA+xHyypEQH2/1CwmWrYDtbZL6hwQrxMe7JMsCJLEwEAAAQFlByCtnxtcLk5fVomt16FkkeVktGlcv7BotgeJxPQsDAQAAoPgQ8sqZyAB/LWwZIR+rJc8ePZsuz3da2DKC+U4oNSwMBAAAUDYQ8sqhbtUDtSKqifqHBueYo2ezSP1Dg7UiqgkrF6JUVbYV7cdJgBc/hgAAAIpTudlCAa4iA/w1t0V9zci8pI0pGcrIcijAy6ro4ADm4MEtshcGKsyQTRYGAgAAKH6EvHIuxMeb7RFQJmQvDLQsl/3xcsPCQAAAACWDcVIAig0LAwEAALgfIQ9AsWFhIAAAAPcj5AEoViwMBAAA4F7MyQNQ7FgYCAAAwH0IeQBKDAsDAQAAlD6GawIAAACAByHkAQAAAIAHIeQBAAAAgAch5AEAAACAByHkAQAAAIAHIeQBAAAAgAch5AEAAACAByHkAQAAAIAHIeQBAAAAgAch5AEAAACAByHkAQAAAIAHIeQBAAAAgAch5AEAAACAByHkAQAAAIAHIeQBAAAAgAch5AEAAKBEPf300+rQoYM6dOig9957z93lAB7PYowx7i6iPElLS1NQUJBSU1MVGBjo7nIAAADKvIMHD6pBgwbKzMxUVFSUtm/fLovF4u6ygHKnoFnEqxRrAgAAQAVgjNHYsWP144EEZTZsqlHjxis88YyigyvLZrO5uzzA4xHyAAAAUKzmfrFC39/UXcfuaCi7kaYfS5GOpchiHGr0l2e15+wFRQb4u7tMwGMxJw8AAADFZs2pND3rH6qjtRvIftWkIGOx6mDIDeq19VetOZXmngKBCoCQBwAAgGKxO+O84nYmyGGxypHHnDu7pEyHUdzOBO3OOF+6BQIVBCEPAAAAxWLW4URlOYzMNRZVMZKyHEavHk4sncKACoaQBwAAgOuWnHlJy5JTZC9ge7ukpckpSs68VJJlARUSIQ8AAADXbcOZjBxz8K7FbqSNKRklUxBQgRHyAAAAcN3O2h1Fui4jq2jXAcgbIQ8AAADXrbKtaL9WBnjx6yhQ3PhXBQAAgOsWUzVAtvzXW8nBZpGigwNKpiCUCW+//bY6deqkDh06aMqUKe4up8Ig5AEAAOC6hfh4q19IsGwFbG+T1D8kWCE+3iVZFtzsz3/+s7799ltt3rxZmzZt0u+//+7ukioEL3cXAAAAAM8wvl6Ylp9MlcNhlN8aLBZJXlaLxtULK63SUEqMMRo7dqx+PJCgzIZNNWrceIUHB+mmKv6qWrWqqlWr5u4SKwRCHgAAAIpFZIC/FraMUNzOBGU5TK7bKdh0OeAtbBmhyAD/0i4RJWzuFyv0/U3ddeyOhrIbafqxFOlYimS364YBd+uww6JIdxdZATBcEwAAAMWmW/VArYhqov6hwTnm6NksUv/QYK2IaqJu1QPdUyBKzJpTaXrWP1RHazfIuZ2GzaajdRqo5w97tOZUmlvqq0gsxphC7mhSsaWlpSkoKEipqakKDOSHEwAAQF6SMy9pY0qGMrIcCvCyKjo4gDl4Hmp3xnn12vqrMu0OGUveK/BYJPlYLVoR1YSe3CIoaBZhuCYAAABKRIiPtwaEVnV3GSgFsw4nKsth8g14kmQkZTmMXj2cqLkt6pdKbRURwzUBAAAAFFly5iUtS07JdQ5mbuySlianKDnzUkmWVaER8gAAAAAU2YYzGTnn4F2D3UgbUzJKpiAQ8gAAAAAU3Vm7o0jXZWQV7TpcGyEPAAAAQJFVthUtUgR4EUVKCt9ZAAAAAEUWUzUgx3YZ12KzSNHBASVTEAh5AAAAAIouxMdb/UKCZStge5uk/iHBbKdRggh5AAAAAK7L+Hph8rJadK0OPYskL6tF4+qFlUZZFRYhDwAAAMB1iQzw18KWEfKxWvLs0bPp8kboC1tGsBF6CSPkAQAAALhu3aoHakVUE/U
PDc4xR89mkfqHBmtFVBN1qx7ongIrEC93FwAAAADAM0QG+Gtui/qakXlJG1MylJHlUICXVdHBAczBK0WEPAAAAADFKsTHWwNCq7q7jAqL4ZoAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHkAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHkAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHkAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHkAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHkAAKDCmDZtmrp27aquXbsqMDBQP//8s7tLAoBiZzHGGHcXUZ6kpaUpKChIqampCgwMdHc5AACgCM6dO6f27dtr586d7i4FAAqsoFnEqxRrAgAAKFXGGI0dO1bbt2+XgoI1+pXXZa0UoB0/fK/utw9yd3kAUCIIeQAAwGMtW7ZM6cHV1WLev7UsOUWP/nZG0hkpsKas3Qcqc9chja8XpsgAf3eXCgDFhjl5AADAYy0/lqSvbxmoZUkpsl81QcUhi5YlpajX1l+15lSaewoEgBJAyAMAAB5pd8Z5LWsSpSxZZM+jjV1SpsMobmeCdmecL83yAKDEEPIAAIBHmnU4UcZikSyWfNsZSVkOo1cPJ5ZOYQBQwgh5AADA4yRnXtKy5JQ8e/CuZpe0NDlFyZmXSrIsACgVhDwAAOBxNpzJyDEH71rsRtqYklEyBQFAKSLkAQAAj3PW7ijSdRlZRbsOAMoSQh4AAPA4lW1F+xUnwItfjQCUf/wkAwAAHiemaoBs+a+3koPNIkUHB5RMQQBQigh5AADA44T4eKtfSLBsBWxvk9Q/JFghPt4lWRYAlApCHgAA8Ejj64XJy2rRtTr0LJK8rBaNqxdWGmUBQIkj5AEAAI8UGeCvhS0j5GO15NmjZ5PkY7VoYcsIRQb4l2Z5AFBiCHkAAMBjdaseqBVRTdQ/NDjHHD2bReofGqwVUU3UrXqgewoEgBLg5e4CAAAASlJkgL/mtqivGZmXtDElQxlZDgV4WRUdHMAcPAAeiZAHAAAqhBAfbw0IreruMgCgxDFcEwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAKWiV69emjx5srvLADweIQ8AAAAlbsOGDe4uAagwCHkAAAAoVsYYxcfHq2Ov2xQV/4jm7T2kJ79YrXvHjnN3aUCFYDHGGHcXUZ6kpaUpKChIqampCgwMdHc5AAAAZc6cz5drwelzOla3oexX/KZplVGdo4e1cGCsIgP83VcgUE4VNIvQkwcAAIBis+ZUmp71D9XR2g1cAp4kOWTRkZp11Wvrr1pzKs09BQIVACEPAAAAxWJ3xnnF7UyQw2KVw2LJtY2xWpVpdyhuZ4J2Z5wv5QqBioGQBwAAgGIx63CishxGJo+Al81YLMpyGL16OLGUKgMqFkIeAAAArlty5iUtS06RvYDt7ZKWJqcoOfNSSZYFVEiEPAAAAFy3DWcycszBuxa7kTamZJRMQUAFRsgDAADAdTtrdxTpuoysol0HIG/lKuStW7dO/fr1U61atWSxWLRkyRKX88YYTZ06VTVr1pS/v7969Oihffv2ubQ5ffq07rnnHgUGBio4OFgjR45URgZ/QQIAALgelW1F+7UywKtc/ToKlAvl6l/V2bNn1bp1a82ePTvX8y+88IJeffVVzZs3T999950qV66s2NhYXbhwwdnmnnvu0a5du7R69Wp9/vnnWrdunUaPHl1atwAAAOCRYqoGyJb/eis52CxSdHBAyRQEVGDldjN0i8WiTz/9VAMHDpR0uRevVq1amjRpkiZPnixJSk1NVVhYmBYuXKihQ4dq9+7dat68uX744Qe1a9dOkrRixQrddttt+v3331WrVq1rvi6
boQMAAOTugV2HtCypYIuv2CT1Dw3W3Bb1S7gqwHNUuM3QExISdOLECfXo0cN5LCgoSDfddJM2bdokSdq0aZOCg4OdAU+SevToIavVqu+++67UawYAAPAk4+uFyctq0bU69CySvKwWjasXVhplARWOx4S8EydOSJLCwlx/WISFhTnPnThxQqGhoS7nvby8VK1aNWebq128eFFpaWkuXwAAAMgpMsBfC1tGyMdqkS2PNjZJPlaLFraMUGSAf2mWB1QYHhPySsrMmTMVFBTk/KpTp467SwIAACizulUP1IqoJuofGpxjjp7NcnmI5oqoJupWnWkvQEnxcncBxSU8PFySlJiYqJo1azqPJyYmqk2bNs42SUlJLtdlZWXp9OnTzuuvNmXKFE2cONH5OC0tjaAHAACQj8gAf81tUV8zMi9pY0qGMrIcCvCyKjo4QCE+3u4uD/B4HtOTFxERofDwcH311VfOY2lpafruu+/UsWNHSVLHjh2VkpKirVu3Ott8/fXXcjgcuummm3J9Xl9fXwUGBrp8AQAA4NpCfLw1ILSq7qlVXQNCqxLwgFJSrnryMjIytH//fufjhIQEbdu2TdWqVVPdunU1YcIEPfPMM2rcuLEiIiL01FNPqVatWs4VOCMjI9WrVy+NGjVK8+bN06VLlzR27FgNHTq0QCtrAgAAAEBZV65C3pYtW9StWzfn4+xhlMOHD9fChQv12GOP6ezZsxo9erRSUlJ08803a8WKFfLz83Ne8/7772vs2LG65ZZbZLVaNWjQIL366qulfi8AAAAAUBLK7T557sI+eQAAAEDFYIzRX/7yF33//fey2+05OpBKW0GzSLnqyQMAAACA0rJ48eIc636UB4Q8AAAAANDlnruxY8dq+/btUlCwLjWOVFi9CL0T/4huvSFMz//lCXeXWCCEPAAAAACQtGzZMqUHV1eLef/WsuQU2Y10SJJad9S7DocOfLNVf4tqrsgAfzdXmj+P2UIBAAAAAK7H8mNJ+vqWgVqWdDngXclYrdpot6rX1l+15lSaewosIEIeAAAAgApvd8Z5LWsSpSxZZM+jjcNiUabDKG5ngnZnnC/V+gqDkAcAAACgwpt1OFHGYpEslnzbGUlZDqNXDyeWTmFFQMgDAAAAUKElZ166PAevgO3tkpYmpyg581JJllVkhDwAAAAAFdqGMxk55uBdi91IG1MySqag60TIAwAAAFChnbU7inRdRlbRritphDwAAAAAFVplW9FiUYBX2YxTZbMqAAAAACglMVUDZMt/vZUcbBYpOjigZAq6ToQ8AAAAABVaiI+3+oUEy1bA9jZJ/UOCFeLjXZJlFRkhDwAAAECFN75emLysFl2rQ88iyctq0bh6YaVRVpEQ8gAAAABUeJEB/lrYMkI+VkuePXo2ST5Wixa2jFBkgH9pllcohDwAAAAAkNSteqBWRDVR/9DgHHP0bBapf2iwVkQ1Ubfqge4psIC83F0AAAAAAJQVkQH+mtuivmZkXtLGlAxlZDkU4GVVdHBAmZ2DdzVCHgAAAABcJcTHWwNCq7q7jCJhuCYAAAAAeBBCHgAAAAB4EEIeAAAAAHgQQh4AAAAAeBBCHgAAAAB4EEIeAAAAAHgQQh4AAAAAeBBCHgAAAAB4EEIeAAAAAHgQQh4AAAAAeBBCHgAAAAB4EEIeAAAAUA5Nnz5drVq1UteuXTVp0iR3l4MyxMvdBQAAAAAompkzZ6pv377uLgNlDD15AAAAQDlgjFF8fLw69rpNUfGP6PvK1TR58TLF3NZXX3/9tbvLQxlCTx4AAABQDsz9YoW+v6m7jt3RUHYjHZWkdp2VLmnE1k1a1radbqwa6OYqURbQkwcAAACUcWtOpelZ/1Adrd1AduN6zi7pYtv26rvtoNacSnNLfShbCHkAAABAGbY747zidibIYbHKYbHk3shq0yVJcTsTtDvjfKnWh7KHkAcAAACUYbMOJyrLYWTyCnj/x0jKchi9ejixdApDmUXIAwA
AAMqo5MxLWpacInsB29slLU1OUXLmpZIsC2UcIQ8AAAAoozacycgxB+9a7EbamJJRMgWhXCDkAQAAAGXUWbujSNdlZBXtOngGQh4AAABQRlW2Fe3X9QAvfs2vyHj3AQAAgDIqpmqAbPmvt5KDzSJFBweUTEEoFwh5AAAAQBkV4uOtfiHBshWwvU1S/5Bghfh4l2RZKOMIeQAAAEAZNr5emLysFl2rQ88iyctq0bh6YaVRFsowQh4AAABQhkUG+Gthywj5WC159ujZJPlYLVrYMkKRAf6lWR7KIEIeAAAAUMZ1qx6oFVFN1D80OMccPZtF6h8arBVRTdSteqB7CkSZ4uXuAgAAAABcW2SAv+a2qK8ZmZe0MSVDGVkOBXhZFR0cwBw8uCDkAQAAAOVIiI+3BoRWdXcZKMMYrgkAAAAAHoSQBwAAAAAehJAHAAAAAB6EkAcAAAAAHoSQBwAAAAAepFAh7/z581q/fr1++eWXHOcuXLigd999t9gKAwAAAAAUXoFD3q+//qrIyEh17txZrVq1UpcuXXT8+HHn+dTUVI0YMaJEigQAAAAAFEyBQ97jjz+uli1bKikpSXv37lWVKlUUExOjI0eOlGR9AAAAAIBCKHDI27hxo2bOnKkaNWqoUaNGWrZsmWJjY9WpUycdPHiwJGsEAAAAABRQgUPe+fPn5eXl5XxssVg0d+5c9evXT126dNGvv/5aIgUCAAAAAArO69pNLmvWrJm2bNmiyMhIl+Ovv/66JKl///7FWxkAAAAAoNAK3JN3++236z//+U+u515//XXdddddMsYUW2EAAAAAgMKzGJJZoaSlpSkoKEipqakKDAx0dzkAAAAAKoiCZhE2QwcAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD1LokLdu3TplZWXlOJ6VlaV169YVS1EAAAAAgKIpdMjr1q2bTp8+neN4amqqunXrVixFAQAAAACKptAhzxgji8WS4/ipU6dUuXLlYikKAAAAAFA0XgVt+Kc//UmSZLFYFBcXJ19fX+c5u92u7du3Kzo6uvgrBAAAAAAUWIFDXlBQkKTLPXlVqlSRv7+/85yPj486dOigUaNGFX+FAAAAAIACK3DIW7BggSSpfv36mjx5MkMzAQAAAKAMshhjjLuLKE/S0tIUFBSk1NRUBQYGurscAAAAABVEQbNIoRdeSUxM1L333qtatWrJy8tLNpvN5QsAAAAA4D4FHq6ZLS4uTkeOHNFTTz2lmjVr5rrSJgAAAADAPQod8tavX69vv/1Wbdq0KYFyAAAAAADXo9DDNevUqSOm8QEAAABA2VTokPfKK6/oiSee0KFDh0qgHAAAAADA9Sj0cM0hQ4bo3LlzatiwoSpVqiRvb2+X86dPny624gAAAAAAhVPokPfKK6+UQBkAAAAAgOJQ6JA3fPjwkqgDAAAAAFAMCj0nT5IOHDigJ598UnfddZeSkpIkScuXL9euXbuKtTgAAAAAQOEUOuR98803atWqlb777jt98sknysjIkCT9/PPPmjZtWrEXCAAAAAAouEKHvCeeeELPPPOMVq9eLR8fH+fx7t27a/PmzcVaHAAAAACgcAod8nbs2KHbb789x/HQ0FCdPHmyWIoCAAAAABRNoUNecHCwjh8/nuP4Tz/9pBtuuKFYigIAAAAAFE2hQ97QoUP1+OOP68SJE7JYLHI4HNqwYYMmT56sYcOGlUSNAAAAAIACKnTIe+6559SsWTPVqVNHGRkZat68uTp37qzo6Gg9+eSTJVEjAAAAAKCALMYYU5QLjxw5op07dyojI0Nt27ZV48aNi7u2MiktLU1BQUFKTU1VYGCgu8sBAAAAUEEUNIsUejP0bHXr1lXdunWLejkAAAAAoAQUOuTZ7XYtXLhQX331lZKSkuRwOFzOf/3118VWHAAAAACgcAod8saPH6+FCxeqT58+atmypSwWS0nUBQAAAAAogkKHvA8//FAfffSRbrvttpKoBwAAAAB
wHQq9uqaPj48aNWpUErUAAAAAAK5ToUPepEmTNGvWLBVxUU4AAAAAQAkq9HDN9evXa82aNVq+fLlatGghb29vl/OffPJJsRUHAAAAACicQoe84OBg3X777SVRCwAAAADgOhU65C1YsKAk6gAAAAAAFIMib4aenJysvXv3SpKaNm2qkJCQYisKAAAAAFA0hV545ezZs7rvvvtUs2ZNde7cWZ07d1atWrU0cuRInTt3riRqBAAAAAAUUKFD3sSJE/XNN99o2bJlSklJUUpKij777DN98803mjRpUknUCAAAAAAoIIsp5F4INWrU0Mcff6yuXbu6HF+zZo3uvPNOJScnF2d9ZU5aWpqCgoKUmpqqwMBAd5cDAAAAoIIoaBYpdE/euXPnFBYWluN4aGgowzUBAAAAwM0KHfI6duyoadOm6cKFC85j58+f19NPP62OHTsWa3EAAAAAgMIp9Oqas2bNUmxsrGrXrq3WrVtLkn7++Wf5+flp5cqVxV4gAAAAAKDgCj0nT7o8ZPP999/Xnj17JEmRkZG655575O/vX+wFljXMyQMAAADgDgXNIkXaJ69SpUoaNWpUkYsDAAAAAJSMIoW8vXv36rXXXtPu3bslXe7JGzt2rJo1a1asxQEAAAAACqfQC68sXrxYLVu21NatW9W6dWu1bt1aP/74o1q1aqXFixeXRI0AAAAAgAIq9Jy8hg0b6p577tGMGTNcjk+bNk3vvfeeDhw4UKwFljXMyQMAAADgDiW2T97x48c1bNiwHMf//Oc/6/jx44V9OgAAAABAMSp0yOvatau+/fbbHMfXr1+vTp06FUtRAAAAAICiKfTCK/3799fjjz+urVu3qkOHDpKkzZs3a9GiRXr66ae1dOlSl7YAAAAAgNJT6Dl5VmvBOv8sFovsdnuRiirLmJMHAAAAwB1KbJ88h8NxXYUBAAAAAEpOoefkAQAAAADKriJthv7DDz9ozZo1SkpKytGz989//rNYCgMAAAAAFF6hQ95zzz2nJ598Uk2bNlVYWJgsFovz3JX/HwAAAABQ+god8mbNmqW3335bcXFxJVBO6Zk9e7ZefPFFnThxQq1bt9Zrr72m9u3bu7ssAAAAALguhZ6TZ7VaFRMTUxK1lJr//ve/mjhxoqZNm6Yff/xRrVu3VmxsrJKSktxdGgAAAABcl0KHvEceeUSzZ88uiVpKzT//+U+NGjVKI0aMUPPmzTVv3jxVqlRJb7/9trtLAwAAAIDrUujhmpMnT1afPn3UsGFDNW/eXN7e3i7nP/nkk2IrriRkZmZq69atmjJlivOY1WpVjx49tGnTphztL168qIsXLzofp6WllUqdAAAAAFAUhe7JGzdunNasWaMmTZqoevXqCgoKcvkq606ePCm73a6wsDCX42FhYTpx4kSO9jNnznS5vzp16pRWqQAAAABQaIXuyXvnnXe0ePFi9enTpyTqKXOmTJmiiRMnOh+npaUR9AAAAACUWYUOedWqVVPDhg1LopZSUaNGDdlsNiUmJrocT0xMVHh4eI72vr6+8vX1La3yAAAAAOC6FHq45vTp0zVt2jSdO3euJOopcT4+PoqKitJXX33lPOZwOPTVV1+pY8eObqwMAAAAAK5foXvyXn31VR04cEBhYWGqX79+joVXfvzxx2IrrqRMnDhRw4cPV7t27dS+fXu98sorOnv2rEaMGOHu0gAAAADguhQ65A0cOLAEyihdQ4YMUXJysqZOnaoTJ06oTZs2WrFiRY7FWAAAAACgvLEYY4y7iyhP0tLSFBQUpNTUVAUGBrq7HAAAAAAVREGzSKF78rJt3bpVu3fvliS1aNFCbdu2LepTAQAAAACKSaFDXlJSkoYOHaq1a9cqODhYkpSSkqJu3brpww8/VEhISHHXCAAAAAAooEKvrvnwww8rPT1du3bt0unTp3X69Gnt3LlTaWlpGjduXEnUCAAAAAAooELPyQsKCtL//vc//fGPf3Q5/v3336tnz55KSUkpzvr
KHObkAQAAAHCHgmaRQvfkORyOHNsmSJK3t7ccDkdhnw4AAAAAUIwKHfK6d++u8ePH69ixY85jR48e1SOPPKJbbrmlWIsDAAAAABROoUPe66+/rrS0NNWvX18NGzZUw4YNFRERobS0NL322mslUSMAAAAAoIAKvbpmnTp19OOPP+p///uf9uzZI0mKjIxUjx49ir04AAAAAEDhsBl6IbHwCgAAAAB3KPaFV77++ms1b95caWlpOc6lpqaqRYsW+vbbb4tWLQAAAACgWBQ45L3yyisaNWpUrokxKChIY8aM0T//+c9iLQ4AAAAAUDgFDnk///yzevXqlef5nj17auvWrcVSFAAAAACgaAoc8hITE3PdHy+bl5eXkpOTi6UoAAAAAEDRFDjk3XDDDdq5c2ee57dv366aNWsWS1EAAAAAgKIpcMi77bbb9NRTT+nChQs5zp0/f17Tpk1T3759i7U4AAAAAEDhFHgLhcTERP3hD3+QzWbT2LFj1bRpU0nSnj17NHv2bNntdv34448KCwsr0YLdjS0UAAAAALhDQbNIgTdDDwsL08aNG/Xggw9qypQpys6GFotFsbGxmj17tscHPAAAAAAo6woc8iSpXr16+vLLL3XmzBnt379fxhg1btxYVatWLan6AAAAAACFUKiQl61q1ar64x//WNy1AAAAAACuU4EXXgEAAAAAlH2EPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAghDwAAAAA8CCEPAAAAADwIIQ8AAAAAPAg5SbkPfvss4qOjlalSpUUHByca5sjR46oT58+qlSpkkJDQ/Xoo48qKyvLpc3atWv1hz/8Qb6+vmrUqJEWLlxY8sUDAAAAQCkpNyEvMzNTd9xxhx588MFcz9vtdvXp00eZmZnauHGj3nnnHS1cuFBTp051tklISFCfPn3UrVs3bdu2TRMmTND999+vlStXltZtAAAAAECJshhjjLuLKIyFCxdqwoQJSklJcTm+fPly9e3bV8eOHVNYWJgkad68eXr88ceVnJwsHx8fPf744/riiy+0c+dO53V
Dhw5VSkqKVqxYUaDXT0tLU1BQkFJTUxUYGFhs9wUAAAAA+SloFik3PXnXsmnTJrVq1coZ8CQpNjZWaWlp2rVrl7NNjx49XK6LjY3Vpk2b8nzeixcvKi0tzeULAAAAAMoqjwl5J06ccAl4kpyPT5w4kW+btLQ0nT9/PtfnnTlzpoKCgpxfderUKYHqAQAAAKB4uDXkPfHEE7JYLPl+7dmzx50lasqUKUpNTXV+/fbbb26tBwAAAADy4+XOF580aZLi4uLybdOgQYMCPVd4eLi+//57l2OJiYnOc9n/m33syjaBgYHy9/fP9Xl9fX3l6+tboBoAAAAAwN3cGvJCQkIUEhJSLM/VsWNHPfvss0pKSlJoaKgkafXq1QoMDFTz5s2dbb788kuX61avXq2OHTsWSw0AAAAA4G7lZk7ekSNHtG3bNh05ckR2u13btm3Ttm3blJGRIUnq2bOnmjdvrnvvvVc///yzVq5cqSeffFLx8fHOnrgHHnhABw8e1GOPPaY9e/Zozpw5+uijj/TII4+489YAAAAAoNiUmy0U4uLi9M477+Q4vmbNGnXt2lWSdPjwYT344INau3atKleurOHDh+vvf/+7vLz+f4fl2rVr9cgjj+iXX35R7dq19dRTT11zyOiV2EIBAAAAgDsUNIuUm5BXVhDyAAAAALhDhdsnDwAAAABAyAMAAAAAj0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD0LIAwAAAAAPQsgDAAAAAA9CyAMAAAAAD+Ll7gIAoDxKSEjQsGHDZLVaFRAQoA8++EBBQUHuLgsAAICePAAoiuDgYC1btkzffPON+vfvrzfffNPdJQEAAEgi5AFAntauXavY2Fjdfvvtat26td567yN1HDJWrQaM0YpfknTJ5i9J8vHxkdXKj1MAAFA2MFwTAPLhcDg0c+47mvjmSj2zw0+K6C1JmrJsv5784oB6NK2uH95drP8tftfNlQIAAFzGn54BIB+hrbtqwOsbtOesv2Rx/ZFpdxit/CVJ6R0f1I6TdjdVCAAA4IqQBwB5+C3doU22VsrMcsghS+6NLFZlOaTR727RnhNppVsgAABALgh5AJCHzw9kysgic412RlJmll1z1hwojbIAAADyRcgDgFwkp1/UliSHjKVgPyaNLPpix3GdzLhYwpUBAADkj5AHALnYfPCU7I5r9eG5sjuMNh88VUIVAQAAFAyrawJALs5ezCrSdRkXinZdSYmJiZGXl5eysrL05ptvqnnz5u4uCQAAlDBCHgDkorJv0X48BviVrR+ra9askY+Pj9auXat//vOfeuutt9xdEgAAKGEM1wSAXHRoUF02ax4raubBZrWoQ4PqJVRR7vLbsP0/G/Yq9eLlIafp6elq2bJlqdZWXKpUqaKuXbuqa9eu2rFjh7vLAQCgzCtbf3IGgDIipIqvbmsZri93nijQ3Dyb1aI+rWqqRoBvKVTnKt8N2z/fL5/EXcr4YbGWvjuv1GsrDk2bNtXatWvdXQYAAOUGPXkAkIf47o3kZbXktUOek0WSl9Wih7o1LI2ycsh3w3YjZYa3lF+/qXr4ubluqe9artUbefBYsjp37qwHH3xQFy5ccHe5AACUeYQ8AMhDs/BAzR/WTj5e1jyHbtqsFvl4WTV/WDs1Cw8s5QoLtmG73WGU5ZCORvQpsxu2Z/dG3jD4ST2zw1/HI3orPbK/pizbr6rD56j1A6/INyxCs2fPdnepAACUeYQ8AMhHlyYh+mxsjPq0qpkj6GUP0fxsbIy6NAlxS32F2bDdYvMqsxu2X6s38sudJ/RFZkut2ZPopgoBACg/mJMHANfQLDxQr97VVlP7Ndfmg6eUcSFLAX5e6tCgulvm4GUr7IbtDiN
9seO4pvZr7ta6r5bdG2myHDL59EbaZbS7eiftOZHmll5TAADKC3ryAKCAagT4qu+NtTS0fV31vbGW24OSp2zYXtDeSMkiWaxltjcSAICygpAHAOWUJ2zYXtjeSPv/9UaezLhYwpUBAFB+EfIA5GnWrFmKiYlR//79lZZWNhfsqMg8YcN2T+mNBACgLCHkAcjVyZMntXTpUq1fv15DhgxhVcMyqLxs2J4fT+iNBACgrCk7f84FUOrWrl2rmTNnqlKlSjp48KAefvSv+teydcq4cEm39bxF7TvfIovFol69emn48OHuLhdXKU8btufFE3ojAQAoa/ivJFDBZe9PNvHNlXpmh58U0VuS9N/fJItuVMYHP+qBzhE6ffq0mytFbuK7N9KqXxLlcJh8Fy5x94btecnujSzMkM2y1hsJAEBZw3BNoILLb38yI4u+3HlCt8/dJO+6rd1UIfJTHjZsz092b2RBh52Wxd5IAADKGkIeUIFl70+WmeWQI5/9yTLtDv1e/zbtOcHiK2VRWd+w/VriuzeSl9WSxyfw/yurvZEAAJQ1FmNM4ZY1q+DS0tIUFBSk1NRUBQaWrb+IA4U15JUV+v74pQItX2+zSH1urKVX72pbCpWhqE5mXCxTG7YX1De/Jmv0u1uU5TC5Dt20WS3yslo0f1i7MhtWAQAoaQXNIoS8QiLkwVMkp19Uh5lfFXou1Hd/uaVchAaUP3tOpGnOmgP6Ysdxl89ldm/kQ90alrnhpgAAlKaCZhEWXgEqqOvZn6zvjbVKqCpUZM3CA/XqXW01tV/zctkbCQBAWUHIAyoo9idDWVUjwJc/JAAAcB1YeAWooNifDAAAwDMR8oAKKnt/ssJgfzIAAICyj5AHVFDsTwYAAOCZCHlABcb+ZAAAAJ6HkAdUYM3CAzV/WDv5eFnz7NGzWS3y8bJq/rB2LF8PAABQDhDygAquS5MQfTY2Rn1a1cwR9LKHaH42NoYNqAEAAMoJNkMvJDZDhyc7mXGR/ckAN9i0aZOio6OVnp6ugIAAd5cDACij2AwdQKGxPxngHq+++qqioqLcXQYAwEMQ8gAAKEFr167VzJkzValSJR08eFAPP/pX/WvZOmVcuKS/PDZRgRcSdeONN+r48ePuLhUA4CEIeQAAlDCHw6GZc9/RxDdX6pkdflJEb0nSlGX7JeNQrxa3KNN/q5urBAB4ChZeAQCghIW27qoBr2/QnrP+kuWq//RarFq956SOtbpXGw6cdk+BAACPQsgDAKAE/Zbu0CZbK2VmOeTIY1dKu8PIWKx64P2ftOdEWilXCADwNIQ8AECp+8tf/qIOHTqoQ4cOWr9+vbvLKVGfH8iUkUXXWsraYrHKavPSnDUHSqUuAIDnIuQBAErV6dOntWbNGm3evFmLFi3SjBkz3F1SiUlOv6gtSQ6Zq4do5sFupC92HNfJjIslXBkAwJMR8gAAxW7t2rWKjY3V7bffrtatW+ut9z5SxyFj1WrAGH25M1GBoTfo0qVLOnPmjGrUqOHuckvM5oOnZHcUbjtau8No88FTJVQRAKAiYHVNAECJyGtFySe/PChLZJwaD3tOmduW6YsP3nRzpSXn7MWsIl2XcaFo17lbQkKCRowYIelyj23Dhg316aefurkqAKh4CHkAgBKRvaLkpSx/yeK64IiRRbaIPyqgwR/1wPRZ+u6zhe4psoRV9i3af2YD/Mrnf54jIiK0du1aSdL06dMVERHh3oIAoIJiuCYAlCE7d+5UbGysunXrprlz57q7nCIr6IqSl+xGiU0GeuyKkh0aVJfNmvv958VmtahDg+olVNH1y28o7n827FVy+uX5hEuXLtXAgQPdWywAVFDl80+FAOChpkyZokWLFikwMNDdpVyXgq4oaSTnipKv3tW2NEorVSFVfHVby3B9ufNEgebm2awW9WlVUzUCfEuhuqLLb3P3J784oJi6lVS9QSsFBQW5uVIAqJjoyQOAUpRfL8irS9brvPHSPffco9jYWO3
Zs8fd5RZJYVeUdMji0StKxndvJC+rJY/+zP/PIsnLatFD3RqWRlnXJb/N3e0Oo28PZSih8R365tdkN1UIABUbIQ8ASll2L8gNg5/UMzv8dTyit9Ij++ufm1O1r/kI3TD4ST385ExNnDjR3aUWCStKumoWHqj5w9rJx8ua59BNm9UiHy+r5g9rp2bhZbsXtyBDcY0sshuLRr+7xWOH4gJAWUbIA4BSll8viMVq0+q9pzRpZaKO2sv2L/t5qWgrShZElyYh+mxsjPq0qpkj6GUP0fxsbIy6NAlxU4UFV5ihuFkOw+buAOAGzMkDgFKU3Qtishwy+SxI4nAYZba+S3tOpJX5np2rVbQVJQuqWXigXr2rrab2a67NB08p40KWAvy81KFB9TI/By9boTd3dxh9seO4pvZrXm7uEQA8AT15AFCKCtMLYvm/BUnKG09cUbI41QjwVd8ba2lo+7rqe2OtchV+GIoLAOUDIQ8ASkmhFyQxKpcLkmSvKFnQoFdeVpQEQ3EBoLwg5AFAKalIvSCeuKIkGIoLAOUFIQ8ASklF6gXxtBUlcRlDcQGgfCDkAUApqWi9IJ60oiQuYyguAJQP5fM3BwAoh7J7QQozZLO894J4woqScBXfvZFW/ZIoh8Pku4AQQ3EBwH3oyQOAUlKRe0HK84qSxWnixInq3LmzBg0apPT0dHeXUyQMxQWAso+QBwCliAVJKq4tW7YoOTlZ69at05AhQzR37lx3l1RkDMUFgLLNYowp3FJvFVxaWpqCgoKUmpqqwED+Ogmg8L75NVmj392iLIfJdeimzWqRl9Wi+cPa8UtyObN27VrNnDlTlSpV0sGDB/Xwo3/Vv5atU8aFS+rXu6f803/TU49O0P79+zVhwgR9/vnn7i75up3MuMhQXAAoJQXNIszJA4BSlt0LMmfNAX2x47hL0MvuBXmoW0OGuZVTDodDM+e+o4lvrtQzO/ykiN6SpA8OSTINlfTBjwo5+ZPOnDnj1jqLS/ZQXABA2UHIAwA3YEESzxXauqsGvL5Bl7L8JctVA3MtVi37+XdZFaIGDaLcUyAAwOMR8gDAjegF8Sy/pTu0ydZKJsshk9fMS4tNDhkdqt1Te06k0WMLACh2LLwCAEAx+fxApows+W4tcJlFxmLVnDUHSqEqAEBFQ8gDAKAYJKdf1JYkh4ylYP9ptTuMvthxXCczLpZwZQCAioaQBwBAMdh88FShNrqXLge9zQdPlVBFAICKipAHAEAxOHsxq0jXZVwo2nUAAOSFkAcAQDGo7Fu0tcwC/FgDDQBQvAh5AAAUgw4NqstmzWNFzTzYrBZ1aFC9hCoCAFRUhDwAAIpBSBVf3dYyvMBBL3vje/ZFBAAUN0IeAADFJL57I3lZLXntkOdkkeRlteihbg1LoywAQAVDyAMAoJg0Cw/U/GHt5ONlzbNHz2a1yMfLqvnD2rEROgCgRBDyAAAoRl2ahOizsTHq06pmjqCXPUTzs7Ex6tIkxE0VAgA8ncUYU7hNfSq4tLQ0BQUFKTU1VYGB/AUWAJC3kxkXtfngKWVcyFKAn5c6NKjOHDwAQJEVNIuwbjMAACWkRoCv+t5Yy91lAAAqGIZrAgAAAIAHIeQBAAAAgAch5AEAAACAByHkAQAAAIAHIeQBAACP0717dwUHB+vzzz93dykAUOpYXRMAAHic9957T/Pnz3d3GQDgFvTkAQCAcmft2rWKjY3V7bffrtatW+ut9z5SxyFj1WrAGP1nw155V6nu7hIBwG3oyQMAAOWSw+HQzLnvaOKbK/XMDj8porckacqy/XryiwOqowaqedbNRQKAG9CTBwAAyqXQ1l014PUN2nPWX7K4/kpjdxgddlTTi9ukb35Ndk+BAOAmhDwAAFDu/Jbu0CZbK2VmOeSQJdc2RhbZjTT63S3acyKtlCsEAPch5AEAgHLn8wOZMrLIXKOdkUWZWXbNWXOgVOoCgLKAkAcAAMqV5PSL2pLkkLEU7NcYI4u+2HFcJzM
ulnBlAFA2EPIAAEC5svngKdkd1+rDc2V3GG0+eKqEKgKAsoWQBwAAypWzF7OKdF3GhaJdBwDlDSEPAACUK5V9i7YDVIAfO0cBqBgIeQAAoFzp0KC6bNbcV9TMi81qUYcGbJAOoGIg5AEAgHIlpIqvbmsZXuCgZ7Na1KdVTdUI8C3hygCgbCDkAQCAcie+eyN5WS157JD3/1kkeVkteqhbw9IoCwDKhHIR8g4dOqSRI0cqIiJC/v7+atiwoaZNm6bMzEyXdtu3b1enTp3k5+enOnXq6IUXXsjxXIsWLVKzZs3k5+enVq1a6csvvyyt2wAAAMWkWXig5g9rJx8va549ejarRT5eVs0f1k7NwgNLuUIAcJ9yEfL27Nkjh8OhN954Q7t27dLLL7+sefPm6S9/+YuzTVpamnr27Kl69epp69atevHFFzV9+nTNnz/f2Wbjxo266667NHLkSP30008aOHCgBg4cqJ07d7rjtgAAwHXo0iREn42NUZ9WNXMEvewhmp+NjVGXJiFuqhAA3MNijCncRjNlxIsvvqi5c+fq4MGDkqS5c+fqr3/9q06cOCEfHx9J0hNPPKElS5Zoz549kqQhQ4bo7Nmz+vzzz53P06FDB7Vp00bz5s0r0OumpaUpKChIqampCgzkr4IAAJQFJzMuavPBU8q4kKUAPy91aFCdOXgAPE5Bs0i56MnLTWpqqqpVq+Z8vGnTJnXu3NkZ8CQpNjZWe/fu1ZkzZ5xtevTo4fI8sbGx2rRpU+kUDQAASkSNAF/1vbGWhravq7431iLgAajQymXI279/v1577TWNGTPGeezEiRMKCwtzaZf9+MSJE/m2yT6fm4sXLyotLc3lCwAAAADKKreGvCeeeEIWiyXfr+yhltmOHj2qXr166Y477tCoUaNKvMaZM2cqKCjI+VWnTp0Sf00AAAAAKCovd774pEmTFBcXl2+bBg0aOP//sWPH1K1bN0VHR7ssqCJJ4eHhSkxMdDmW/Tg8PDzfNtnnczNlyhRNnDjR+TgtLY2gBwAAAKDMcmvICwkJUUhIwVa8Onr0qLp166aoqCgtWLBAVqtrJ2THjh3117/+VZcuXZK3t7ckafXq1WratKmqVq3qbPPVV19pwoQJzutWr16tjh075vm6vr6+8vVlXD8AAACA8qFczMk7evSounbtqrp16+qll15ScnKyTpw44TKX7u6775aPj49GjhypXbt26b///a9mzZrl0gs3fvx4rVixQv/4xz+0Z88eTZ8+XVu2bNHYsWPdcVsAAAAAUOzc2pNXUKtXr9b+/fu1f/9+1a5d2+Vc9g4QQUFBWrVqleLj4xUVFaUaNWpo6tSpGj16tLNtdHS0PvjgAz355JP6y1/+osaNG2vJkiVq2bJlqd4PAAAAAJSUcrtPnruwTx4AAAAAd/D4ffIAAAAAADkR8gAAAADAgxDyAAAAAMCDEPIAAAAAwIMQ8gAAAADAgxDyAAAAAMCDEPIAAAAAwIMQ8gAAAADAgxDyAAAAAMCDEPIAAAAAwIN4ubuA8sYYI0lKS0tzcyUAAAAAKpLsDJKdSfJCyCuk9PR0SVKdOnXcXAkAAACAiig9PV1BQUF5nreYa8VAuHA4HDp27JiqVKkii8WSa5u0tDTVqVNHv/32mwIDA0u5QlyN96Ps4T0pW3g/yhbej7KH96Rs4f0oW3g/SpcxRunp6apVq5as1rxn3tGTV0hWq1W1a9cuUNvAwEA+7GUI70fZw3tStvB+lC28H2UP70nZwvtRtvB+lJ78evCysfAKAAAAAHgQQh4AAAAAeBBCXgnw9fXVtGnT5Ovr6+5SIN6Psoj3pGzh/ShbeD/KHt6TsoX3o2zh/SibWHgFAAAAADwIPXkAAAAA4EEIeQAAAADgQQh5AAAAAOBBCHlFdOjQIY0cOVIRERHy9/dXw4YNNW3aNGVmZrq02759uzp16iQ/Pz/VqVNHL7zwQo7nWrRokZo1ayY/Pz+1atVKX375ZWndhsd
59tlnFR0drUqVKik4ODjXNhaLJcfXhx9+6NJm7dq1+sMf/iBfX181atRICxcuLPniPVBB3o8jR46oT58+qlSpkkJDQ/Xoo48qKyvLpQ3vR8mpX79+jn8Pf//7313aFOTnGIrP7NmzVb9+ffn5+emmm27S999/7+6SKoTp06fn+LfQrFkz5/kLFy4oPj5e1atXV0BAgAYNGqTExEQ3VuxZ1q1bp379+qlWrVqyWCxasmSJy3ljjKZOnaqaNWvK399fPXr00L59+1zanD59Wvfcc48CAwMVHByskSNHKiMjoxTvwrNc6z2Ji4vL8W+mV69eLm14T9yHkFdEe/bskcPh0BtvvKFdu3bp5Zdf1rx58/SXv/zF2SYtLU09e/ZUvXr1tHXrVr344ouaPn265s+f72yzceNG3XXXXRo5cqR++uknDRw4UAMHDtTOnTvdcVvlXmZmpu644w49+OCD+bZbsGCBjh8/7vwaOHCg81xCQoL69Omjbt26adu2bZowYYLuv/9+rVy5soSr9zzXej/sdrv69OmjzMxMbdy4Ue+8844WLlyoqVOnOtvwfpS8GTNmuPx7ePjhh53nCvJzDMXnv//9ryZOnKhp06bpxx9/VOvWrRUbG6ukpCR3l1YhtGjx/9q795im7vcP4O+CFKkMQUBAIwoqHaIiojZ1m98oKBBj8DJFQwT3hzdQs4m7uE2Z2xxOjCY61G2Z4LYEJpvObU4UETQqgiIoKjJAFC9Uo44hXhDK8/tj8fysV3RIpb5fCUnP5/Pp5zynDz3t03N66mfyXNi3b5/S98477+D3339Heno69uzZg4sXL2L8+PFmjNay3LhxA/7+/khKSnpo//Lly7F69WqsX78eeXl56NChA0JCQnD79m1lTGRkJE6cOIHMzEz88ccf2Lt3L2bMmNFam2BxnpQTAAgNDTV5zqSmppr0MydmJNRili9fLl5eXsry2rVrxcnJSerr65W2999/X7RarbI8adIkGT16tMk8Op1OZs6c+fwDtmDJycnSsWPHh/YBkC1btjzyvu+99574+fmZtEVEREhISEgLRvhyeVQ+/vzzT7GyshKDwaC0rVu3ThwcHJTnDfPxfHXv3l1WrVr1yP7m7Meo5QwZMkRiY2OVZaPRKF26dJGEhAQzRvVyiI+PF39//4f21dTUiI2NjaSnpyttJSUlAkByc3NbKcKXx/2v001NTeLu7i6JiYlKW01Njdja2kpqaqqIiJw8eVIAyKFDh5Qx27dvF5VKJRcuXGi12C3Vw947RUdHS3h4+CPvw5yYF4/ktaB//vkHnTp1UpZzc3MxbNgwqNVqpS0kJASlpaX4+++/lTHBwcEm84SEhCA3N7d1gn5JxcbGwsXFBUOGDMGGDRsg9/ySCHPSenJzc9GvXz+4ubkpbSEhIaitrcWJEyeUMczH87Vs2TI4OzsjICAAiYmJJqfLNmc/Ri3jzp07KCgoMPl/t7KyQnBwMP/fW0lZWRm6dOkCb29vREZGoqqqCgBQUFCAhoYGk9y8+uqr8PT0ZG5aQWVlJQwGg8nj37FjR+h0OuXxz83NhaOjIwYNGqSMCQ4OhpWVFfLy8lo95pdFTk4OOnfuDK1Wi9mzZ+Pq1atKH3NiXu3MHYClKC8vx5o1a7BixQqlzWAwwMvLy2Tc3TezBoMBTk5OMBgMJm9w744xGAzPP+iX1KeffooRI0ZAo9Fg586diImJQV1dHebNmwcAj8xJbW0tbt26BTs7O3OEbZEe9Vjf7XvcGOajZcybNw8DBw5Ep06dcODAASxcuBDV1dVYuXIlgObtx6hlXLlyBUaj8aH/76dOnTJTVC8PnU6HlJQUaLVaVFdXY8mSJXjjjTdw/PhxGAwGqNXqB75bzNfr1nH3MX7c+yWDwYDOnTub9Ldr1w6dOnVijp6T0NBQjB8/Hl5eXqioqMCHH36IsLAw5ObmwtramjkxMxZ59/nggw/w5ZdfPnZMSUmJyZexL1y4gNDQUEycOBH
Tp09/3iG+dJ4lJ4+zaNEi5XZAQABu3LiBxMREpcijx2vpfFDLe5oczZ8/X2nr378/1Go1Zs6ciYSEBNja2j7vUIleGGFhYcrt/v37Q6fToXv37ti0aRM/TCJ6iMmTJyu3+/Xrh/79+6Nnz57IyclBUFCQGSMjgEXeA+Li4jBt2rTHjvH29lZuX7x4EcOHD8fQoUMfuBCBu7v7A1feurvs7u7+2DF3++npc/K0dDodPvvsM9TX18PW1vaROXFwcOALPVo2H+7u7g9cObC5zxHm49H+S450Oh0aGxtx5swZaLXaZu3HqGW4uLjA2tqarwkvCEdHR/j4+KC8vBwjR47EnTt3UFNTY3I0j7lpHXcf40uXLsHDw0Npv3TpEgYMGKCMuf8CRY2Njbh27Rpz1Eq8vb3h4uKC8vJyBAUFMSdmxiLvPq6urnB1dW3W2AsXLmD48OEIDAxEcnIyrKxMv+Ko1+vx0UcfoaGhATY2NgCAzMxMaLVa5RQnvV6PrKwsvP3228r9MjMzodfrW2aDLMDT5ORZFBUVwcnJSTlqodfrH/gZC+bk/7VkPvR6PZYuXYrLly8rp3RkZmbCwcEBffr0UcYwH0/nv+SoqKgIVlZWSj6asx+jlqFWqxEYGIisrCzlir9NTU3IysrCnDlzzBvcS6iurg4VFRWYOnUqAgMDYWNjg6ysLEyYMAEAUFpaiqqqKu6LWoGXlxfc3d2RlZWlFHW1tbXIy8tTrt6s1+tRU1ODgoICBAYGAgB2796NpqYm6HQ6c4X+Ujl//jyuXr2qFOLMiZmZ+8ovbdX58+elV69eEhQUJOfPn5fq6mrl766amhpxc3OTqVOnyvHjxyUtLU00Go18/fXXypj9+/dLu3btZMWKFVJSUiLx8fFiY2MjxcXF5tisNu/s2bNSWFgoS5YsEXt7eyksLJTCwkK5fv26iIj89ttv8u2330pxcbGUlZXJ2rVrRaPRyOLFi5U5Tp8+LRqNRt59910pKSmRpKQksba2loyMDHNtVpv1pHw0NjZK3759ZdSoUVJUVCQZGRni6uoqCxcuVOZgPp6fAwcOyKpVq6SoqEgqKirkxx9/FFdXV4mKilLGNGc/Ri0nLS1NbG1tJSUlRU6ePCkzZswQR0dHkyvQ0vMRFxcnOTk5UllZKfv375fg4GBxcXGRy5cvi4jIrFmzxNPTU3bv3i2HDx8WvV4ver3ezFFbjuvXryuvEQBk5cqVUlhYKGfPnhURkWXLlomjo6Ns3bpVjh07JuHh4eLl5SW3bt1S5ggNDZWAgADJy8uTffv2Se/evWXKlCnm2qQ273E5uX79uixYsEByc3OlsrJSdu3aJQMHDpTevXvL7du3lTmYE/NhkfeMkpOTBcBD/+519OhRef3118XW1la6du0qy5Yte2CuTZs2iY+Pj6jVavHz85Nt27a11mZYnOjo6IfmJDs7W0T+vXTvgAEDxN7eXjp06CD+/v6yfv16MRqNJvNkZ2fLgAEDRK1Wi7e3tyQnJ7f+xliAJ+VDROTMmTMSFhYmdnZ24uLiInFxcdLQ0GAyD/PxfBQUFIhOp5OOHTtK+/btxdfXV7744guTF2iR5u3HqOWsWbNGPD09Ra1Wy5AhQ+TgwYPmDumlEBERIR4eHqJWq6Vr164SEREh5eXlSv+tW7ckJiZGnJycRKPRyLhx40w+2KX/Jjs7+6GvF9HR0SLy788oLFq0SNzc3MTW1laCgoKktLTUZI6rV6/KlClTxN7eXhwcHOStt95SPlSkp/e4nNy8eVNGjRolrq6uYmNjI927d5fp06c/8IEUc2I+KpF7rh1PREREREREbRp/J4+IiIiIiMiCsMgjIiIiIiKyICzyiIiIiIiILAiLPCIiIiIiIgvCIo+IiIiIiMiCsMgjIiIiIiKyICzyiIiIiIiILAiLPCIiIiIiIgvCIo+IiIiIiMiCsMgjIqI2zWAwYO7cufD29oatrS26deuGMWPGICsry9yhvVCmTZuGsWP
HPnHc3r17MWbMGHTp0gUqlQq//vrrc4+NiIhaFos8IiJqs86cOYPAwEDs3r0biYmJKC4uRkZGBoYPH47Y2Fhzh9cm3bhxA/7+/khKSjJ3KERE9IxY5BERUZsVExMDlUqF/Px8TJgwAT4+PvDz88P8+fNx8OBBZVxVVRXCw8Nhb28PBwcHTJo0CZcuXVL6P/nkEwwYMAAbNmyAp6cn7O3tERMTA6PRiOXLl8Pd3R2dO3fG0qVLTdavUqmwbt06hIWFwc7ODt7e3vj5559NxhQXF2PEiBGws7ODs7MzZsyYgbq6OqX/7hG2FStWwMPDA87OzoiNjUVDQ4Mypr6+HgsWLEDXrl3RoUMH6HQ65OTkKP0pKSlwdHTEjh074OvrC3t7e4SGhqK6ulrZvo0bN2Lr1q1QqVRQqVQm979XWFgYPv/8c4wbN+6p80FERC8GFnlERNQmXbt2DRkZGYiNjUWHDh0e6Hd0dAQANDU1ITw8HNeuXcOePXuQmZmJ06dPIyIiwmR8RUUFtm/fjoyMDKSmpuK7777D6NGjcf78eezZswdffvklPv74Y+Tl5Zncb9GiRZgwYQKOHj2KyMhITJ48GSUlJQD+PSoWEhICJycnHDp0COnp6di1axfmzJljMkd2djYqKiqQnZ2NjRs3IiUlBSkpKUr/nDlzkJubi7S0NBw7dgwTJ05EaGgoysrKlDE3b97EihUr8MMPP2Dv3r2oqqrCggULAAALFizApEmTlMKvuroaQ4cOfebHnoiIXnBCRETUBuXl5QkA2bx582PH7dy5U6ytraWqqkppO3HihACQ/Px8ERGJj48XjUYjtbW1ypiQkBDp0aOHGI1GpU2r1UpCQoKyDEBmzZplsj6dTiezZ88WEZFvvvlGnJycpK6uTunftm2bWFlZicFgEBGR6Oho6d69uzQ2NipjJk6cKBERESIicvbsWbG2tpYLFy6YrCcoKEgWLlwoIiLJyckCQMrLy5X+pKQkcXNzU5ajo6MlPDz8sY/V/QDIli1bnuo+RERkfu3MWmESERE9IxFp1riSkhJ069YN3bp1U9r69OkDR0dHlJSUYPDgwQCAHj164JVXXlHGuLm5wdraGlZWViZtly9fNplfr9c/sFxUVKSs29/f3+RI42uvvYampiaUlpbCzc0NAODn5wdra2tljIeHB4qLiwH8e7qn0WiEj4+PyXrq6+vh7OysLGs0GvTs2dNkjvtjJSKilwOLPCIiapN69+4NlUqFU6dOtch8NjY2JssqleqhbU1NTS2yviet++566urqYG1tjYKCApNCEADs7e0fO0dzC2EiIrIs/E4eERG1SZ06dUJISAiSkpJw48aNB/pramoAAL6+vjh37hzOnTun9J08eRI1NTXo06fPf47j3gu83F329fVV1n306FGT+Pbv3w8rKytotdpmzR8QEACj0YjLly+jV69eJn/u7u7NjlOtVsNoNDZ7PBERtV0s8oiIqM1KSkqC0WjEkCFD8Msvv6CsrAwlJSVYvXq1chplcHAw+vXrh8jISBw5cgT5+fmIiorC//73PwwaNOg/x5Ceno4NGzbgr7/+Qnx8PPLz85ULq0RGRqJ9+/aIjo7G8ePHkZ2djblz52Lq1KnKqZpP4uPjg8jISERFRWHz5s2orKxEfn4+EhISsG3btmbH2aNHDxw7dgylpaW4cuWKydU771VXV4eioiLllNPKykoUFRWhqqqq2esiIiLzYpFHRERtlre3N44cOYLhw4cjLi4Offv2xciRI5GVlYV169YB+Pe0xa1bt8LJyQnDhg1DcHAwvL298dNPP7VIDEuWLEFaWhr69++P77//HqmpqcoRQo1Ggx07duDatWsYPHgw3nzzTQQFBeGrr756qnUkJycjKioKcXFx0Gq1GDt2LA4dOgRPT89mzzF9+nRotVoMGjQIrq6u2L9//0PHHT58GAEBAQgICAAAzJ8/HwEBAVi8ePFTxUxEROajEp6wT0RE9ExUKhW2bNm
CsWPHmjsUIiIiBY/kERERERERWRAWeURERERERBaEP6FARET0jPiNByIiehHxSB4REREREZEFYZFHRERERERkQVjkERERERERWRAWeURERERERBaERR4REREREZEFYZFHRERERERkQVjkERERERERWRAWeURERERERBaERR4REREREZEF+T8KZIEvFWiBNQAAAABJRU5ErkJggg==", + "text/plain": [ + "
" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "label_embeddings = embedder.embed_distance_matrix(D, emb_dim=2)\n", + "\n", + "fig, ax = plt.subplots(figsize=(9, 7))\n", + "colors = plt.cm.tab10(np.linspace(0, 1, n))\n", + "\n", + "for i, (name, color) in enumerate(zip(dataset_names, colors)):\n", + " start = class_offsets[i]\n", + " end = class_offsets[i + 1]\n", + " embs = label_embeddings[start:end].numpy()\n", + " ax.scatter(embs[:, 0], embs[:, 1], color=color, s=80, label=name, zorder=5)\n", + " for c in range(end - start):\n", + " ax.annotate(f'{name[0]}{c}', (embs[c, 0], embs[c, 1]),\n", + " fontsize=6, ha='center', va='bottom')\n", + "\n", + "ax.set_title('Class Label Embeddings\\n(MDS of Bures-Wasserstein class distances)')\n", + "ax.set_xlabel('Component 1')\n", + "ax.set_ylabel('Component 2')\n", + "ax.legend(loc='best')\n", + "plt.tight_layout()\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a1b2c3d4-0001-0001-0001-000000000006", + "metadata": { + "execution": { + "iopub.execute_input": "2026-04-05T09:58:57.787515Z", + "iopub.status.busy": "2026-04-05T09:58:57.787321Z", + "iopub.status.idle": "2026-04-05T09:59:30.059469Z", + "shell.execute_reply": "2026-04-05T09:59:30.058152Z" + }, + "id": "a1b2c3d4-0001-0001-0001-000000000006", + "outputId": "8d504d87-0364-45cb-a514-d349d01b21ef" + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "\r", + "Preprocessing dataset: 0%| | 0/2 [00:00=64", "wheel"] +build-backend = "setuptools.build_meta" + +[project] +name = "data_meta_map" +version = "0.1.1" +description = "Library for representing datasets in a unified vector space for similarity comparison." 
+readme = "README.md"
+license = { file = "LICENSE" }
+requires-python = ">=3.10"
+authors = [
+    { name = "Vladislav Minashkin" },
+    { name = "Ivan Papay" },
+    { name = "Vlad Meshkov" },
+    { name = "Ilia Stepanov" },
+]
+classifiers = [
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "License :: OSI Approved :: MIT License",
+    "Operating System :: OS Independent",
+    "Topic :: Scientific/Engineering :: Artificial Intelligence",
+]
+dependencies = [
+    "torch>=2.0",
+    "torchvision>=0.15",
+    "numpy>=1.23",
+    "pandas>=1.5",
+    "scikit-learn>=1.2",
+    "scipy>=1.10",
+    "pot>=0.9",
+    "tqdm>=4.64",
+    "pytorch-lightning>=2.0",
+    "pydantic>=2.0",
+]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=7.0",
+    "pytest-cov>=4.0",
+]
+viz = [
+    "matplotlib>=3.7",
+    "seaborn>=0.13",
+]
+
+[project.urls]
+Homepage = "https://github.com/intsystems/DataMetaMap"
+Repository = "https://github.com/intsystems/DataMetaMap"
+
+[tool.setuptools.packages.find]
+where = ["src"]
+
+[tool.setuptools.package-dir]
+"" = "src"
diff --git a/report/data_meta_map.pdf b/report/data_meta_map.pdf
new file mode 100644
index 0000000..aa9a6b6
Binary files /dev/null and b/report/data_meta_map.pdf differ
diff --git a/report/main.tex b/report/main.tex
new file mode 100644
index 0000000..dc0ce70
--- /dev/null
+++ b/report/main.tex
@@ -0,0 +1,481 @@
+\documentclass[peerreview]{IEEEtran}
+\usepackage{cite}
+\usepackage{url}
+\usepackage[utf8]{inputenc}
+\usepackage{booktabs}
+\usepackage{graphicx}
+\usepackage{amsmath}
+\usepackage{amsfonts}
+\usepackage{amssymb}
+
+\hyphenation{op-tical net-works semi-conduc-tor meta-fea-ture meta-fea-tures}
+
+\begin{document}
+\title{DataMetaMap: A Library for Dataset Vector Representation}
+
+\author{Vladislav Minashkin, Ivan Papay, Vladislav Meshkov, Ilya Stepanov \\
+\textbf{Intelligent Systems} \\
+\textbf{MIPT 2025}}
+
+\date{2025}
+
+\maketitle
+\tableofcontents
+\listoffigures
+
+\IEEEpeerreviewmaketitle
+
+\begin{abstract}
+This report presents \textit{DataMetaMap}, a Python library designed to
+represent multiple datasets in a unified vector space, enabling principled
+comparison of dataset similarity. The library offers a comprehensive suite
+of dataset embedding techniques compatible with PyTorch, addressing the
+challenge of transferring knowledge across datasets with varying schemas in
+meta-learning pipelines. We study four complementary approaches:
+Maximum Mean Discrepancy (MMD), Task2Vec, Dataset2Vec, and
+Wasserstein Task Embedding. The repository includes executable demos
+and benchmark pipelines for dataset similarity analysis and
+transfer-oriented retrieval.
+\end{abstract}
+
+\section{Introduction}
+A fundamental challenge in modern machine learning is the efficient
+transfer of knowledge across tasks and datasets. Given a new target
+dataset, a practitioner must decide which pretrained model or which
+prior learning experience is most relevant --- a decision that
+currently relies heavily on human intuition or exhaustive search.
+
+A principled solution requires the ability to \emph{compare} datasets
+quantitatively: if datasets can be embedded into a common vector space
+where geometric proximity reflects task similarity, then finding the
+most relevant prior experience reduces to a nearest-neighbor query.
+Such embeddings, often called \emph{meta-features} or \emph{task
+embeddings}, form the foundation of data-driven meta-learning.
+
+Existing approaches to dataset representation span a wide spectrum.
+Classical engineered meta-features (number of instances, class
+imbalance, statistical moments of features) are cheap to compute but
+have limited expressivity and require schema-specific design.
+Kernel-based methods such as Maximum Mean Discrepancy (MMD) provide
+distribution-level comparisons without parametric assumptions but
+scale poorly with dimensionality.
More recent neural and +geometry-aware approaches --- Task2Vec, Dataset2Vec, and Wasserstein +Task Embedding --- learn rich, task-agnostic representations but +differ substantially in their inductive biases, computational cost, +and applicability +to datasets with varying schemas. + +We present \texttt{DataMetaMap}, a unified Python library that +combines these dataset embedding methods within a practical, +PyTorch-compatible framework. Our contributions are: + +\begin{itemize} + \item A unified overview of four families of dataset comparison + methods: MMD, Task2Vec, Dataset2Vec, and Wasserstein Task + Embedding. + \item A clean, extensible repository packaging Dataset2Vec and + Wasserstein embedders together with a Task2Vec subpackage, + shared utilities, and benchmark code. + \item Experimental benchmarks for transfer-oriented dataset + retrieval. + \item Method-specific demo notebooks illustrating training, + visualization, and similarity analysis in the learned embedding + spaces. +\end{itemize} + +\section{Notation and Conventions} +We adopt the following notation throughout this report. + +\paragraph{Datasets and meta-datasets.} +A dataset is denoted $D = (X, Y)$, where +$X \in \mathbb{R}^{N \times M}$ is the predictor matrix and +$Y \in \mathbb{R}^{N \times T}$ is the target matrix; $N$, $M$, and +$T$ are the number of instances, predictors, and targets, +respectively. A \emph{meta-dataset} is a collection +$\mathcal{D}_{\mathrm{meta}} = \{D_1, \dots, D_K\}$ of (possibly +heterogeneous) datasets used for meta-training or evaluation. + +\paragraph{Meta-features.} +A \emph{meta-feature extractor} is a map +$\varphi : \mathcal{D} \to \mathbb{R}^{d}$ that assigns to every +dataset a fixed-dimensional embedding $\varphi(D) \in \mathbb{R}^{d}$. +When the extractor is itself a learned (estimated) object, we write +$\hat{\varphi}$. 
For methods based on stochastic batch sampling, +$D_s$ and $D_s'$ denote multi-fidelity subsets (batches) drawn from +$D$ by subsampling instances, predictors, and targets. + +\paragraph{Probe networks.} +For Task2Vec, we additionally introduce a \emph{probe network} with +parameters $\theta \in \mathbb{R}^{P}$ that is briefly fine-tuned on +the target dataset. The Fisher Information Matrix (FIM) of the probe +parameters is denoted $F(\theta) \in \mathbb{R}^{P \times P}$. + +\paragraph{Distributional and kernel quantities.} +We write $k(\cdot, \cdot)$ for a positive-definite kernel (e.g.~RBF), +$\mu_P, \mu_Q$ for kernel mean embeddings of distributions $P$ and $Q$ +in a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$, and +$W_p(P, Q)$ for the $p$-Wasserstein distance. In the Wasserstein Task +Embedding section, $\psi$ denotes the MDS map used to embed +class-label distributions into a Euclidean space. + +\paragraph{Hyperparameters.} +Throughout, $\gamma > 0$ denotes a bandwidth or scaling +hyperparameter for similarity models, and $\tau > 0$ a temperature +parameter where applicable. The binary indicator $i \in \{0, 1\}$ +marks whether two batches originate from the same dataset ($i = 1$) +or not ($i = 0$). + +\section{Proposed Solutions} + +\subsection{Maximum Mean Discrepancy (MMD)} + +\subsubsection*{Overview} +Gretton et al.~\cite{gretton2012kernel} introduced the Maximum Mean +Discrepancy as a non-parametric, kernel-based measure of the distance +between two probability distributions. Originally proposed as a +principled two-sample test, MMD operates by embedding distributions +into a reproducing kernel Hilbert space (RKHS) and measuring the +distance between their mean embeddings. The result is a scalar +dissimilarity score that can serve directly as a dataset distance +measure, without requiring an explicit parametric model. 
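As a concrete illustration of this kernel view, the unbiased estimator formalized in the Methodology below fits in a few lines of PyTorch. This is a minimal sketch with an RBF kernel and the median bandwidth heuristic; the function names are illustrative, not part of the library's packaged API.

```python
import torch


def rbf_kernel(a: torch.Tensor, b: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # Pairwise RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma**2))


def mmd2_unbiased(x: torch.Tensor, y: torch.Tensor, sigma=None) -> torch.Tensor:
    """Unbiased estimate of MMD^2 between samples x ~ P and y ~ Q."""
    m, n = x.shape[0], y.shape[0]
    if sigma is None:
        # Median heuristic: bandwidth = median pairwise distance of pooled samples.
        z = torch.cat([x, y], dim=0)
        sigma = torch.cdist(z, z).median().clamp_min(1e-12)
    kxx = rbf_kernel(x, x, sigma)
    kyy = rbf_kernel(y, y, sigma)
    kxy = rbf_kernel(x, y, sigma)
    # Drop diagonal terms (i != i', j != j') to obtain the unbiased estimator.
    term_x = (kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))
    term_y = (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()
```

Because no training is involved, this score can be computed directly between any two vectorized datasets; two samples drawn from the same distribution yield a value near zero, while a distribution shift drives it up.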
+ +\subsubsection*{Methodology} +Given two distributions $P$ and $Q$ with samples +$\{x_i\}_{i=1}^{m} \sim P$ and $\{y_j\}_{j=1}^{n} \sim Q$, the +squared MMD in an RKHS $\mathcal{H}$ induced by kernel $k$ is +\begin{equation} + \mathrm{MMD}^2(P, Q) + = \bigl\|\mu_P - \mu_Q\bigr\|_{\mathcal{H}}^{2}, +\end{equation} +where $\mu_P = \mathbb{E}_{x \sim P}[k(x, \cdot)]$ is the kernel mean +embedding of $P$. An unbiased empirical estimator is +\begin{equation} +\begin{aligned} + \widehat{\mathrm{MMD}}^2(P, Q) + &= \frac{1}{m(m-1)} \sum_{i \neq i'} k(x_i, x_{i'}) \\ + &\quad + \frac{1}{n(n-1)} \sum_{j \neq j'} k(y_j, y_{j'}) \\ + &\quad - \frac{2}{mn} \sum_{i, j} k(x_i, y_j). +\end{aligned} +\end{equation} +For dataset comparison, $P$ and $Q$ are taken as the empirical +distributions of two datasets $D$ and $D'$, and +$\widehat{\mathrm{MMD}}^2(D, D')$ is used as their dissimilarity +score. A common choice is the RBF kernel +$k(x, y) = \exp\bigl(-\|x - y\|^{2} / (2\sigma^{2})\bigr)$ with +bandwidth $\sigma$ selected via the median heuristic. + +\subsection{Task2Vec} + +\subsubsection*{Overview} +Achille et al.~\cite{achille2019task2vec} propose Task2Vec, a method +for embedding machine learning tasks into a fixed-dimensional vector +space such that geometric distances between embeddings reflect +meaningful task similarity. The core idea is to use the diagonal of +the Fisher Information Matrix (FIM) of a pretrained \emph{probe +network} --- a fixed feature extractor briefly fine-tuned on the +target task --- as the task embedding. This captures the sensitivity +of the probe's parameters to the task's data distribution, producing +embeddings that are stable, comparable across tasks, and correlated +with transfer learning performance. + +\subsubsection*{Methodology} +Given a dataset $D = (X, Y)$ and a probe network with parameters +$\theta$, the probe is fine-tuned on $D$ by minimizing the task loss +$\mathcal{L}(\theta; D)$. 
The Task2Vec embedding is the diagonal of +the empirical Fisher Information Matrix: +\begin{equation} + \varphi(D) + := \mathrm{diag}\bigl(\hat{F}(\theta)\bigr) + \in \mathbb{R}^{P}, +\end{equation} +where the empirical FIM diagonal is estimated as +\begin{equation} + \hat{F}_{ii}(\theta) + = \frac{1}{N} \sum_{n=1}^{N} + \left( + \frac{\partial \log p(y_n \mid x_n, \theta)} + {\partial \theta_i} + \right)^{2}. +\end{equation} +The resulting embedding $\varphi(D)$ encodes which parameters of the +probe are most activated by dataset $D$. Task similarity is then +measured via the cosine distance between embeddings: +\begin{equation} + d(D, D') + = 1 - \frac{\varphi(D)^{\top} \varphi(D')} + {\|\varphi(D)\| \cdot \|\varphi(D')\|}. +\end{equation} + +%------------------------------------------------------------ +\subsection{Dataset2Vec} + +\subsubsection*{Overview} +Jomaa et al.~\cite{jomaa2021dataset2vec} propose Dataset2Vec, a learned meta-feature extractor designed to replace hand-engineered dataset statistics in meta-learning pipelines. The key motivation is that classical engineered meta-features require substantial domain expertise and do not generalize well to datasets with different schemas, while autoencoder-based approaches are typically limited to fixed-schema data. Dataset2Vec addresses both issues by viewing a tabular dataset as a hierarchical set: a set of predictor and target variables, where each variable is itself a set of instance values. This representation is then processed by a permutation-invariant DeepSet architecture~\cite{zaheer2017deepsets}. To mitigate the lack of meta-labeled data, the authors introduce an auxiliary self-supervised task called \emph{dataset similarity learning}, which asks the model to determine whether two subsampled batches come from the same dataset. 
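The hierarchical-set view just described can be sketched as a tiny DeepSet-style module in PyTorch. This is a simplified stand-in whose layer sizes and structure are illustrative; it is not the library's actual Dataset2Vec architecture, only a demonstration of the permutation-invariant $h \circ g \circ f$ pooling hierarchy.

```python
import torch
from torch import nn


class TinyDataset2Vec(nn.Module):
    """Minimal sketch of the f -> g -> h DeepSet hierarchy (illustrative only)."""

    def __init__(self, hidden: int = 16, out: int = 8):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.g = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.h = nn.Linear(hidden, out)

    def forward(self, X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
        # X: [N, M] predictor values, Y: [N, T] target values for one dataset/batch.
        N, M = X.shape
        T = Y.shape[1]
        # All (predictor, target) value pairs -> [N, M, T, 2].
        pairs = torch.stack(
            [X[:, :, None].expand(N, M, T), Y[:, None, :].expand(N, M, T)], dim=-1
        )
        e = self.f(pairs).mean(dim=0)   # average over instances n  -> [M, T, hidden]
        e = self.g(e).mean(dim=(0, 1))  # average over (m, t) pairs -> [hidden]
        return self.h(e)                # final dataset embedding  -> [out]
```

Because every aggregation is a mean over a set axis, the output is invariant to reordering the rows of the dataset, which is exactly the property the DeepSet construction provides.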
+ +\subsubsection*{Methodology} +Dataset2Vec represents a tabular dataset as a hierarchical set of predictor and target variables, where each variable is itself a set of instance values. Let +$D = (X^{(D)}, Y^{(D)}) \in \mathcal{D}$ denote a dataset, where +$X^{(D)} \in \mathbb{R}^{N(D)\times M(D)}$ and +$Y^{(D)} \in \mathbb{R}^{N(D)\times T(D)}$. +The meta-feature extractor is modeled as a DeepSet-based architecture +\begin{equation} + \hat{\varphi} := h \circ g \circ f, +\end{equation} +where $f$ embeds individual predictor/target pairs, $g$ aggregates embeddings through pooling, and $h$ produces the final dataset representation. The resulting embedding can be written as +\begin{equation} +\hat{\varphi}(D) +:= +h\left( +\frac{1}{M(D)T(D)} +\sum_{m=1}^{M(D)} \sum_{t=1}^{T(D)} +g\left( +\frac{1}{N(D)} +\sum_{n=1}^{N(D)} +f\bigl(X^{(D)}_{n,m}, Y^{(D)}_{n,t}\bigr) +\right) +\right). +\end{equation} + +To improve robustness and support datasets of varying sizes, embeddings are estimated from random multi-fidelity batches obtained by subsampling instances, predictors, and targets. The final dataset-level representation is computed as the average of multiple batch-level embeddings: +\begin{equation} + \hat{\varphi}(D) + := \frac{1}{B} \sum_{b=1}^{B} + \hat{\varphi}\bigl(\mathrm{sample\text{-}batch}(D)\bigr). +\end{equation} + +Training is performed using an auxiliary self-supervised task called dataset similarity learning. Given two batches $D_s$ and $D_s'$, the model predicts whether they originate from the same dataset using +\begin{equation} + \hat{i}(D_s, D_s') + := \exp\bigl(-\gamma\,\|\hat{\varphi}(D_s) - \hat{\varphi}(D_s')\|\bigr), +\end{equation} +and optimizes a symmetric binary cross-entropy objective over similar and dissimilar batch pairs. +\subsection{Wasserstein Task Embedding} + +\subsubsection*{Overview} +Wasserstein Task Embedding (WTE) \emph{Liu et al. 
(2022)}~\cite{2022arXiv220811726L} is a
+model-agnostic task representation method for supervised
+classification that combines multidimensional scaling (MDS) with
+Wasserstein embedding. The main idea is to represent every task as a
+probability measure in an augmented space where the label information
+is first embedded into a Euclidean vector space and then concatenated
+with the input samples. This makes it possible to approximate a
+hierarchical optimal transport dataset distance with a standard
+Euclidean distance between task vectors.
+
+Compared with pairwise optimal transport methods such as OTDD, WTE is
+designed to be substantially faster when many tasks need to be
+compared repeatedly. The paper shows that the resulting task
+distances are strongly correlated with both forward transfer and
+catastrophic forgetting, while also remaining highly correlated with
+OTDD on benchmark task groups.
+
+\subsubsection*{Methodology}
+Let a task be given by a supervised dataset
+$\tau = \{(x_n, y_n)\}_{n=1}^{N}$ with inputs $x_n \in \mathbb{R}^{d}$ and
+labels $y_n \in \mathcal{Y}$. WTE first defines a label-to-label
+distance matrix using the 2-Wasserstein distance between label
+distributions. Following the simplification used in the paper, each
+class-conditional label distribution $\nu_y$ is approximated by a
+Gaussian, so that the distance between labels can be written in
+closed form as the Bures--Wasserstein distance:
+\begin{equation}
+W_2^2(\nu_y, \nu_{y'})
+=
+\|u_y - u_{y'}\|_2^2
++
+\operatorname{Tr}\!\left(
+\Sigma_y + \Sigma_{y'}
+- 2(\Sigma_y^{1/2}\Sigma_{y'}\Sigma_y^{1/2})^{1/2}
+\right),
+\end{equation}
+where $u_y$ and $\Sigma_y$ denote the mean and covariance of the
+Gaussian associated with label $y$.
+
+The resulting pairwise label distances are then embedded into a
+low-dimensional Euclidean space using multidimensional scaling.
+Denoting the MDS map by $\psi$, each label $y$ is mapped to a vector +$\psi(\nu_y) \in \mathbb{R}^{l}$, where $l$ is chosen as a compromise +between accuracy and computational cost. This gives the +approximation +\begin{equation} +W_2^2(\nu_y, \nu_{y'}) +\approx +\|\psi(\nu_y) - \psi(\nu_{y'})\|_2^2. +\end{equation} + +After label embedding, each original sample-label pair is converted +to an augmented vector +\begin{equation} +[x, \psi(\nu_y)] \in \mathbb{R}^{d+l}. +\end{equation} +The original task is therefore transformed into an updated point +cloud in the augmented space, and the task distance becomes +\begin{equation} +d_{\tau}\bigl((x,y),(x',y')\bigr) +\approx +\left\| +[x, \psi(\nu_y)] - [x', \psi(\nu_{y'})] +\right\|_2. +\end{equation} +In this representation, the label discrepancy is absorbed into the +feature space, allowing the task comparison problem to be reduced to a +standard Wasserstein embedding problem. + +To obtain the final task vector, WTE applies the Wasserstein +embedding framework with respect to a fixed reference measure +$\mu_0$. For each task distribution $\mu_i$ over the augmented +samples, an optimal transport map $T_i$ from $\mu_0$ to $\mu_i$ is +approximated, and the embedding is defined as +\begin{equation} +\Phi(\mu_i) = (T_i - \mathrm{id})\sqrt{p_0}, +\end{equation} +or, in the discrete setting used in the implementation, +\begin{equation} +\Phi(X_i) = \frac{T_i - X_0}{\sqrt{N_0}}, +\end{equation} +where $X_0$ denotes the reference sample set and $N_0$ is its size. +The Euclidean distance between the embedded task vectors then +approximates the 2-Wasserstein distance between tasks: +\begin{equation} +\|\Phi(\mu_i) - \Phi(\mu_j)\|_2 \approx W_2(\mu_i, \mu_j). +\end{equation} + +Algorithmically, WTE proceeds in three steps: (1) compute the label +distance matrix, (2) embed labels with MDS, and (3) apply +Wasserstein embedding to the augmented task distributions. 
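Under strong simplifications (Gaussian class-conditionals, equal task sizes, and exact assignment-based transport against the reference sample), the three steps can be sketched as follows. All function and variable names here are illustrative, not the library's API.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linear_sum_assignment
from sklearn.manifold import MDS


def bures_wasserstein(mu1, cov1, mu2, cov2):
    # Closed-form squared 2-Wasserstein distance between two Gaussians.
    s1 = sqrtm(cov1).real
    cross = sqrtm(s1 @ cov2 @ s1).real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(cov1 + cov2 - 2.0 * cross))


def wte_embed(tasks, x_ref, label_dim=2, seed=0):
    """tasks: list of (X, y) arrays, each with as many samples as the reference x_ref."""
    # Step 1: label distance matrix over all (task, class) pairs.
    stats, owners = [], []
    for t, (X, y) in enumerate(tasks):
        for c in np.unique(y):
            Xc = X[y == c]
            stats.append((Xc.mean(axis=0), np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])))
            owners.append((t, int(c)))
    K = len(stats)
    D = np.zeros((K, K))
    for i in range(K):
        for j in range(i + 1, K):
            D[i, j] = D[j, i] = bures_wasserstein(*stats[i], *stats[j])
    # Step 2: metric MDS on the W2 distances, so that ||psi - psi'||^2 ~ W2^2.
    Dm = np.sqrt(np.maximum(D, 0.0))
    psi = MDS(n_components=label_dim, dissimilarity="precomputed",
              random_state=seed).fit_transform(Dm)
    label_vec = {owner: psi[k] for k, owner in enumerate(owners)}
    # Step 3: augment samples with label embeddings and embed each task
    # against the reference (zero-padded in the label coordinates).
    ref = np.hstack([x_ref, np.zeros((len(x_ref), label_dim))])
    embeddings = []
    for t, (X, y) in enumerate(tasks):
        Z = np.hstack([X, np.stack([label_vec[(t, int(c))] for c in y])])
        cost = ((ref[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        _, perm = linear_sum_assignment(cost)  # Monge map for uniform discrete measures
        embeddings.append((Z[perm] - ref) / np.sqrt(len(ref)))
    return embeddings
```

After this, comparing two tasks is a plain Euclidean distance between their embedding matrices, which is the source of the speedup over repeated pairwise optimal transport.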
The paper +reports that this pipeline reduces the cost of task comparison from +pairwise optimal transport over all task pairs to a single transport +computation per task against the reference measure, which is the main +source of the computational gain. + +%------------------------------------------------------------ + +\section{Research Methodology} +Our research methodology involved comprehensive implementation and +experimental validation of each dataset embedding technique. We +followed a systematic approach: + +\begin{itemize} + \item \textbf{Library Design}: We built a unified PyTorch-compatible + interface where each embedding method is implemented as a + self-contained module with consistent \texttt{fit()} and + \texttt{embed()} methods. + \item \textbf{Algorithm Implementation}: Each method was implemented + as a separate class following a common base interface, enabling + straightforward benchmarking and extension. + \item \textbf{Experimental Validation}: We conducted controlled + experiments on standard meta-dataset benchmarks to evaluate + embedding quality, dataset similarity retrieval accuracy, and + downstream meta-learning performance. +\end{itemize} + +\subsection{Repository Structure} +The repository is organized into three main layers. The +\texttt{src/} directory contains reusable library components, the +\texttt{benchmarks/} directory contains executable evaluation +pipelines, and the \texttt{demo/} directory provides lightweight +notebooks for qualitative inspection of the methods. In addition, the +project includes unit tests for the main embedder modules and +continuous-integration workflows for automated testing and +documentation deployment. + +\subsection{Benchmarking Pipelines in the Repository} +The public code base already contains concrete benchmark logic for +two practical use cases of dataset embeddings. 
+ +\paragraph{Task2Vec transfer-selection benchmark.} +The notebook \texttt{apply\_benchmark.ipynb} +uses Task2Vec embeddings to retrieve the nearest source dataset for a +given target image task. The benchmark operates on datasets such as +\texttt{mnist}, \texttt{cifar10}, \texttt{cifar100}, +\texttt{letters}, and \texttt{kmnist}, computes embeddings with a +pretrained \texttt{ResNet-18} probe network, and then compares the +retrieval-based recommendation against logged pretrain-to-task +transfer results. The same notebook also defines random and large +pretraining baselines. + +\paragraph{Wasserstein transfer-selection benchmark.} +The notebook +\texttt{benchmark\_wasserstein.ipynb} +implements a similar transfer-selection experiment using +Wasserstein-based dataset distances. It first computes class-wise +Bures--Wasserstein distances, derives dataset-level distances from the +resulting embeddings, and then recommends the closest source dataset +for each target task. When transfer logs are available, the notebook +compares these recommendations with observed downstream accuracies. + +\paragraph{Status of MMD in the repository.} +MMD is retained in this report as an important classical baseline and +as part of the conceptual comparison framework. At the same time, the +current public repository places its strongest executable emphasis on +Dataset2Vec, Task2Vec, and Wasserstein components, including demos and +benchmark notebooks. Packaging MMD as a first-class module would be a +natural next step for future development. + +\subsection{Demonstration Notebooks} +The repository also includes method-specific notebooks intended for +qualitative validation and onboarding. + +\paragraph{Task2Vec demo.} +The notebook \texttt{demo/task2vec/simple\_example.ipynb} embeds a +small collection of image datasets with a pretrained +\texttt{ResNet-18} probe and visualizes the resulting task distances +with a clustered similarity matrix. 
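The clustered similarity matrix in this demo is built from pairwise cosine distances between task embeddings. A minimal sketch of that computation, using random non-negative stand-in vectors in place of real FIM-diagonal embeddings from the probe network:

```python
import torch


def cosine_distance_matrix(embeddings: torch.Tensor) -> torch.Tensor:
    """Pairwise Task2Vec-style distances d(D, D') = 1 - cos(phi(D), phi(D'))."""
    phi = torch.nn.functional.normalize(embeddings, dim=1)  # row-normalize
    return 1.0 - phi @ phi.T


# Stand-in embeddings for four hypothetical datasets; FIM diagonals are
# non-negative, so random uniform vectors mimic that property.
torch.manual_seed(0)
phi = torch.rand(4, 128)
dist = cosine_distance_matrix(phi)
```

The resulting matrix is symmetric with a zero diagonal and can be fed directly to a hierarchical-clustering heatmap for visualization.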
+ +\paragraph{Dataset2Vec demo.} +The notebook \texttt{demo/dataset2vec/simple\_example.ipynb} +constructs a balanced synthetic tabular meta-dataset, trains a +Dataset2Vec encoder, and studies the resulting embeddings using +PCA/t-SNE projections together with nearest-centroid classification in +embedding space. + +\paragraph{Wasserstein demo.} +The notebook \texttt{demo/wasserstein/simple\_example1 (1).ipynb} +demonstrates the full Wasserstein pipeline: computation of pairwise +class distances, MDS-based label embeddings, augmentation of the +feature space, and construction of reference-based task embeddings. + +\section{Conclusions and Recommendations} +\texttt{DataMetaMap} provides a growing toolkit for dataset +representation and comparison. In its current public form, the +repository combines packaged Dataset2Vec and Wasserstein embedders, a +Task2Vec subpackage, theoretical coverage of MMD as a classical +baseline, and benchmark notebooks for transfer and meta-learning +scenarios. + +The existing benchmark suite is designed to test the hypothesis that +learned embeddings capture richer dataset structure than purely +distributional baselines, especially when schemas vary across tasks. +We encourage further work on standardizing the benchmarking interface +across all methods, extending the released result tables with fully +reproducible numeric comparisons, and turning MMD into a packaged +executable component of the library. + +\appendices +\section{Demo Experiments} +Our demo code is available at the GitHub repository. The experiments +are divided into the following groups: + +\begin{itemize} + \item \textbf{Task2Vec Similarity Demo}: A notebook that embeds + \texttt{mnist}, \texttt{cifar10}, \texttt{cifar100}, and + \texttt{letters} with a pretrained \texttt{ResNet-18} probe and + visualizes the resulting task-distance matrix. 
+ \item \textbf{Dataset2Vec Training and Visualization Demo}: A + synthetic tabular meta-dataset is generated, a Dataset2Vec model + is trained on balanced train/validation/test splits, and the + learned representations are explored with PCA/t-SNE projections + and nearest-centroid classification. + \item \textbf{Wasserstein Embedding Demo}: A compact notebook + computes class-wise Bures--Wasserstein distances, label embeddings, + and final task embeddings for small image-dataset collections. + \item \textbf{Transfer-Based Benchmark Notebooks}: Separate + notebooks evaluate Task2Vec and Wasserstein retrieval as heuristics + for choosing a source pretraining dataset before downstream + fine-tuning. +\end{itemize} + +\bibliographystyle{IEEEtran} +\bibliography{references} + +\end{document} diff --git a/report/references.bib b/report/references.bib new file mode 100644 index 0000000..6b2db08 --- /dev/null +++ b/report/references.bib @@ -0,0 +1,53 @@ +@article{gretton2012kernel, + title = {A Kernel Two-Sample Test}, + author = {Gretton, Arthur and Borgwardt, Karsten M. and Rasch, + Malte J. and Sch{\"o}lkopf, Bernhard and Smola, Alexander}, + journal = {Journal of Machine Learning Research}, + volume = {13}, + pages = {723--773}, + year = {2012} +} + +@inproceedings{achille2019task2vec, + title = {Task2Vec: Task Embedding for Meta-Learning}, + author = {Achille, Alessandro and Lam, Michael and Tewari, Rahul + and Ravichandran, Avinash and Maji, Subhransu and + Fowlkes, Charless and Soatto, Stefano and Perona, Pietro}, + booktitle = {IEEE/CVF International Conference on Computer Vision (ICCV)}, + year = {2019} +} + +@article{jomaa2021dataset2vec, + title = {Dataset2Vec: Learning Dataset Meta-Features}, + author = {Jomaa, Hadi S. 
and Schmidt-Thieme, Lars and Grabocka, Josif},
+  journal = {Data Mining and Knowledge Discovery},
+  year    = {2021},
+  note    = {arXiv:1905.11063}
+}
+
+@inproceedings{zaheer2017deepsets,
+  title     = {Deep Sets},
+  author    = {Zaheer, Manzil and Kottur, Satwik and Ravanbakhsh,
+               Siamak and Poczos, Barnabas and Salakhutdinov, Ruslan
+               and Smola, Alexander},
+  booktitle = {Advances in Neural Information Processing Systems},
+  year      = {2017}
+}
+
+% Wasserstein Task Embedding -- replace with the published version once available
+@article{2022arXiv220811726L,
+  author  = {Liu, Xinran and Bai, Yikun and Lu, Yuzhe and
+             Soltoggio, Andrea and Kolouri, Soheil},
+  title   = {Wasserstein Task Embedding for Measuring Task Similarities},
+  journal = {arXiv preprint arXiv:2208.11726},
+  year    = {2022},
+  doi     = {10.48550/arXiv.2208.11726}
+}
diff --git a/src/README.rst b/src/README.rst deleted file mode 100755 index 8f32660..0000000 --- a/src/README.rst +++ /dev/null @@ -1,25 +0,0 @@
-************
-Installation
-************
-
-Requirements
-============
-
-- Python 3.*
-- pip 20.0.2
-
-Installing by using PyPi
-========================
-
-Install
-------
-.. code-block:: bash
-
-   git clone https://github.com/Intelligent-Systems-Phystech/ProjectTemplate.git /tmp/ProjectTemplate
-   python3 -m pip install /tmp/ProjectTemplate/src/
-
-Uninstall
---------
-..
code-block:: bash - - python3 -m pip uninstall mylib diff --git a/src/data_meta_map/__init__.py b/src/data_meta_map/__init__.py new file mode 100644 index 0000000..87fcd56 --- /dev/null +++ b/src/data_meta_map/__init__.py @@ -0,0 +1,17 @@ +from data_meta_map.base_embedder import BaseEmbedder +from data_meta_map.wasserstein_embedder import WassersteinEmbedder +try: + from data_meta_map.dataset2vec_embedder import Dataset2VecEmbedder, dataset2vec +except Exception: # pragma: no cover + # Allow importing the package even when optional heavy deps (e.g. lightning) + # are not available or are broken in the current environment. + Dataset2VecEmbedder = None # type: ignore + dataset2vec = None # type: ignore + +__all__ = [ + "BaseEmbedder", + "WassersteinEmbedder", +] + +if Dataset2VecEmbedder is not None: + __all__ += ["Dataset2VecEmbedder", "dataset2vec"] diff --git a/src/data_meta_map/base_embedder.py b/src/data_meta_map/base_embedder.py new file mode 100644 index 0000000..a299c4d --- /dev/null +++ b/src/data_meta_map/base_embedder.py @@ -0,0 +1,232 @@ +from abc import ABC, abstractmethod +from typing import ( + Any, + Dict, + List, + Optional, + Protocol, + Tuple, + Union, + runtime_checkable +) +import torch +from torch.utils.data import Dataset, DataLoader + + +@runtime_checkable +class SupportsGetItem(Protocol): + """ + Protocol for objects with __getitem__ and __len__ interface. + Ensures compatibility with both Dataset and custom iterable objects. + """ + + def __getitem__(self, index: int) -> Tuple[Any, int]: ... + def __len__(self) -> int: ... + + +class BaseEmbedder(ABC): + def __init__( + self + ): + pass + + @abstractmethod + def embed(self, *args, **kwargs): + raise NotImplementedError( + "Override this method in your Embedder class") + + +# DEPRECATED +class BaseEmbedderDEPRECATED(ABC): + """ + Abstract base class for dataset embedding. 
+ + Designed to unify the transformation of arbitrary datasets + (images, text, tabular data) into feature space with subsequent + computation of geometric distances between distributions. + + Key Features: + - Complete data-type agnosticism (works with any vectorized features) + - Clear separation of stages: preprocessing โ†’ distances โ†’ embeddings โ†’ augmentation + - Support for both raw datasets and pre-configured DataLoaders + - Explicit type annotations for static analysis + + Implementation Requirements: + Subclasses must implement all abstract methods with signatures matching + the specified input/output types. + """ + + def __init__( + self, + emb_dim: int, + device: Union[str, torch.device] = "cpu", + max_samples: Optional[int] = None, + batch_size: int = 64 + ): + """ + Initialize base embedder. + + Args: + emb_dim: Target dimensionality of label embeddings. + device: Computation device ('cpu', 'cuda', or torch.device). + max_samples: Maximum number of samples to process from dataset. + If None, all samples are used. + batch_size: Batch size for DataLoader during data loading. + """ + self.emb_dim = emb_dim + self.device = torch.device(device) + self.max_samples = max_samples + self.batch_size = batch_size + + @abstractmethod + def preprocess_dataset( + self, + data: Union[SupportsGetItem, DataLoader] + ) -> Tuple[torch.Tensor, torch.Tensor]: + """ + Transform dataset/dataloader into feature-label tensor pair. + + Args: + data: Object supporting either: + - Dataset interface: must return (features, label) in __getitem__ + - DataLoader interface: must yield batches (X_batch, y_batch) + + Features must be convertible to tensor of shape [batch_size, feature_dim]. + Labels must be integers (not one-hot encoded). 
+ + Returns: + X: Feature tensor of shape [num_samples, feature_dim], dtype=torch.float32 + Y: Label tensor of shape [num_samples], dtype=torch.long + + Implementation Requirements: + - Automatically create DataLoader when Dataset is provided + - Process data on self.device + - Support self.max_samples parameter (subsampling without replacement) + - Flatten features to [N, D] format if necessary + """ + pass + + @abstractmethod + def compute_pairwise_distances( + self, + datasets: List[Union[SupportsGetItem, DataLoader]], + symmetric: bool = True + ) -> torch.Tensor: + """ + Compute pairwise distance matrix between all classes across all datasets. + + Args: + datasets: List of datasets/dataloaders. Each must contain + integer labels in range [0, num_classes-1]. + symmetric: Flag for symmetric distances. If True, distance between + classes within the same dataset is computed once. + + Returns: + D: Distance tensor of shape [total_classes, total_classes], where + total_classes = sum(num_classes_per_dataset). + D[i, j] represents distance between class i and class j in global numbering. + + Global Class Numbering: + Dataset 0: classes [0, ..., kโ‚€-1] + Dataset 1: classes [kโ‚€, ..., kโ‚€+kโ‚-1] + ... + + Implementation Requirements: + - Support different number of classes across datasets + - Distance must satisfy: D[i, i] = 0, D[i, j] >= 0 + - If symmetric=True: D[i, j] = D[j, i] + """ + pass + + @abstractmethod + def embed_distance_matrix( + self, + distance_matrix: torch.Tensor, + emb_dim: Optional[int] = None + ) -> torch.Tensor: + """ + Transform distance matrix into embeddings via Multidimensional Scaling (MDS). + + Args: + distance_matrix: Distance tensor of shape [N, N], N = total_classes. + Must be symmetric with zero diagonal. + emb_dim: Embedding dimensionality. If None, uses self.emb_dim. + + Returns: + embeddings: Embedding tensor of shape [N, emb_dim], where each row + represents the embedding of the corresponding class + in global numbering. 
+ """ + pass + + @abstractmethod + def augment_features( + self, + data: Union[SupportsGetItem, DataLoader], + label_embeddings: torch.Tensor, + dataset_idx: int, + class_offsets: List[int] + ) -> torch.Tensor: + """ + Augment original features with label embeddings for each sample. + + Args: + data: Dataset or DataLoader to process. + label_embeddings: Embeddings of all classes [total_classes, emb_dim]. + dataset_idx: Index of current dataset in the original datasets list. + class_offsets: List of class offsets for each dataset. + Example: [0, 10, 25] means dataset 0 has classes 0-9, + dataset 1 has classes 10-24, dataset 2 has classes 25+. + + Returns: + Z: Augmented feature tensor of shape [num_samples, feature_dim + emb_dim], + where label embedding is concatenated to each original feature vector. + """ + pass + + def get_class_statistics( + self, + X: torch.Tensor, + Y: torch.Tensor + ) -> Tuple[torch.Tensor, torch.Tensor]: + """ + Compute class-wise statistics (mean, covariance). + + Args: + X: Feature tensor [num_samples, feature_dim] + Y: Label tensor [num_samples], dtype=torch.long + + Returns: + means: Mean tensor [num_classes, feature_dim] + covs: Covariance tensor [num_classes, feature_dim, feature_dim] + """ + unique_labels = torch.unique(Y) + num_classes = len(unique_labels) + feature_dim = X.shape[1] + + means = torch.zeros((num_classes, feature_dim), device=self.device) + covs = torch.zeros( + (num_classes, feature_dim, feature_dim), device=self.device) + + for idx, label in enumerate(unique_labels): + mask = (Y == label) + class_samples = X[mask].float() + means[idx] = class_samples.mean(dim=0) + if class_samples.shape[0] > 1: + covs[idx] = torch.cov(class_samples.T) + else: + # For single sample, covariance is undefined โ€” use zero matrix + covs[idx] = torch.zeros( + (feature_dim, feature_dim), device=self.device) + + return means, covs + + @property + def device(self) -> torch.device: + """Get current computation device.""" + return self._device 
+ + @device.setter + def device(self, device: Union[str, torch.device]) -> None: + """Set computation device with validation.""" + self._device = torch.device(device) diff --git a/src/data_meta_map/dataset2vec/__init__.py b/src/data_meta_map/dataset2vec/__init__.py new file mode 100644 index 0000000..e448e36 --- /dev/null +++ b/src/data_meta_map/dataset2vec/__init__.py @@ -0,0 +1,11 @@ +from .model import Dataset2Vec +from .config import Dataset2VecConfig, OptimizerConfig +from .loader import Dataset2VecLoader, RepeatableDataset2VecLoader + +__all__ = [ + "Dataset2Vec", + "Dataset2VecConfig", + "OptimizerConfig", + "Dataset2VecLoader", + "RepeatableDataset2VecLoader", +] diff --git a/src/data_meta_map/dataset2vec/config.py b/src/data_meta_map/dataset2vec/config.py new file mode 100644 index 0000000..423626c --- /dev/null +++ b/src/data_meta_map/dataset2vec/config.py @@ -0,0 +1,64 @@ +from typing import Annotated, Type + +from pydantic import BaseModel, Field +from pydantic.functional_validators import AfterValidator +from torch import nn +from torch.optim import Adam, Optimizer + +from .utils import Validators + + +class Dataset2VecConfig(BaseModel): + """Configuration of the Dataset2Vec encoder""" + + activation_cls: Type[nn.Module] = Field(default=nn.ReLU) + f_dense_hidden_size: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 32 + f_res_hidden_size: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 32 + f_res_n_layers: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 3 + f_block_repetitions: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 7 + f_out_size: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 32 + g_layers_sizes: Annotated[ + list[int], + AfterValidator(Validators.all_elements_positive), + AfterValidator(Validators.non_empty), + ] = [32, 16, 8] + h_dense_hidden_size: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 16 + h_res_hidden_size: Annotated[ + int, 
AfterValidator(Validators.is_positive) + ] = 16 + h_res_n_layers: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 3 + h_block_repetitions: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 3 + output_size: Annotated[ + int, AfterValidator(Validators.is_positive) + ] = 16 + + +class OptimizerConfig(BaseModel): + """Configuration of the Dataset2Vec training""" + + gamma: Annotated[ + float, AfterValidator(Validators.is_positive) + ] = 1 + optimizer_cls: Type[Optimizer] = Adam + learning_rate: Annotated[ + float, AfterValidator(Validators.is_positive) + ] = 1e-4 + weight_decay: Annotated[ + float, AfterValidator(Validators.non_negative) + ] = 1e-4 diff --git a/src/data_meta_map/dataset2vec/loader.py b/src/data_meta_map/dataset2vec/loader.py new file mode 100644 index 0000000..02f7d57 --- /dev/null +++ b/src/data_meta_map/dataset2vec/loader.py @@ -0,0 +1,219 @@ +from __future__ import annotations + +from copy import deepcopy +from pathlib import Path + +import numpy as np +import pandas as pd +import torch +from numpy.typing import NDArray +from torch import Tensor, from_numpy + +from .utils import ( + DataUtils, + InconsistentTypesException, + InvalidDataTypeException, +) + + +class Dataset2VecLoader: + """Data loader for Dataset2Vec training. + + Each iteration yields a batch of (X1, y1, X2, y2, label) tuples, + where label=1 if both samples are drawn from the same dataset + (positive pair) and label=0 otherwise (negative pair). 
+ """ + + def __init__( + self, + batch_size: int = 32, + n_batches: int = 100, + ): + self.batch_size = batch_size + self.n_batches = n_batches + self.Xs: list[Tensor] = [] + self.ys: list[Tensor] = [] + self.n_datasets: int = 0 + self.released_batches_count: int = 0 + + def load( + self, + data: ( + Path + | list[Path] + | list[pd.DataFrame] + | list[NDArray[np.generic]] + | list[Tensor] + ), + ) -> "Dataset2VecLoader": + datasets = self._read_data_if_needed(data) + self.n_datasets = len(datasets) + self.released_batches_count = 0 + self._setup_xs(datasets) + self._setup_ys(datasets) + return self + + # ------------------------------------------------------------------ # + # Data preparation # + # ------------------------------------------------------------------ # + + def _setup_xs(self, datasets) -> None: + Xs = [ + self._normalize_to_pandas(ds).iloc[:, :-1] + for ds in datasets + ] + Xs = [ + DataUtils.get_preprocessing_pipeline() + .fit_transform(X).values + for X in Xs + ] + self.Xs = [self._to_torch(X) for X in Xs] + + def _setup_ys(self, datasets) -> None: + ys = [ + self._normalize_to_pandas(ds).iloc[:, -1] + for ds in datasets + ] + self.ys = [ + self._to_torch(y.values).reshape(-1, 1) + for y in ys + ] + + def _read_data_if_needed(self, data): + if isinstance(data, Path): + return [ + pd.read_csv(f) for f in sorted(data.iterdir()) + ] + elif ( + isinstance(data, list) + and len(data) > 0 + and isinstance(data[0], Path) + ): + if any(not isinstance(el, Path) for el in data): + raise InconsistentTypesException( + "If any element is Path โ€” all must be Path" + ) + return [pd.read_csv(f) for f in data] + return data + + def _to_torch(self, data: NDArray[np.generic]) -> Tensor: + if isinstance(data, np.ndarray): + return from_numpy(data).type(torch.float) + raise InvalidDataTypeException( + f"{type(data)} is not NDArray." 
+ ) + + def _normalize_to_pandas(self, data) -> pd.DataFrame: + if isinstance(data, Tensor): + return pd.DataFrame(data.numpy()) + elif isinstance(data, pd.DataFrame): + return data + elif isinstance(data, np.ndarray): + return pd.DataFrame(data) + raise InvalidDataTypeException( + f"{type(data)} is not supported. " + "Use Tensor, DataFrame or NDArray." + ) + + # ------------------------------------------------------------------ # + # Iterator protocol # + # ------------------------------------------------------------------ # + + def __len__(self) -> int: + return self.n_batches + + def __iter__(self) -> Dataset2VecLoader: + return deepcopy(self) + + def __next__( + self, + ) -> list[tuple[Tensor, Tensor, Tensor, Tensor, int]]: + if self.released_batches_count == self.n_batches: + raise StopIteration() + self.released_batches_count += 1 + return self._get_batch() + + # ------------------------------------------------------------------ # + # Batch generation # + # ------------------------------------------------------------------ # + + def _get_batch( + self, + ) -> list[tuple[Tensor, Tensor, Tensor, Tensor, int]]: + return [ + self._get_single_example() + for _ in range(self.batch_size) + ] + + def _get_single_example( + self, + ) -> tuple[Tensor, Tensor, Tensor, Tensor, int]: + idx1, idx2 = self._get_random_dataset_indices() + return ( + *self._get_subsample(idx1), + *self._get_subsample(idx2), + int(idx1 == idx2), + ) + + def _get_random_dataset_indices(self) -> tuple[int, int]: + """Sample two dataset indices; same dataset with probability 0.5 (positive pair).""" + if np.random.uniform() >= 0.5: + idx = np.random.choice(self.n_datasets) + return (idx, idx) + idx1, idx2 = np.random.choice( + self.n_datasets, 2, replace=False + ).astype(int) + return (idx1, idx2) + + def _get_subsample( + self, dataset_idx: int + ) -> tuple[Tensor, Tensor]: + X = self.Xs[dataset_idx] + y = self.ys[dataset_idx] + rows_idx, feat_idx, tgt_idx = self._sample_indices(X, y) + X = 
DataUtils.index_tensor_using_lists(X, rows_idx, feat_idx) + y = DataUtils.index_tensor_using_lists(y, rows_idx, tgt_idx) + return X, y + + def _sample_indices( + self, X: Tensor, y: Tensor + ) -> tuple[NDArray, NDArray, NDArray]: + """ + Sample row indices (power-of-two count between 8 and 256), + a random feature subset, and a random target subset. + """ + n_rows = X.shape[0] + assert n_rows >= 8, "Dataset must have at least 8 rows" + + max_q = min(int(np.log2(n_rows)), 8) + q = np.random.choice(np.arange(3, max_q + 1)) + rows_idx = np.random.choice(n_rows, 2 ** q) + feat_idx = DataUtils.sample_random_subset(X.shape[1]) + tgt_idx = DataUtils.sample_random_subset(y.shape[1]) + return rows_idx, feat_idx, tgt_idx + + +class RepeatableDataset2VecLoader(Dataset2VecLoader): + """ + Dataset2VecLoader variant that returns identical batches on every iter() call. + Intended for validation and testing where determinism is required. + """ + + def load(self, data): + super().load(data) + get_batch = super()._get_batch + self._fixed_batches = [ + get_batch() + for _ in range(self.n_batches) + ] + return self + + def __iter__(self) -> "RepeatableDataset2VecLoader": + return deepcopy(self) + + def __next__(self): + if self.released_batches_count == len(self._fixed_batches): + raise StopIteration() + batch = self._fixed_batches[self.released_batches_count] + self.released_batches_count += 1 + return batch diff --git a/src/data_meta_map/dataset2vec/model.py b/src/data_meta_map/dataset2vec/model.py new file mode 100644 index 0000000..7cc7ef9 --- /dev/null +++ b/src/data_meta_map/dataset2vec/model.py @@ -0,0 +1,149 @@ +from typing import Any, Type + +import torch +from torch import Tensor, mean, nn, stack + +from .config import Dataset2VecConfig, OptimizerConfig +from .train import LightningBase + + +class FeedForward(nn.Module): + + def __init__( + self, + input_size: int, + hidden_size: int, + n_layers: int, + output_size: int, + activation_cls: Type[nn.Module], + ): + 
super().__init__() + assert n_layers >= 1, "Network must have at least one layer" + + self.input_size = input_size + self.hidden_size = hidden_size + self.n_layers = n_layers + self.output_size = output_size + self.activation_cls = activation_cls + + if n_layers == 1: + self._init_single_layer() + else: + self._init_multiple_layers() + + def _init_single_layer(self) -> None: + self.block = nn.Sequential( + nn.Linear(self.input_size, self.output_size), + self.activation_cls() + ) + + def _init_multiple_layers(self) -> None: + components = [ + nn.Linear(self.input_size, self.hidden_size), + self.activation_cls(), + ] + for _ in range(self.n_layers - 2): + components.append(nn.Linear(self.hidden_size, self.hidden_size)) + components.append(self.activation_cls()) + components.append(nn.Linear(self.hidden_size, self.output_size)) + components.append(self.activation_cls()) + self.block = nn.Sequential(*components) + + def forward(self, X: Tensor) -> Any: + return self.block(X) + + +class ResidualBlock(FeedForward): + + def forward(self, X: Tensor) -> Any: + return X + super().forward(X) + + +class Dataset2Vec(LightningBase): + + def __init__( + self, + config: Dataset2VecConfig = Dataset2VecConfig(), + optimizer_config: OptimizerConfig = OptimizerConfig(), + ): + super().__init__(optimizer_config) + self.config = config + self.output_size = config.output_size + self._initialize_f(config) + self._initialize_g(config) + self._initialize_h(config) + + def _initialize_f(self, config: Dataset2VecConfig) -> None: + f_components: list[nn.Module] = [ + nn.Linear(2, config.f_dense_hidden_size), + config.activation_cls(), + ] + for _ in range(config.f_block_repetitions): + f_components.append(ResidualBlock( + input_size=config.f_dense_hidden_size, + hidden_size=config.f_res_hidden_size, + n_layers=config.f_res_n_layers, + output_size=config.f_dense_hidden_size, + activation_cls=config.activation_cls, + )) + f_components.append( + nn.Linear(config.f_dense_hidden_size, 
config.f_out_size) + ) + self.f = nn.Sequential(*f_components) + + def _initialize_g(self, config: Dataset2VecConfig) -> None: + g_components: list[nn.Module] = [ + nn.Linear(config.f_out_size, config.g_layers_sizes[0]), + config.activation_cls(), + ] + for prev, curr in zip( + config.g_layers_sizes[:-1], config.g_layers_sizes[1:] + ): + g_components.append(nn.Linear(prev, curr)) + g_components.append(config.activation_cls()) + self.g = nn.Sequential(*g_components) + + def _initialize_h(self, config: Dataset2VecConfig) -> None: + h_components: list[nn.Module] = [ + nn.Linear(config.g_layers_sizes[-1], config.h_dense_hidden_size), + config.activation_cls(), + ] + for _ in range(config.h_block_repetitions): + h_components.append(ResidualBlock( + input_size=config.h_dense_hidden_size, + hidden_size=config.h_res_hidden_size, + n_layers=config.h_res_n_layers, + output_size=config.h_dense_hidden_size, + activation_cls=config.activation_cls, + )) + h_components.append( + nn.Linear(config.h_dense_hidden_size, config.output_size) + ) + self.h = nn.Sequential(*h_components) + + def forward(self, X: Tensor, y: Tensor) -> Any: + assert X.shape[0] == y.shape[0], \ + "X and y must have the same dimensionality" + if len(y.shape) == 1: + y = y.reshape(-1, 1) + pairs = self._generate_feature_target_pairs(X, y) + inter_enc = mean(self.f(pairs), dim=1) + joint_enc = mean(self.g(inter_enc), dim=0) + return self.h(joint_enc) + + def _generate_feature_target_pairs( + self, X: Tensor, y: Tensor + ) -> Tensor: + X_proc = X.T.repeat_interleave(y.shape[1], dim=0) + y_proc = y.T.repeat(X.shape[1], 1) + return stack((X_proc, y_proc), 2) + + def calculate_loss( + self, labels: Tensor, similarities: Tensor + ) -> Tensor: + same = torch.where(labels == 1)[0] + different = torch.where(labels == 0)[0] + return -( + torch.log(similarities[same]).mean() + + torch.log(1 - similarities[different]).mean() + ) diff --git a/src/data_meta_map/dataset2vec/train.py b/src/data_meta_map/dataset2vec/train.py 
new file mode 100644 index 0000000..175cc98 --- /dev/null +++ b/src/data_meta_map/dataset2vec/train.py @@ -0,0 +1,199 @@ +from abc import ABC, abstractmethod +from typing import Any, Mapping + +import torch +from torch import Tensor, optim +from torch.optim.lr_scheduler import LinearLR + +from .config import OptimizerConfig + + +try: + import pytorch_lightning as pl # type: ignore +except Exception: # pragma: no cover + # Optional dependency: the library can still be imported and used for + # embedding with pre-trained weights even if Lightning (or its transitive + # deps) is not available in the runtime. + pl = None # type: ignore + + +if pl is None: # pragma: no cover + class _LightningModuleFallback(torch.nn.Module): + """Minimal subset of LightningModule API used by this project.""" + + def save_hyperparameters(self, *args: Any, **kwargs: Any) -> None: + return None + + def log(self, *args: Any, **kwargs: Any) -> None: + return None + + @property + def device(self) -> torch.device: + # Keep behavior close to Lightning: default to cpu if the module + # is not moved to a device explicitly. 
+ return next(self.parameters(), torch.empty(0)).device + + + _LightningBaseParent = _LightningModuleFallback + else: + _LightningBaseParent = pl.LightningModule + + + class LightningBase(_LightningBaseParent, ABC): + + def __init__(self, optimizer_config: OptimizerConfig = OptimizerConfig()): + super().__init__() + self.gamma = optimizer_config.gamma + self.optimizer_cls = optimizer_config.optimizer_cls + self.learning_rate = optimizer_config.learning_rate + self.weight_decay = optimizer_config.weight_decay + self.save_hyperparameters() + + @abstractmethod + def forward(self, X: Tensor, y: Tensor) -> Tensor: + pass + + @abstractmethod + def calculate_loss(self, labels: Tensor, similarities: Tensor) -> Tensor: + pass + + # ------------------------------------------------------------------ # + # Training hooks # + # ------------------------------------------------------------------ # + + def on_train_epoch_start(self) -> None: + self.training_labels: list[Tensor] = [] + self.training_predictions: list[Tensor] = [] + + def training_step( + self, + batch: list[tuple[Tensor, Tensor, Tensor, Tensor, int]], + batch_idx: int,  # added to match the Lightning hook signature + ) -> Mapping[str, Tensor]: + labels, similarities = self.extract_labels_and_similarities_from_batch( + batch + ) + loss = self.calculate_loss(labels, similarities) + self.log( + "train_step_loss", loss, + prog_bar=True, batch_size=len(batch) + ) + return {"loss": loss, "predictions": similarities} + + def on_train_batch_end( + self, + outputs, + batch, + batch_idx: int,  # added to match the Lightning hook signature + ) -> None: + if isinstance(outputs, Mapping): + self.training_predictions.append(outputs["predictions"]) + else: + raise TypeError("outputs should have type Mapping[str, Any]") + self.training_labels.append(Tensor([obs[-1] for obs in batch])) + + def on_train_epoch_end(self) -> None: + training_labels = torch.concat(self.training_labels, dim=0) + training_predictions = torch.concat(self.training_predictions, dim=0) + self.log( + 
"train_accuracy", + ( + training_labels.to(self.device) + == (training_predictions >= 0.5) + .type(torch.int32) + .to(self.device) + ) + .type(torch.float32) + .mean(), + ) + self.log( + "train_loss", + self.calculate_loss(training_labels, training_predictions), + ) + + # ------------------------------------------------------------------ # + # Validation hooks # + # ------------------------------------------------------------------ # + + def on_validation_epoch_start(self) -> None: + self.validation_labels: list[Tensor] = [] + self.validation_predictions: list[Tensor] = [] + + def validation_step( + self, + batch: list[tuple[Tensor, Tensor, Tensor, Tensor, int]], + batch_idx: int, # โœ… ะธัะฟั€ะฐะฒะปะตะฝะพ โ€” ะฑั‹ะปะฐ ะณะปะฐะฒะฝะฐั ะฟั€ะธั‡ะธะฝะฐ ะพัˆะธะฑะบะธ + ) -> Mapping[str, Tensor]: + labels, similarities = self.extract_labels_and_similarities_from_batch( + batch + ) + loss = self.calculate_loss(labels, similarities) + return {"loss": loss, "predictions": similarities} + + def on_validation_batch_end( + self, + outputs, + batch, + batch_idx: int, # โœ… ะดะพะฑะฐะฒะปะตะฝ + dataloader_idx: int = 0, + ) -> None: + if isinstance(outputs, Mapping): + self.validation_predictions.append(outputs["predictions"]) + else: + raise TypeError("outputs should have type Mapping[str, Any]") + self.validation_labels.append(Tensor([obs[-1] for obs in batch])) + + def on_validation_epoch_end(self) -> None: + validation_labels = torch.concat(self.validation_labels, dim=0) + validation_predictions = torch.concat( + self.validation_predictions, dim=0 + ) + self.log( + "val_accuracy", + ( + validation_labels.to(self.device) + == (validation_predictions >= 0.5) + .type(torch.int32) + .to(self.device) + ) + .type(torch.float32) + .mean(), + ) + self.log( + "val_loss", + self.calculate_loss(validation_labels, validation_predictions), + ) + + # ------------------------------------------------------------------ # + # Helpers # + # 
------------------------------------------------------------------ # + + def extract_labels_and_similarities_from_batch( + self, batch: list[tuple[Tensor, Tensor, Tensor, Tensor, int]] + ) -> tuple[Tensor, Tensor]: + similarities = [] + labels = [] + for X1, y1, X2, y2, label in batch: + emb1 = self.forward(X1, y1) + emb2 = self.forward(X2, y2) + similarities.append( + torch.exp(-self.gamma * torch.norm(emb1 - emb2)) + ) + labels.append(label) + return torch.Tensor(labels), torch.stack(similarities) + + def configure_optimizers(self): + optimizer = self.optimizer_cls( + self.parameters(), + lr=self.learning_rate, + weight_decay=self.weight_decay, + ) + scheduler = LinearLR(optimizer) + return [optimizer], [ + { + "scheduler": scheduler, + "interval": "epoch", + "monitor": "val_accuracy", + "frequency": 1, + } + ] diff --git a/src/data_meta_map/dataset2vec/utils.py b/src/data_meta_map/dataset2vec/utils.py new file mode 100644 index 0000000..8b20e45 --- /dev/null +++ b/src/data_meta_map/dataset2vec/utils.py @@ -0,0 +1,90 @@ +import numpy as np +from numpy.typing import NDArray +from sklearn.base import BaseEstimator +from sklearn.compose import make_column_selector, make_column_transformer +from sklearn.impute import SimpleImputer +from sklearn.pipeline import Pipeline +from sklearn.preprocessing import MinMaxScaler, OneHotEncoder +from torch import Tensor + + +class Validators: + + @staticmethod + def is_positive(number: int) -> int: + assert number > 0, "Number is non-positive" + return number + + @staticmethod + def non_negative(number: int) -> int: + assert number >= 0, "Number is negative" + return number + + @staticmethod + def all_elements_positive(arr: list[int]) -> list[int]: + assert all(map(lambda x: x > 0, arr)), \ + "List contains non-positive elements" + return arr + + @staticmethod + def non_empty(arr: list[int]) -> list[int]: + assert len(arr) > 0, "List is empty" + return arr + + +class DataUtils: + + @staticmethod + def get_preprocessing_pipeline() 
-> BaseEstimator: + cat_pipeline = Pipeline([ + ("imputer", SimpleImputer(strategy="most_frequent")), + ("one-hot", OneHotEncoder( + sparse_output=False, handle_unknown="ignore" + )), + ]).set_output(transform="pandas") + + num_pipeline = Pipeline([ + ("imputer", SimpleImputer(strategy="mean")), + ("scaler", MinMaxScaler()), + ]).set_output(transform="pandas") + + pipeline = Pipeline([ + ("transformers", make_column_transformer( + (cat_pipeline, make_column_selector( + dtype_include=("object", "category") + )), + (num_pipeline, make_column_selector( + dtype_include=np.number + )), + )), + ]).set_output(transform="pandas") + return pipeline + + @staticmethod + def sample_random_subset( + a: int | NDArray[np.generic], + ) -> NDArray[np.generic]: + if isinstance(a, int): + a = np.arange(a) + if len(a) == 1: + return a + subset_idx = np.random.uniform(size=len(a)) < 0.5 + if np.sum(subset_idx) == 0: + return a + return a[subset_idx] + + @staticmethod + def index_tensor_using_lists( + tensor: Tensor, + rows_idx: NDArray[np.generic], + col_idx: NDArray[np.generic], + ) -> Tensor: + return tensor[rows_idx].T[col_idx].T + + +class InvalidDataTypeException(Exception): + pass + + +class InconsistentTypesException(Exception): + pass diff --git a/src/data_meta_map/dataset2vec_embedder.py b/src/data_meta_map/dataset2vec_embedder.py new file mode 100644 index 0000000..f9bc759 --- /dev/null +++ b/src/data_meta_map/dataset2vec_embedder.py @@ -0,0 +1,162 @@ +from pathlib import Path + +import numpy as np +import pandas as pd +from numpy.typing import NDArray +from torch import Tensor + +from data_meta_map.base_embedder import BaseEmbedder +from data_meta_map.dataset2vec.model import Dataset2Vec +from data_meta_map.dataset2vec.loader import ( + Dataset2VecLoader, + RepeatableDataset2VecLoader, +) + + +def dataset2vec( + model: Dataset2Vec, + X: Tensor, + y: Tensor, + fit_data: Path | list[Path] | list[pd.DataFrame] | list | None = None, + **kwargs, +) -> NDArray[np.floating]: + 
"""Convenience function: fit (optional) and embed a single tabular dataset. + + Consistent with the task2vec() convenience function in task2vec.py. + + Args: + model: Pre-initialized Dataset2Vec model (e.g. from get_model('dataset2vec')). + X: Feature tensor [n_samples, n_features]. + y: Target tensor [n_samples]. + fit_data: Training datasets to fit the model before embedding. + If None, the model is used as-is (must already be trained). + **kwargs: Forwarded to Dataset2VecEmbedder.__init__. + + Returns: + NDArray: Embedding vector of shape [output_size]. + """ + embedder = Dataset2VecEmbedder(model, **kwargs) + if fit_data is not None: + embedder.fit(fit_data) + return embedder.embed(X, y) + + +class Dataset2VecEmbedder(BaseEmbedder): + """ + Dataset embedder based on Dataset2Vec (Iwata & Ghahramani, 2020). + + Consistent with Task2Vec and WassersteinEmbedder: the model is injected + via __init__ rather than created internally. + + Typical usage: + model = get_model('dataset2vec') + embedder = Dataset2VecEmbedder(model, max_epochs=20) + embedder.fit(train_datasets) + embedding = embedder.embed(X_test, y_test) + + Args: + model: Pre-initialized Dataset2Vec model. + max_epochs: Number of training epochs. + batch_size: Batch size for the data loader. + n_batches: Number of batches per epoch. + """ + + def __init__( + self, + model: Dataset2Vec, + max_epochs: int = 10, + batch_size: int = 32, + n_batches: int = 100, + ): + self.model = model + self.max_epochs = max_epochs + self.batch_size = batch_size + self.n_batches = n_batches + self._is_fitted = False + + def fit( + self, + data: Path | list[Path] | list[pd.DataFrame] | list, + val_data: Path | list[Path] | list[pd.DataFrame] | list | None = None, + trainer_kwargs: dict | None = None, + ) -> "Dataset2VecEmbedder": + """Train Dataset2Vec on a collection of tabular datasets. + + Args: + data: Training data. Accepts a directory path, a list of file + paths, DataFrames, or NDArrays. 
Each element is one dataset; + the last column is treated as the target. + val_data: Validation data in the same format. Optional. + trainer_kwargs: Additional kwargs for pytorch_lightning.Trainer. + + Returns: + self + """ + import pytorch_lightning as pl + + train_loader = Dataset2VecLoader( + batch_size=self.batch_size, + n_batches=self.n_batches, + ).load(data) + + val_loader = None + if val_data is not None: + val_loader = RepeatableDataset2VecLoader( + batch_size=self.batch_size, + n_batches=self.n_batches // 5, + ).load(val_data) + + trainer_kwargs = trainer_kwargs or {} + trainer = pl.Trainer(max_epochs=self.max_epochs, **trainer_kwargs) + trainer.fit(self.model, train_loader, val_loader) + + self._is_fitted = True + return self + + def embed( + self, + X: Tensor, + y: Tensor, + ) -> NDArray[np.floating]: + """Compute embedding for a single tabular dataset. + + Args: + X: Feature tensor of shape [n_samples, n_features]. + y: Target tensor of shape [n_samples] or [n_samples, 1]. + + Returns: + NDArray: 1-D embedding vector of shape [output_size]. + + Raises: + RuntimeError: If called before fit(). + """ + if not self._is_fitted: + raise RuntimeError( + "Model is not fitted. Call fit() before embed()." + ) + self.model.eval() + embedding = self.model(X, y) + return embedding.detach().cpu().numpy() + + def save(self, path: str) -> None: + """Save model weights to disk. + + Args: + path: File path for the saved state dict. + """ + import torch + torch.save(self.model.state_dict(), path) + + def load(self, path: str) -> "Dataset2VecEmbedder": + """Load model weights from disk. + + Args: + path: File path to the saved state dict. 
+ + Returns: + self + """ + import torch + self.model.load_state_dict(torch.load(path)) + self._is_fitted = True + return self diff --git a/src/data_meta_map/datasets.py b/src/data_meta_map/datasets.py new file mode 100644 index 0000000..48662af --- /dev/null +++ b/src/data_meta_map/datasets.py @@ -0,0 +1,366 @@ +# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"). You +# may not use this file except in compliance with the License. A copy of +# the License is located at +# +# http://aws.amazon.com/apache2.0/ +# +# or in the "license" file accompanying this file. This file is +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF +# ANY KIND, either express or implied. See the License for the specific +# language governing permissions and limitations under the License. + + +import collections +import torchvision.transforms as transforms +import os +import json + +try: + from IPython import embed +except: + pass + +_DATASETS = {} + +Dataset = collections.namedtuple( + 'Dataset', ['trainset', 'testset']) + + +def _add_dataset(dataset_fn): + _DATASETS[dataset_fn.__name__] = dataset_fn + return dataset_fn + + +def _get_transforms(augment=True, normalize=None): + if normalize is None: + normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], + std=[0.229, 0.224, 0.225]) + + basic_transform = [transforms.ToTensor(), normalize] + + transform_train = [] + if augment: + transform_train += [ + transforms.RandomResizedCrop(224), + transforms.RandomHorizontalFlip(), + ] + else: + transform_train += [ + transforms.Resize(256), + transforms.CenterCrop(224), + ] + transform_train += basic_transform + transform_train = transforms.Compose(transform_train) + + transform_test = [ + transforms.Resize(256), + transforms.CenterCrop(224), + ] + transform_test += basic_transform + transform_test = transforms.Compose(transform_test) + + return transform_train, transform_test 
+ + +def _get_mnist_transforms(augment=True, invert=False, transpose=False): + transform = [ + transforms.ToTensor(), + ] + if invert: + transform += [transforms.Lambda(lambda x: 1. - x)] + if transpose: + transform += [transforms.Lambda(lambda x: x.transpose(2, 1))] + transform += [ + transforms.Normalize((.5,), (.5,)), + transforms.Lambda(lambda x: x.expand(3, 32, 32)) + ] + + transform_train = [] + transform_train += [transforms.Pad(padding=2)] + if augment: + transform_train += [transforms.RandomCrop(32, padding=4)] + transform_train += transform + transform_train = transforms.Compose(transform_train) + + transform_test = [] + transform_test += [transforms.Pad(padding=2)] + transform_test += transform + transform_test = transforms.Compose(transform_test) + + return transform_train, transform_test + + +def _get_cifar_transforms(augment=True): + transform = [ + transforms.ToTensor(), + transforms.Normalize((0.5071, 0.4867, 0.4408), + (0.2675, 0.2565, 0.2761)), + ] + transform_train = [] + if augment: + transform_train += [ + transforms.Pad(padding=4, fill=(125, 123, 113)), + transforms.RandomCrop(32, padding=0), + transforms.RandomHorizontalFlip()] + transform_train += transform + transform_train = transforms.Compose(transform_train) + transform_test = [] + transform_test += transform + transform_test = transforms.Compose(transform_test) + return transform_train, transform_test + + +def set_metadata(trainset, testset, config, dataset_name): + trainset.metadata = { + 'dataset': dataset_name, + 'task_id': config.task_id, + 'task_name': trainset.task_name, + } + testset.metadata = { + 'dataset': dataset_name, + 'task_id': config.task_id, + 'task_name': testset.task_name, + } + return trainset, testset + + +@_add_dataset +def inat2018(root, config): + from dataset.inat import iNat2018Dataset + transform_train, transform_test = _get_transforms() + trainset = iNat2018Dataset( + root, split='train', transform=transform_train, task_id=config.task_id) + testset = 
iNat2018Dataset( + root, split='val', transform=transform_test, task_id=config.task_id) + trainset, testset = set_metadata(trainset, testset, config, 'inat2018') + return trainset, testset + + + def load_tasks_map(tasks_map_file): + assert os.path.exists(tasks_map_file), tasks_map_file + with open(tasks_map_file, 'r') as f: + tasks_map = json.load(f) + tasks_map = {int(k): int(v) for k, v in tasks_map.items()} + return tasks_map + + + @_add_dataset + def cub_inat2018(root, config): + """This meta-task is the concatenation of CUB-200 (first 25 tasks) and iNat (last 207 tasks). + + - The first 10 tasks are classification of the animal species inside one of 10 orders of birds in CUB-200 + (considering all orders except passeriformes). + - The next 15 tasks are classification of species inside the 15 families of the order of passeriformes. + - The remaining 207 tasks are classification of the species inside each of the 207 families in iNat. + + As noted above, for CUB-200 the first 10 tasks are classification of species inside an order, rather than inside a family + as done in iNat (recall order > family > species). This is done because CUB-200 has very few images + in each family of birds (except for the families of passeriformes). Hence, we go up a step in the taxonomy and + consider classification inside orders rather than families.
+ """ + NUM_CUB = 25 + NUM_CUB_ORDERS = 10 + NUM_INAT = 207 + assert 0 <= config.task_id < NUM_CUB + NUM_INAT + transform_train, transform_test = _get_transforms() + if 0 <= config.task_id < NUM_CUB: + # CUB + from dataset.cub import CUBTasks, CUBDataset + tasks_map_file = os.path.join( + root, 'cub/CUB_200_2011', 'final_tasks_map.json') + tasks_map = load_tasks_map(tasks_map_file) + task_id = tasks_map[config.task_id] + + if config.task_id < NUM_CUB_ORDERS: + # CUB orders + train_tasks = CUBTasks(CUBDataset(root, split='train')) + trainset = train_tasks.generate(task_id=task_id, + use_species_names=True, + transform=transform_train) + test_tasks = CUBTasks(CUBDataset(root, split='test')) + testset = test_tasks.generate(task_id=task_id, + use_species_names=True, + transform=transform_test) + else: + # CUB passeriformes families + train_tasks = CUBTasks(CUBDataset(root, split='train')) + trainset = train_tasks.generate(task_id=task_id, + task='family', + taxonomy_file='passeriformes.txt', + use_species_names=True, + transform=transform_train) + test_tasks = CUBTasks(CUBDataset(root, split='test')) + testset = test_tasks.generate(task_id=task_id, + task='family', + taxonomy_file='passeriformes.txt', + use_species_names=True, + transform=transform_test) + else: + # iNat2018 + from dataset.inat import iNat2018Dataset + tasks_map_file = os.path.join(root, 'inat2018', 'final_tasks_map.json') + tasks_map = load_tasks_map(tasks_map_file) + task_id = tasks_map[config.task_id - NUM_CUB] + + trainset = iNat2018Dataset( + root, split='train', transform=transform_train, task_id=task_id) + testset = iNat2018Dataset( + root, split='val', transform=transform_test, task_id=task_id) + trainset, testset = set_metadata(trainset, testset, config, 'cub_inat2018') + return trainset, testset + + +@_add_dataset +def imat2018fashion(root, config): + NUM_IMAT = 228 + assert 0 <= config.task_id < NUM_IMAT + from dataset.imat import iMat2018FashionDataset, iMat2018FashionTasks + 
transform_train, transform_test = _get_transforms() + train_tasks = iMat2018FashionTasks( + iMat2018FashionDataset(root, split='train')) + trainset = train_tasks.generate(task_id=config.task_id, + transform=transform_train) + test_tasks = iMat2018FashionTasks( + iMat2018FashionDataset(root, split='validation')) + testset = test_tasks.generate(task_id=config.task_id, + transform=transform_test) + trainset, testset = set_metadata( + trainset, testset, config, 'imat2018fashion') + return trainset, testset + + +@_add_dataset +def split_mnist(root, config): + assert isinstance(config.task_id, tuple) + from dataset.mnist import MNISTDataset, SplitMNISTTask + transform_train, transform_test = _get_mnist_transforms() + train_tasks = SplitMNISTTask(MNISTDataset(root, train=True)) + trainset = train_tasks.generate( + classes=config.task_id, transform=transform_train) + test_tasks = SplitMNISTTask(MNISTDataset(root, train=False)) + testset = test_tasks.generate( + classes=config.task_id, transform=transform_test) + trainset, testset = set_metadata(trainset, testset, config, 'split_mnist') + return trainset, testset + + +@_add_dataset +def split_cifar(root, config): + assert 0 <= config.task_id < 11 + from dataset.cifar import CIFAR10Dataset, CIFAR100Dataset, SplitCIFARTask + transform_train, transform_test = _get_cifar_transforms() + train_tasks = SplitCIFARTask(CIFAR10Dataset( + root, train=True), CIFAR100Dataset(root, train=True)) + trainset = train_tasks.generate( + task_id=config.task_id, transform=transform_train) + test_tasks = SplitCIFARTask(CIFAR10Dataset( + root, train=False), CIFAR100Dataset(root, train=False)) + testset = test_tasks.generate( + task_id=config.task_id, transform=transform_test) + trainset, testset = set_metadata(trainset, testset, config, 'split_cifar') + return trainset, testset + + +@_add_dataset +def cifar10_mnist(root, config): + from dataset.cifar import CIFAR10Dataset + from dataset.mnist import MNISTDataset + from dataset.expansion import 
UnionClassificationTaskExpander + transform_train, transform_test = _get_cifar_transforms() + trainset = UnionClassificationTaskExpander(merge_duplicate_images=False)( + [CIFAR10Dataset(root, train=True), MNISTDataset(root, train=True, expand=True)], transform=transform_train) + testset = UnionClassificationTaskExpander(merge_duplicate_images=False)( + [CIFAR10Dataset(root, train=False), MNISTDataset(root, train=False, expand=True)], transform=transform_test) + return trainset, testset + + +@_add_dataset +def cifar10(root): + from torchvision.datasets import CIFAR10 + transform = transforms.Compose([ + transforms.Resize(224), + transforms.ToTensor(), + transforms.Normalize((0.5071, 0.4867, 0.4408), + (0.2675, 0.2565, 0.2761)), + ]) + trainset = CIFAR10(root, train=True, transform=transform, download=True) + testset = CIFAR10(root, train=False, transform=transform) + return trainset, testset + + +@_add_dataset +def cifar100(root): + from torchvision.datasets import CIFAR100 + transform = transforms.Compose([ + transforms.Resize(224), + transforms.ToTensor(), + transforms.Normalize((0.5071, 0.4867, 0.4408), + (0.2675, 0.2565, 0.2761)), + ]) + trainset = CIFAR100(root, train=True, transform=transform, download=True) + testset = CIFAR100(root, train=False, transform=transform) + return trainset, testset + + +@_add_dataset +def mnist(root): + from torchvision.datasets import MNIST + transform = transforms.Compose([ + lambda x: x.convert("RGB"), + transforms.Resize(224), + transforms.ToTensor(), + # transforms.Normalize((0.5, 0.5, 0.5), (1., 1., 1.)), + ]) + trainset = MNIST(root, train=True, transform=transform, download=True) + testset = MNIST(root, train=False, transform=transform) + return trainset, testset + + +@_add_dataset +def letters(root): + from torchvision.datasets import EMNIST + transform = transforms.Compose([ + lambda x: x.convert("RGB"), + transforms.Resize(224), + transforms.ToTensor(), + # transforms.Normalize((0.5, 0.5, 0.5), (1., 1., 1.)), + ]) + 
trainset = EMNIST(root, train=True, split='letters', + transform=transform, download=True) + testset = EMNIST(root, train=False, split='letters', transform=transform) + return trainset, testset + + +@_add_dataset +def kmnist(root): + from torchvision.datasets import KMNIST + transform = transforms.Compose([ + lambda x: x.convert("RGB"), + transforms.Resize(224), + transforms.ToTensor(), + ]) + trainset = KMNIST(root, train=True, transform=transform, download=False) + testset = KMNIST(root, train=False, transform=transform) + return trainset, testset + + +@_add_dataset +def stl10(root): + from torchvision.datasets import STL10 + transform = transforms.Compose([ + transforms.Resize(224), + transforms.ToTensor(), + transforms.Normalize((0.5071, 0.4867, 0.4408), + (0.2675, 0.2565, 0.2761)), + ]) + trainset = STL10(root, split='train', transform=transform, download=True) + testset = STL10(root, split='test', transform=transform) + trainset.targets = trainset.labels + testset.targets = testset.labels + return trainset, testset + + +def get_dataset(root, config=None): + return _DATASETS[config.name](os.path.expanduser(root), config) diff --git a/src/data_meta_map/models.py b/src/data_meta_map/models.py new file mode 100644 index 0000000..ac5cd94 --- /dev/null +++ b/src/data_meta_map/models.py @@ -0,0 +1,115 @@ +# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"). You +# may not use this file except in compliance with the License. A copy of +# the License is located at +# +# http://aws.amazon.com/apache2.0/ +# +# or in the "license" file accompanying this file. This file is +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF +# ANY KIND, either express or implied. See the License for the specific +# language governing permissions and limitations under the License. 
+
+
+import torch.utils.model_zoo as model_zoo
+
+import torchvision.models.resnet as resnet
+import torch
+
+from data_meta_map.task2vec import ProbeNetwork
+
+_MODELS = {}
+
+
+def _add_model(model_fn):
+    _MODELS[model_fn.__name__] = model_fn
+    return model_fn
+
+
+class ResNet(resnet.ResNet, ProbeNetwork):
+
+    def __init__(self, block, layers, num_classes=1000):
+        super(ResNet, self).__init__(block, layers, num_classes)
+        # Saves the ordered list of layers. We need this to forward from an arbitrary intermediate layer.
+        self.layers = [
+            self.conv1, self.bn1, self.relu,
+            self.maxpool, self.layer1, self.layer2,
+            self.layer3, self.layer4, self.avgpool,
+            lambda z: torch.flatten(z, 1), self.fc
+        ]
+
+    @property
+    def classifier(self):
+        return self.fc
+
+    # @ProbeNetwork.classifier.setter
+    # def classifier(self, val):
+    #     self.fc = val
+
+    # Modified forward method that allows feeding cached activations in from an intermediate
+    # layer of the network
+    def forward(self, x, start_from=0):
+        """Replaces the default forward so that we can forward features starting from any intermediate layer."""
+        for layer in self.layers[start_from:]:
+            x = layer(x)
+        return x
+
+
+@_add_model
+def resnet18(pretrained=False, num_classes=1000):
+    """Constructs a ResNet-18 model.
+    Args:
+        pretrained (bool): If True, returns a model pre-trained on ImageNet
+    """
+    model: ProbeNetwork = ResNet(
+        resnet.BasicBlock, [2, 2, 2, 2], num_classes=num_classes)
+    if pretrained:
+        state_dict = model_zoo.load_url(
+            'https://download.pytorch.org/models/resnet18-5c106cde.pth')
+        state_dict = {k: v for k, v in state_dict.items() if 'fc' not in k}
+        model.load_state_dict(state_dict, strict=False)
+    return model
+
+
+@_add_model
+def resnet34(pretrained=False, num_classes=1000):
+    """Constructs a ResNet-34 model.
+    Args:
+        pretrained (bool): If True, returns a model pre-trained on ImageNet
+    """
+    model = ResNet(resnet.BasicBlock, [3, 4, 6, 3], num_classes=num_classes)
+    if pretrained:
+        state_dict = model_zoo.load_url(
+            'https://download.pytorch.org/models/resnet34-333f7ec4.pth')
+        state_dict = {k: v for k, v in state_dict.items() if 'fc' not in k}
+        model.load_state_dict(state_dict, strict=False)
+    return model
+
+
+from data_meta_map.dataset2vec.model import Dataset2Vec as _Dataset2VecModel
+from data_meta_map.dataset2vec.config import (
+    Dataset2VecConfig as _Dataset2VecConfig,
+    OptimizerConfig as _OptimizerConfig,
+)
+
+
+@_add_model
+def dataset2vec(config=None, optimizer_config=None, **_):
+    """Constructs a Dataset2Vec model for tabular dataset embedding.
+
+    Args:
+        config: Dataset2Vec architecture configuration. If None, uses defaults.
+        optimizer_config: Optimizer configuration. If None, uses defaults.
+        **_: Absorbs unused kwargs from get_model() (pretrained, num_classes).
+    """
+    cfg = config if config is not None else _Dataset2VecConfig()
+    opt_cfg = optimizer_config if optimizer_config is not None else _OptimizerConfig()
+    return _Dataset2VecModel(cfg, opt_cfg)
+
+
+def get_model(model_name, pretrained=False, num_classes=1000):
+    try:
+        return _MODELS[model_name](pretrained=pretrained, num_classes=num_classes)
+    except KeyError:
+        raise ValueError(f"Architecture {model_name} not implemented.")
diff --git a/src/data_meta_map/task2vec/__init__.py b/src/data_meta_map/task2vec/__init__.py
new file mode 100644
index 0000000..5098145
--- /dev/null
+++ b/src/data_meta_map/task2vec/__init__.py
@@ -0,0 +1,10 @@
+# src/data_meta_map/task2vec/__init__.py
+from .task2vec import task2vec, Task2Vec, ProbeNetwork
+from .task_similarity import plot_distance_matrix
+
+__all__ = [
+    'task2vec',
+    'Task2Vec',
+    'ProbeNetwork',
+    'plot_distance_matrix',
+]
diff --git a/src/data_meta_map/task2vec/task2vec.py b/src/data_meta_map/task2vec/task2vec.py
new file mode 100644
index 0000000..9989a3d
--- /dev/null
+++
b/src/data_meta_map/task2vec/task2vec.py
@@ -0,0 +1,377 @@
+# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+#     http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+import itertools
+import math
+from abc import ABC, abstractmethod
+
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import numpy as np
+from tqdm.auto import tqdm
+import logging
+from data_meta_map.task2vec import variational
+from torch.utils.data import DataLoader, Dataset
+from torch.optim.optimizer import Optimizer
+from data_meta_map.task2vec.utils import AverageMeter, get_error, get_device
+
+from data_meta_map import BaseEmbedder
+
+
+class Embedding:
+    def __init__(self, hessian, scale, meta=None):
+        self.hessian = np.array(hessian)
+        self.scale = np.array(scale)
+        self.meta = meta
+
+
+class ProbeNetwork(ABC, nn.Module):
+    """Abstract class that all probe networks should inherit from.
+
+    This is a standard torch.nn.Module but needs to expose a classifier property that returns the final classification
+    module (e.g., the last fully connected layer).
+ """ + + @property + @abstractmethod + def classifier(self): + raise NotImplementedError("Override the classifier property to return the submodules of the network that" + " should be interpreted as the classifier") + + @classifier.setter + @abstractmethod + def classifier(self, val): + raise NotImplementedError("Override the classifier setter to set the submodules of the network that" + " should be interpreted as the classifier") + + +def task2vec(probe_network, dataset: Dataset, skip_layers=0, max_samples=None, classifier_opts=None, + method='montecarlo', method_opts=None, loader_opts=None, bernoulli=False, create_final_embedding: bool = False): + task2vec_embedder = Task2Vec(probe_network, skip_layers=skip_layers, max_samples=max_samples, classifier_opts=classifier_opts, + method=method, method_opts=method_opts, loader_opts=loader_opts, bernoulli=bernoulli) + embed = task2vec_embedder.embed( + dataset, create_final_embedding=create_final_embedding) + return embed + + +class Task2Vec(BaseEmbedder): + + def __init__(self, model: ProbeNetwork, skip_layers=0, max_samples=None, classifier_opts=None, + method='montecarlo', method_opts=None, loader_opts=None, bernoulli=False): + if classifier_opts is None: + classifier_opts = {} + if method_opts is None: + method_opts = {} + if loader_opts is None: + loader_opts = {} + assert method in ('variational', 'montecarlo') + assert skip_layers >= 0 + + self.model = model + # Fix batch norm running statistics (i.e., put batch_norm layers in eval mode) + self.model.train() + self.device = get_device(self.model) + self.skip_layers = skip_layers + self.max_samples = max_samples + self.classifier_opts = classifier_opts + self.method = method + self.method_opts = method_opts + self.loader_opts = loader_opts + self.bernoulli = bernoulli + self.loss_fn = nn.CrossEntropyLoss() if not self.bernoulli else nn.BCEWithLogitsLoss() + self.loss_fn = self.loss_fn.to(self.device) + + def embed(self, dataset: Dataset, create_final_embedding: 
bool = True):
+        # Cache the last layer features (needed to train the classifier) and (if needed) the intermediate layer features
+        # so that we can skip the initial layers when computing the embedding
+        if self.skip_layers > 0:
+            self._cache_features(dataset, indexes=(self.skip_layers, -1), loader_opts=self.loader_opts,
+                                 max_samples=self.max_samples)
+        else:
+            self._cache_features(dataset, max_samples=self.max_samples)
+        # Fits the last layer classifier using cached features
+        self._fit_classifier(**self.classifier_opts)
+
+        if self.skip_layers > 0:
+            dataset = torch.utils.data.TensorDataset(self.model.layers[self.skip_layers].input_features,
+                                                     self.model.layers[-1].targets)
+        self.compute_fisher(dataset)
+        embedding = self.extract_embedding(self.model)
+        if create_final_embedding:
+            return embedding.hessian / embedding.scale
+        return embedding
+
+    def montecarlo_fisher(self, dataset: Dataset, epochs: int = 1):
+        logging.info("Using montecarlo Fisher")
+        if self.skip_layers > 0:
+            dataset = torch.utils.data.TensorDataset(self.model.layers[self.skip_layers].input_features,
+                                                     self.model.layers[-1].targets)
+        data_loader = _get_loader(dataset, **self.loader_opts)
+        device = get_device(self.model)
+        logging.info("Computing Fisher...")
+
+        for p in self.model.parameters():
+            p.grad2_acc = torch.zeros_like(p.data)
+            p.grad_counter = 0
+        for k in range(epochs):
+            logging.info(f"\tepoch {k + 1}/{epochs}")
+            for i, (data, target) in enumerate(tqdm(data_loader, leave=False, desc="Computing Fisher")):
+                data = data.to(device)
+                output = self.model(data, start_from=self.skip_layers)
+                # The gradients used to compute the FIM need to be taken for y sampled from
+                # the model distribution y ~ p_w(y|x), not for y from the dataset
+                if self.bernoulli:
+                    target = torch.bernoulli(torch.sigmoid(output)).detach()
+                else:
+                    target = torch.multinomial(
+                        F.softmax(output, dim=-1), 1).detach().view(-1)
+                loss = self.loss_fn(output, target)
+                self.model.zero_grad()
+                loss.backward()
+                for p in
self.model.parameters(): + if p.grad is not None: + p.grad2_acc += p.grad.data ** 2 + p.grad_counter += 1 + for p in self.model.parameters(): + if p.grad_counter == 0: + del p.grad2_acc + else: + p.grad2_acc /= p.grad_counter + logging.info("done") + + def _run_epoch(self, data_loader: DataLoader, model: ProbeNetwork, loss_fn, + optimizer: Optimizer, epoch: int, train: bool = True, + add_compression_loss: bool = False, skip_layers=0, beta=1.0e-7): + metrics = AverageMeter() + device = get_device(model) + + for i, (input, target) in enumerate(tqdm(data_loader, leave=False, desc="Computing Fisher")): + input = input.to(device) + target = target.to(device) + output = model(input, start_from=skip_layers) + + loss = loss_fn(output, target) + lz = beta * variational.get_compression_loss( + model) if add_compression_loss else torch.zeros_like(loss) + loss += lz + + error = get_error(output, target) + + metrics.update(n=input.size(0), loss=loss.item(), + lz=lz.item(), error=error) + if train: + optimizer.zero_grad() + loss.backward() + optimizer.step() + # logging.info( + print( + "{}: [{epoch}] ".format('Epoch' if train else '', epoch=epoch) + + "Data/Batch: {:.3f}/{:.3f} ".format(metrics.avg["data_time"], metrics.avg["batch_time"]) + + "Loss {:.3f} Lz: {:.3f} ".format(metrics.avg["loss"], metrics.avg["lz"]) + + "Error: {:.2f}".format(metrics.avg["error"]) + ) + return metrics.avg + + def variational_fisher(self, dataset: Dataset, epochs=1, beta=1e-7): + logging.info("Training variational fisher...") + parameters = [] + for layer in self.model.layers[self.skip_layers:-1]: + if isinstance(layer, nn.Module): # Skip lambda functions + variational.make_variational(layer) + parameters += variational.get_variational_vars(layer) + bn_params = [] + # Allows batchnorm parameters to change + for m in self.model.modules(): + if isinstance(m, nn.BatchNorm2d): + bn_params += list(m.parameters()) + # Avoids computing the gradients wrt to the weights to save time and memory + for p in 
self.model.parameters():
+            if p not in set(parameters) and p not in set(self.model.classifier.parameters()):
+                p.old_requires_grad = p.requires_grad
+                p.requires_grad = False
+
+        optimizer = torch.optim.Adam([
+            {'params': parameters},
+            {'params': bn_params, 'lr': 5e-4},
+            {'params': self.model.classifier.parameters(), 'lr': 5e-4}],
+            lr=1e-2, betas=(.9, 0.999))
+        if self.skip_layers > 0:
+            dataset = torch.utils.data.TensorDataset(self.model.layers[self.skip_layers].input_features,
+                                                     self.model.layers[-1].targets)
+        train_loader = _get_loader(dataset, **self.loader_opts)
+
+        for epoch in range(epochs):
+            self._run_epoch(train_loader, self.model, self.loss_fn, optimizer, epoch, beta=beta,
+                            add_compression_loss=True, train=True)
+
+        # Resets original value of requires_grad
+        for p in self.model.parameters():
+            if hasattr(p, 'old_requires_grad'):
+                p.requires_grad = p.old_requires_grad
+                del p.old_requires_grad
+
+    def compute_fisher(self, dataset: Dataset):
+        """
+        Computes the Fisher Information of the weights of the model wrt the model output on the dataset and stores it.
+
+        The Fisher Information Matrix is defined as:
+            F = E_{x ~ dataset} E_{y ~ p_w(y|x)} [\nabla_w log p_w(y|x) \nabla_w log p_w(y|x)^t]
+        where p_w(y|x) is the output probability vector of the network and w are the weights of the network.
+        Notice that the label y is sampled from the model output distribution and not from the dataset.
+
+        This code only approximates the diagonal of F. The result is stored in the model layers and can be extracted
+        using the `extract_embedding` method. Different approximation methods of the Fisher information matrix are available,
+        and can be selected in the __init__.
+ + :param dataset: dataset with the task to compute the Fisher on + """ + if self.method == 'variational': + fisher_fn = self.variational_fisher + elif self.method == 'montecarlo': + fisher_fn = self.montecarlo_fisher + else: + raise ValueError(f"Invalid Fisher method {self.method}") + fisher_fn(dataset, **self.method_opts) + + def _cache_features(self, dataset: Dataset, indexes=(-1,), max_samples=None, loader_opts: dict = None): + logging.info("Caching features...") + if loader_opts is None: + loader_opts = {} + data_loader = DataLoader(dataset, shuffle=False, batch_size=loader_opts.get('batch_size', 64), + num_workers=loader_opts.get('num_workers', 6), drop_last=False) + + device = next(self.model.parameters()).device + + def _hook(layer, inputs): + if not hasattr(layer, 'input_features'): + layer.input_features = [] + layer.input_features.append(inputs[0].data.cpu().clone()) + + hooks = [self.model.layers[index].register_forward_pre_hook(_hook) + for index in indexes] + if max_samples is not None: + n_batches = min( + math.floor(max_samples / data_loader.batch_size) - 1, len(data_loader)) + else: + n_batches = len(data_loader) + targets = [] + + for i, (input, target) in tqdm(enumerate(itertools.islice(data_loader, 0, n_batches)), total=n_batches, + leave=False, + desc="Caching features"): + targets.append(target.clone()) + self.model(input.to(device)) + for hook in hooks: + hook.remove() + for index in indexes: + self.model.layers[index].input_features = torch.cat( + self.model.layers[index].input_features) + self.model.layers[-1].targets = torch.cat(targets) + + def _fit_classifier(self, optimizer='adam', learning_rate=0.0004, weight_decay=0.0001, + epochs=10): + """Fits the last layer of the network using the cached features.""" + logging.info("Fitting final classifier...") + if not hasattr(self.model.classifier, 'input_features'): + raise ValueError( + "You need to run `cache_features` on model before running `fit_classifier`") + targets = 
self.model.classifier.targets.to(self.device)
+        features = self.model.classifier.input_features.to(self.device)
+
+        dataset = torch.utils.data.TensorDataset(features, targets)
+        data_loader = _get_loader(dataset, **self.loader_opts)
+
+        if optimizer == 'adam':
+            optimizer = torch.optim.Adam(
+                self.model.classifier.parameters(), lr=learning_rate, weight_decay=weight_decay)
+        elif optimizer == 'sgd':
+            optimizer = torch.optim.SGD(
+                self.model.classifier.parameters(), lr=learning_rate, weight_decay=weight_decay)
+        else:
+            raise ValueError(f'Unsupported optimizer {optimizer}')
+
+        loss_fn = nn.CrossEntropyLoss()
+        for epoch in tqdm(range(epochs), desc="Fitting classifier", leave=False):
+            metrics = AverageMeter()
+            for data, target in data_loader:
+                optimizer.zero_grad()
+                output = self.model.classifier(data)
+                loss = loss_fn(output, target)
+                error = get_error(output, target)
+                loss.backward()
+                optimizer.step()
+                metrics.update(n=data.size(0), loss=loss.item(), error=error)
+            logging.info(
+                f"[epoch {epoch}]: " + "\t".join(f"{k}: {v}" for k, v in metrics.avg.items()))
+
+    def extract_embedding(self, model: ProbeNetwork):
+        """
+        Reads the values stored by `compute_fisher` and returns them in a common format that describes the diagonal of the
+        Fisher Information Matrix for each layer.
+
+        :param model:
+        :return:
+        """
+        hess, scale = [], []
+        for name, module in model.named_modules():
+            if module is model.classifier:
+                continue
+            # The variational Fisher approximation estimates the variance of noise that can be added to the weights
+            # without increasing the error more than a threshold. The inverse of this is proportional to an
+            # approximation of the hessian in the local minimum.
+ if hasattr(module, 'logvar0') and hasattr(module, 'loglambda2'): + logvar = module.logvar0.view(-1).detach().cpu().numpy() + hess.append(np.exp(-logvar)) + loglambda2 = module.loglambda2.detach().cpu().numpy() + scale.append(np.exp(-loglambda2).repeat(logvar.size)) + # The other Fisher approximation methods directly approximate the hessian at the minimum + elif hasattr(module, 'weight') and hasattr(module.weight, 'grad2_acc'): + grad2 = module.weight.grad2_acc.cpu().detach().numpy() + filterwise_hess = grad2.reshape( + grad2.shape[0], -1).mean(axis=1) + hess.append(filterwise_hess) + scale.append(np.ones_like(filterwise_hess)) + return Embedding(hessian=np.concatenate(hess), scale=np.concatenate(scale), meta=None) + + +def _get_loader(trainset, testset=None, batch_size=64, num_workers=6, num_samples=10000, drop_last=True): + if getattr(trainset, 'is_multi_label', False): + raise ValueError("Multi-label datasets not supported") + # TODO: Find a way to standardize this + if hasattr(trainset, 'labels'): + labels = trainset.labels + elif hasattr(trainset, 'targets'): + labels = trainset.targets + else: + labels = list(trainset.tensors[1].cpu().numpy()) + num_classes = int(getattr(trainset, 'num_classes', max(labels) + 1)) + class_count = np.eye(num_classes)[labels].sum(axis=0) + weights = 1. 
/ class_count[labels] / num_classes
+    weights /= weights.sum()
+
+    sampler = torch.utils.data.sampler.WeightedRandomSampler(
+        weights, num_samples=num_samples)
+    # No need for multi-threaded loading if everything is already in memory,
+    # and it would raise an error if the TensorDataset is on CUDA
+    num_workers = num_workers if not isinstance(
+        trainset, torch.utils.data.TensorDataset) else 0
+    trainloader = torch.utils.data.DataLoader(trainset, sampler=sampler, batch_size=batch_size,
+                                              num_workers=num_workers, drop_last=drop_last)
+
+    if testset is None:
+        return trainloader
+    else:
+        testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, pin_memory=True, shuffle=False,
+                                                 num_workers=num_workers)
+        return trainloader, testloader
diff --git a/src/data_meta_map/task2vec/task_similarity.py b/src/data_meta_map/task2vec/task_similarity.py
new file mode 100644
index 0000000..5aa302f
--- /dev/null
+++ b/src/data_meta_map/task2vec/task_similarity.py
@@ -0,0 +1,219 @@
+#!/usr/bin/env python3
+
+# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+#     http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+ +import itertools +import scipy.spatial.distance as distance +import numpy as np +import copy +import pickle + +_DISTANCES = {} + + +# TODO: Remove methods that do not perform well + +def _register_distance(distance_fn): + _DISTANCES[distance_fn.__name__] = distance_fn + return distance_fn + + +def is_excluded(k): + exclude = ['fc', 'linear'] + return any([e in k for e in exclude]) + + +def load_embedding(filename): + with open(filename, 'rb') as f: + e = pickle.load(f) + return e + + +def get_trivial_embedding_from(e): + trivial_embedding = copy.deepcopy(e) + for l in trivial_embedding['layers']: + a = np.array(l['filter_logvar']) + a[:] = l['filter_lambda2'] + l['filter_logvar'] = list(a) + return trivial_embedding + + +def binary_entropy(p): + from scipy.special import xlogy + return - (xlogy(p, p) + xlogy(1. - p, 1. - p)) + + +def get_layerwise_variance(e, normalized=False): + var = [np.exp(l['filter_logvar']) for l in e['layers']] + if normalized: + var = [v / np.linalg.norm(v) for v in var] + return var + + +def get_variance(e, normalized=False): + var = 1. / np.array(e.hessian) + if normalized: + lambda2 = 1. 
/ np.array(e.scale) + var = var / lambda2 + return var + + +def get_variances(*embeddings, normalized=False): + return [get_variance(e, normalized=normalized) for e in embeddings] + + +def get_hessian(e, normalized=False): + hess = np.array(e.hessian) + if normalized: + scale = np.array(e.scale) + hess = hess / scale + return hess + + +def get_hessians(*embeddings, normalized=False): + return [get_hessian(e, normalized=normalized) for e in embeddings] + + +def get_scaled_hessian(e0, e1): + h0, h1 = get_hessians(e0, e1, normalized=False) + return h0 / (h0 + h1 + 1e-8), h1 / (h0 + h1 + 1e-8) + + +def get_full_kl(e0, e1): + var0, var1 = get_variance(e0), get_variance(e1) + kl0 = .5 * (var0 / var1 - 1 + np.log(var1) - np.log(var0)) + kl1 = .5 * (var1 / var0 - 1 + np.log(var0) - np.log(var1)) + return kl0, kl1 + + +def layerwise_kl(e0, e1): + layers0, layers1 = get_layerwise_variance(e0), get_layerwise_variance(e1) + kl0 = [] + for var0, var1 in zip(layers0, layers1): + kl0.append(np.sum(.5 * (var0 / var1 - 1 + np.log(var1) - np.log(var0)))) + return kl0 + + +def layerwise_cosine(e0, e1): + layers0, layers1 = get_layerwise_variance( + e0, normalized=True), get_layerwise_variance(e1, normalized=True) + res = [] + for var0, var1 in zip(layers0, layers1): + res.append(distance.cosine(var0, var1)) + return res + + +@_register_distance +def kl(e0, e1): + var0, var1 = get_variance(e0), get_variance(e1) + kl0 = .5 * (var0 / var1 - 1 + np.log(var1) - np.log(var0)) + kl1 = .5 * (var1 / var0 - 1 + np.log(var0) - np.log(var1)) + return np.maximum(kl0, kl1).sum() + + +@_register_distance +def asymmetric_kl(e0, e1): + var0, var1 = get_variance(e0), get_variance(e1) + kl0 = .5 * (var0 / var1 - 1 + np.log(var1) - np.log(var0)) + kl1 = .5 * (var1 / var0 - 1 + np.log(var0) - np.log(var1)) + return kl0.sum() + + +@_register_distance +def jsd(e0, e1): + var0, var1 = get_variance(e0), get_variance(e1) + var = .5 * (var0 + var1) + kl0 = .5 * (var0 / var - 1 + np.log(var) - np.log(var0)) + 
kl1 = .5 * (var1 / var - 1 + np.log(var) - np.log(var1)) + return (.5 * (kl0 + kl1)).mean() + + +@_register_distance +def cosine(e0, e1): + h1, h2 = get_scaled_hessian(e0, e1) + return distance.cosine(h1, h2) + + +@_register_distance +def normalized_cosine(e0, e1): + h1, h2 = get_variances(e0, e1, normalized=True) + return distance.cosine(h1, h2) + + +@_register_distance +def correlation(e0, e1): + v1, v2 = get_variances(e0, e1, normalized=False) + return distance.correlation(v1, v2) + + +@_register_distance +def entropy(e0, e1): + h1, h2 = get_scaled_hessian(e0, e1) + return np.log(2) - binary_entropy(h1).mean() + + +def get_normalized_embeddings(embeddings, normalization=None): + F = [1. / get_variance(e, normalized=False) + if e is not None else None for e in embeddings] + zero_embedding = np.zeros_like([x for x in F if x is not None][0]) + F = np.array([x if x is not None else zero_embedding for x in F]) + # FIXME: compute variance using only valid embeddings + if normalization is None: + normalization = np.sqrt((F ** 2).mean(axis=0, keepdims=True)) + F /= normalization + return F, normalization + + +def pdist(embeddings, distance='cosine'): + distance_fn = _DISTANCES[distance] + n = len(embeddings) + distance_matrix = np.zeros([n, n]) + if distance != 'asymmetric_kl': + for (i, e1), (j, e2) in itertools.combinations(enumerate(embeddings), 2): + distance_matrix[i, j] = distance_fn(e1, e2) + distance_matrix[j, i] = distance_matrix[i, j] + else: + for (i, e1) in enumerate(embeddings): + for (j, e2) in enumerate(embeddings): + distance_matrix[i, j] = distance_fn(e1, e2) + return distance_matrix + + +def cdist(from_embeddings, to_embeddings, distance='cosine'): + distance_fn = _DISTANCES[distance] + distance_matrix = np.zeros([len(from_embeddings), len(to_embeddings)]) + for (i, e1) in enumerate(from_embeddings): + for (j, e2) in enumerate(to_embeddings): + if e1 is None or e2 is None: + continue + distance_matrix[i, j] = distance_fn(e1, e2) + return 
distance_matrix + + +def plot_distance_matrix(embeddings, labels=None, distance='cosine'): + import seaborn as sns + from scipy.cluster.hierarchy import linkage + from scipy.spatial.distance import squareform + import pandas as pd + import matplotlib.pyplot as plt + distance_matrix = pdist(embeddings, distance=distance) + cond_distance_matrix = squareform(distance_matrix, checks=False) + linkage_matrix = linkage(cond_distance_matrix, + method='complete', optimal_ordering=True) + if labels is not None: + distance_matrix = pd.DataFrame( + distance_matrix, index=labels, columns=labels) + sns.clustermap(distance_matrix, row_linkage=linkage_matrix, + col_linkage=linkage_matrix, cmap='viridis_r') + plt.show() diff --git a/src/data_meta_map/task2vec/utils.py b/src/data_meta_map/task2vec/utils.py new file mode 100644 index 0000000..49b6754 --- /dev/null +++ b/src/data_meta_map/task2vec/utils.py @@ -0,0 +1,65 @@ +# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"). You +# may not use this file except in compliance with the License. A copy of +# the License is located at +# +# http://aws.amazon.com/apache2.0/ +# +# or in the "license" file accompanying this file. This file is +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF +# ANY KIND, either express or implied. See the License for the specific +# language governing permissions and limitations under the License. 
+ +from collections import defaultdict +import torch +import numpy as np + + +class AverageMeter(object): + """Computes and stores the average and current value""" + + def __init__(self): + self.reset() + + def reset(self): + self.val = defaultdict(int) + self.avg = defaultdict(float) + self.sum = defaultdict(int) + self.count = defaultdict(int) + + def update(self, n=1, **val): + for k in val: + self.val[k] = val[k] + self.sum[k] += val[k] * n + self.count[k] += n + self.avg[k] = self.sum[k] / self.count[k] + + +def set_batchnorm_mode(model, train=True): + """Allows to set batch_norm layer mode to train or eval, independendtly on the mode of the model.""" + def _set_batchnorm_mode(module): + if isinstance(module, torch.nn.BatchNorm1d) or isinstance(module, torch.nn.BatchNorm2d): + if train: + module.train() + else: + module.eval() + + model.apply(_set_batchnorm_mode) + + +def get_error(output, target): + pred = output.argmax(dim=1) + correct = pred.eq(target).float().sum() + return float((1. - correct / output.size(0)) * 100.) + + +def adjust_learning_rate(optimizer, epoch, optimizer_cfg): + lr = optimizer_cfg.lr * \ + (0.1 ** np.less(optimizer_cfg.schedule, epoch).sum()) + for param_group in optimizer.param_groups: + param_group['lr'] = lr + + +def get_device(model: torch.nn.Module): + return next(model.parameters()).device diff --git a/src/data_meta_map/task2vec/variational.py b/src/data_meta_map/task2vec/variational.py new file mode 100644 index 0000000..3f1be3e --- /dev/null +++ b/src/data_meta_map/task2vec/variational.py @@ -0,0 +1,131 @@ +# Copyright 2017-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"). You +# may not use this file except in compliance with the License. A copy of +# the License is located at +# +# http://aws.amazon.com/apache2.0/ +# +# or in the "license" file accompanying this file. 
This file is +# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF +# ANY KIND, either express or implied. See the License for the specific +# language governing permissions and limitations under the License. + +import torch +import torch.nn.functional as F + +from torch.nn.parameter import Parameter + +import types + + +def get_variational_vars(model): + """Returns all variables involved in optimizing the hessian estimation.""" + result = [] + if hasattr(model, 'logvar0'): + result.append(model.logvar0) + result.append(model.loglambda2) + for l in model.children(): + result += get_variational_vars(l) + return result + + +def get_compression_loss(model): + """Get the model loss function for hessian estimation. + + Compute KL divergence assuming a normal posterior and a diagonal normal prior p(w) ~ N(0, lambda**2 * I) + (where lambda is selected independently for each layer and shared by all filters in the same layer). + Recall from the paper that the optimal posterior q(w|D) that minimizes the training loss plus the compression loss + is approximately given by q(w|D) ~ N(w, F**-1), where F is the Fisher information matrix.
+ """ + modules = [x for x in model.modules() if hasattr(x, 'logvar0')] + k = sum([x.weight.numel() for x in modules]) + + w_norm2 = sum([x.weight.pow(2).sum() / x.loglambda2.exp() + for x in modules]) + logvar = sum([x.logvar.sum() for x in modules]) + trace = sum([x.logvar.exp().sum() / x.loglambda2.exp() for x in modules]) + lambda2_cost = sum([x.loglambda2 * x.weight.numel() for x in modules]) + + # Standard formula for KL divergence of two normal distributions + # https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Kullback%E2%80%93Leibler_divergence + Lz = kl_divergence = w_norm2 + trace + lambda2_cost - logvar - k + return Lz + + +def variational_forward(module, input): + """Modified forward pass that adds noise to the output.""" + + # Recall that module.logvar0 is created by make_variational() + # (specifically, by add_logvar()) + module.logvar = module.logvar0.expand_as(module.weight) + + var = module.logvar.exp() + + if isinstance(module, torch.nn.modules.conv.Conv2d): + output = F.conv2d(input, module.weight, module.bias, module.stride, + module.padding, module.dilation, module.groups) + # From Variational Dropout and the Local reparametrization trick + # (Kingma et al., 2015) + output_var = F.conv2d(input ** 2 + 1e-2, var, None, module.stride, + module.padding, module.dilation, module.groups) + elif isinstance(module, torch.nn.modules.linear.Linear): + output = F.linear(input, module.weight, module.bias) + output_var = F.linear(input ** 2 + 1e-2, var, None) + else: + raise NotImplementedError( + "Module {} not implemented.".format(type(module))) + + eps = torch.empty_like(output).normal_() + # Local reparemetrization trick + return output + torch.sqrt(output_var) * eps + + +def _reset_logvar(module, variance_scaling=0.05): + if hasattr(module, 'logvar0'): + w = module.weight.data + # Initial ballpark estimate for optimal variance is the variance + # of the weights in the kernel + var = w.view(w.size(0), -1).var(dim=1).view(-1, * + ([1] * 
(w.ndimension() - 1))) # .expand_as(w) + # Further scale down the variance by some factor + module.logvar0.data[:] = (var * variance_scaling + 1e-8).log() + # Initial guess for lambda is the l2 norm of the weights + module.loglambda2.data = (w.pow(2).mean() + 1e-8).log() + + +def _add_logvar(module): + """Adds a parameter (logvar0) to store the noise variance for the weights. + + Also adds a scalar parameter loglambda2 to store the scaling coefficient + for the layer. + + The variance is assumed to be the same for all weights in the same filter. + The common value is stored in logvar0, which is expanded to the same + dimension as the weight matrix in logvar. + """ + if not hasattr(module, 'weight'): + return + if module.weight.data.ndimension() < 2: + return + if not hasattr(module, 'logvar0'): + w = module.weight.data + # w is of shape NUM_OUT x NUM_IN x K_h X K_w + var = w.view(w.size(0), -1).var(dim=1).view(-1, + *([1] * (w.ndimension() - 1))) + # var is of shape NUM_OUT x 1 x 1 x 1 + # (so that it can be expanded to the same size as w by torch.expand_as()) + # The content does not matter since we will reset it later anyway + module.logvar0 = Parameter(var.log()) + # log(lambda**2) is a scalar shared by all weights in the layer + module.loglambda2 = Parameter(w.pow(2).mean().log()) + module.logvar = module.logvar0.expand_as(module.weight) + _reset_logvar(module) + + +def make_variational(model): + """Replaces the forward pass of the model layers to add noise.""" + model.apply(_add_logvar) + for m in model.modules(): + if hasattr(m, 'logvar0'): + m.forward = types.MethodType(variational_forward, m) diff --git a/src/data_meta_map/wasserstein_embedder.py b/src/data_meta_map/wasserstein_embedder.py new file mode 100644 index 0000000..38007d0 --- /dev/null +++ b/src/data_meta_map/wasserstein_embedder.py @@ -0,0 +1,553 @@ +from abc import ABC +from typing import Any, Dict, List, Optional, Tuple, Union +import torch +import numpy as np +from torch.utils.data import 
DataLoader, Dataset, SubsetRandomSampler +from sklearn.manifold import MDS +import ot # POT: Python Optimal Transport +from tqdm.autonotebook import tqdm + +from data_meta_map.base_embedder import BaseEmbedder + + +def sqrtm_newton_schulz(A: torch.Tensor, num_iters: int = 20) -> torch.Tensor: + """ + Matrix square root via Newton-Schulz iteration. + Adapted from OTDD (https://github.com/microsoft/otdd) + + Args: + A: Square positive semi-definite matrix [d, d] + num_iters: Number of Newton-Schulz iterations + + Returns: + sqrtA: Matrix square root [d, d] + """ + dim = A.shape[0] + + # Frobenius norm for stable normalization + normA = torch.norm(A, p='fro') + + # Normalize to ensure convergence + Y = A / normA + I = torch.eye(dim, device=A.device, dtype=A.dtype) + Z = torch.eye(dim, device=A.device, dtype=A.dtype) + + # Newton-Schulz iterations + for _ in range(num_iters): + T = 0.5 * (3.0 * I - Z @ Y) + Y = Y @ T + Z = T @ Z + + # Rescale back + sqrtA = Y * torch.sqrt(normA) + return sqrtA + + +def compute_bures_term( + cov1: torch.Tensor, + cov2: torch.Tensor, + sqrt_cov1: Optional[torch.Tensor] = None, + diagonal_cov: bool = False, + num_iters: int = 20 +) -> torch.Tensor: + """ + Compute the covariance term of Bures-Wasserstein distance: + Tr(Σ₁ + Σ₂ - 2(Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2}) + + Args: + cov1, cov2: Covariance matrices [d, d] or diagonals [d] if diagonal_cov=True + sqrt_cov1: Precomputed sqrt(cov1) for efficiency + diagonal_cov: If True, treat covariances as diagonal + num_iters: Newton-Schulz iterations for matrix sqrt + + Returns: + bures_term: Scalar tensor + """ + if diagonal_cov: + # Diagonal case: Tr(Σ₁ + Σ₂ - 2√(Σ₁Σ₂)) + return torch.sum(cov1 + cov2 - 2 * torch.sqrt(cov1 * cov2 + 1e-12)) + else: + # Full matrix case + if sqrt_cov1 is None: + sqrt_cov1 = sqrtm_newton_schulz(cov1, num_iters=num_iters) + + # Compute (Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2} + middle = sqrt_cov1 @ cov2 @ sqrt_cov1 + sqrt_middle =
sqrtm_newton_schulz(middle, num_iters=num_iters) + + # Trace term + return torch.trace(cov1 + cov2 - 2 * sqrt_middle) + + +class WassersteinEmbedder(BaseEmbedder): + """ + Dataset embedder based on Wasserstein distance (Optimal Transport). + + Supports two distance computation modes: + 1. Gaussian approximation → Bures-Wasserstein distance (fast, O(d³)) + 2. Exact OT via EMD (slow, O(n³ log n), but distribution-agnostic) + + Key Features: + - Support for varying number of classes across datasets + - Automatic caching of class statistics + - Integration with POT library (Python Optimal Transport) + - Optional diagonal covariance for speedup on high-dimensional data + - Industrial-grade matrix sqrt implementation (OTDD/Microsoft) + """ + + def __init__( + self, + emb_dim: int = 2, + device: Union[str, torch.device] = "cpu", + max_samples: Optional[int] = None, + batch_size: int = 64, + gaussian_assumption: bool = True, + diagonal_cov: bool = False, + commute: bool = False, + # 'ns' = Newton-Schulz (default), 'eig' = eigenvalue decomposition + sqrt_method: str = "ns", + sqrt_niters: int = 20, + **kwargs + ): + """ + Initialize Wasserstein-based embedder. + + Args: + emb_dim: Target dimensionality of label embeddings. + device: Computation device ('cpu', 'cuda', or torch.device). + max_samples: Maximum number of samples to process from dataset. + If None, all samples are used. + batch_size: Batch size for DataLoader during data loading. + gaussian_assumption: If True, use Gaussian approximation and Bures distance. + If False, compute exact distance via EMD. + diagonal_cov: Use only diagonal of covariance matrix (speedup). + commute: Flag for commuting approximation of Bures distance (experimental). + sqrt_method: Method for matrix square root computation ('ns', 'eig'). + sqrt_niters: Number of iterations for Newton-Schulz method.
+ """ + super().__init__() + self.emb_dim = emb_dim + self.device = torch.device(device) if isinstance(device, str) else device + self.max_samples = max_samples + self.batch_size = batch_size + self.gaussian_assumption = gaussian_assumption + self.diagonal_cov = diagonal_cov + self.commute = commute + self.sqrt_method = sqrt_method + self.sqrt_niters = sqrt_niters + + # Cache for class statistics: {dataset_id: (means, covs, class_offsets)} + self._stats_cache: Dict[int, + Tuple[torch.Tensor, torch.Tensor, List[int]]] = {} + # Cache for preprocessed data: {dataset_id: (X, Y)} + self._data_cache: Dict[int, Tuple[torch.Tensor, torch.Tensor]] = {} + + def preprocess_dataset( + self, + data: Union[Dataset, DataLoader], + dataset_id: Optional[int] = None + ) -> Tuple[torch.Tensor, torch.Tensor]: + """ + Transform dataset/dataloader into feature-label tensor pair. + + Args: + Object supporting either: + - Dataset interface: must return (features, label) in __getitem__ + - DataLoader interface: must yield batches (X_batch, y_batch) + dataset_id: Optional ID for caching results. 
+ + Returns: + X: Feature tensor of shape [num_samples, feature_dim], dtype=torch.float32 + Y: Label tensor of shape [num_samples], dtype=torch.long + + Notes: + - Automatically creates DataLoader when Dataset is provided + - Applies subsampling if self.max_samples is set + - Flattens features to [N, D] format via .view(..., -1) + - Results are cached when dataset_id is provided + """ + # Check cache + if dataset_id is not None and dataset_id in self._data_cache: + return self._data_cache[dataset_id] + + # Create loader if necessary + if isinstance(data, Dataset): + if self.max_samples and len(data) > self.max_samples: + idxs = np.sort(np.random.choice( + len(data), self.max_samples, replace=False)) + sampler = SubsetRandomSampler(idxs) + loader = DataLoader(data, sampler=sampler, + batch_size=self.batch_size) + else: + loader = DataLoader( + data, batch_size=self.batch_size, shuffle=False) + elif isinstance(data, DataLoader): + loader = data + else: + raise TypeError( + f"Expected Dataset or DataLoader, got {type(data).__name__}" + ) + + # Aggregate data + X_list: List[torch.Tensor] = [] + Y_list: List[torch.Tensor] = [] + + for batch in tqdm(loader, desc="Preprocessing dataset", leave=False): + x_batch = batch[0] # [B, ...] + y_batch = batch[1] # [B] + + # Flatten to [B, D] + x_flat = x_batch.view(x_batch.size(0), -1).float() + y_flat = y_batch.long().view(-1) + + X_list.append(x_flat.to(self.device)) + Y_list.append(y_flat.to(self.device)) + + X = torch.cat(X_list, dim=0) + Y = torch.cat(Y_list, dim=0) + + # Cache results + if dataset_id is not None: + self._data_cache[dataset_id] = (X, Y) + + return X, Y + + def _compute_gaussian_stats( + self, + X: torch.Tensor, + Y: torch.Tensor + ) -> Tuple[torch.Tensor, torch.Tensor, List[int]]: + """ + Compute Gaussian statistics (mean, covariance) for each class. 
+ + Args: + X: Feature tensor [num_samples, feature_dim] + Y: Label tensor [num_samples] + + Returns: + means: Mean tensor [num_classes, feature_dim] + covs: Covariance tensor [num_classes, feature_dim, feature_dim] + (or [num_classes, feature_dim] if diagonal_cov=True) + class_offsets: List of global class indices (for multi-task scenarios) + """ + unique_labels = torch.unique(Y).sort().values + num_classes = len(unique_labels) + feature_dim = X.shape[1] + + means = torch.zeros((num_classes, feature_dim), device=self.device) + if self.diagonal_cov: + covs = torch.zeros((num_classes, feature_dim), device=self.device) + else: + covs = torch.zeros( + (num_classes, feature_dim, feature_dim), device=self.device) + + for idx, label in enumerate(unique_labels): + mask = (Y == label) + class_samples = X[mask].float() + + # Mean + means[idx] = class_samples.mean(dim=0) + + # Covariance + if class_samples.shape[0] > 1: + if self.diagonal_cov: + covs[idx] = class_samples.var(dim=0, unbiased=True) + else: + covs[idx] = torch.cov(class_samples.T) + else: + # For a single sample, use zero covariance + if self.diagonal_cov: + covs[idx] = torch.zeros(feature_dim, device=self.device) + else: + covs[idx] = torch.zeros( + (feature_dim, feature_dim), device=self.device) + + # Global class indices + class_offsets = list(range(num_classes)) + + return means, covs, class_offsets + + def _bures_wasserstein_distance( + self, + mean1: torch.Tensor, + cov1: torch.Tensor, + mean2: torch.Tensor, + cov2: torch.Tensor + ) -> torch.Tensor: + """ + Compute Bures-Wasserstein distance between two Gaussian distributions.
+ + Formula: + W₂²(𝒩₁, 𝒩₂) = ‖μ₁ - μ₂‖² + Tr(Σ₁ + Σ₂ - 2(Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2}) + + Implementation based on OTDD (Microsoft Research): + https://github.com/microsoft/otdd + + Args: + mean1, mean2: Mean vectors [feature_dim] + cov1, cov2: Covariance matrices [feature_dim, feature_dim] + or diagonals [feature_dim] if diagonal_cov=True + + Returns: + d: Scalar Wasserstein distance (not squared) + """ + # Squared distance between means + d_mean = torch.sum((mean1 - mean2) ** 2) + + # Covariance term (Bures distance) + bures_term = compute_bures_term( + cov1, cov2, + diagonal_cov=self.diagonal_cov, + num_iters=self.sqrt_niters + ) + + # Final distance (not squared) + w2_squared = d_mean + bures_term + return torch.sqrt(torch.clamp(w2_squared, min=0.0)) + + def _exact_wasserstein_distance( + self, + X1: torch.Tensor, + X2: torch.Tensor + ) -> float: + """ + Compute exact Wasserstein-2 distance via EMD (Earth Mover's Distance). + + Args: + X1: Tensor of points from first distribution [n_samples1, feature_dim] + X2: Tensor of points from second distribution [n_samples2, feature_dim] + + Returns: + d: Scalar Wasserstein-2 distance + """ + # W2 uses the squared-Euclidean ground cost, so that emd2 returns W2**2 + C = ot.dist(X1.cpu().numpy(), X2.cpu().numpy(), metric='sqeuclidean') + a = ot.unif(X1.shape[0]) + b = ot.unif(X2.shape[0]) + w2_squared = ot.emd2(a, b, C, numItermax=1_000_000) + return np.sqrt(w2_squared) + + def compute_pairwise_distances( + self, + datasets: List[Union[Dataset, DataLoader]], + symmetric: bool = True + ) -> torch.Tensor: + """ + Compute pairwise distance matrix between all classes across all datasets. + + Args: + datasets: List of datasets/dataloaders. Each must contain + integer labels in range [0, num_classes-1]. + symmetric: Flag for symmetric distances. If True, distance between + classes within the same dataset is computed once. + + Returns: + D: Distance tensor of shape [total_classes, total_classes], where + total_classes = sum(num_classes_per_dataset).
+ D[i, j] represents distance between class i and class j in global numbering. + """ + # Step 1: Collect statistics for each dataset + dataset_stats: List[Tuple[torch.Tensor, torch.Tensor, List[int]]] = [] + class_offsets: List[int] = [0] + + for idx, dataset in enumerate(datasets): + X, Y = self.preprocess_dataset(dataset, dataset_id=idx) + + if idx in self._stats_cache: + means, covs, local_offsets = self._stats_cache[idx] + else: + means, covs, local_offsets = self._compute_gaussian_stats(X, Y) + self._stats_cache[idx] = (means, covs, local_offsets) + + dataset_stats.append((means, covs, local_offsets)) + class_offsets.append(class_offsets[-1] + len(local_offsets)) + + total_classes = class_offsets[-1] + + # Step 2: Compute distances + if self.gaussian_assumption and self.diagonal_cov: + # Vectorized path for the diagonal Gaussian case, O(N²·d) via BLAS, + # avoids Python loops over class pairs. + all_means = torch.cat([s[0] for s in dataset_stats], dim=0) # [N, d] + all_vars = torch.cat([s[1] for s in dataset_stats], dim=0) # [N, d] + + # Mean term: ||mu_i - mu_j||² + mean_sq = torch.cdist(all_means, all_means, p=2) ** 2 # [N, N] + + # Bures term: Σ(var_i + var_j - 2√(var_i·var_j)) + var_sums = all_vars.sum(dim=1) # [N] + sqrt_vars = torch.sqrt(all_vars + 1e-12) # [N, d] + cross = sqrt_vars @ sqrt_vars.T # [N, N] + bures_mat = var_sums.unsqueeze(1) + var_sums.unsqueeze(0) - 2 * cross # [N, N] + + D = torch.sqrt(torch.clamp(mean_sq + bures_mat, min=0.0)) + if symmetric: + D = (D + D.T) / 2 + else: + D = torch.zeros((total_classes, total_classes), device=self.device) + for i in range(len(datasets)): + means_i, covs_i, offsets_i = dataset_stats[i] + start_i = class_offsets[i] + + for j in range(i if symmetric else 0, len(datasets)): + means_j, covs_j, offsets_j = dataset_stats[j] + start_j = class_offsets[j] + + for idx_i, local_i in enumerate(offsets_i): + global_i = start_i + idx_i + for idx_j, local_j in enumerate(offsets_j): + global_j = start_j + idx_j
+ + if self.gaussian_assumption: + d = self._bures_wasserstein_distance( + means_i[idx_i], covs_i[idx_i], + means_j[idx_j], covs_j[idx_j] + ) + else: + # preprocess_dataset consults the cache itself; dict.get + # would evaluate its default eagerly anyway + X_i, Y_i = self.preprocess_dataset( + datasets[i], dataset_id=i) + X_j, Y_j = self.preprocess_dataset( + datasets[j], dataset_id=j) + samples_i = X_i[Y_i == local_i] + samples_j = X_j[Y_j == local_j] + d = torch.tensor( + self._exact_wasserstein_distance(samples_i, samples_j), + device=self.device) + + D[global_i, global_j] = d + if symmetric and i != j: + D[global_j, global_i] = d + + return D + + def embed_distance_matrix( + self, + distance_matrix: torch.Tensor, + emb_dim: Optional[int] = None + ) -> torch.Tensor: + """ + Transform distance matrix into embeddings via Multidimensional Scaling (MDS). + + Args: + distance_matrix: Distance tensor of shape [N, N], N = total_classes. + Must be symmetric with zero diagonal. + emb_dim: Embedding dimensionality. If None, uses self.emb_dim. + + Returns: + embeddings: Embedding tensor of shape [N, emb_dim] + """ + target_dim = emb_dim if emb_dim is not None else self.emb_dim + D_np = distance_matrix.cpu().numpy() + + np.fill_diagonal(D_np, 0.0) + D_np = (D_np + D_np.T) / 2 + + mds = MDS( + n_components=target_dim, + dissimilarity="precomputed", + n_init=10, + max_iter=10000, + random_state=42 + ) + embeddings_np = mds.fit_transform(D_np) + + return torch.from_numpy(embeddings_np).to(self.device).float() + + def augment_features( + self, + data: Union[Dataset, DataLoader], + label_embeddings: torch.Tensor, + dataset_idx: int, + class_offsets: List[int] + ) -> torch.Tensor: + """ + Augment original features with label embeddings for each sample. + + Args: + data: Dataset or DataLoader to process. + label_embeddings: Embeddings of all classes [total_classes, emb_dim]. + dataset_idx: Index of current dataset in the original datasets list. + class_offsets: List of class offsets for each dataset.
+ + Returns: + Z: Augmented feature tensor [num_samples, feature_dim + emb_dim] + """ + X, Y = self.preprocess_dataset(data, dataset_id=dataset_idx) + + start_offset = class_offsets[dataset_idx] + end_offset = class_offsets[dataset_idx + 1] if dataset_idx + \ + 1 < len(class_offsets) else label_embeddings.shape[0] + label_emb_for_dataset = label_embeddings[start_offset:end_offset] + + label_indices = Y.long() + label_embs = label_emb_for_dataset[label_indices] + + Z = torch.cat([X, label_embs], dim=1) + return Z + + def compute_wte( + self, + datasets: List[Union[Dataset, DataLoader]], + reference: Optional[torch.Tensor] = None, + create_reference: bool = True + ) -> Tuple[torch.Tensor, torch.Tensor, List[torch.Tensor]]: + """ + Main method: compute Wasserstein Transport Embeddings for dataset collection. + + Args: + datasets: List of datasets/dataloaders. + reference: Optional reference distribution [ref_size, feature_dim + emb_dim]. + create_reference: If True and reference=None, creates reference from merged data. 
+ + Returns: + task_embeddings: [num_datasets, ref_size, feature_dim + emb_dim] + label_embeddings: [total_classes, emb_dim] + augmented_datasets: List of [num_samples, feature_dim + emb_dim] + """ + D = self.compute_pairwise_distances(datasets, symmetric=True) + label_embeddings = self.embed_distance_matrix(D, emb_dim=self.emb_dim) + + class_offsets = [0] + for idx, dataset in enumerate(datasets): + X, Y = self.preprocess_dataset(dataset, dataset_id=idx) + num_classes = len(torch.unique(Y)) + class_offsets.append(class_offsets[-1] + num_classes) + + augmented_datasets: List[torch.Tensor] = [] + for idx, dataset in enumerate(datasets): + Z = self.augment_features( + dataset, label_embeddings, idx, class_offsets) + augmented_datasets.append(Z) + + if reference is None and create_reference: + all_data = torch.cat(augmented_datasets, dim=0) + ref_size = min(1000, all_data.shape[0] // len(datasets)) + ref_indices = torch.randperm(all_data.shape[0])[:ref_size] + reference = all_data[ref_indices].float() + elif reference is None: + raise ValueError( + "Either provide 'reference' or set 'create_reference=True'") + + task_embeddings = [] + ref_size = reference.shape[0] + + for Z in augmented_datasets: + Z = Z.float() + # Squared-Euclidean cost, consistent with the W2 geometry used above + C = ot.dist(Z.cpu().numpy(), reference.cpu().numpy(), + metric='sqeuclidean') + gamma = ot.emd(ot.unif(Z.shape[0]), ot.unif( + ref_size), C, numItermax=1_000_000) + gamma = torch.from_numpy(gamma).float().to(self.device) + f = (ref_size * gamma.T @ Z - reference) / np.sqrt(ref_size) + task_embeddings.append(f) + + task_embeddings_tensor = torch.stack(task_embeddings, dim=0) + return task_embeddings_tensor, label_embeddings, augmented_datasets + + def embed(self, datasets, **kwargs): + """Compute WTE embeddings; satisfies the BaseEmbedder abstract interface.""" + return self.compute_wte(datasets, **kwargs) + + def clear_cache(self) -> None: + """Clear all caches to free memory.""" + self._stats_cache.clear() + self._data_cache.clear() diff --git
a/src/mylib/__init__.py b/src/mylib/__init__.py deleted file mode 100755 index b8023d8..0000000 --- a/src/mylib/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.0.1' diff --git a/src/mylib/train.py b/src/mylib/train.py deleted file mode 100755 index 15f6729..0000000 --- a/src/mylib/train.py +++ /dev/null @@ -1,132 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -''' -The :mod:`mylib.train` contains classes: - -- :class:`mylib.train.Trainer` - -The :mod:`mylib.train` contains functions: - -- :func:`mylib.train.cv_parameters` -''' -from __future__ import print_function - -__docformat__ = 'restructuredtext' - -import numpy -from scipy.special import expit -from sklearn.linear_model import LogisticRegression -from sklearn.model_selection import train_test_split -from sklearn.metrics import classification_report - -class SyntheticBernuliDataset(object): - r'''Base class for synthetic dataset.''' - def __init__(self, n=10, m=100, seed=42): - r'''Constructor method - - :param n: the number of features - :type n: int - :param m: the number of objects - :type m: int - :param seed: seed for random state. - :type seed: int - ''' - rs = numpy.random.RandomState(seed) - - self.w = rs.randn(n) # Sample the parameter vector from a normal distribution - self.X = rs.randn(m, n) # Sample feature vectors from a normal distribution - - self.y = rs.binomial(1, expit(self.X@self.w)) # Data-generating hypothesis: the target variable follows a Bernoulli scheme - - -class Trainer(object): - r'''Base class for all trainers.''' - def __init__(self, model, X, Y, seed=42): - r'''Constructor method - - :param model: The class with fit and predict methods. - :type model: object - - :param X: The array of shape - `num_elements` :math:`\times` `num_feature`.
- :type X: numpy.array - :param Y: The array of shape - `num_elements` :math:`\times` `num_answers`. - :type Y: numpy.array - - :param seed: Seed for random state. - :type seed: int - ''' - self.model = model - self.seed = seed - ( - self.X_train, - self.X_val, - self.Y_train, - self.Y_val - ) = train_test_split(X, Y, random_state=self.seed) - - def train(self): - r''' Train the model - ''' - self.model.fit(self.X_train, self.Y_train) - - def eval(self, output_dict=False): - r'''Evaluate the model on the initial validation dataset. - ''' - return classification_report( - self.Y_val, - self.model.predict( - self.X_val), output_dict=output_dict) - - def test(self, X, Y, output_dict=False): - r"""Evaluate the model on a given dataset. - - :param X: The array of shape - `num_elements` :math:`\times` `num_feature`. - :type X: numpy.array - :param Y: The array of shape - `num_elements` :math:`\times` `num_answers`. - :type Y: numpy.array - """ - return classification_report( - Y, self.model.predict(X), output_dict=output_dict) - - -def cv_parameters(X, Y, seed=42, minimal=0.1, maximum=25, count=100): - r'''Run an experiment over different regularisation parameters - and return the accuracy and weights of LogisticRegression for each parameter. - - :param X: The array of shape - `num_elements` :math:`\times` `num_feature`. - :type X: numpy.array - :param Y: The array of shape - `num_elements` :math:`\times` `num_answers`. - :type Y: numpy.array - - :param seed: Seed for random state. - :type seed: int - :param minimal: Minimum value for the Cs linspace. - :type minimal: int - :param maximum: Maximum value for the Cs linspace. - :type maximum: int - :param count: Number of the Cs points.
- :type count: int - ''' - - Cs = numpy.linspace(minimal, maximum, count) - parameters = [] - accuracy = [] - for C in Cs: - trainer = Trainer( - LogisticRegression(penalty='l1', solver='saga', C=1/C), - X, Y, - ) - - trainer.train() - - accuracy.append(trainer.eval(output_dict=True)['accuracy']) - - parameters.extend(trainer.model.coef_) - - return Cs, accuracy, parameters diff --git a/src/requirements.txt b/src/requirements.txt deleted file mode 100755 index 3ff5802..0000000 --- a/src/requirements.txt +++ /dev/null @@ -1,3 +0,0 @@ -numpy==1.21.5 -scipy==1.4.1 -scikit-learn==1.0.2 \ No newline at end of file diff --git a/src/setup.py b/src/setup.py deleted file mode 100755 index f9c5472..0000000 --- a/src/setup.py +++ /dev/null @@ -1,34 +0,0 @@ -import io -import re -from setuptools import setup, find_packages - -from mylib import __version__ - -def read(file_path): - with io.open(file_path, 'r', encoding='utf-8') as f: - return f.read() - - -readme = read('README.rst') -# strip pinned local versions from the requirements file (per PEP 440) -requirements = '\n'.join( - re.findall(r'^([^\s^+]+).*$', - read('requirements.txt'), - flags=re.MULTILINE)) - - -setup( - # metadata - name='mylib', - version=__version__, - license='MIT', - author='Andrey Grabovoy', - author_email="grabovoy.av@phystech.edu", - description='mylib, python package', - long_description=readme, - url='https://github.com/Intelligent-Systems-Phystech/ProjectTemplate', - - # options - packages=find_packages(), - install_requires=requirements, -) diff --git a/tests/test_base_embedder.py b/tests/test_base_embedder.py new file mode 100644 index 0000000..342b6bb --- /dev/null +++ b/tests/test_base_embedder.py @@ -0,0 +1,34 @@ +import pytest +import torch + +from data_meta_map.base_embedder import BaseEmbedder + + +class _ConcreteEmbedder(BaseEmbedder): + def embed(self, X, y=None): + return X.mean(dim=0) + + +def test_abstract_cannot_be_instantiated(): + with
pytest.raises(TypeError): + BaseEmbedder() + + +def test_concrete_subclass_instantiates(): + e = _ConcreteEmbedder() + assert isinstance(e, BaseEmbedder) + + +def test_missing_embed_raises(): + class _Incomplete(BaseEmbedder): + pass + + with pytest.raises(TypeError): + _Incomplete() + + +def test_embed_callable(): + e = _ConcreteEmbedder() + X = torch.randn(10, 4) + result = e.embed(X) + assert result.shape == (4,) diff --git a/tests/test_dataset2vec_embedder.py b/tests/test_dataset2vec_embedder.py new file mode 100644 index 0000000..809660f --- /dev/null +++ b/tests/test_dataset2vec_embedder.py @@ -0,0 +1,99 @@ +import numpy as np +import pytest +import torch + +from data_meta_map.dataset2vec.config import Dataset2VecConfig, OptimizerConfig +from data_meta_map.dataset2vec.model import Dataset2Vec +from data_meta_map.dataset2vec_embedder import Dataset2VecEmbedder, dataset2vec + + +@pytest.fixture +def model(): + return Dataset2Vec(Dataset2VecConfig(), OptimizerConfig()) + + +@pytest.fixture +def embedder(model): + return Dataset2VecEmbedder(model) + + +@pytest.fixture +def fitted_embedder(embedder): + embedder._is_fitted = True + return embedder + + +# ── init ────────────────────────────────────────────────────────────── + +def test_init_attributes(model): + e = Dataset2VecEmbedder(model) + assert e.model is model + assert e.max_epochs == 10 + assert e.batch_size == 32 + assert e.n_batches == 100 + assert e._is_fitted is False + + +def test_init_custom_params(model): + e = Dataset2VecEmbedder(model, max_epochs=5, batch_size=16, n_batches=50) + assert e.max_epochs == 5 + assert e.batch_size == 16 + assert e.n_batches == 50 + + +# ── embed
───────────────────────────────────────────────────────────────────── + +def test_embed_before_fit_raises(embedder): + X = torch.randn(10, 5) + y = torch.randint(0, 2, (10,)).float() + with pytest.raises(RuntimeError): + embedder.embed(X, y) + + +def test_embed_returns_ndarray(fitted_embedder): + X = torch.randn(10, 5) + y = torch.randint(0, 2, (10,)).float() + result = fitted_embedder.embed(X, y) + assert isinstance(result, np.ndarray) + assert result.ndim == 1 + assert result.shape[0] == fitted_embedder.model.output_size + + +def test_embed_output_shape_matches_config(model): + cfg = Dataset2VecConfig(output_size=8) + e = Dataset2VecEmbedder(Dataset2Vec(cfg, OptimizerConfig())) + e._is_fitted = True + X = torch.randn(10, 3) + y = torch.randint(0, 2, (10,)).float() + result = e.embed(X, y) + assert result.shape == (8,) + + +# ── save / load ─────────────────────────────────────────────────────── + +def test_save_and_load(fitted_embedder, tmp_path): + path = str(tmp_path / "weights.pt") + fitted_embedder.save(path) + + e2 = Dataset2VecEmbedder(Dataset2Vec(Dataset2VecConfig(), OptimizerConfig())) + assert not e2._is_fitted + result = e2.load(path) + assert result is e2 + assert e2._is_fitted + + +def test_load_returns_self(embedder, tmp_path): + embedder._is_fitted = True + path = str(tmp_path / "w.pt") + embedder.save(path) + ret = embedder.load(path) + assert ret is embedder + + +# ── dataset2vec convenience function ───────────────────────────────── + +def test_dataset2vec_raises_without_fit(model): + X = torch.randn(10, 5) + y = torch.randint(0, 2, (10,)).float() + with pytest.raises(RuntimeError):
+ dataset2vec(model, X, y, fit_data=None) diff --git a/tests/test_dataset2vec_internals.py b/tests/test_dataset2vec_internals.py new file mode 100644 index 0000000..80889c6 --- /dev/null +++ b/tests/test_dataset2vec_internals.py @@ -0,0 +1,243 @@ +import numpy as np +import pandas as pd +import pytest +import torch + +from data_meta_map.base_embedder import BaseEmbedderDEPRECATED +from data_meta_map.dataset2vec.loader import ( + Dataset2VecLoader, + RepeatableDataset2VecLoader, +) +from data_meta_map.dataset2vec.train import LightningBase +from data_meta_map.dataset2vec.utils import ( + DataUtils, + InconsistentTypesException, + InvalidDataTypeException, + Validators, +) + + +def _make_tabular_np(n_rows: int = 32, n_features: int = 5) -> np.ndarray: + # Last column is treated as target by Dataset2VecLoader. + X = np.random.RandomState(0).randn(n_rows, n_features).astype(np.float32) + y = (np.arange(n_rows) % 2).astype(np.float32).reshape(-1, 1) + return np.concatenate([X, y], axis=1) + + +# ----------------------------------------------------------------------------- +# utils.py +# ----------------------------------------------------------------------------- + + +def test_validators_smoke(): + assert Validators.is_positive(1) == 1 + assert Validators.non_negative(0) == 0 + assert Validators.all_elements_positive([1, 2]) == [1, 2] + assert Validators.non_empty([1]) == [1] + + +def test_sample_random_subset_with_int_and_singleton(monkeypatch): + # int input -> becomes arange(a) + monkeypatch.setattr(np.random, "uniform", lambda *args, **kwargs: np.zeros(5)) + subset = DataUtils.sample_random_subset(5) + assert np.array_equal(subset, np.arange(5)) + + # singleton array must return itself (no randomness) + subset2 = DataUtils.sample_random_subset(np.array([7])) + assert np.array_equal(subset2, np.array([7])) + + +def test_sample_random_subset_returns_all_when_empty_subset(monkeypatch): + # Force "all False" mask: uniform returns 1.0 -> all comparisons (<0.5) are False. 
+ monkeypatch.setattr(np.random, "uniform", lambda *args, **kwargs: np.ones(4)) + a = np.arange(4) + subset = DataUtils.sample_random_subset(a) + assert np.array_equal(subset, a) + + +def test_index_tensor_using_lists(): + t = torch.arange(20).reshape(5, 4) + rows = np.array([0, 2, 4]) + cols = np.array([1, 3]) + out = DataUtils.index_tensor_using_lists(t, rows, cols) + assert out.shape == (3, 2) + assert torch.equal(out, t[rows][:, cols]) + + +# ----------------------------------------------------------------------------- +# loader.py +# ----------------------------------------------------------------------------- + + +def test_loader_read_data_from_directory(tmp_path): + df1 = pd.DataFrame(_make_tabular_np(16, 3)) + df2 = pd.DataFrame(_make_tabular_np(20, 3)) + (tmp_path / "a.csv").write_text(df1.to_csv(index=False)) + (tmp_path / "b.csv").write_text(df2.to_csv(index=False)) + + loader = Dataset2VecLoader(batch_size=2, n_batches=1).load(tmp_path) + assert loader.n_datasets == 2 + assert len(loader.Xs) == 2 + assert len(loader.ys) == 2 + assert loader.Xs[0].ndim == 2 + assert loader.ys[0].ndim == 2 + + +def test_loader_read_data_list_of_paths_inconsistent_types_raises(tmp_path): + p = tmp_path / "a.csv" + p.write_text(pd.DataFrame(_make_tabular_np(16, 3)).to_csv(index=False)) + + with pytest.raises(InconsistentTypesException): + Dataset2VecLoader().load([p, pd.DataFrame(_make_tabular_np(16, 3))]) + + +def test_loader_to_torch_raises_on_invalid_type(): + loader = Dataset2VecLoader() + with pytest.raises(InvalidDataTypeException): + loader._to_torch([1, 2, 3]) # type: ignore[arg-type] + + +def test_loader_normalize_to_pandas_supported_types(): + loader = Dataset2VecLoader() + t = torch.randn(8, 3) + df = pd.DataFrame(np.random.randn(8, 3)) + arr = np.random.randn(8, 3) + assert isinstance(loader._normalize_to_pandas(t), pd.DataFrame) + assert isinstance(loader._normalize_to_pandas(df), pd.DataFrame) + assert isinstance(loader._normalize_to_pandas(arr), pd.DataFrame) 
+ + +def test_loader_normalize_to_pandas_invalid_type_raises(): + loader = Dataset2VecLoader() + with pytest.raises(InvalidDataTypeException): + loader._normalize_to_pandas("nope") # type: ignore[arg-type] + + +def test_loader_iter_and_stopiteration(): + data = [pd.DataFrame(_make_tabular_np(32, 4)), pd.DataFrame(_make_tabular_np(40, 4))] + loader = Dataset2VecLoader(batch_size=3, n_batches=2).load(data) + + it = iter(loader) + batch1 = next(it) + assert isinstance(batch1, list) + assert len(batch1) == 3 + assert len(batch1[0]) == 5 + + batch2 = next(it) + assert len(batch2) == 3 + + with pytest.raises(StopIteration): + next(it) + + +def test_repeatable_loader_returns_same_batches_each_iter(): + data = [pd.DataFrame(_make_tabular_np(32, 4)), pd.DataFrame(_make_tabular_np(40, 4))] + loader = RepeatableDataset2VecLoader(batch_size=2, n_batches=2).load(data) + + it1 = iter(loader) + it2 = iter(loader) + b11 = next(it1) + b21 = next(it2) + + # Compare tensor values inside the first example of the batch. + for i in range(4): + assert torch.equal(b11[0][i], b21[0][i]) + assert b11[0][4] == b21[0][4] + + +# ----------------------------------------------------------------------------- +# train.py (LightningBase) +# ----------------------------------------------------------------------------- + + +class _ToyLightning(LightningBase): + def forward(self, X: torch.Tensor, y: torch.Tensor) -> torch.Tensor: + # Simple deterministic embedding. + if y.ndim == 1: + y = y.reshape(-1, 1) + return torch.cat([X.mean(dim=0, keepdim=True), y.mean(dim=0, keepdim=True)], dim=1).squeeze(0) + + def calculate_loss(self, labels: torch.Tensor, similarities: torch.Tensor) -> torch.Tensor: + # Encourage similarities to match labels. 
+ labels = labels.float() + return torch.mean((similarities - labels) ** 2) + + +def _make_batch(batch_size: int = 4) -> list[tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, int]]: + batch = [] + for i in range(batch_size): + X1 = torch.randn(16, 5) + y1 = torch.randint(0, 2, (16,)).float() + X2 = torch.randn(16, 5) + y2 = torch.randint(0, 2, (16,)).float() + label = int(i % 2 == 0) + batch.append((X1, y1, X2, y2, label)) + return batch + + +def test_extract_labels_and_similarities_shapes(): + m = _ToyLightning() + batch = _make_batch(3) + labels, sims = m.extract_labels_and_similarities_from_batch(batch) + assert labels.shape == (3,) + assert sims.shape == (3,) + assert torch.all((sims >= 0) & (sims <= 1)) + + +def test_training_step_and_epoch_hooks_smoke(): + m = _ToyLightning() + batch = _make_batch(5) + + m.on_train_epoch_start() + out = m.training_step(batch, batch_idx=0) + assert "loss" in out and "predictions" in out + + m.on_train_batch_end(out, batch, batch_idx=0) + assert len(m.training_predictions) == 1 + assert len(m.training_labels) == 1 + + # Should not crash even without Lightning installed (log is a no-op in fallback). + m.on_train_epoch_end() + + +def test_on_train_batch_end_rejects_non_mapping(): + m = _ToyLightning() + m.on_train_epoch_start() + batch = _make_batch(2) + with pytest.raises(TypeError): + m.on_train_batch_end(outputs=["not", "a", "mapping"], batch=batch, batch_idx=0) # type: ignore[arg-type] + + +# ----------------------------------------------------------------------------- +# base_embedder.py (deprecated stats helper) +# ----------------------------------------------------------------------------- + + +class _StatsOnlyEmbedder(BaseEmbedderDEPRECATED): + # Stubs, not used in these tests. 
+ def preprocess_dataset(self, data): + raise NotImplementedError + + def compute_pairwise_distances(self, datasets, symmetric=True): + raise NotImplementedError + + def embed_distance_matrix(self, distance_matrix, emb_dim=None): + raise NotImplementedError + + def augment_features(self, data, label_embeddings, dataset_idx, class_offsets): + raise NotImplementedError + + +def test_get_class_statistics_single_sample_covariance_zero(): + e = _StatsOnlyEmbedder(emb_dim=2, device="cpu") + X = torch.tensor([[1.0, 2.0], [10.0, 20.0], [3.0, 4.0]]) + Y = torch.tensor([0, 1, 0]) + + means, covs = e.get_class_statistics(X, Y) + assert means.shape == (2, 2) + assert covs.shape == (2, 2, 2) + + # Label 1 has a single sample -> covariance must be zeros. + # torch.unique sorts labels; label=0 is idx0, label=1 is idx1. + assert torch.allclose(covs[1], torch.zeros(2, 2)) + diff --git a/tests/test_simple.py b/tests/test_simple.py deleted file mode 100644 index 2b7c260..0000000 --- a/tests/test_simple.py +++ /dev/null @@ -1,33 +0,0 @@ -from mylib.train import cv_parameters, Trainer, SyntheticBernuliDataset -from sklearn.linear_model import LogisticRegression - -def test_sample(): - assert 0 == 0 - - -def test_dataset(): - dataset = SyntheticBernuliDataset(n=10, m=100, seed=42) - - assert len(dataset.X) == len(dataset.y) - -def test_trainer(): - dataset = SyntheticBernuliDataset(n=10, m=100, seed=42) - - trainer = Trainer( - LogisticRegression(penalty='l1', solver='saga', C=1.0), - dataset.X, dataset.y, - ) - trainer.train() - - assert trainer.eval(output_dict=True)['accuracy'] == 0.96 - - assert trainer.test( - trainer.X_val, trainer.Y_val, output_dict=True - )['accuracy'] == 0.96 - -def test_cv(): - dataset = SyntheticBernuliDataset(n=10, m=100, seed=42) - - Cs, accuracy, parameters = cv_parameters(dataset.X, dataset.y) - - assert len(Cs) == len(accuracy) == len(parameters) \ No newline at end of file diff --git a/tests/test_task2vec.py b/tests/test_task2vec.py new file mode 100644 
index 0000000..0223c3a --- /dev/null +++ b/tests/test_task2vec.py @@ -0,0 +1,345 @@ +import numpy as np +import pytest +import torch +import torch.nn as nn +from torch.utils.data import Dataset, TensorDataset + +from data_meta_map.task2vec.task2vec import ( + Embedding, + ProbeNetwork, + Task2Vec, + task2vec, +) +from data_meta_map.task2vec.task_similarity import ( + get_hessian, + get_variance, + kl, + jsd, + cosine, + correlation, + pdist, + cdist, +) +from data_meta_map.task2vec.utils import AverageMeter, get_error, get_device +from data_meta_map.models import get_model +from data_meta_map import datasets + + +# โ”€โ”€ helpers โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def _make_embedding(n=8, seed=0): + rng = np.random.default_rng(seed) + hess = np.abs(rng.standard_normal(n)) + 0.1 + scale = np.ones(n) + return Embedding(hessian=hess, scale=scale) + + +class _SimpleProbeNetwork(ProbeNetwork): + """Minimal ProbeNetwork for unit-testing Task2Vec.__init__.""" + + def __init__(self, num_classes=10): + super().__init__() + self.fc = nn.Linear(16, num_classes) + self.layers = [self.fc] + + @property + def classifier(self): + return self.fc + + @classifier.setter + def classifier(self, val): + self.fc = val + + def forward(self, x, start_from=0): + return self.fc(x) + + +# โ”€โ”€ Embedding โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestEmbedding: + def test_stores_as_ndarray(self): + hess = [1.0, 2.0, 3.0] + scale = [1.0, 1.0, 1.0] + e = Embedding(hessian=hess, scale=scale) + assert isinstance(e.hessian, np.ndarray) + assert isinstance(e.scale, np.ndarray) + np.testing.assert_array_equal(e.hessian, hess) + + def test_meta_default_none(self): + 
e = Embedding(hessian=[1.0], scale=[1.0]) + assert e.meta is None + + def test_meta_stored(self): + e = Embedding(hessian=[1.0], scale=[1.0], meta={"task": "test"}) + assert e.meta == {"task": "test"} + + def test_tensor_input_converted(self): + hess = torch.tensor([1.0, 2.0]) + e = Embedding(hessian=hess.numpy(), scale=np.ones(2)) + assert isinstance(e.hessian, np.ndarray) + + +# โ”€โ”€ ProbeNetwork โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestProbeNetwork: + def test_abstract_cannot_instantiate(self): + with pytest.raises(TypeError): + ProbeNetwork() + + def test_concrete_instantiates(self): + net = _SimpleProbeNetwork() + assert isinstance(net, ProbeNetwork) + + def test_classifier_property(self): + net = _SimpleProbeNetwork() + assert net.classifier is net.fc + +# โ”€โ”€ Task2Vec.__init__ โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + + +class TestTask2VecInit: + def test_default_attributes(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model) + assert t2v.model is model + assert t2v.skip_layers == 0 + assert t2v.max_samples is None + assert t2v.method == "montecarlo" + assert t2v.bernoulli is False + + def test_custom_attributes(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model, skip_layers=1, max_samples=100, + method="variational", bernoulli=True) + assert t2v.skip_layers == 1 + assert t2v.max_samples == 100 + assert t2v.method == "variational" + assert t2v.bernoulli is True + + def test_invalid_method_raises(self): + model = _SimpleProbeNetwork() + with pytest.raises(AssertionError): + Task2Vec(model, method="invalid") + + def test_negative_skip_layers_raises(self): + model = _SimpleProbeNetwork() + with pytest.raises(AssertionError): + Task2Vec(model, 
skip_layers=-1) + + def test_device_set_from_model(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model) + assert t2v.device == torch.device("cpu") + + def test_default_dicts_initialized(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model) + assert isinstance(t2v.classifier_opts, dict) + assert isinstance(t2v.method_opts, dict) + assert isinstance(t2v.loader_opts, dict) + + def test_loss_fn_cross_entropy_by_default(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model) + assert isinstance(t2v.loss_fn, nn.CrossEntropyLoss) + + def test_loss_fn_bce_when_bernoulli(self): + model = _SimpleProbeNetwork() + t2v = Task2Vec(model, bernoulli=True) + assert isinstance(t2v.loss_fn, nn.BCEWithLogitsLoss) + + def test_inherits_base_embedder(self): + from data_meta_map.base_embedder import BaseEmbedder + model = _SimpleProbeNetwork() + t2v = Task2Vec(model) + assert isinstance(t2v, BaseEmbedder) + + +# โ”€โ”€ Task2Vec.extract_embedding โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestExtractEmbeddingRealData: + def _make_model_with_grad2(self, n_filters=4): + model = _SimpleProbeNetwork(num_classes=2) + # Simulate what montecarlo_fisher stores on weight tensors + for name, module in model.named_modules(): + if module is model.classifier: + continue + if hasattr(module, "weight"): + module.weight.grad2_acc = torch.ones_like(module.weight) * 0.5 + return model + + def test_mnist_resnet(self): + dataset = datasets.__dict__['mnist'](root='../../data')[0] + model = get_model('resnet18', pretrained=True, + num_classes=int(max(dataset.targets)+1)).cuda() + task2vec_embedder = Task2Vec(model, skip_layers=6, max_samples=200) + emb = task2vec_embedder.embed(dataset) + assert isinstance(emb, np.ndarray) + assert emb.shape == (7680, ) + + def test_mnist_resnet_less_skip(self): + dataset = datasets.__dict__['mnist'](root='../../data')[0] + model = 
get_model('resnet18', pretrained=True, + num_classes=int(max(dataset.targets)+1)) + task2vec_embedder = Task2Vec(model, skip_layers=2, max_samples=200) + emb = task2vec_embedder.embed(dataset) + assert isinstance(emb, np.ndarray) + assert emb.shape == (9472,) + + def test_extract_hessian(self): + dataset = datasets.__dict__['mnist'](root='../../data')[0] + model = get_model('resnet18', pretrained=True, + num_classes=int(max(dataset.targets)+1)).cuda() + task2vec_embedder = Task2Vec(model, skip_layers=2, max_samples=200) + emb = task2vec_embedder.embed(dataset, create_final_embedding=False) + assert isinstance(emb.hessian, np.ndarray) + assert isinstance(emb.scale, np.ndarray) + assert emb.scale.shape == (9472,) + assert emb.hessian.shape == (9472,) + + +class TestDistanceFunctions: + @pytest.fixture + def pair(self): + return _make_embedding(8, 0), _make_embedding(8, 1) + + def test_get_variance_returns_array(self, pair): + e = pair[0] + var = get_variance(e) + assert isinstance(var, np.ndarray) + assert var.shape == e.hessian.shape + + def test_get_variance_normalized(self, pair): + e = pair[0] + var = get_variance(e, normalized=True) + assert isinstance(var, np.ndarray) + + def test_get_hessian_returns_array(self, pair): + e = pair[0] + h = get_hessian(e) + assert isinstance(h, np.ndarray) + np.testing.assert_array_equal(h, e.hessian) + + def test_get_hessian_normalized(self, pair): + e = pair[0] + h = get_hessian(e, normalized=True) + assert isinstance(h, np.ndarray) + + def test_kl_self_is_zero(self): + e = _make_embedding(8, 0) + assert kl(e, e) == pytest.approx(0.0, abs=1e-6) + + def test_kl_non_negative(self, pair): + assert kl(*pair) >= 0.0 + + def test_kl_symmetric(self, pair): + e0, e1 = pair + assert kl(e0, e1) == pytest.approx(kl(e1, e0), rel=1e-6) + + def test_jsd_self_is_zero(self): + e = _make_embedding(8, 0) + assert jsd(e, e) == pytest.approx(0.0,
abs=1e-6) + + def test_jsd_non_negative(self, pair): + assert jsd(*pair) >= 0.0 + + def test_cosine_self_is_zero(self): + e = _make_embedding(8, 0) + assert cosine(e, e) == pytest.approx(0.0, abs=1e-6) + + def test_cosine_bounded(self, pair): + d = cosine(*pair) + assert 0.0 <= d <= 2.0 + + def test_correlation_self_is_zero(self): + e = _make_embedding(8, 0) + assert correlation(e, e) == pytest.approx(0.0, abs=1e-6) + + +# โ”€โ”€ task_similarity: pdist / cdist โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestPdistCdist: + def test_pdist_shape(self): + embeddings = [_make_embedding(8, i) for i in range(4)] + D = pdist(embeddings, distance="cosine") + assert D.shape == (4, 4) + + def test_pdist_diagonal_zero(self): + embeddings = [_make_embedding(8, i) for i in range(3)] + D = pdist(embeddings, distance="cosine") + np.testing.assert_allclose(np.diag(D), 0.0, atol=1e-6) + + def test_pdist_symmetric(self): + embeddings = [_make_embedding(8, i) for i in range(3)] + D = pdist(embeddings, distance="cosine") + np.testing.assert_allclose(D, D.T, atol=1e-6) + + def test_cdist_shape(self): + src = [_make_embedding(8, i) for i in range(3)] + tgt = [_make_embedding(8, i + 10) for i in range(2)] + D = cdist(src, tgt, distance="cosine") + assert D.shape == (3, 2) + + def test_pdist_kl(self): + embeddings = [_make_embedding(8, i) for i in range(3)] + D = pdist(embeddings, distance="kl") + assert D.shape == (3, 3) + np.testing.assert_allclose(np.diag(D), 0.0, atol=1e-6) + + +# โ”€โ”€ utils โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +class TestAverageMeter: + def test_initial_state(self): + m = AverageMeter() + assert m.avg["loss"] == 0.0 + + def test_single_update(self): + m = AverageMeter() + m.update(n=4, loss=2.0) + 
assert m.avg["loss"] == pytest.approx(2.0) + + def test_multiple_updates_weighted(self): + m = AverageMeter() + m.update(n=2, loss=1.0) + m.update(n=2, loss=3.0) + assert m.avg["loss"] == pytest.approx(2.0) + + def test_reset_clears_state(self): + m = AverageMeter() + m.update(n=1, loss=5.0) + m.reset() + assert m.sum["loss"] == 0 + assert m.count["loss"] == 0 + + def test_multiple_metrics(self): + m = AverageMeter() + m.update(n=1, loss=1.0, error=0.5) + assert m.avg["loss"] == pytest.approx(1.0) + assert m.avg["error"] == pytest.approx(0.5) + + +class TestGetError: + def test_all_correct(self): + output = torch.tensor([[0.0, 10.0], [10.0, 0.0]]) + target = torch.tensor([1, 0]) + assert get_error(output, target) == pytest.approx(0.0) + + def test_all_wrong(self): + output = torch.tensor([[10.0, 0.0], [0.0, 10.0]]) + target = torch.tensor([1, 0]) + assert get_error(output, target) == pytest.approx(100.0) + + def test_half_correct(self): + output = torch.tensor([[10.0, 0.0], [10.0, 0.0]]) + target = torch.tensor([0, 1]) + assert get_error(output, target) == pytest.approx(50.0) + + +class TestGetDevice: + def test_returns_cpu_device(self): + model = nn.Linear(4, 2) + device = get_device(model) + assert device == torch.device("cpu") diff --git a/tests/test_wasserstein.py b/tests/test_wasserstein.py new file mode 100644 index 0000000..0f01b15 --- /dev/null +++ b/tests/test_wasserstein.py @@ -0,0 +1,215 @@ +import pytest +import torch +from torch.utils.data import Dataset, DataLoader + +from data_meta_map.wasserstein_embedder import WassersteinEmbedder + + +class MockDataset(Dataset): + def __init__(self, n=100, d=10, k=5, seed=42): + torch.manual_seed(seed) + self.data = torch.randn(n, d) + self.labels = torch.randint(0, k, (n,)) + + def __len__(self): + return len(self.data) + + def __getitem__(self, idx): + return self.data[idx], self.labels[idx] + + +@pytest.fixture +def embedder(): + return WassersteinEmbedder(emb_dim=2) + + +@pytest.fixture +def small_ds(): + 
return MockDataset(n=50, d=10, k=3, seed=42) + + +# โ”€โ”€ init โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_init_defaults(embedder): + assert embedder.emb_dim == 2 + assert embedder.device == torch.device("cpu") + assert embedder.max_samples is None + assert embedder.batch_size == 64 + assert embedder.gaussian_assumption is True + assert embedder.diagonal_cov is False + + +def test_init_custom_params(): + e = WassersteinEmbedder( + emb_dim=5, device="cpu", max_samples=50, batch_size=32, + gaussian_assumption=False, diagonal_cov=True, sqrt_niters=10, + ) + assert e.emb_dim == 5 + assert e.max_samples == 50 + assert e.batch_size == 32 + assert e.gaussian_assumption is False + assert e.diagonal_cov is True + assert e.sqrt_niters == 10 + + +def test_default_emb_dim(): + e = WassersteinEmbedder() + assert e.emb_dim == 2 + + +# โ”€โ”€ preprocess_dataset โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_preprocess_dataset_shape(embedder, small_ds): + X, Y = embedder.preprocess_dataset(small_ds, dataset_id=0) + assert X.shape == (50, 10) + assert Y.shape == (50,) + assert X.dtype == torch.float32 + assert Y.dtype == torch.long + + +def test_preprocess_via_dataloader(embedder, small_ds): + loader = DataLoader(small_ds, batch_size=16) + X, Y = embedder.preprocess_dataset(loader, dataset_id=1) + assert X.shape == (50, 10) + assert Y.shape == (50,) + + +def test_preprocess_max_samples(): + e = WassersteinEmbedder(emb_dim=2, max_samples=30) + ds = MockDataset(n=100, d=10, k=5) + X, Y = e.preprocess_dataset(ds, dataset_id=0) + assert X.shape[0] == 30 + + +def test_preprocess_caching(embedder, small_ds): + X1, Y1 = embedder.preprocess_dataset(small_ds, dataset_id=0) + X2, Y2 
= embedder.preprocess_dataset(small_ds, dataset_id=0) + assert torch.equal(X1, X2) + assert torch.equal(Y1, Y2) + + +def test_preprocess_invalid_type(embedder): + with pytest.raises(TypeError): + embedder.preprocess_dataset("not_a_dataset") + + +# โ”€โ”€ _compute_gaussian_stats โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_gaussian_stats_full(embedder): + X = torch.randn(100, 10) + Y = torch.tensor([0] * 50 + [1] * 50) + means, covs, offsets = embedder._compute_gaussian_stats(X, Y) + assert means.shape == (2, 10) + assert covs.shape == (2, 10, 10) + assert offsets == [0, 1] + + +def test_gaussian_stats_diagonal(): + e = WassersteinEmbedder(emb_dim=2, diagonal_cov=True) + X = torch.randn(100, 10) + Y = torch.tensor([0] * 30 + [1] * 30 + [2] * 40) + means, covs, offsets = e._compute_gaussian_stats(X, Y) + assert means.shape == (3, 10) + assert covs.shape == (3, 10) # diagonal stored as vector + + +# โ”€โ”€ _bures_wasserstein_distance โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_bures_distance_identical_dists(embedder): + mean = torch.zeros(2) + cov = torch.eye(2) + d = embedder._bures_wasserstein_distance(mean, cov, mean, cov) + assert torch.allclose(d, torch.tensor(0.0), atol=1e-2) + + +def test_bures_distance_different_means(embedder): + m1 = torch.tensor([0.0, 0.0]) + m2 = torch.tensor([1.0, 0.0]) + cov = torch.eye(2) + d = embedder._bures_wasserstein_distance(m1, cov, m2, cov) + assert d > 0.0 + assert torch.allclose(d, torch.tensor(1.0), atol=1e-5) + + +def test_bures_distance_symmetry(embedder): + torch.manual_seed(0) + m1, m2 = torch.randn(4), torch.randn(4) + A = torch.randn(4, 4) + cov = A @ A.T + torch.eye(4) * 0.1 + d12 = embedder._bures_wasserstein_distance(m1, cov, m2, cov) + d21 = embedder._bures_wasserstein_distance(m2, cov, 
m1, cov) + assert torch.allclose(d12, d21, atol=1e-4) + + +# โ”€โ”€ compute_pairwise_distances โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_pairwise_distances_single_dataset(): + e = WassersteinEmbedder(emb_dim=2, max_samples=30) + ds = MockDataset(n=50, d=10, k=3) + D = e.compute_pairwise_distances([ds]) + assert D.shape == (3, 3) + assert torch.all(D >= 0) + # assert torch.allclose(D, D.T, atol=1e-5) + # assert torch.allclose(D.diag(), torch.zeros(3), atol=1e-2) + + +def test_pairwise_distances_multiple_datasets(): + e = WassersteinEmbedder(emb_dim=2, max_samples=20) + ds1 = MockDataset(n=30, d=10, k=2, seed=1) + ds2 = MockDataset(n=30, d=10, k=3, seed=2) + D = e.compute_pairwise_distances([ds1, ds2]) + assert D.shape == (5, 5) + assert torch.all(D >= 0) + + +def test_pairwise_distances_diagonal_mode(): + e = WassersteinEmbedder(emb_dim=2, max_samples=20, diagonal_cov=True) + ds = MockDataset(n=40, d=8, k=3) + D = e.compute_pairwise_distances([ds]) + assert D.shape == (3, 3) + assert torch.all(D >= 0) + + +# โ”€โ”€ embed_distance_matrix โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_embed_distance_matrix_shape(embedder): + points = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) + D = torch.cdist(points, points) + embs = embedder.embed_distance_matrix(D, emb_dim=2) + assert embs.shape == (4, 2) + + +def test_embed_distance_matrix_uses_default_dim(): + e = WassersteinEmbedder(emb_dim=3) + D = torch.zeros(5, 5) + embs = e.embed_distance_matrix(D) + assert embs.shape == (5, 3) + + +# โ”€โ”€ augment_features โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def 
test_augment_features_shape(embedder, small_ds): + label_embs = torch.randn(3, 2) + Z = embedder.augment_features(small_ds, label_embs, 0, [0, 3]) + assert Z.shape == (50, 12) # 10 features + 2 label emb dims + + +# โ”€โ”€ clear_cache โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_clear_cache(embedder, small_ds): + embedder.preprocess_dataset(small_ds, dataset_id=0) + assert len(embedder._data_cache) > 0 + embedder.clear_cache() + assert len(embedder._data_cache) == 0 + assert len(embedder._stats_cache) == 0 + + +# โ”€โ”€ embed (BaseEmbedder interface) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ + +def test_embed_method_returns_wte(): + e = WassersteinEmbedder(emb_dim=2, max_samples=20) + ds = MockDataset(n=20, d=8, k=2) + task_embs, label_embs, aug_data = e.embed([ds]) + assert task_embs.shape[0] == 1 + assert label_embs.shape[1] == 2 + assert len(aug_data) == 1