From 3435ea76174804700584dd00a72a40a8d5615497 Mon Sep 17 00:00:00 2001 From: ilyaspiridonov Date: Wed, 17 Apr 2024 00:08:50 +0300 Subject: [PATCH 1/2] Update KO translations --- site/ko/agents/overview.md | 128 ++ ...10_checkpointer_policysaver_tutorial.ipynb | 131 +- site/ko/agents/tutorials/1_dqn_tutorial.ipynb | 153 ++- .../tutorials/2_environments_tutorial.ipynb | 66 +- .../tutorials/3_policies_tutorial.ipynb | 58 +- .../agents/tutorials/4_drivers_tutorial.ipynb | 30 +- .../tutorials/5_replay_buffers_tutorial.ipynb | 42 +- .../tutorials/6_reinforce_tutorial.ipynb | 88 +- .../tutorials/7_SAC_minitaur_tutorial.ipynb | 118 +- .../tutorials/8_networks_tutorial.ipynb | 34 +- site/ko/agents/tutorials/9_c51_tutorial.ipynb | 64 +- .../agents/tutorials/bandits_tutorial.ipynb | 86 +- site/ko/agents/tutorials/intro_bandit.ipynb | 6 +- .../tutorials/per_arm_bandits_tutorial.ipynb | 58 +- .../agents/tutorials/ranking_tutorial.ipynb | 66 +- site/ko/datasets/overview.ipynb | 98 +- .../federated/design/life_of_a_computation.md | 4 +- site/ko/federated/design/tracing.md | 12 +- .../composing_learning_algorithms.ipynb | 94 +- ...erated_algorithm_with_tff_optimizers.ipynb | 57 +- .../custom_federated_algorithms_1.ipynb | 150 ++- ...onstruction_for_matrix_factorization.ipynb | 572 ++++++-- .../tutorials/random_noise_generation.ipynb | 37 +- site/ko/federated/tutorials/simulations.ipynb | 37 +- site/ko/guide/advanced_autodiff.ipynb | 160 ++- site/ko/guide/autodiff.ipynb | 128 +- site/ko/guide/basic_training_loops.ipynb | 60 +- .../guide/core/logistic_regression_core.ipynb | 192 ++- site/ko/guide/core/matrix_core.ipynb | 108 +- site/ko/guide/core/mlp_core.ipynb | 167 ++- site/ko/guide/create_op.md | 8 +- site/ko/guide/data.ipynb | 500 +++++-- site/ko/guide/distributed_training.ipynb | 162 ++- site/ko/guide/dtensor_overview.ipynb | 141 +- site/ko/guide/function.ipynb | 305 +++-- site/ko/guide/gpu.ipynb | 55 +- site/ko/guide/jax2tf.ipynb | 892 ++++++++++++ 
site/ko/guide/migrate/saved_model.ipynb | 80 +-
site/ko/guide/migrate/tflite.ipynb | 40 +-
site/ko/guide/mixed_precision.ipynb | 131 +-
site/ko/guide/profiler.md | 20 +-
site/ko/guide/ragged_tensor.ipynb | 344 +++--
site/ko/guide/saved_model.ipynb | 185 ++-
site/ko/guide/tensor.ipynb | 296 ++--
site/ko/guide/tf_numpy.ipynb | 143 +-
site/ko/guide/tf_numpy_type_promotion.ipynb | 1193 +++++++++++++++++
site/ko/guide/tpu.ipynb | 114 +-
site/ko/guide/versions.md | 19 +-
site/ko/hub/common_saved_model_apis/images.md | 7 +-
site/ko/hub/common_saved_model_apis/text.md | 4 +-
site/ko/hub/installation.md | 8 +-
site/ko/hub/migration_tf2.md | 8 +-
site/ko/hub/model_compatibility.md | 2 +-
53 files changed, 6172 insertions(+), 1489 deletions(-)
create mode 100644 site/ko/agents/overview.md
create mode 100644 site/ko/guide/jax2tf.ipynb
create mode 100644 site/ko/guide/tf_numpy_type_promotion.ipynb
diff --git a/site/ko/agents/overview.md b/site/ko/agents/overview.md
new file mode 100644
index 0000000000..282eead7cd
--- /dev/null
+++ b/site/ko/agents/overview.md
@@ -0,0 +1,128 @@
+# TensorFlow 에이전트
+
+**TensorFlow를 이용한 강화 학습(Reinforcement Learning)**
+
+TF-Agents는 수정 및 확장할 수 있는 잘 테스트된 모듈식 구성 요소를 제공하여 새로운 RL 알고리즘을 보다 쉽게 설계, 구현 및 테스트할 수 있게 해줍니다. 우수한 테스트 통합 및 벤치마킹으로 빠른 코드 반복이 가능합니다.
+
+시작하려면 [튜토리얼](/tutorials) 중 하나를 확인하는 것이 좋습니다.
+
+## 설치
+
+TF-Agents는 야간 빌드와 안정적인 빌드를 게시합니다. 릴리스 목록은 아래 릴리스 섹션을 참조하세요. 아래 명령은 TF-Agents의 야간 버전과 안정적인 버전을 [pypi.org](https://pypi.org) 및 GitHub 클론에서 설치하는 방법을 설명합니다.
+
+> 경고: 일반적으로 많이 사용되는 Reverb(리플레이 버퍼)를 사용하는 경우 TF-Agents는 Linux에서만 작동합니다.
+
+> 참고: Python 3.11에서는 pygame 2.1.3 이상이 필요합니다.
+
+### 안정적인 빌드
+
+가장 최근의 안정적인 릴리스를 설치하려면 아래 명령을 실행하세요. 릴리스에 대한 API 문서는 [tensorflow.org](https://www.tensorflow.org/agents/api_docs/python/tf_agents)에 있습니다.
+
+```shell
+$ pip install --user tf-agents[reverb]
+
+# Use this tag to get the matching examples and colabs. 
+$ git clone https://github.com/tensorflow/agents.git
+$ cd agents
+$ git checkout v0.9.0
+```
+
+pip 종속성 검사에서 호환되지 않는 것으로 플래그가 지정된 TensorFlow 또는 [Reverb](https://github.com/deepmind/reverb) 버전과 함께 TF-Agents를 설치하려면, 자신의 책임 하에 아래 패턴을 사용하세요.
+
+```shell
+$ pip install --user tensorflow
+$ pip install --user dm-reverb
+$ pip install --user tf-agents
+```
+
+TensorFlow 1.15 또는 2.0과 함께 TF-Agents를 사용하려면 버전 0.3.0을 설치하세요.
+
+```shell
+# Newer versions of tensorflow-probability require newer versions of TensorFlow.
+$ pip install tensorflow-probability==0.8.0
+$ pip install tf-agents==0.3.0
+```
+
+### 야간 빌드
+
+야간 빌드에는 새로운 기능이 포함되어 있지만, 버전 릴리스보다 안정성이 떨어질 수 있습니다. 야간 빌드는 `tf-agents-nightly`로 푸시됩니다. 야간 버전의 TF-Agents에는 야간 버전의 TensorFlow(`tf-nightly`)와 TensorFlow Probability(`tfp-nightly`)를 함께 설치하는 것이 좋습니다.
+
+야간 빌드 버전을 설치하려면 다음을 실행하세요.
+
+```shell
+# `--force-reinstall` helps guarantee the right versions.
+$ pip install --user --force-reinstall tf-nightly
+$ pip install --user --force-reinstall tfp-nightly
+$ pip install --user --force-reinstall dm-reverb-nightly
+
+# Installing with the `--upgrade` flag ensures you'll get the latest version.
+$ pip install --user --upgrade tf-agents-nightly
+```
+
+### GitHub에서 복제하기
+
+리포지토리를 복제한 후 `pip install -e .[tests]`를 실행하여 종속성을 설치할 수 있습니다. TensorFlow는 별도로 설치해야 합니다(예: `pip install --user tf-nightly`).
+
+
+
+## 기여하기
+
+저희는 여러분과 협력하길 원합니다! 기여 방법에 대한 지침은 [`CONTRIBUTING.md`](https://github.com/tensorflow/agents/blob/master/CONTRIBUTING.md)를 참조하세요. 이 프로젝트는 TensorFlow의 [행동 강령](https://github.com/tensorflow/agents/blob/master/CODE_OF_CONDUCT.md)을 준수합니다. 참여할 때는 해당 행동 강령을 준수해야 합니다.
+
+
+
+## 릴리스
+
+TF-Agents에는 안정적인 릴리스와 나이틀리 릴리스가 있습니다. 나이틀리 릴리스는 대체적으로 훌륭하지만 유동적인 업스트림 라이브러리로 인해 문제가 발생할 수 있습니다. 아래 표에는 각 TF-Agents 릴리스와 호환되는 TensorFlow 버전이 나와 있습니다. 주요 릴리스 버전:
+
+- 0.16.0은 Python 3.11을 지원하는 첫 번째 버전입니다.
+- 0.15.0은 Python 3.7과 호환되는 마지막 릴리스입니다.
+- 1.19 미만의 numpy를 사용하는 경우 TF-Agents 0.15.0 이하 버전을 사용하세요.
+- 0.9.0은 Python 3.6과 호환되는 마지막 릴리스입니다. 
+- 0.3.0은 Python 2.x와 호환되는 마지막 릴리스입니다.
+
+릴리스 | 분기/태그 | TensorFlow 버전 | dm-reverb 버전
+--- | --- | --- | ---
+야간 | [master](https://github.com/tensorflow/agents) | tf-nightly | dm-reverb-nightly
+0.17.0 | [v0.17.0](https://github.com/tensorflow/agents/tree/v0.17.0) | 2.13.0 | 0.12.0
+0.16.0 | [v0.16.0](https://github.com/tensorflow/agents/tree/v0.16.0) | 2.12.0 | 0.11.0
+0.15.0 | [v0.15.0](https://github.com/tensorflow/agents/tree/v0.15.0) | 2.11.0 | 0.10.0
+0.14.0 | [v0.14.0](https://github.com/tensorflow/agents/tree/v0.14.0) | 2.10.0 | 0.9.0
+0.13.0 | [v0.13.0](https://github.com/tensorflow/agents/tree/v0.13.0) | 2.9.0 | 0.8.0
+0.12.0 | [v0.12.0](https://github.com/tensorflow/agents/tree/v0.12.0) | 2.8.0 | 0.7.0
+0.11.0 | [v0.11.0](https://github.com/tensorflow/agents/tree/v0.11.0) | 2.7.0 | 0.6.0
+0.10.0 | [v0.10.0](https://github.com/tensorflow/agents/tree/v0.10.0) | 2.6.0 |
+0.9.0 | [v0.9.0](https://github.com/tensorflow/agents/tree/v0.9.0) | 2.6.0 |
+0.8.0 | [v0.8.0](https://github.com/tensorflow/agents/tree/v0.8.0) | 2.5.0 |
+0.7.1 | [v0.7.1](https://github.com/tensorflow/agents/tree/v0.7.1) | 2.4.0 |
+0.6.0 | [v0.6.0](https://github.com/tensorflow/agents/tree/v0.6.0) | 2.3.0 |
+0.5.0 | [v0.5.0](https://github.com/tensorflow/agents/tree/v0.5.0) | 2.2.0 |
+0.4.0 | [v0.4.0](https://github.com/tensorflow/agents/tree/v0.4.0) | 2.1.0 |
+0.3.0 | [v0.3.0](https://github.com/tensorflow/agents/tree/v0.3.0) | 1.15.0 및 2.0.0 |
+
+
+
+## 원칙
+
+이 프로젝트는 [Google의 AI 원칙](https://github.com/tensorflow/agents/blob/master/PRINCIPLES.md)을 준수합니다. 이 프로젝트에 참여, 사용 또는 기여함으로써 사용자는 이러한 원칙을 준수해야 합니다.
+
+
+
+## 인용
+
+이 코드를 사용하는 경우, 다음과 같이 인용하세요. 
+ +``` +@misc{TFAgents, + title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow}, + author = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and + Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and + Ekaterina Gonina and Neal Wu and Efi Kokiopoulou and Luciano Sbaiz and + Jamie Smith and Gábor Bartók and Jesse Berent and Chris Harris and + Vincent Vanhoucke and Eugene Brevdo}, + howpublished = {\url{https://github.com/tensorflow/agents}}, + url = "https://github.com/tensorflow/agents", + year = 2018, + note = "[Online; accessed 25-June-2019]" +} +``` diff --git a/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb b/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb index ba7ddb86d9..a712037e9a 100644 --- a/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb +++ b/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "W7rEsKyWcxmu" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors.\n" + "##### Copyright 2023 The TF-Agents Authors.\n" ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -41,9 +43,10 @@ "\n", "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행 깃허브(GitHub) 소스 보기 노트북 다운로드 Google Colab에서 실행하기\n", + "GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -86,7 +89,9 @@ "metadata": { "id": "Jv668dKvZmka" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "!sudo apt-get update\n", @@ -103,7 +108,9 @@ "metadata": { "id": "bQMULMo1dCEn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -148,7 +155,9 @@ "metadata": { "id": "AwIqiLdDCX9Q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "# Set up a virtual display for rendering OpenAI gym environments.\n", @@ -183,7 +192,9 @@ "cellView": "both", "id": "yxFs6QU0dGI_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_name = \"CartPole-v1\"\n", "\n", @@ -215,7 +226,9 @@ "metadata": { "id": "fZwK4d-bdI7Z" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -239,7 +252,9 @@ "metadata": { "id": "cUrFl83ieOvV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title\n", "q_net = q_network.QNetwork(\n", @@ -276,7 +291,9 @@ "metadata": { "id": "XiT1p78HdtSe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title\n", "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", @@ -317,7 +334,9 @@ "metadata": { "id": "-rDC3leXdvm_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title\n", "# (Optional) Optimize by wrapping some of the code in a graph using TF function.\n", @@ -351,7 +370,9 @@ "metadata": { "id": "ZY6w-fcieFDW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title\n", "def embed_gif(gif_buffer):\n", @@ -391,7 +412,9 @@ "metadata": { "id": "FpmPLXWbdG70" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print ('global_step:')\n", "print (global_step)\n", @@ -424,7 +447,9 @@ "metadata": { "id": "2DzCJZ-6YYbX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "checkpoint_dir = os.path.join(tempdir, 'checkpoint')\n", "train_checkpointer = 
common.Checkpointer(\n", @@ -452,7 +477,9 @@ "metadata": { "id": "8mDZ_YMUWEY9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "policy_dir = os.path.join(tempdir, 'policy')\n", "tf_policy_saver = policy_saver.PolicySaver(agent.policy)" @@ -473,7 +500,9 @@ "metadata": { "id": "ql_D1iq8dl0X" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "print('Training one iteration....')\n", @@ -495,7 +524,9 @@ "metadata": { "id": "usDm_Wpsl0bu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_checkpointer.save(global_step)" ] @@ -517,7 +548,9 @@ "metadata": { "id": "l6l3EB-Yhwmz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_checkpointer.initialize_or_restore()\n", "global_step = tf.compat.v1.train.get_global_step()" @@ -538,7 +571,9 @@ "metadata": { "id": "3xHz09WCWjwA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf_policy_saver.save(policy_dir)" ] @@ -560,7 +595,9 @@ "metadata": { "id": "J6T5KLTMh9ZB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "saved_policy = tf.saved_model.load(policy_dir)\n", "run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)" @@ -585,7 +622,9 @@ "metadata": { "id": "fd5Cj7DVjfH4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Create zip file and upload zip file (double-click to see the code)\n", "def create_zip_file(dirname, base_filename):\n", @@ -619,7 +658,9 @@ "metadata": { "id": "nhR8NeWzF4fe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_checkpointer.save(global_step)\n", "checkpoint_zip_filename = create_zip_file(checkpoint_dir, os.path.join(tempdir, 'exported_cp'))" @@ -640,7 +681,9 @@ "metadata": { "id": "upFxb5k8b4MC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "if files is not None:\n", @@ -662,7 +705,9 @@ "metadata": { "id": "kg-bKgMsF-H_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "upload_and_unzip_file_to(checkpoint_dir)\n", @@ 
-676,7 +721,7 @@ "id": "uXrNax5Zk3vF" }, "source": [ - "체크포인트 디렉토리를 업로드한 후에는 '한 번 반복 훈련하기'로 돌아가서 훈련을 계속하거나 '비디오 생성하기'으로 돌아가서 로드된 정책의 성능을 확인합니다." + "체크포인트 디렉터리를 업로드한 후에는 '한 번 반복 훈련하기'로 돌아가서 훈련을 계속하거나 '비디오 생성하기'로 돌아가서 로드된 정책의 성능을 확인합니다." ] }, { @@ -694,7 +739,9 @@ "metadata": { "id": "s7qMn6D8eiIA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf_policy_saver.save(policy_dir)\n", "policy_zip_filename = create_zip_file(policy_dir, os.path.join(tempdir, 'exported_policy'))" @@ -706,7 +753,9 @@ "metadata": { "id": "rrGvCEXwerJj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "if files is not None:\n", @@ -728,7 +777,9 @@ "metadata": { "id": "bgWLimRlXy5z" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "upload_and_unzip_file_to(policy_dir)\n", @@ -755,7 +806,9 @@ "metadata": { "id": "iUC5XuLf1jF7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "eager_py_policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(\n", " policy_dir, eval_py_env.time_step_spec(), eval_py_env.action_spec())\n", @@ -781,7 +834,9 @@ "metadata": { "id": "z9zonVBJ0z46" }, - "outputs": [], + "outputs": [ + + ], "source": [ "converter = tf.lite.TFLiteConverter.from_saved_model(policy_dir, signature_keys=[\"action\"])\n", "tflite_policy = converter.convert()\n", @@ -806,7 +861,9 @@ "metadata": { "id": "4GeUSWyZxMlN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import numpy as np\n", "interpreter = tf.lite.Interpreter(os.path.join(tempdir, 'policy.tflite'))\n", @@ -821,7 +878,9 @@ "metadata": { "id": "eVVrdTbRxnOC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "policy_runner(**{\n", " '0/discount':tf.constant(0.0),\n", @@ -833,7 +892,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "10_checkpointer_policysaver_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/1_dqn_tutorial.ipynb 
b/site/ko/agents/tutorials/1_dqn_tutorial.ipynb index 284a866446..65d991e9f3 100644 --- a/site/ko/agents/tutorials/1_dqn_tutorial.ipynb +++ b/site/ko/agents/tutorials/1_dqn_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "klGNgWREsvQv" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -40,8 +42,7 @@ "# TF-Agents로 DQN(Deep Q Network) 훈련하기\n", "\n", "\n", - " \n", + " \n", " \n", " \n", " \n", @@ -96,7 +97,9 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -112,7 +115,9 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import, division, print_function\n", "\n", @@ -150,7 +155,9 @@ "metadata": { "id": "J6HsdS5GbSjd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Set up a virtual display for rendering OpenAI gym environments.\n", "display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()" @@ -162,7 +169,9 @@ "metadata": { "id": "NspmzG4nP3b9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.version.VERSION" ] @@ -182,7 +191,9 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_iterations = 20000 # @param {type:\"integer\"}\n", "\n", @@ -217,7 +228,9 @@ "metadata": { "id": "pYEz-S9gEv2-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_name = 'CartPole-v0'\n", "env = suite_gym.load(env_name)" @@ -238,7 +251,9 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "env.reset()\n", @@ -262,7 +277,9 @@ 
"metadata": { "id": "exDv57iHfwQV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)" @@ -274,7 +291,9 @@ "metadata": { "id": "UxiSyCbBUQPi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Reward Spec:')\n", "print(env.time_step_spec().reward)" @@ -295,7 +314,9 @@ "metadata": { "id": "bttJ4uxZUQBr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Action Spec:')\n", "print(env.action_spec())" @@ -324,7 +345,9 @@ "metadata": { "id": "V2UGR5t_iZX-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "time_step = env.reset()\n", "print('Time step:')\n", @@ -352,7 +375,9 @@ "metadata": { "id": "N7brXNIGWXjC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)" @@ -375,7 +400,9 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_env = tf_py_environment.TFPyEnvironment(train_py_env)\n", "eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)" @@ -411,7 +438,9 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fc_layer_params = (100, 50)\n", "action_tensor_spec = tensor_spec.from_spec(env.action_spec())\n", @@ -454,7 +483,9 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n", "\n", @@ -498,7 +529,9 @@ "metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "eval_policy = agent.policy\n", "collect_policy = agent.collect_policy" @@ -519,7 +552,9 @@ "metadata": { "id": "HE37-UCIrE69" }, - "outputs": [], + "outputs": [ + + ], "source": [ "random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),\n", " train_env.action_spec())" @@ -544,7 +579,9 @@ "metadata": { "id": "5gCcpXswVAxk" }, - "outputs": [], + "outputs": [ + + ], "source": [ 
"example_environment = tf_py_environment.TFPyEnvironment(\n", " suite_gym.load('CartPole-v0'))" @@ -556,7 +593,9 @@ "metadata": { "id": "D4DHZtq3Ndis" }, - "outputs": [], + "outputs": [ + + ], "source": [ "time_step = example_environment.reset()" ] @@ -567,7 +606,9 @@ "metadata": { "id": "PRFqAUzpNaAW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "random_policy.action(time_step)" ] @@ -591,7 +632,9 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -631,7 +674,9 @@ "metadata": { "id": "9bgU6Q6BZ8Bp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "compute_avg_return(eval_env, random_policy, num_eval_episodes)" ] @@ -655,7 +700,9 @@ "metadata": { "id": "vX2zGUWJGWAl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "table_name = 'uniform_table'\n", "replay_buffer_signature = tensor_spec.from_spec(\n", @@ -700,7 +747,9 @@ "metadata": { "id": "_IZ-3HcqgE1z" }, - "outputs": [], + "outputs": [ + + ], "source": [ "agent.collect_data_spec" ] @@ -711,7 +760,9 @@ "metadata": { "id": "sy6g1tGcfRlw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "agent.collect_data_spec._fields" ] @@ -735,7 +786,9 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "py_driver.PyDriver(\n", @@ -761,7 +814,9 @@ "metadata": { "id": "4wZnLu2ViO4E" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# For the curious:\n", "# Uncomment to peel one of these off and inspect it.\n", @@ -787,7 +842,9 @@ "metadata": { "id": "ba7bilizt_qW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Dataset generates trajectories with shape [Bx2x...]\n", "dataset = replay_buffer.as_dataset(\n", @@ -804,7 +861,9 @@ "metadata": { "id": "K13AST-2ppOq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "iterator = iter(dataset)\n", "print(iterator)" @@ -816,7 +875,9 @@ 
"metadata": { "id": "Th5w5Sff0b16" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# For the curious:\n", "# Uncomment to see what the dataset iterator is feeding to the agent.\n", @@ -850,7 +911,9 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -927,7 +990,9 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -964,7 +1029,9 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -994,7 +1061,9 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):\n", " filename = filename + \".mp4\"\n", @@ -1026,7 +1095,9 @@ "metadata": { "id": "pJZIdC37yNH4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "create_policy_eval_video(random_policy, \"random-agent\")" ] @@ -1034,7 +1105,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "1_dqn_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/2_environments_tutorial.ipynb b/site/ko/agents/tutorials/2_environments_tutorial.ipynb index af78dcddab..8a4d919ee7 100644 --- a/site/ko/agents/tutorials/2_environments_tutorial.ipynb +++ b/site/ko/agents/tutorials/2_environments_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "Ma19Ks2CTDbZ" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -42,8 +44,7 @@ "
TensorFlow.org에서 보기\n", - " TensorFlow.org에서 보기 Google Colab에서 실행GitHub에서 소스 보기노트북 다운로드
\n", " \n", " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서보기Google Colab에서 실행 깃허브(GitHub)에서 소스 보기\n", - " 깃허브(GitHub)에서 소스 보기 노트북 다운로드
" ] @@ -94,9 +95,10 @@ "metadata": { "id": "KKU2iY_7at8Y" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "!pip install \"gym>=0.21.0\"\n", "!pip install tf-agents[reverb]\n" ] }, @@ -106,7 +108,9 @@ "metadata": { "id": "1ZAoFNwnRbKK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -159,7 +163,9 @@ "metadata": { "id": "GlD2Dd2vUTtg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class PyEnvironment(object):\n", "\n", @@ -230,7 +236,9 @@ "metadata": { "id": "1kBPE5T-nb2-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "environment = suite_gym.load('CartPole-v0')\n", "print('action_spec:', environment.action_spec())\n", @@ -255,7 +263,9 @@ "metadata": { "id": "AzIbOJ0-0y12" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action = np.array(1, dtype=np.int32)\n", "time_step = environment.reset()\n", @@ -294,7 +304,9 @@ "metadata": { "id": "9HD0cDykPL6I" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class CardGameEnv(py_environment.PyEnvironment):\n", "\n", @@ -358,7 +370,9 @@ "metadata": { "id": "6Hhm-5R7spVx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "environment = CardGameEnv()\n", "utils.validate_py_environment(environment, episodes=5)" @@ -379,7 +393,9 @@ "metadata": { "id": "FILylafAkMEx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "get_new_card_action = np.array(0, dtype=np.int32)\n", "end_round_action = np.array(1, dtype=np.int32)\n", @@ -441,7 +457,9 @@ "metadata": { "id": "AJxEoZ4HoyjR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('Pendulum-v1')\n", "print('Action Spec:', env.action_spec())\n", @@ -488,7 +506,9 @@ "metadata": { "id": "WKBDDZqKTxsL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class TFEnvironment(object):\n", "\n", @@ -576,7 +596,9 @@ "metadata": { "id": "UYerqyNfnVRL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = 
suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -619,7 +641,9 @@ "metadata": { "id": "gdvFqUqbdB7u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "\n", @@ -657,7 +681,9 @@ "metadata": { "id": "t561kUXMk-KM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -690,7 +716,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "2_environments_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/3_policies_tutorial.ipynb b/site/ko/agents/tutorials/3_policies_tutorial.ipynb index 2e1371773d..9eb59e7291 100644 --- a/site/ko/agents/tutorials/3_policies_tutorial.ipynb +++ b/site/ko/agents/tutorials/3_policies_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "1Pi_B2cvdBiW" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -95,7 +97,9 @@ "metadata": { "id": "xsLTHlVdiZP3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents" ] @@ -106,7 +110,9 @@ "metadata": { "id": "sdvop99JlYSM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -158,7 +164,9 @@ "metadata": { "id": "4PqNEVls1uqc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Base(object):\n", "\n", @@ -242,7 +250,9 @@ "metadata": { "id": "QX8M4Nl-_0uu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)\n", "my_random_py_policy = random_py_policy.RandomPyPolicy(time_step_spec=None,\n", @@ -278,7 +288,9 @@ "metadata": { "id": "_mZ244m4BUYv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)\n", "action_script = [(1, np.array([5, 2], dtype=np.int32)), \n", @@ -340,7 +352,9 @@ "metadata": { "id": "nZ3pe5G4rjrW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action_spec = tensor_spec.BoundedTensorSpec(\n", " (2,), tf.float32, minimum=-1, maximum=3)\n", @@ -392,7 +406,9 @@ "metadata": { "id": "S2wFgzJFteQX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class ActionNet(network.Network):\n", "\n", @@ -436,7 +452,9 @@ "metadata": { "id": "0UGmFTe7a5VQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)\n", "time_step_spec = ts.time_step_spec(input_tensor_spec)\n", @@ -468,7 +486,9 @@ "metadata": { "id": "fvsIsR0VfOA4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batch_size = 2\n", "observations = tf.ones([2] + 
time_step_spec.observation.shape.as_list())\n", @@ -508,7 +528,9 @@ "metadata": { "id": "sSYzC9LobVsK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class ActionDistributionNet(ActionNet):\n", "\n", @@ -577,7 +599,9 @@ "metadata": { "id": "Haakr2VvjqKC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)\n", "time_step_spec = ts.time_step_spec(input_tensor_spec)\n", @@ -661,7 +685,9 @@ "metadata": { "id": "xsRPBeLZtXvu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "my_greedy_policy = greedy_policy.GreedyPolicy(my_q_policy)\n", "\n", @@ -677,7 +703,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "3_policies_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/4_drivers_tutorial.ipynb b/site/ko/agents/tutorials/4_drivers_tutorial.ipynb index 2deaeea8de..1e22bbd468 100644 --- a/site/ko/agents/tutorials/4_drivers_tutorial.ipynb +++ b/site/ko/agents/tutorials/4_drivers_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "beObUOFyuRjT" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -86,7 +88,9 @@ "metadata": { "id": "TnE2CgilrngG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents\n" ] @@ -97,7 +101,9 @@ "metadata": { "id": "whYNP894FSkA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -182,7 +188,9 @@ "metadata": { "id": "Dj4_-77_5ExP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(), \n", @@ -220,7 +228,9 @@ "metadata": { "id": "WC4ba3ObSceA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -249,7 +259,9 @@ "metadata": { "id": "Sz5jhHnU0fX1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Continue running from previous state\n", "final_time_step, _ = driver.run(final_time_step, policy_state)\n", @@ -262,7 +274,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "4_drivers_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb b/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb index d5794bfd72..97f28d4de3 100644 --- a/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb +++ b/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "beObUOFyuRjT" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -84,7 +86,9 @@ "metadata": { "id": "TnE2CgilrngG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents\n" ] @@ -95,7 +99,9 @@ "metadata": { "id": "whYNP894FSkA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -211,7 +217,9 @@ "metadata": { "id": "Dj4_-77_5ExP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "data_spec = (\n", " tf.TensorSpec([3], tf.float32, 'action'),\n", @@ -249,7 +257,9 @@ "metadata": { "id": "nOvkp4vJhBOT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action = tf.constant(1 * np.ones(\n", " data_spec[0].shape.as_list(), dtype=np.float32))\n", @@ -288,7 +298,9 @@ "metadata": { "id": "IlQ1eGhohM3M" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# add more items to the buffer before reading\n", "for _ in range(5):\n", @@ -338,7 +350,9 @@ "metadata": { "id": "F4neLPpL25wI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "replay_buffer_capacity = 1000*32 # same capacity as the TFUniformReplayBuffer\n", "\n", @@ -374,7 +388,9 @@ "metadata": { "id": "pCbTDO3Z5UCS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -427,7 +443,9 @@ "metadata": { "id": "gg8SUyXXnSMr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Read the replay buffer as a Dataset,\n", "# read batches of 4 elements, each with 2 timesteps:\n", @@ -447,7 +465,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "5_replay_buffers_tutorial.ipynb", "toc_visible": true }, diff --git 
a/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb b/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb index 313ff03964..0d2f4dc478 100644 --- a/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb +++ b/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "klGNgWREsvQv" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -40,11 +42,9 @@ "# REINFORCE 에이전트\n", "\n", "\n", - " \n", + " \n", " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기\n", - " TensorFlow.org에서 보기 Google Colab에서 실행 깃허브(GitHub) 소스 보기\n", - " 깃허브(GitHub) 소스 보기 노트북 다운로드
" ] @@ -95,7 +95,9 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -111,7 +113,9 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -159,7 +163,9 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_name = \"CartPole-v0\" # @param {type:\"string\"}\n", "num_iterations = 250 # @param {type:\"integer\"}\n", @@ -193,7 +199,9 @@ "metadata": { "id": "pYEz-S9gEv2-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_gym.load(env_name)" ] @@ -213,7 +221,9 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "env.reset()\n", @@ -235,7 +245,9 @@ "metadata": { "id": "exDv57iHfwQV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)\n", @@ -258,7 +270,9 @@ "metadata": { "id": "V2UGR5t_iZX-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "time_step = env.reset()\n", "print('Time step:')\n", @@ -286,7 +300,9 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -316,7 +332,9 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "actor_net = actor_distribution_network.ActorDistributionNetwork(\n", " train_env.observation_spec(),\n", @@ -339,7 +357,9 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n", "\n", @@ -374,7 +394,9 @@ "metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "eval_policy = tf_agent.policy\n", 
"collect_policy = tf_agent.collect_policy" @@ -397,7 +419,9 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -441,7 +465,9 @@ "metadata": { "id": "vX2zGUWJGWAl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "table_name = 'uniform_table'\n", "replay_buffer_signature = tensor_spec.from_spec(\n", @@ -497,7 +523,9 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -532,7 +560,9 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -600,7 +630,9 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -635,7 +667,9 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -665,7 +699,9 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_episodes = 3\n", "video_filename = 'imageio.mp4'\n", @@ -685,7 +721,9 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "6_reinforce_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb b/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb index 2a66dd52fe..c78144bf47 100644 --- a/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb +++ b/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "klGNgWREsvQv" }, "source": [ - "**Copyright 2021 The TF-Agents Authors.**" + "**Copyright 2023 The TF-Agents Authors.**" ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed 
under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -88,7 +90,9 @@ "metadata": { "id": "fskoLlB-AZ9j" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg\n", @@ -122,7 +126,9 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import base64\n", "import imageio\n", @@ -171,7 +177,9 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_name = \"MinitaurBulletEnv-v0\" # @param {type:\"string\"}\n", "\n", @@ -223,7 +231,9 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env = suite_pybullet.load(env_name)\n", "env.reset()\n", @@ -247,7 +257,9 @@ "metadata": { "id": "exDv57iHfwQV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)\n", @@ -272,7 +284,9 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "collect_env = suite_pybullet.load(env_name)\n", "eval_env = suite_pybullet.load(env_name)" @@ -331,7 +345,9 @@ "metadata": { "id": "ff5ZZRZI15ds" }, - "outputs": [], + "outputs": [ + + ], "source": [ "use_gpu = True #@param {type:\"boolean\"}\n", "\n", @@ -366,7 +382,9 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "observation_spec, action_spec, time_step_spec = (\n", " spec_utils.get_tensor_specs(collect_env))\n", @@ -398,7 +416,9 @@ "metadata": { "id": "TB5Y3Oub4u7f" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with strategy.scope():\n", " actor_net = actor_distribution_network.ActorDistributionNetwork(\n", @@ -424,7 +444,9 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with strategy.scope():\n", " train_step = train_utils.create_train_step()\n", @@ -473,7 +495,9 @@ "metadata": { 
"id": "vX2zGUWJGWAl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "table_name = 'uniform_table'\n", "table = reverb.Table(\n", @@ -503,7 +527,9 @@ "metadata": { "id": "xVLUxyUo7HQR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "reverb_replay = reverb_replay_buffer.ReverbReplayBuffer(\n", " tf_agent.collect_data_spec,\n", @@ -527,7 +553,9 @@ "metadata": { "id": "ba7bilizt_qW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = reverb_replay.as_dataset(\n", " sample_batch_size=batch_size, num_steps=2).prefetch(50)\n", @@ -556,7 +584,9 @@ "metadata": { "id": "yq7JE8IwFe0E" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf_eval_policy = tf_agent.policy\n", "eval_policy = py_tf_eager_policy.PyTFEagerPolicy(\n", @@ -569,7 +599,9 @@ "metadata": { "id": "f_A4rZveEQzW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf_collect_policy = tf_agent.collect_policy\n", "collect_policy = py_tf_eager_policy.PyTFEagerPolicy(\n", @@ -591,7 +623,9 @@ "metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "random_policy = random_py_policy.RandomPyPolicy(\n", " collect_env.time_step_spec(), collect_env.action_spec())" @@ -630,7 +664,9 @@ "metadata": { "id": "HbyGmdiNfNDc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rb_observer = reverb_utils.ReverbAddTrajectoryObserver(\n", " reverb_replay.py_client,\n", @@ -654,7 +690,9 @@ "metadata": { "id": "ZGq3SY0kKwsa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "initial_collect_actor = actor.Actor(\n", " collect_env,\n", @@ -680,7 +718,9 @@ "metadata": { "id": "A6ooXyk0FZ5j" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_step_metric = py_metrics.EnvironmentSteps()\n", "collect_actor = actor.Actor(\n", @@ -708,7 +748,9 @@ "metadata": { "id": "vHY2BT5lFhgL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "eval_actor = actor.Actor(\n", " eval_env,\n", @@ -737,7 +779,9 @@ "metadata": { "id": "gi37YicSFTfF" }, - "outputs": [], + 
"outputs": [ + + ], "source": [ "saved_model_dir = os.path.join(tempdir, learner.POLICY_SAVED_MODEL_DIR)\n", "\n", @@ -782,7 +826,9 @@ "metadata": { "id": "83iMSHUC71RG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def get_eval_metrics():\n", " eval_actor.run()\n", @@ -800,7 +846,9 @@ "metadata": { "id": "jnOMvX_eZvOW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def log_eval_metrics(step, metrics):\n", " eval_results = (', ').join(\n", @@ -836,7 +884,9 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -897,7 +947,9 @@ "metadata": { "id": "rXKzyGt72HS8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -932,7 +984,9 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -962,7 +1016,9 @@ "metadata": { "id": "PSgaQN1nXT-h" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_episodes = 3\n", "video_filename = 'sac_minitaur.mp4'\n", @@ -981,7 +1037,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "7_SAC_minitaur_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/8_networks_tutorial.ipynb b/site/ko/agents/tutorials/8_networks_tutorial.ipynb index a638858e46..b8ec8d7539 100644 --- a/site/ko/agents/tutorials/8_networks_tutorial.ipynb +++ b/site/ko/agents/tutorials/8_networks_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "1Pi_B2cvdBiW" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -99,7 +101,9 @@ "metadata": { "id": "xsLTHlVdiZP3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents" ] @@ -110,7 +114,9 @@ "metadata": { "id": "sdvop99JlYSM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -204,7 +210,9 @@ "metadata": { "id": "Zp0TjAJhYo4s" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class ActorNetwork(network.Network):\n", "\n", @@ -283,7 +291,9 @@ "metadata": { "id": "E2XoNuuD66s5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)\n", "observation_spec = {\n", @@ -313,7 +323,9 @@ "metadata": { "id": "r9U6JVevTAJw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "preprocessing_layers = {\n", " 'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),\n", @@ -342,7 +354,9 @@ "metadata": { "id": "JOkkeu7vXoei" }, - "outputs": [], + "outputs": [ + + ], "source": [ "time_step = tf_env.reset()\n", "actor(time_step.observation, time_step.step_type)" @@ -360,7 +374,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "8_networks_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/9_c51_tutorial.ipynb b/site/ko/agents/tutorials/9_c51_tutorial.ipynb index df53c1dc39..37eef5b3da 100644 --- a/site/ko/agents/tutorials/9_c51_tutorial.ipynb +++ b/site/ko/agents/tutorials/9_c51_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "klGNgWREsvQv" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -40,11 +42,9 @@ "# DQN C51/레인보우\n", "\n", "\n", - " \n", + " \n", " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기\n", - " TensorFlow.org에서 보기 Google Colab에서 실행 깃허브(GitHub) 소스 보기\n", - " 깃허브(GitHub) 소스 보기 노트북 다운로드
" ] @@ -95,7 +95,9 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -111,7 +113,9 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -158,7 +162,9 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "env_name = \"CartPole-v1\" # @param {type:\"string\"}\n", "num_iterations = 15000 # @param {type:\"integer\"}\n", @@ -200,7 +206,9 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -236,7 +244,9 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "categorical_q_net = categorical_q_network.CategoricalQNetwork(\n", " train_env.observation_spec(),\n", @@ -262,7 +272,9 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)\n", "\n", @@ -320,7 +332,9 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -367,7 +381,9 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", @@ -418,7 +434,9 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -483,7 +501,9 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -518,7 +538,9 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [], + 
"outputs": [ + + ], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -548,7 +570,9 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_episodes = 3\n", "video_filename = 'imageio.mp4'\n", @@ -578,7 +602,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "9_c51_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/bandits_tutorial.ipynb b/site/ko/agents/tutorials/bandits_tutorial.ipynb index 58f78db50e..d1ac22a578 100644 --- a/site/ko/agents/tutorials/bandits_tutorial.ipynb +++ b/site/ko/agents/tutorials/bandits_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "klGNgWREsvQv" }, "source": [ - "##### Copyright 2020 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -80,7 +82,9 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents" ] @@ -100,7 +104,9 @@ "metadata": { "id": "3oCS94Z83Jo2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import abc\n", "import numpy as np\n", @@ -187,7 +193,9 @@ "metadata": { "id": "TTaG2ZapQvHX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class BanditPyEnvironment(py_environment.PyEnvironment):\n", "\n", @@ -261,7 +269,9 @@ "metadata": { "id": "YV6DhsSi227-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class SimplePyEnvironment(BanditPyEnvironment):\n", "\n", @@ -295,7 +305,9 @@ "metadata": { "id": "Eo_uwSz2gAKX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "environment = SimplePyEnvironment()\n", "observation = environment.reset().observation\n", @@ -332,7 +344,9 @@ "metadata": { "id": 
"IPPpwSi3EtWz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf_environment = tf_py_environment.TFPyEnvironment(environment)" ] @@ -367,7 +381,9 @@ "metadata": { "id": "VpMZlplNK5ND" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class SignPolicy(tf_policy.TFPolicy):\n", " def __init__(self):\n", @@ -407,7 +423,9 @@ "metadata": { "id": "Z0_5vMDCVZWT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "sign_policy = SignPolicy()\n", "\n", @@ -437,7 +455,9 @@ "metadata": { "id": "CiB935of-wVv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "step = tf_environment.reset()\n", "action = 1\n", @@ -494,7 +514,9 @@ "metadata": { "id": "fte7-Mr8O0QR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class TwoWayPyEnvironment(BanditPyEnvironment):\n", "\n", @@ -551,7 +573,9 @@ "metadata": { "id": "Srm2jsGHVM8N" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class TwoWaySignPolicy(tf_policy.TFPolicy):\n", " def __init__(self, situation):\n", @@ -612,7 +636,9 @@ "metadata": { "id": "7f-0W0cMbS_z" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class SignAgent(tf_agent.TFAgent):\n", " def __init__(self):\n", @@ -692,7 +718,9 @@ "metadata": { "id": "gdSG1nv-HUJq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# We need to add another dimension here because the agent expects the\n", "# trajectory of shape [batch_size, time, ...], but in this tutorial we assume\n", @@ -732,7 +760,9 @@ "metadata": { "id": "LPx43dZgoyKg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "step = two_way_tf_environment.reset()\n", "for _ in range(10):\n", @@ -777,7 +807,9 @@ "metadata": { "id": "oEnXUwd-nZKl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Imports for example.\n", "from tf_agents.bandits.agents import lin_ucb_agent\n", @@ -813,7 +845,9 @@ "metadata": { "id": "gVa0hmQrpe6w" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batch_size = 2 # @param\n", "arm0_param = [-3, 0, 1, -2] # @param\n", @@ -869,7 +903,9 @@ 
"metadata": { "id": "p4XmGgIusj-K" }, - "outputs": [], + "outputs": [ + + ], "source": [ "observation_spec = tensor_spec.TensorSpec([4], tf.float32)\n", "time_step_spec = ts.time_step_spec(observation_spec)\n", @@ -904,7 +940,9 @@ "metadata": { "id": "cX7MiFhNu3_L" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def compute_optimal_reward(observation):\n", " expected_reward_for_arms = [\n", @@ -943,7 +981,9 @@ "metadata": { "id": "4Ggn45g62DWx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_iterations = 90 # @param\n", "steps_per_loop = 1 # @param\n", @@ -1006,7 +1046,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "bandits_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/intro_bandit.ipynb b/site/ko/agents/tutorials/intro_bandit.ipynb index e5c43667e1..460e872890 100644 --- a/site/ko/agents/tutorials/intro_bandit.ipynb +++ b/site/ko/agents/tutorials/intro_bandit.ipynb @@ -6,7 +6,7 @@ "id": "I1JiGtmRbLVp" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", diff --git a/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb b/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb index 056ba504a4..1361835391 100644 --- a/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb +++ b/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "nPjtEgqN4SjA" }, "source": [ - "##### Copyright 2021 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "6AZJOyCA4NpL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -138,7 +140,9 @@ "metadata": { "id": "QxiNIm5XSIIp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents" ] @@ -158,7 +162,9 @@ "metadata": { "id": "BbQXsoeKR2ui" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import functools\n", "import matplotlib.pyplot as plt\n", @@ -192,7 +198,9 @@ "metadata": { "id": "bfna8xm0MSCn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# The dimension of the global features.\n", "GLOBAL_DIM = 40 #@param {type:\"integer\"}\n", @@ -238,7 +246,9 @@ "metadata": { "id": "kfTa5Y4ZYjhO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def global_context_sampling_fn():\n", " \"\"\"This function generates a single global observation vector.\"\"\"\n", @@ -271,7 +281,9 @@ "metadata": { "id": "Ny-yikfQQi9l" }, - "outputs": [], + "outputs": [ + + ], "source": [ "per_arm_py_env = p_a_env.StationaryStochasticPerArmPyEnvironment(\n", " global_context_sampling_fn,\n", @@ -298,7 +310,9 @@ "metadata": { "id": "J8ZVqMU5AOzs" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('observation spec: ', per_arm_tf_env.observation_spec())\n", "print('\\nAn observation: ', per_arm_tf_env.reset().observation)\n", @@ -346,7 +360,9 @@ "metadata": { "id": "MqzA9Zi0Q2No" }, - "outputs": [], + "outputs": [ + + ], "source": [ "observation_spec = per_arm_tf_env.observation_spec()\n", "time_step_spec = ts.time_step_spec(observation_spec)\n", @@ -384,7 +400,9 @@ "metadata": { "id": "YQRZd43o5M0j" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('training data spec: ', agent.training_data_spec)" ] @@ -404,7 +422,9 @@ "metadata": { "id": "lTBR1vgG-2AM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('observation spec in 
training: ', agent.training_data_spec.observation)" ] @@ -426,7 +446,9 @@ "metadata": { "id": "C0fHvLm0Cpq9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('chosen arm features: ', agent.training_data_spec.policy_info.chosen_arm_features)" ] @@ -464,7 +486,9 @@ "metadata": { "id": "N1_kJAOS6VPo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def _all_rewards(observation, hidden_param):\n", " \"\"\"Outputs rewards for all actions, given an observation.\"\"\"\n", @@ -500,7 +524,9 @@ "metadata": { "id": "j2Iimtmkzs0-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "num_iterations = 20 # @param\n", "steps_per_loop = 1 # @param\n", @@ -542,7 +568,9 @@ "metadata": { "id": "p4DOYwhMSUVh" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(regret_values)\n", "plt.title('Regret of LinUCB on the Linear per-arm environment')\n", diff --git a/site/ko/agents/tutorials/ranking_tutorial.ipynb b/site/ko/agents/tutorials/ranking_tutorial.ipynb index 94280a8c6e..c3bdca9e7b 100644 --- a/site/ko/agents/tutorials/ranking_tutorial.ipynb +++ b/site/ko/agents/tutorials/ranking_tutorial.ipynb @@ -6,7 +6,7 @@ "id": "6tzp2bPEiK_S" }, "source": [ - "##### Copyright 2022 The TF-Agents Authors." + "##### Copyright 2023 The TF-Agents Authors." 
] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "E2347LPWgmcO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -71,7 +73,9 @@ "metadata": { "id": "tFs2W62pqUxk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install tf-agents[reverb]" ] @@ -82,7 +86,9 @@ "metadata": { "id": "1dbfZarwmB96" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Imports\n", "import matplotlib.pyplot as plt\n", @@ -265,7 +271,9 @@ "metadata": { "id": "sR8Id9Y7mhBK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "feedback_model = ranking_environment.FeedbackModel.CASCADING #@param[\"ranking_environment.FeedbackModel.SCORE_VECTOR\", \"ranking_environment.FeedbackModel.CASCADING\"] {type:\"raw\"}\n" ] @@ -289,7 +297,9 @@ "cellView": "form", "id": "3b1hYbTxi6Kg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "click_type = \"ghost_actions\" #@param[\"distance_based\", \"ghost_actions\"]\n", "click_model = (ranking_environment.ClickModel.DISTANCE_BASED\n", @@ -312,7 +322,9 @@ "metadata": { "id": "ueVe8Gf77Lad" }, - "outputs": [], + "outputs": [ + + ], "source": [ "global_dim = 9 #@param{ type: \"integer\"}\n", "item_dim = 11 #@param{ type: \"integer\"}\n", @@ -360,7 +372,9 @@ "metadata": { "id": "MBJgkyFC64rR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Defining the Network and Training Params\n", "scoring_network = (\n", @@ -379,7 +393,9 @@ "metadata": { "id": "ufuiBZsg5YrR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Stack Ranking Deterministically by Scores\n", "\n", @@ -400,7 +416,9 @@ "metadata": { "id": "8ko9D3qO7gUs" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Sampling Sequentially Based on Scores\n", "\n", @@ -425,7 +443,9 @@ "cellView": "form", "id": "Gnv5c4w094A4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title 
Sampling Sequentally and Taking Diversity into Account\n", "#@markdown The balance between ranking based on scores and taking diversity into account is governed by the following \"penalty mixture\" parameter. A low positive value results in rankings that hardly mix in diversity, a higher value will enforce more diversity.\n", @@ -451,7 +471,9 @@ "metadata": { "id": "ZMImW7rrWn5w" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Choosing the desired agent.\n", "agent_type = \"cosine_distance_agent\" #@param[\"cosine_distance_agent\", \"no_penalty_agent\", \"descending_scores_agent\"]\n", @@ -480,7 +502,9 @@ "metadata": { "id": "vpUfQXgD7y0t" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def order_items_from_action_fn(orig_trajectory):\n", " \"\"\"Puts the features of the selected items in the recommendation order.\n", @@ -537,7 +561,9 @@ "metadata": { "id": "VQDWXgDUsCZ1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Defininfing Parameters to Run the Agent on the Defined Environment\n", "num_iterations = 400 #@param{ type: \"number\" }\n", @@ -559,7 +585,9 @@ "metadata": { "id": "qt6ifI5AYWfu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "replay_buffer = bandit_replay_buffer.BanditReplayBuffer(\n", " data_spec=order_items_from_action_fn(agent.policy.trajectory_spec),\n", @@ -614,7 +642,9 @@ "metadata": { "id": "eZPOXzfyy5Sh" }, - "outputs": [], + "outputs": [ + + ], "source": [ "if feedback_model == ranking_environment.FeedbackModel.SCORE_VECTOR:\n", " reward = reward_values\n", @@ -641,7 +671,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "ranking_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/datasets/overview.ipynb b/site/ko/datasets/overview.ipynb index 3c3ccb8407..390507f18c 100644 --- a/site/ko/datasets/overview.ipynb +++ b/site/ko/datasets/overview.ipynb @@ -8,11 +8,11 @@ "source": [ "# TensorFlow Datasets\n", "\n", - "TFDS provides a 
collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.\n", + "TFDS는 TensorFlow, Jax 및 기타 머신러닝 프레임워크에서 바로 사용할 수 있는 데이터세트를 제공합니다.\n", "\n", - "It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).\n", + "결정적으로 데이터를 다운로드, 준비, `tf.data.Dataset` 또는 `np.array`)을 구성합니다.\n", "\n", - "Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/data) first.\n" + "참고: [TFDS](https://www.tensorflow.org/datasets)(이 라이브러리)를 `tf.data`(효율적인 데이터 파이프라인을 구축하기 위한 TensorFlow API)와 혼동하지 마세요. TFDS는 `tf.data`를 둘러싼 상위 수준의 래퍼입니다. 이 API에 익숙하지 않으면, 먼저 [공식 tf.data 가이드](https://www.tensorflow.org/guide/data)를 읽어 보세요.\n" ] }, { @@ -61,7 +61,9 @@ "cellView": "both", "id": "boeZp0sYbO41" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install -q tfds-nightly tensorflow matplotlib" ] @@ -72,7 +74,9 @@ "metadata": { "id": "TTBSvHcSLBzc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", @@ -98,7 +102,9 @@ "metadata": { "id": "FAvbSVzjLCIb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tfds.list_builders()" ] @@ -125,7 +131,9 @@ "metadata": { "id": "dCou80mnLLPV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = tfds.load('mnist', split='train', shuffle_files=True)\n", "assert isinstance(ds, tf.data.Dataset)\n", @@ -164,7 +172,9 @@ "metadata": { "id": "2zN_jQ2ER40W" }, - "outputs": [], + "outputs": [ + + ], "source": [ "builder = tfds.builder('mnist')\n", "# 1. 
Create the tfrecord files (no-op if already exists)\n", @@ -210,7 +220,9 @@ "metadata": { "id": "JAGjXdk_bIYQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = tfds.load('mnist', split='train')\n", "ds = ds.take(1) # Only take a single example\n", @@ -248,7 +260,9 @@ "metadata": { "id": "nJ4O0xy3djfV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = tfds.load('mnist', split='train', as_supervised=True)\n", "ds = ds.take(1)\n", @@ -277,7 +291,9 @@ "metadata": { "id": "tzQTCUkAfe9R" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = tfds.load('mnist', split='train', as_supervised=True)\n", "ds = ds.take(1)\n", @@ -305,7 +321,9 @@ "metadata": { "id": "Gg8BNsv-UzFl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "image, label = tfds.as_numpy(tfds.load(\n", " 'mnist',\n", @@ -343,7 +361,9 @@ "metadata": { "id": "ZyQzZ98bX3dM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = tfds.load('mnist', split='train')\n", "ds = ds.batch(32).prefetch(1)\n", @@ -400,7 +420,9 @@ "metadata": { "id": "FKouwN_yVSGQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds, info = tfds.load('mnist', split='train', with_info=True)\n", "\n", @@ -424,7 +446,9 @@ "metadata": { "id": "DpE2FD56cSQR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds, info = tfds.load('mnist', split='train', with_info=True)\n", "\n", @@ -452,7 +476,9 @@ "metadata": { "id": "UgLgtcd1ljzt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds, info = tfds.load('mnist', with_info=True)" ] @@ -472,7 +498,9 @@ "metadata": { "id": "nmq97QkilxeL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "builder = tfds.builder('mnist')\n", "info = builder.info" @@ -493,7 +521,9 @@ "metadata": { "id": "O-wLIKD-mZQT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info)" ] @@ -515,7 +545,9 @@ "metadata": { "id": "RcyZXncqoFab" }, - "outputs": [], + "outputs": [ + + ], "source": [ "info.features" ] @@ -535,7 +567,9 @@ "metadata": { "id": "HhfzBH6qowpz" 
}, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info.features[\"label\"].num_classes)\n", "print(info.features[\"label\"].names)\n", @@ -558,7 +592,9 @@ "metadata": { "id": "SergV_wQowLY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info.features.shape)\n", "print(info.features.dtype)\n", @@ -583,7 +619,9 @@ "metadata": { "id": "FBbfwA8Sp4ax" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info.splits)" ] @@ -603,7 +641,9 @@ "metadata": { "id": "fRBieOOquDzX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(list(info.splits.keys()))" ] @@ -623,7 +663,9 @@ "metadata": { "id": "-h_OSpRsqKpP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info.splits['train'].num_examples)\n", "print(info.splits['train'].filenames)\n", @@ -645,7 +687,9 @@ "metadata": { "id": "HO5irBZ3uIzQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(info.splits['train[15%:75%]'].num_examples)\n", "print(info.splits['train[15%:75%]'].file_instructions)" @@ -709,7 +753,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "overview.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/design/life_of_a_computation.md b/site/ko/federated/design/life_of_a_computation.md index 8c17ef8167..434239f70a 100644 --- a/site/ko/federated/design/life_of_a_computation.md +++ b/site/ko/federated/design/life_of_a_computation.md @@ -9,7 +9,7 @@ **사용자 관점:** ```python -tff.backends.native.set_local_python_execution_context() # 3 +tff.backends.native.set_sync_local_cpp_execution_context() # 3 @tff.tf_computation(tf.int32) # 2 def add_one(x): # 1 @@ -37,7 +37,7 @@ Python이 **구문 분석**될 때 `@tff.tf_computation` 데코레이터는 Pyth ## TF vs. TFF vs. 
Python ```python -tff.backends.native.set_local_python_execution_context() +tff.backends.native.set_sync_local_cpp_execution_context() @tff.tf_computation(tf.int32) def add_one(x): diff --git a/site/ko/federated/design/tracing.md b/site/ko/federated/design/tracing.md index d5fd36252a..f07a8ebeb6 100644 --- a/site/ko/federated/design/tracing.md +++ b/site/ko/federated/design/tracing.md @@ -4,7 +4,7 @@ Python 함수에서 [AST](compilation.md#ast)를 구성하는 프로세스를 추적합니다. -TODO(b/153500547): 추적 시스템의 개별 구성 요소를 설명하고 연결합니다. +TODO: b/153500547 - 추적 시스템의 개별 구성 요소를 설명하고 연결합니다. ## 페더레이션 계산 추적하기 @@ -24,9 +24,9 @@ TODO(b/153500547): 추적 시스템의 개별 구성 요소를 설명하고 연 구체적으로, 정확히 하나의 인수가 있을 때 추적은 다음과 같이 수행됩니다. -1. 인수를 나타내는 적절한 형식 서명을 사용하여 [building_blocks.Reference](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/core/impl/federated_context/value_impl.py)에서 지원되는 [value_impl.ValueImpl](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/core/impl/compiler/building_blocks.py)를 생성합니다. +1. 인수를 나타내는 적절한 형식 서명을 사용하여 [building_blocks.Reference](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/core/impl/compiler/building_blocks.py)에서 지원되는 [value_impl.Value](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/core/impl/federated_context/value_impl.py)를 생성합니다. -2. `ValueImpl`에 대한 함수를 호출합니다. 이로 인해 Python 런타임이 ValueImpl에 의해 구현된 `ValueImpl` 메서드를 호출하여 dunder 메서드를 AST 구성으로 변환합니다. 각 dunder 메서드는 AST를 구성하고 해당 AST가 지원하는 `ValueImpl`을 반환합니다. +2. `Value`에 대한 함수를 호출합니다. 이로 인해 Python 런타임이 `Value`에 의해 구현된 dunder 메서드를 호출하여 해당 dunder 메서드를 AST 구성으로 변환합니다. 각 dunder 메서드는 AST를 구성하고 AST가 지원하는 `Value`을 반환합니다. 예를 들면: @@ -35,9 +35,9 @@ def foo(x): return x[0] ``` -여기서 함수의 매개변수는 튜플이고, 함수 본문에서 0번째 요소가 선택됩니다. 그러면 `ValueImpl`에서 재정의된 Python의 `__getitem__` 메서드가 호출됩니다. 
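The dunder-driven tracing described in steps 1 and 2 can be sketched in plain Python. These classes are simplified stand-ins for TFF's `value_impl.Value` and `building_blocks.Reference`/`Selection`, illustrative only, not the real API:

```python
# A symbolic argument node in the AST.
class Reference:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name


# An AST node recording "select element `index` from `source`".
class Selection:
    def __init__(self, source, index):
        self.source, self.index = source, index

    def __repr__(self):
        return f"{self.source!r}[{self.index}]"


class Value:
    """Wraps an AST node; each dunder returns a new Value backed by a larger AST."""

    def __init__(self, comp):
        self.comp = comp

    def __getitem__(self, index):
        # Instead of selecting an element, record the selection in the AST.
        return Value(Selection(self.comp, index))


def trace(fn):
    """Trace fn once with a symbolic argument and return the AST it built."""
    return fn(Value(Reference("arg"))).comp


def foo(x):
    return x[0]


print(repr(trace(foo)))  # arg[0]
```

Chained operations keep growing the AST the same way: tracing `lambda x: x[1][2]` yields `arg[1][2]`.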
가장 간단한 경우, `ValueImpl.__getitem__`의 구현은 <strong data-md-type="double_emphasis">getitem</strong>의 호출을 나타내는 building_blocks.Selection을 구성하고 새로운 `Selection`에서 지원되는 `ValueImpl`을 반환합니다. +여기서 함수의 매개변수는 튜플이고, 함수 본문에서 0번째 요소가 선택됩니다. 그러면 `Value`에서 재정의된 Python의 `__getitem__` 메서드가 호출됩니다. 가장 간단한 경우, `Value.__getitem__`의 구현은 `__getitem__`의 호출을 나타내는 [building_blocks.Selection](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/python/core/impl/compiler/building_blocks.py)을 구성하고 새로운 `Selection`에서 지원되는 `Value`을 반환합니다. -각 dunder 메서드가 `ValueImpl`을 반환하여 재정의된 dunder 메서드 중 하나를 호출하는 함수의 본문에서 모든 연산을 스탬프 처리하므로 추적이 계속됩니다. +각 dunder 메서드가 `Value`을 반환하여 재정의된 dunder 메서드 중 하나를 호출하는 함수의 본문에서 모든 연산을 스탬프 처리하므로 추적이 계속됩니다. ### AST 생성하기 @@ -45,7 +45,7 @@ def foo(x): ## TensorFlow 계산 추적하기 -TODO(b/153500547): TensorFlow 계산을 추적하는 프로세스를 설명합니다. +TODO: b/153500547 - TensorFlow 계산을 추적하는 프로세스를 설명합니다. ## 추적 중 예외에서 오류 메시지 정리하기 diff --git a/site/ko/federated/tutorials/composing_learning_algorithms.ipynb b/site/ko/federated/tutorials/composing_learning_algorithms.ipynb index 6cbe9861dc..6c069d0b29 100644 --- a/site/ko/federated/tutorials/composing_learning_algorithms.ipynb +++ b/site/ko/federated/tutorials/composing_learning_algorithms.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "0asMuNro71hA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -48,8 +50,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", + " \n", " \n", "
TensorFlow.org에서 보기Google Colab에서 실행GitHub에서 소스 보기 Google Colab에서 실행하기\n", + "GitHub에서 소스 보기노트북 다운로드
" ] @@ -71,14 +74,12 @@ "metadata": { "id": "ZrGitA_KnRO0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", - "!pip install --quiet --upgrade tensorflow-federated\n", - "!pip install --quiet --upgrade nest-asyncio\n", - "\n", - "import nest_asyncio\n", - "nest_asyncio.apply()" + "!pip install --quiet --upgrade tensorflow-federated" ] }, { @@ -87,9 +88,11 @@ "metadata": { "id": "HGTM6tWOLo8M" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "from typing import Callable\n", + "from collections.abc import Callable\n", "\n", "import tensorflow as tf\n", "import tensorflow_federated as tff" @@ -119,7 +122,7 @@ "id": "3zQlyijofSzI" }, "source": [ - "[고유한 페더레이션 학습 알고리즘 구축 튜토리얼](https://github.com/tensorflow/federated/blob/v0.36.0/docs/tutorials/building_your_own_federated_learning_algorithm.ipynb)에서는 TFF의 페더레이션 코어를 사용하여 페더레이션 평균화(FedAvg) 알고리즘 버전을 직접 구현했습니다.\n", + "The [Building Your Own Federated Learning Algorithm Tutorial](https://github.com/tensorflow/federated/blob/v0.62.0/docs/tutorials/building_your_own_federated_learning_algorithm.ipynb) used TFF's federated core to directly implement a version of the Federated Averaging (FedAvg) algorithm.\n", "\n", "이 튜토리얼에서는 TFF API의 페더레이션 학습 구성 요소를 사용하여 모든 것을 처음부터 다시 구현할 필요 없이 모듈 방식으로 페더레이션 학습 알고리즘을 구축합니다.\n", "\n", @@ -148,7 +151,7 @@ "id": "YwhOtjlvjboB" }, "source": [ - "[고유한 페더레이션 학습 알고리즘 구축 튜토리얼](https://github.com/tensorflow/federated/blob/v0.36.0/docs/tutorials/building_your_own_federated_learning_algorithm.ipynb)이 이러한 모든 구성 요소를 처음부터 구현했지만 그럴 필요가 없는 경우가 종종 있습니다. 대신 유사한 알고리즘의 빌딩 블록을 재사용할 수 있습니다.\n", + "[고유한 페더레이션 학습 알고리즘 구축 튜토리얼](https://github.com/tensorflow/federated/blob/v0.62.0/docs/tutorials/building_your_own_federated_learning_algorithm.ipynb)에서는 이러한 모든 빌딩 블록을 처음부터 구현했지만 그럴 필요가 없는 경우가 종종 있습니다. 대신 유사한 알고리즘의 빌딩 블록을 재사용할 수 있습니다.\n", "\n", "이 경우 그래디언트 클리핑으로 FedAvg를 구현하려면 **클라이언트 작업** 빌딩 블록만 수정하면 됩니다. 나머지 블록은 \"바닐라\" FedAvg에 사용된 것과 동일할 수 있습니다." 
] @@ -179,16 +182,18 @@ "metadata": { "id": "pIw7QQCqltdV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", - "def client_update(model: tff.learning.Model,\n", + "def client_update(model: tff.learning.models.VariableModel,\n", " dataset: tf.data.Dataset,\n", - " server_weights: tff.learning.ModelWeights,\n", + " server_weights: tff.learning.models.ModelWeights,\n", " client_optimizer: tf.keras.optimizers.Optimizer):\n", " \"\"\"Performs training (using the server model weights) on the client's dataset.\"\"\"\n", " # Initialize the client model with the current server weights.\n", - " client_weights = tff.learning.ModelWeights.from_model(model)\n", + " client_weights = tff.learning.models.ModelWeights.from_model(model)\n", " tf.nest.map_structure(lambda x, y: x.assign(y),\n", " client_weights, server_weights)\n", "\n", @@ -264,10 +269,12 @@ "metadata": { "id": "X-I-jPsZmmMy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def build_gradient_clipping_client_work(\n", - " model_fn: Callable[[], tff.learning.Model],\n", + " model_fn: Callable[[], tff.learning.models.VariableModel],\n", " optimizer_fn: Callable[[], tf.keras.optimizers.Optimizer],\n", ") -> tff.learning.templates.ClientWorkProcess:\n", " \"\"\"Creates a client work process that uses gradient clipping.\"\"\"\n", @@ -277,7 +284,7 @@ " # with variables created for this model.\n", " model = model_fn()\n", " data_type = tff.SequenceType(model.input_spec)\n", - " model_weights_type = tff.learning.framework.weights_type_from_model(model)\n", + " model_weights_type = tff.learning.models.weights_type_from_model(model)\n", "\n", " @tff.federated_computation\n", " def initialize_fn():\n", @@ -334,7 +341,9 @@ "metadata": { "id": "-WdnFluLLo8P" }, - "outputs": [], + "outputs": [ + + ], "source": [ "emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()" ] @@ -356,7 +365,9 @@ "metadata": { "id": "Blrh8zJgLo8R" }, - "outputs": [], + "outputs": [ + + ], "source": [ 
"NUM_CLIENTS = 10\n", "BATCH_SIZE = 20\n", @@ -391,7 +402,7 @@ "id": "LJ0I89ixz8yV" }, "source": [ - "여기서는 [이미지 분류](federated_learning_for_image_classification.ipynb) 튜토리얼에서와 동일한 모델이 사용됩니다. 이 모델(`tf.keras`를 통해 구현됨)에는 하나의 숨겨진 레이어, 그 다음 소프트맥스 레이어가 있습니다. TFF에서 이 모델을 사용하기 위해 Keras 모델은 [`tff.learning.Model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model)로 래핑됩니다. 이를 통해 TFF 내에서 모델의 [정방향 전달](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#forward_pass)을 수행하고 [모델 출력을 추출](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#report_local_unfinalized_metrics)할 수 있습니다. 자세한 내용은 [이미지 분류](federated_learning_for_image_classification.ipynb) 튜토리얼을 참조하세요." + "여기서는 [이미지 분류](federated_learning_for_image_classification.ipynb) 튜토리얼과 동일한 모델을 사용합니다. 이 모델(`tf.keras`를 통해 구현됨)에는 하나의 숨겨진 레이어와 소프트맥스 레이어가 있습니다. TFF에서 이 모델을 사용하기 위해 Keras 모델이 [`tff.learning.models.VariableModel`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model)로 래핑됩니다. 이를 통해 TFF 내에서 모델의 [순방향 전달](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#forward_pass)을 수행하고 [모델 출력 추출](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model#report_local_unfinalized_metrics)을 수행할 수 있습니다. 자세한 내용은 [이미지 분류](federated_learning_for_image_classification.ipynb) 튜토리얼을 참조하세요." 
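The gradient-clipping client work above rescales all of a client's gradients jointly by their global L2 norm. A NumPy sketch of the rule implemented by `tf.clip_by_global_norm` (the helper below mirrors its math but is not the TensorFlow implementation):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Scale a list of gradient arrays so their joint L2 norm is at most clip_norm."""
    global_norm = np.sqrt(sum(np.sum(np.square(g)) for g in grads))
    # If the joint norm exceeds clip_norm, scale every gradient down uniformly;
    # otherwise leave them untouched (scale == 1).
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0])]  # joint L2 norm is 5.0
clipped, norm = clip_by_global_norm(grads, clip_norm=1.0)
print(norm)        # 5.0
print(clipped[0])  # [0.6 0.8]
```

Clipping the whole gradient list by one shared scale (rather than per-tensor) preserves the update direction while bounding its magnitude.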
] }, { @@ -400,7 +411,9 @@ "metadata": { "id": "Yfld4oFNLo8Y" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def create_keras_model():\n", " initializer = tf.keras.initializers.GlorotNormal(seed=0)\n", @@ -412,7 +425,7 @@ "\n", "def model_fn():\n", " keras_model = create_keras_model()\n", - " return tff.learning.from_keras_model(\n", + " return tff.learning.models.from_keras_model(\n", " keras_model,\n", " input_spec=federated_train_data[0].element_spec,\n", " loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n", @@ -443,7 +456,9 @@ "metadata": { "id": "kOO1ObqJ0cmX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "client_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=0.01)\n", "server_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=1.0)" @@ -466,11 +481,13 @@ "metadata": { "id": "iwXOTPeIx2nx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.tf_computation()\n", "def initial_model_weights_fn():\n", - " return tff.learning.ModelWeights.from_model(model_fn())\n", + " return tff.learning.models.ModelWeights.from_model(model_fn())\n", "\n", "model_weights_type = initial_model_weights_fn.type_signature.result\n", "\n", @@ -504,7 +521,9 @@ "metadata": { "id": "z_86iNeM0IBm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fed_avg_with_clipping = tff.learning.templates.compose_learning_process(\n", " initial_model_weights_fn,\n", @@ -540,7 +559,8 @@ ] }, "execution_count": 26, - "metadata": {}, + "metadata": { + }, "output_type": "execute_result" } ], @@ -573,7 +593,8 @@ ] }, "execution_count": 27, - "metadata": {}, + "metadata": { + }, "output_type": "execute_result" } ], @@ -596,7 +617,9 @@ "metadata": { "id": "tKhPuBgW1-3c" }, - "outputs": [], + "outputs": [ + + ], "source": [ "learning_process_output = fed_avg_with_clipping.next(state, federated_train_data)" ] @@ -624,7 +647,8 @@ ] }, "execution_count": 29, - "metadata": {}, + "metadata": { + }, "output_type": "execute_result" } ], @@ -659,7 +683,8 @@ ] }, 
"execution_count": 30, - "metadata": {}, + "metadata": { + }, "output_type": "execute_result" } ], @@ -697,7 +722,6 @@ ], "metadata": { "colab": { - "collapsed_sections": [], "name": "composing_learning_algorithms.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb b/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb index b808075e9c..881d2b2656 100644 --- a/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb +++ b/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "0asMuNro71hA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -39,9 +41,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기GitHub에서 소스 보기노트북 다운로드
" ] @@ -54,7 +56,7 @@ "source": [ "# 맞춤형 반복 프로세스에서 TFF 옵티마이저 사용\n", "\n", - "이것은 [고유한 페더레이션 학습 알고리즘 빌드](building_your_own_federated_learning_algorithm.ipynb) 튜토리얼과 [페더레이션 평균화](https://arxiv.org/abs/1602.05629) 알고리즘에 대한 맞춤형 반복 프로세스를 빌드하기 위한 [simple_fedavg](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/examples/simple_fedavg) 예제를 대체합니다. 이 튜토리얼에서는 Keras 옵티마이저 대신 [TFF 옵티마이저](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/python/learning/optimizers)를 사용합니다. TFF 옵티마이저 추상화는 TFF 반복 프로세스에 더 쉽게 도입하도록 state-in-state-out으로 설계되었습니다. `tff.learning` API는 또한 TFF 옵티마이저를 입력 인수로 허용합니다. " + "이것은 [고유한 페더레이션 학습 알고리즘 빌드](building_your_own_federated_learning_algorithm.ipynb) 튜토리얼과 [페더레이션 평균화](https://arxiv.org/abs/1602.05629) 알고리즘에 대한 맞춤형 반복 프로세스를 빌드하기 위한 [simple_fedavg](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/examples/simple_fedavg) 예제를 대체합니다. 이 튜토리얼에서는 Keras 옵티마이저 대신 [TFF 옵티마이저](https://github.com/tensorflow/federated/tree/main/tensorflow_federated/python/learning/optimizers)를 사용합니다. TFF 옵티마이저 추상화는 TFF 반복 프로세스에 더 쉽게 도입하도록 state-in-state-out으로 설계되었습니다. `tff.learning` API는 또한 TFF 옵티마이저를 입력 인수로 허용합니다. 
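The "state-in-state-out" design mentioned above means an optimizer is a pair of pure functions that receive state and return new state, instead of an object that mutates its own attributes. A minimal pure-Python sketch of that contract (the function names are hypothetical; the real `tff.learning.optimizers` API differs in detail):

```python
def sgd_init(learning_rate):
    """Build the initial optimizer state explicitly; nothing hides in object attrs."""
    return {"learning_rate": learning_rate}

def sgd_next(state, weights, gradients):
    """One step: state and weights go in, (possibly updated) state and weights come out."""
    lr = state["learning_rate"]
    new_weights = [w - lr * g for w, g in zip(weights, gradients)]
    # Plain SGD carries its state through unchanged; a momentum or Adam variant
    # would return an updated state here instead.
    return state, new_weights

state = sgd_init(0.1)
weights, grads = [1.0, 2.0], [0.5, -1.0]
state, weights = sgd_next(state, weights, grads)
print(weights)  # [0.95, 2.1]
```

Because the state is threaded explicitly, the same optimizer step is easy to embed in a functional iterative process such as a `tff.templates.IterativeProcess`.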
" ] }, { @@ -74,14 +76,12 @@ "metadata": { "id": "ZrGitA_KnRO0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", - "!pip install --quiet --upgrade tensorflow-federated\n", - "!pip install --quiet --upgrade nest-asyncio\n", - "\n", - "import nest_asyncio\n", - "nest_asyncio.apply()" + "!pip install --quiet --upgrade tensorflow-federated" ] }, { @@ -90,10 +90,13 @@ "metadata": { "id": "HGTM6tWOLo8M" }, - "outputs": [], + "outputs": [ + + ], "source": [ + "from typing import Any\n", "import functools\n", - "import attr\n", + "import attrs\n", "import numpy as np\n", "import tensorflow as tf\n", "import tensorflow_federated as tff" @@ -116,7 +119,9 @@ "metadata": { "id": "Blrh8zJgLo8R" }, - "outputs": [], + "outputs": [ + + ], "source": [ "only_digits=True\n", "\n", @@ -167,10 +172,10 @@ "\n", " return model\n", "\n", - "# Wrap as `tff.learning.Model`.\n", + "# Wrap as `tff.learning.models.VariableModel`.\n", "def model_fn():\n", " keras_model = create_keras_model()\n", - " return tff.learning.from_keras_model(\n", + " return tff.learning.models.from_keras_model(\n", " keras_model,\n", " input_spec=central_test_data.element_spec,\n", " loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))" @@ -220,7 +225,9 @@ "metadata": { "id": "c5rHPKreLo8g" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def client_update(model, dataset, server_weights, client_optimizer):\n", @@ -256,12 +263,14 @@ "metadata": { "id": "rYxErLvHLo8i" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "@attr.s(eq=False, frozen=True, slots=True)\n", + "@attrs.define(eq=False, frozen=True)\n", "class ServerState(object):\n", - " trainable_weights = attr.ib()\n", - " optimizer_state = attr.ib()\n", + " trainable_weights: Any\n", + " optimizer_state: Any\n", "\n", "@tf.function\n", "def server_update(server_state, mean_model_delta, server_optimizer):\n", @@ -510,7 +519,9 @@ "metadata": { "id": "EdNgYoIwLo9P" }, - "outputs": [], 
+ "outputs": [ + + ], "source": [ "def evaluate(server_state):\n", " keras_model = create_keras_model()\n", @@ -560,7 +571,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "custom_federated_algorithm_with_tff_optimizers.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb b/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb index e4b06634b9..b10b25ff0c 100644 --- a/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb +++ b/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "uDcWxmG9kh1Q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -47,10 +49,12 @@ }, "source": [ "\n", - " \n", - " \n", + " \n", + " \n", - " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + " GitHub에서 소스 보기\n", "GitHub에서 소스 보기노트북 다운로드
" ] @@ -106,14 +110,12 @@ "metadata": { "id": "Ary-OZz5jMJI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", - "!pip install --quiet --upgrade tensorflow-federated\n", - "!pip install --quiet --upgrade nest-asyncio\n", - "\n", - "import nest_asyncio\n", - "nest_asyncio.apply()" + "!pip install --quiet --upgrade tensorflow-federated" ] }, { @@ -122,7 +124,9 @@ "metadata": { "id": "-skNC6aovM46" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import collections\n", "\n", @@ -146,7 +150,9 @@ }, "execution_count": 4, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -182,7 +188,9 @@ "metadata": { "id": "COe0tLPPtTbe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "federated_float_on_clients = tff.type_at_clients(tf.float32)" ] @@ -211,7 +219,9 @@ }, "execution_count": 6, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -235,7 +245,9 @@ }, "execution_count": 7, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -268,7 +280,9 @@ }, "execution_count": 8, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -305,7 +319,9 @@ }, "execution_count": 9, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -338,7 +354,9 @@ }, "execution_count": 10, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -375,7 +393,9 @@ }, "execution_count": 11, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -413,7 +433,9 @@ }, "execution_count": 12, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -468,13 +490,13 @@ "\n", "### 배치 지정\n", "\n", - "TFF provides two basic placement literals, `tff.CLIENTS` and `tff.SERVER`, to make it easy to express the rich variety of practical scenarios that are naturally modeled as client-server architectures, with multiple *client* devices 
(mobile phones, embedded devices, distributed databases, sensors, etc.) orchestrated by a single centralized *server* coordinator. TFF is designed to also support custom placements, multiple client groups, multi-tiered and other, more general distributed architectures, but discussing them is outside the scope of this tutorial.\n", + "TFF는 두 가지 기본적인 배치 리터럴인 `tff.CLIENTS`와 `tff.SERVER`를 제공하여 다수의 *클라이언트* 기기(휴대전화, 임베디드 기기, 분산 데이터베이스, 센서 등)가 하나의 중앙화된 *서버* 코디네이터에 의해 조정되는 클라이언트-서버 아키텍처로 자연스럽게 모델링되는 풍부한 실제 시나리오를 쉽게 표현할 수 있도록 합니다. TFF는 또한 사용자 정의 배치, 다중 클라이언트 그룹, 다중 계층 및 기타 더 일반적인 분산 아키텍처를 지원하도록 설계되었지만 이에 대한 논의는 본 튜토리얼의 범위를 벗어납니다.\n", "\n", "TFF는 `tff.CLIENTS` 또는 `tff.SERVER`가 실제로 나타내는 내용을 규정하지 않습니다.\n", "\n", "특히 `tff.SERVER`는 단일 물리적 기기(단일 그룹의 구성원)일 수 있지만 상태 시스템 복제를 실행하는 내결함성 클러스터의 복제본 그룹일 수도 있습니다. 아키텍처에 대해 특별한 가정을 하지는 않습니다. 하지만 앞 섹션에서 언급한 `all_equal` 비트를 사용하여 일반적으로 서버에서 단일 데이터 항목만 처리한다는 사실을 표현합니다.\n", "\n", - "Likewise, `tff.CLIENTS` in some applications might represent all clients in the system - what in the context of federated learning we sometimes refer to as the *population*, but e.g., in [production implementations of Federated Averaging](https://arxiv.org/abs/1602.05629), it may represent a *cohort* - a subset of the clients selected for paticipation in a particular round of training. The abstractly defined placements are given concrete meaning when a computation in which they appear is deployed for execution (or simply invoked like a Python function in a simulated environment, as is demonstrated in this tutorial). In our local simulations, the group of clients is determined by the federated data supplied as input." + "마찬가지로, 일부 애플리케이션에서 `tff.CLIENTS`는 시스템의 모든 클라이언트를 나타낼 수 있습니다. 페더레이션 학습과 관련해서는 이를 *집단*이라고 하지만 예를 들어, [페더레이션 평균화의 프로덕션 구현](https://arxiv.org/abs/1602.05629)에서는 특정한 훈련 라운드에 참여하도록 선택된 클라이언트의 하위 세트인 *코호트*를 나타낼 수 있습니다. 추상적으로 정의된 배치는 이러한 배치가 있는 계산이 실행을 위해 배포될 때(또는 이 튜토리얼에서 나타낸 바와 같이 시뮬레이션된 환경에서 Python 함수와 같이 단순히 호출됨) 구체적인 의미를 갖게 됩니다. 
로컬 시뮬레이션에서 클라이언트의 그룹은 입력으로 제공된 페더레이션 데이터에 의해 결정됩니다." ] }, { @@ -498,7 +520,9 @@ "metadata": { "id": "g38EkHwGGEUo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\n", "def get_average_temperature(sensor_readings):\n", @@ -533,7 +557,9 @@ }, "execution_count": 14, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -580,7 +606,9 @@ }, "execution_count": 15, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -599,7 +627,7 @@ "\n", "이제 *접착* 언어로 코드를 내보내는 `tff.federated_computation` 데코레이터에 대해 이전에 주지했던 사항으로 돌아가 보겠습니다. TFF 계산의 논리는 Python에서 일반 함수로 표현할 수 있고(위에서와 같이 `tff.federated_computation`으로 데코레이팅만 해주면 됨), 이 노트북의 다른 Python 함수와 마찬가지로 내부적으로 Python 인수로 직접 호출할 수 있지만, TFF 계산은 실제로는 Python이 *아닙니다*.\n", "\n", - "이 말을 다시 표현하자면, Python 인터프리터가 `tff.federated_computation`으로 데코레이팅된 함수를 발견하면 이 함수의 본문에 있는 문을 한 번(정의 시간에) 추적한 다음, 실행을 위해서건, 다른 계산에 하위 구성 요소로 도입하려는 경우이건 향후 사용을 위해 계산 논리의 [직렬화된 표현](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/proto/v0/computation.proto)을 구성한다는 것입니다.\n", + "이 말을 다시 표현하자면, Python 인터프리터가 `tff.federated_computation`으로 데코레이팅된 함수를 발견하면 이 함수의 본문에 있는 선언문을 한 번(정의 시간에) 추적한 다음, 실행을 위해서건, 다른 계산에 하위 구성 요소로 도입하려는 경우이건 향후 사용을 위해 계산 논리의 [직렬화된 표현](https://github.com/tensorflow/federated/blob/main/tensorflow_federated/proto/v0/computation.proto)을 구성한다는 것입니다.\n", "\n", "다음과 같이 print 문을 추가하여 이러한 내용을 확인할 수 있습니다." 
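The `get_average_temperature` computation above maps a federated value placed at `tff.CLIENTS` to a single value placed at `tff.SERVER`. In a local simulation its semantics reduce to ordinary Python over the list of client values, which can be sketched as a toy stand-in (not the real TFF runtime):

```python
def federated_mean(client_values):
    """Toy local semantics of tff.federated_mean: CLIENTS values -> one SERVER value."""
    return sum(client_values) / len(client_values)

def get_average_temperature(sensor_readings):
    # Mirrors the tff.federated_computation body above, minus placements/types.
    return federated_mean(sensor_readings)

result = get_average_temperature([68.5, 70.3, 69.8])
print(round(result, 2))  # 69.53
```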
] @@ -615,7 +643,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "Getting traced, the argument is \"ValueImpl\".\n" + "Getting traced, the argument is \"Value\".\n" ] } ], @@ -681,7 +709,9 @@ "metadata": { "id": "dpdAqMcygnmr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.tf_computation(tf.float32)\n", "def add_half(x):\n", @@ -720,7 +750,9 @@ }, "execution_count": 18, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -746,7 +778,9 @@ "metadata": { "id": "z08K5UKBlSJP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\n", "def add_half_on_clients(x):\n", @@ -768,7 +802,9 @@ }, "execution_count": 20, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -805,7 +841,9 @@ }, "execution_count": 21, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -886,7 +924,9 @@ }, "execution_count": 23, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -932,7 +972,9 @@ }, "execution_count": 24, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -958,7 +1000,9 @@ "metadata": { "id": "cw0nen-D0Ks8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.tf_computation(tff.SequenceType(tf.float32))\n", "def get_local_temperature_average(local_temperatures):\n", @@ -982,7 +1026,9 @@ }, "execution_count": 26, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1017,7 +1063,9 @@ }, "execution_count": 27, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1056,7 +1104,9 @@ }, "execution_count": 28, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1111,7 +1161,9 @@ }, "execution_count": 30, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1135,7 +1187,9 @@ }, "execution_count": 31, 
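`tff.federated_map` applies a per-client computation such as `add_half` pointwise to every client's value. Its local-simulation behavior can be sketched as a plain list map (illustrative only, not the TFF intrinsic itself):

```python
def federated_map(fn, client_values):
    """Toy local semantics of tff.federated_map: apply fn independently per client."""
    return [fn(v) for v in client_values]

def add_half(x):
    # Stand-in for the tff.tf_computation add_half defined above.
    return x + 0.5

print(federated_map(add_half, [1.0, 3.0, 2.0]))  # [1.5, 3.5, 2.5]
```

The result keeps the `{...}@CLIENTS` shape: one (possibly distinct) value per client, with no cross-client communication involved.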
"metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1163,7 +1217,9 @@ "metadata": { "id": "hZIE1kl340at" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tff.federated_computation(\n", " tff.type_at_clients(tff.SequenceType(tf.float32)))\n", @@ -1198,7 +1254,9 @@ }, "execution_count": 33, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1231,7 +1289,9 @@ }, "execution_count": 34, "metadata": { - "tags": [] + "tags": [ + + ] }, "output_type": "execute_result" } @@ -1252,7 +1312,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "custom_federated_algorithms_1.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb b/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb index 66aa02cfac..128a8f934e 100644 --- a/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb +++ b/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "p8SrVqkmnDQv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -48,10 +50,10 @@ "source": [ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기노트북 다운로드GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -72,7 +74,9 @@ "metadata": { "id": "I8pu6-dckG_u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated\n", @@ -84,11 +88,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "metadata": { "id": "2txfde-th95B" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import collections\n", "import functools\n", @@ -123,9 +129,7 @@ "\n", "사용자가 데이터세트에서 영화의 일부만 보기 때문에 이 행렬은 일반적으로 희소합니다. 행렬 분해로 두 개의 행렬이 출력되는 데, 하나는 각 사용자에 대한 $k$ 차원 사용자 임베딩을 나타내는 $n \\times k$ 행렬 $U$이고, 또 하나는 각 항목에 대한 $k$ 차원 항목 임베딩을 나타내는 $m \\times k$ 행렬 $I$입니다. 가장 간단한 훈련 목표는 사용자와 항목 임베딩의 내적이 관찰된 평가 $O$를 예측하도록 하는 것입니다.\n", "\n", - "```\n", "$$argmin_{U,I} \\sum_{(u, i) \\in O} (R_{ui} - U_u I_i^T)^2$$\n", - "```\n", "\n", "This is equivalent to minimizing the mean squared error between observed ratings and ratings predicted by taking the dot product of the corresponding user and item embeddings. Another way to interpret this is that this ensures that $R \\approx UI^T$ for known ratings, hence \"matrix factorization\". If this is confusing, don't worry–we won't need to know the details of matrix factorization for the rest of the tutorial." 
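The objective $argmin_{U,I} \sum_{(u,i) \in O} (R_{ui} - U_u I_i^T)^2$ can be minimized with plain SGD over the observed entries. A small NumPy sketch with toy data (the ratings and hyperparameters below are made up for illustration and are not the tutorial's MovieLens setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2
# The sparse observation set O as (user, item, rating) triples.
observed = [(0, 1, 5.0), (0, 3, 1.0), (1, 1, 4.0), (2, 2, 3.0), (3, 4, 2.0)]

U = 0.1 * rng.standard_normal((n_users, k))  # user embeddings, rows U_u
I = 0.1 * rng.standard_normal((n_items, k))  # item embeddings, rows I_i

lr = 0.05
for _ in range(2000):
    for u, i, r in observed:
        err = r - U[u] @ I[i]    # residual R_ui - U_u I_i^T
        U[u] += lr * err * I[i]  # SGD step on the squared error w.r.t. U_u
        I[i] += lr * err * U[u]  # ...and w.r.t. I_i

mse = np.mean([(r - U[u] @ I[i]) ** 2 for u, i, r in observed])
print(mse)  # small: the observed ratings are fit closely
```

In the federated reconstruction setting the same per-entry loss applies, but each user's row of $U$ stays on the client while $I$ is trained on the server.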
] @@ -143,11 +147,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "metadata": { "id": "DwxoBLaWneOE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def download_movielens_data(dataset_path):\n", " \"\"\"Downloads and copies MovieLens data to local /tmp directory.\"\"\"\n", @@ -168,11 +174,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "metadata": { "id": "Y6_bskRUniqB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def load_movielens_data(\n", " data_directory: str = \"/tmp\",\n", @@ -187,7 +195,8 @@ " movies_df = pd.read_csv(\n", " os.path.join(data_directory, \"ml-1m\", \"movies.dat\"),\n", " sep=\"::\",\n", - " names=[\"MovieID\", \"Title\", \"Genres\"], engine=\"python\")\n", + " names=[\"MovieID\", \"Title\", \"Genres\"], engine=\"python\", \n", + " encoding = \"ISO-8859-1\")\n", "\n", " # Create dictionaries mapping from old IDs to new (remapped) IDs for both\n", " # MovieID and UserID. Use the movies and users present in ratings_df to\n", @@ -224,11 +233,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "metadata": { "id": "OkAh5nt_n4ll" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ratings_df, movies_df = load_movielens_data()" ] @@ -244,7 +255,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "metadata": { "id": "G4qap4n-C83I" }, @@ -252,7 +263,10 @@ { "data": { "text/html": [ - "
\n", + "\n", + "
\n", + "
\n", + "
\n", "\n", + "\n", + " \n", + " \n", + "\n", + " \n", + "
\n", + "
\n", + " " ], "text/plain": [ " UserID MovieID Rating Timestamp\n", @@ -325,8 +478,9 @@ "4 0 2162 5 978824291" ] }, - "execution_count": 7, - "metadata": {}, + "execution_count": 5, + "metadata": { + }, "output_type": "execute_result" } ], @@ -345,7 +499,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "metadata": { "id": "5TyN-30NC91Z" }, @@ -353,7 +507,10 @@ { "data": { "text/html": [ - "
\n", + "\n", + "
\n", + "
\n", + "
\n", "\n", + "\n", + " \n", + " \n", + "\n", + " \n", + "
\n", + "
\n", + " " ], "text/plain": [ " MovieID Title Genres\n", @@ -420,8 +716,9 @@ "4 4.0 Father of the Bride Part II (1995) Comedy" ] }, - "execution_count": 8, - "metadata": {}, + "execution_count": 6, + "metadata": { + }, "output_type": "execute_result" } ], @@ -440,7 +737,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "metadata": { "id": "8I1jgmDOCqt4" }, @@ -461,20 +758,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "metadata": { "id": "1aO07Lg21Joa" }, "outputs": [ { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZcAAAEGCAYAAACpXNjrAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAYL0lEQVR4nO3df/BddZ3f8efLAMqIGJA0zSS4oWtm20hr0CzGhXZcGCGgu8GtujCtZB1qtiO0Ouu4hu1MWUG62NmVLVulk12yBNeKLGqJgsYU0nXsyI8AEQho+RZhSAZIlvBD6goF3/3jflIuX26+fEPOvTc/no+ZM99z3+dzPudz7x955ZzzueemqpAkqUuvGfcAJEn7H8NFktQ5w0WS1DnDRZLUOcNFktS5g8Y9gL3FUUcdVfPnzx/3MCRpn3L77bf/bVXNmlw3XJr58+ezcePGcQ9DkvYpSR4aVPeymCSpc4aLJKlzhoskqXOGiySpc4aLJKlzQwuXJK9LcmuSHybZnOQzrX5lkp8k2dSWRa2eJJclmUhyV5K39/W1PMn9bVneV39HkrvbPpclSasfmWR9a78+yRHDep+SpJcb5pnLs8BJVfU2YBGwNMmStu1TVbWoLZta7TRgQVtWAJdDLyiAC4B3AscDF/SFxeXAR/v2W9rqK4Ebq2oBcGN7LUkakaGFS/U8014e3Japnu+/DLiq7XczMDPJHOBUYH1V7aiqJ4D19IJqDnB4Vd1cvd8NuAo4o6+vNW19TV9dkjQCQ73nkmRGkk3ANnoBcUvbdHG79HVpkte22lzg4b7dt7TaVPUtA+oAs6vqkbb+KDB7F+NbkWRjko3bt29/dW9SkvQyQ/2GflW9ACxKMhP4RpJjgfPp/YN/CLAK+DRw4RDHUEkGnjFV1ao2BhYvXuyvpkmTzF95/biHMHIPXvLecQ9hvzCS2WJV9SSwAVhaVY+0S1/PAn9J7z4KwFbg6L7d5rXaVPV5A+oAj7XLZrS/27p9R5KkqQxzttisdsZCkkOB9wA/6vtHP/TuhdzTdlkLnN1mjS0BnmqXttYBpyQ5ot3IPwVY17Y9nWRJ6+ts4Lq+vnbOKlveV5ckjcAwL4vNAdYkmUEvxK6pqm8luSnJLCDAJuBft/Y3AKcDE8DPgI8AVNWOJBcBt7V2F1bVjrb+MeBK4FDg220BuAS4Jsk5wEPAh4b2LiVJLzO0cKmqu4DjBtRP2kX7As7dxbbVwOoB9Y3AsQPqjwMn7+aQJUkd8Rv6kqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzhkukqTOGS6SpM4ZLpKkzg0tXJK8LsmtSX6YZHO
[... base64 PNG data truncated ...]",
+      "image/png": "iVBORw0KGgo... [base64 PNG data truncated ...]",
       "text/plain": [
-       "[figure repr truncated]"
+       "[figure repr truncated]"
       ]
      },
      "metadata": {
-      "needs_background": "light"
      },
      "output_type": "display_data"
     },
@@ -511,19 +807,20 @@
    },
    {
     "cell_type": "code",
-    "execution_count": null,
+    "execution_count": 9,
     "metadata": {
      "id": "1gYdfRoOw04z"
     },
     "outputs": [
      {
       "data": {
-       "image/png": "iVBORw0KGgoAAAANSUhEUgAAAnMAAAJ0CAYAAAB5taW2... [base64 PNG data truncated ...]
nGkfmEsNROcoqIuKcX3H+0pnlS5/WnSUeKtox+8yrWg9XzStK1esHXRyZo+xSHIu9fEvP8Y2tbx049Tt/+3Jv76nFTuTNVdeDFqk0OB9AWCn1/b4HlFI7YJbJUhH5lYi8IiI/s4ofRGS9iLznloki8nci8pqIPAVgRr/H14vIN0VkC4D7RWSRiGywRgH/KCL1/bb7mohssl5nhfUSIQA91jarROQF62O7iJSN5JvnyFxi3qc7ANGZRMQ5b8yq5QJ5elfPc9m+mHVlLPz6Nqdnxtv3NQ4WjJmhIFGBGvE/cCE3Ckb6GkQjJUodXnY68PL9XT1j54RCs3Xn0Wg61pVPxrqeNzNwrLkAtg7y3EIAcwAcBPA0zEX/nxpoQxFZBOA6AAtgdqRtZ7yuRym1WETcADYAuEopdVhEmgA8AOB2azuXUupsEbkMwBcBrFFKPQOg79SYTwH4qFLqaREpBRBI5pvuwzIXJ39zqwv5MSxONiQiDmPMyuWAPLWr59nzdOcZSiSwpdfpmfHOAyLl3aOnvjSm+/U5I33tEEfmSBelwg2R6Na7unucV5zsXejivxd9VgPIRJkbyiallB8AROQFABMxSJkDsALA40qpU9b2vznj+V9af86AWSD/bA30OQG099uu70rerdbxzvQ0gH8VkZ8B+O++fMlimYvfUgB2WueH8oxZ6FacIyJPvdz9TNYWOhXtnKNULCLiePv9p71u+eEx3SO/VWvYhcIRvwhRAopisVevPtHbcXd3z9yKWGyZ7jxZaDWAhzJwnJcAXDPIc8F+n0cxsu7Ta/0pAF5SSg22okDfMQc8nlLqqyLyO5jLnT0tIhcrpV5JNhTPmYsfz5ejrCcijrmjzztnzuhzM3LScZIqYuHXdvR/4GjFrBFfyaeAaMzBpUko/USpI0tOBzb+7GDHq5v2+Wd87ljXqopYrFJ3riyVqRHKJwEUiMhdfQ+IyDyYI22J2Ajg/SJSZJ3HNth9Zl8FUC0iy61juUUk7tkFEZmilGpTSn0NwGYAMxPM+S4sc/Hj+XJkCyLimDP63PPmjl6RtYUuEthyqv/XYXfZzJi4goNtH6cRnXNCNCSlIvWRyKZ/OHL0+a1795c/1HFo5bxgaMbwO+a98VhXPindB1FKKQBXA1hjLU3yEoB/AtCR4OtsgzmVugPA72EWrYG2C8EcCfyaiOwA8AISW4rl4yLyoojsBBC2jpU0Mb9/Goq/ubUMwDFwWppsRCmldvU8+1RbV2uiv5lmQlfB6I+X9Z9qNV78wfbqIzsXJvuCMeDYdZ9zVaQmHpGpIBZ7/aqTvQfv7e6ZXRWNVevOY1O3Y13PT3SHyGUcmYvPSrDIkc2IiMwqX37evDGrNurOMoAxsfCr75pqba9ddnwkL6gEoZFFIjKJUsfOCgQ2/sfBzle27PNP+/ujXatY5EaEF4OkGQtKfGyxICvRmUREZpYvXSGQjTu61q/Unae/SGDLaadn1ttfd4+ZNqJRNZY5GhGlorXR6LY7uo9HP3ji5Fke85d4Sg2WuTRjmYvPUt0BiJIlIjJz9NKVIrLxhWN/zZp/oFT08FylomERpxsAIs6iWRFnwQlXNJjU4pkxQTi1CSkfeGJqt7e3d/9Hu3pm10SjS3TnyVETsa58LNb1HNQdJFdxmnUY/uZWBwD+BSfbm1F+9sqFFRdu0J2jn9GxUL+pVhHX4aoFSV+aH3WwzFGclOqZHwhu/Pf2zpe37ts/5UtHjq2uiUZrdMfKcZzhSiOWueHNBteXoxwxvXzxqrMq12RNoYsEt7zrCtaOuqWnBtt2ODGWORqKUrGqSHTrZ452PbN17/7Ch9s7V54dCObz3RkyjWUujTjNOjxOsVJOmTZq0SqBbNh69M/az2NR0SNzlYqGRJweAOgZNak+2deKOhAdfivKN26l9lx6svetv+nqmV4fjS7SnSePscylEUfmhscVvSnnTB111qrFlRdnwwhdeTT0yttTrTGHe1rIXXYkmReKOBF
JXSyyNaWOzw0GW3/Qfqht2979kx44cmxVfTSa9C8KlBILsa6cd2hJE5a54bHMUU6aMmrBqiVVl6zXnSMa3PrOVKuIdNYsSuq+XmEnR+bymlKximh0298e63p68z6/+xcHO1ecEwgYumPR2zwAFusOkas4zToEa7FgnlNBOWty2fzVgKzffOT3q3VlUNEjRv+p1s7aJZHGA+sTfp2wi2UuH7mU2ndR76m9H+vqntoQiZ6lOw8N6RwMfoN7GgGWuaEtAUcvKcdNLpu3WuBYv+nI71ZrilAeDe3a7CqYuwQATpQ2jE/mRcIuxFIbi7KWUidmX3BBWAAAIABJREFUhcI7/qare9TK04F5ACbojkRxYdlOE5a5oXGKlfLCpLK5q0Vk/fOHf7tax/Gjga0hV8FcAIByuCacLqw8UBQ4Oi6R1wg7wXsT5jKl1JhY7IWbe06cuvH4iQXFSp2nOxIlbK7uALmKo05DO1t3AKJMmVg6Z/Wy6ivW6zi2ih01lIq8fe5cR+3ZexJ9jZBbODKXg1xK7b/4ZO/63/nbD2x868DCD/ccP7dYqRLduSgp07Gu3K07RC5imRsaT56lvDKhdPbq5dVXrtdw6FHR0K63r2o9VHOWJPoCQf4TkTuU6p0eDD317c7DO7bt3d/wL4ePrh4fiTTojkUj5gYwQ3eIXMRp1kH4m1uLAUzSnYMo08aXzlotIhueOfQ/GV2HLhrYGnEVmL8/9RbXTU10/yDfzWyvPBrdeePxE8dv6TmxoITTqLlqLoAXdYfINXz7G9wsAAmPDhDlgsaSmavOrZH1Tx/69epMHVPFjhlKRQIirkKIo/ZEybg3y3oPTI53/6Cb58zZkVOpA6tPnX7j/q7uiZPCkXm681DazdEdIBdxmnVw/B+O8lpDyYzV59ZcvT6DhyyLhl56e6q1o26pP5Gdg26+n9mGUqenhELP/Gvn4W3b9+4f+81DR1ZNCkd4RWp+4EUQacA3v8GxzFHeayiZvnpF7QfXZ+p40cD2t9eKO1w135PIvkEPR9KzXVk01vbh7p7WZ/b5w78+0HHO+06dPks4A5JvWObSgGVucCxzRADGFk9dvbL2mvWZOJaKHZunVCQAAIHCypkK8V+hGnAL38+ykEOp9lWnTm143N++55m3/MZ9XT0rypQapTsXaTMZ68qLdIfINXzzGxx/eyCy1BdPWb2y9kOZuJdr6dtTrSKje8onvxrvjgEP38+yhlKBiaHwM187dGTr9r37a7/beWTV1HCYF5QRYPYO3lkpxfjmNwB/c2spgKRWoSfKVfXFk1atqmtKe6GLBra9PRrXXrfsULz7BXjOnHYlsdhLt3Uf3/jUW/7g/x5oP+ey3lOLHPx3ht6LgyUpxqtZBzYbPI+D6D3qiiauWl133Yb1HY+kbdkSFeuap1T4tIi76GjFnLgXhw264UxXJhqcQ6nOc04Hdt3f1d04MxTm6SkUD5a5FONvTAPjGxLRIGqLJqw6v+76dI7QlUSD5lRryDNqdkycoXh2Cnr4y2nGKBVsDIefe+Dwkc3b9u6v+l7n4dUzQ+EpumORbbDMpRjL3MA4n080hJqi8asuqL8hbYUuGtxmrhknUnxszKxX4tkn4GaZS7fiWGzXLT3HN7a+deDUE/72ZVeePLXECY6IUsJY5lKMZW5g/A2TaBjVhY2rLqy/cSOQ+sV6Vax7vlLhUwDQXrf0WDz7BN3CG3qlgSh1eNnp0xt+caDj9ef3+Wd9+lj3ytGx2BjducjWGrCuvFx3iFzCMjcwXvxAFIeqwoaVF9bf1IrUF7riaLBtBwB0jZlREc8OQY7MpY5S4XHhyPNfOnx007a9+8f8qOPwqrmh0DTdsSincNAkhVjmBsaVyIniVFU4buWa+ptTXuiiwe0AgIireFbU4ekdbvuQCwktMkzvVRiLvXp9z4mNG946cPwP/oNLP3Cy92wXL5Sj9OCgSQqxzJ3B39xaDKBKdw4iO6ksHLtyzdhbnkIKC52K9cx
XKtQLEffhqnm7hts+6GaZS4YodXTJ6cCGhw92vLZ5n///s3fe8XGdVd7/nVtmVGZGvVqy3HuLe7cTCIkJLHWpKRBKEkISkrC7hhdY7wsLAULZ5V0IC4QSAqSQ6kAKSWzLvcQtbrG6Rr1Loyn3zn3O+8eMbNmWrJE0RZaf7+fjjzUzzzzPuf7IM+f+Tpv59faO9ZlCZCXaLsm4RzpzUUQ6c5cif8EkkhGQZS9Yd33hbdF06FKswNvHAKAxf0Ukypw9SueOf5iDBcHg/m+2tu87VFXrerSxecPCgDEj0WZJrirkd20Ukc7cpcgQq0QyQjLt+eveU/iZXQAiHsN1OazAYQKAzrSpeUOtNXTpzA2FXYiz/9zds/3N2rqOV2vrl3+sx7NCB2ThiCQRyO/aKCJzIS5F/oJJJKMgw5639obCz+58pf63qzHKG8ZwqNUjFNsMU0vt0IO9g1ZRmlKZGxBi7lgUCBy/r70rZ0kgMBuALGSQjAWkMhdFpDJ3KfIXTCIZJen23LU3TLh9D0av0CVbgWPHQKQ05S4ZdE4rA0Emkp9nfTBbucHgga+1tu85WFWb+oeG5vVhR04iGSvI79ooIj/8LkUqcxJJFEi35ay5ccLn9hDIGs0+VuCICgCNecsuNwnCP5ozxgs2weUf6vFs/0dtfevrtfXLPtXjWWWDLAyRJBZm+E1WazrYcbRMFO7ebi3Y9tvgDacnbX5JhvijhAyzXop05iSSKJFmy15zw4Tbd71S9+hKBo9oUgCL7oXMgZ4eZ3HxoGuAAADHiA29kmHuWhAwjt3b0Zm5wh+YC9m/SxJHBKPTgN7ShdTuZk731nKuqOACpVwUJlVyQVo152Z3wJWJkBJ3sRqXD6A2/laPP6QzdylS+pVIokiaLXvNjRM+t/vlut+sGKFDl2QFjr1FSctW++0ZDUmBjoKLFzAhovmt4wZmkW2Jtz7b1W1+vKfnGjtjXaJNkowvmGEJKC1e2Ns62NHTiEyjSuSJCi7UK7gwtZLz092ck+uDPR1A+giPKYB05qKCdOYu5ZIvColEMjpctqzVm4o+v+fv7l8vY/CwP3eCgaOqlrQMjXnLKybVvHLVOnM6c+WNnt7qezq6ZhZY1tJE2yO5MmGGz4Ta7EFyZyuneeo426zifKWcC20VXOCsFnlZDcjKEVDyEVLPYoX8vo0S0pnrh3tzqQMyv0QiiQlOPXPViB060b2IOdDTnLuYJ9W8cunLCsxo2TnmYO6eaxhH7+3oyljt888DMDnRJknGLoLREYCttQupXc2c7usX9kyu4IK0Gs7N7oQzA6GUokSnFUlnLkpIZ+5CIpoBKZFIRoZTz1z13qIv7P2b+1dLh+nQ2a3A0UOe1CUD5oNZNM6cOWbOFOLIbV3d3k91exYnMcsw6lUOMywLSrMP9vZ2dvY0ICtQLXK5ggttZVyYUsX5mXWcneOHPQPAoC18xhjSmYsS0pm7EOnMSSQxxqFnrHxv0Rf3/s39qyUMEXE1WzBwVNOSlhd4UgqqHN6GSf1fs1QEo25oAtCYq6/v9Vbd29E5rShoXZNoeyTxgRleE1pzD5I7WjnNW8fZwUouQDkX2Cu4wFUt8jObkJEjoBRgfDlA4+laEop05i5EziOUSOKAQ09feVPRF/a95P7V4ogdOtGzkIW/qzF/Rc20iucm9X/JUq5gZ47ZM8swj9zT0elc7/MvROJDX5IoIpja/dBbupHa3cQZ/hrOtSq4QCsTE5KqOD+thnNzu+BIAzAp/OdqIm7fuUT0QQDPApjNzKeHWPtrAD9m5pOjPHMSgNXM/Kfw46UAbmXme0ez70BIZ+5CpDInkcSJVD19xU3FX9z/t9r/XSQgIslVtVuBowdbshfaplU8d8ELwStNmWPmdCGO3tzV03tLd8+iFOa1iTZJMjyYEbSgNHthb29nV08DZxrVnI9yLtTLucBRyfnp9ZydG4AtE/K7ZTBGWgU7Ej4JYGf473+/3EJm/nyUzpwE4FMA/hT
e9yCAg1Ha+wKkM3ch8j+cRBJHUrW05TcV37H/pdpfRuTQBQNHbb60ZVMYYALo3PMqRtWYOF6ozO53eX3l97Z3TikJBhcl2h7JwDCjNxz27GzhdK+bs80qLlDKuSCpXBQ6qjkvuxnp2QylEEBhou29gomLM0dEDgBrAVwL4EUA/05EGwFsAdAKYB6AQwBuZmYmom0AvsrMB4nIA+AXAN4LoAHA1wH8AKE2Zl9h5hfCCtxjAFLDR36ZmXcDeAjAbCI6AuD3AA6H930fEWUCeBTAFABeAF9k5mNEtCW895Tw3z9l5v8e6hqlM3chMswqkcSZFM21/KbiOw68VPu/CwSsy89XZc9CZsPX7Zr8Tlp35cy+p0111GPDYgezd7ppvnV3R5fjOq9vIQFFiTbpaoUZzKA2P2xtXUjtbuSMvmpPrUxMSK7k/PQazs3uQWoaZNVwPEiL0zkfAPAyM79DRG1EtCT8/DUA5gKoB7ALwBqE1Lv+pAJ4g5n/hYieBfAdANcDmIOQg/YCgGYA1zOzn4imA/gzgKUANiPsvAFA2IHs4z8AHGbmDxLRdQD+AKDvBm8WQo6nE8AZIvoFM1+2yEs6cxcilTmJJAGkaK5lNxXfcfCl2l/OH8Khs1mBIwca8lcE+ztzhjb2nDmXZR37VLen67au7kUOGUaNOcwwLSjNvUhqb2enp4GzjKpw2LMiFPbMqOfsXAN6NoDsRNsrARC/MOsnAfxX+Oe/hB9vBbCfmd0AEFbPJuFSZ84A8HL45+MAAsxsEtFxnM9x1AH8PyJaBMACMCMCm9YC+AgAMPMbRJRFRK7way8xcwBAgIiaAeQBcF9uM+nMXYhU5iSSBJGiOZe+r/jOg1vdj8wTbCUNti4YOGZry/qYLfSZHGKsOHMqc/0Gr++d+zo6J08xgwsSbc94gRkeA1pzD1I6WzjN5+Ycs5ILlDIuTKoUBc5qzs1uRkY2QBMATEi0vZKIibkyFw5nXgdgPhExABUAA3gJoTGAfVgY2CcymZnDP4u+9zCzIKK+9fcDaAKwEKGZ96OdFR2JXRcgnbkLkcqcRJJAkjXH0vcV3Xloq/uRuYM6dOxZ5NfszYIUU+FQJayhEw+4Nh4w+6aYwcN3d3QmvdvrW6TIHKqICYc9W32wtXXC0VftyRWiUC3nwpQKLkir5ZwcD1JcuFpn745vtEmbX0qpeugmbwzP+CiAx5j5jr4niGg7ENUReGkA3GEH7zaEHEYA6EEoVDoQpQA+DeDb4fBrKzN3E9Egyy+PdOYuJJ6VNRKJZACSNceS9xXddegl9yNzLA4mD7BEt4wjlR3pM9uzOk7NBwAjAZ9kTku8/Ymeno7PdnYvdDKvjr8FY5t+Yc+2Nnb11nO2UcV5CDe5Ta3mvKx6zs4xoeUAyEm0vZKEkYpQAUCs+CSA71/03F8B3AWgPEpn/BzAX4noVoRCsr3h548BsIjoKIDfIVQA0ccWAI8S0TGErv+20RhA59XDIRYSWQjFi3UAQYSS9X7CzGMivBEN3JtL9wNYlmg7JBIJ4Ld639pa+8jsAR06Sj04MbDEM+/UbzcCwK7ZtP2/PqhuiLVNCnPDWp//nfvaOyfOMM2rNkGeGd0G9JZupHS1cJq3lnOsSi6gci5MqhAFrmrOy2pFWjYwQplBcjUxpeqhmyoTbcSVznDuZ33MvAgAiCgXob4pLlzUr4WINGa+sno+nWcgFUAikSSAJDV18fuK7zq8tfYXMy0OplzwIvcubE+bcK5fUyDiORIjgNk/yQwevrOzy3Zjr/cadRx3rWcGC1CrH/bWTqT2NHKmv5rzRIUo0Mu4MLmKC9JrOSenF8kuhD7/JZLRkjr0EslQjCg4wczNRPRFAAfCPVFuA/BhhHIaVCK6CcDzCM2H0wF8g5mfD/dieRnAXgCrARwA8FuESnRzAXyamfcT0XKEKk+SAPgAfJaZz4z0IofBoEnXEokk/iSpKde8v/iuIy/W/mLGRQ6d7hdlhqXoPlWYyX4
boq4ApQpx4p+7Pe2f7+pakCZ4VbT3jzfMCAShNvciqaONXZ56zjarOA9lXGir4EJHNedl1nNWblCGPSXxReZCRoERZ5owcwURqQg5YQCwGMACZm4PV3h8KJzMlw1gLxG9EF43DcA/A7gdIWfuUwiV6P4TQs34PgjgNIB1zBwkoncD+C7CJbwxRjpzEskYw66mLAo5dI9Mt9g8dxcfNE44WrPmn8xreWtJQI+OM6cwN6/y+U/e19FZPNsw50Zjz3jAjK4A9NZupHS2cLqvlnODlZyvlHNhcrkodNVwblYbXFkAFQMoTrS9Ekk/pDIXBaKZNvwaM7eHfyYA3yWi9QiV8k5AqE8KAFQy83EAIKITAF4Pd1zu37MlDcDvw833GCF1Lx5IZ04iGYOEHbqjW2t/MTXIZuhOnr0LG7OWvZnX8hYCOikj3pzZKA4GD93Z2a3d5OldrAIbo2T2qGGGEKAWH+xtnXB4GjjTX3M+7JlSxfnptZyb60VSGuLXgFUiiSYpQy+RDMWInTkimoJQ/5Pm8FO9/V7+NEIy/ZJwc70qnHeU+vdPEf0ei372fBvAm8z8oXBodttI7RwmkcyHlEgkCcCuJi98f/Fdx16s/cWUsEOnddjbTADw6xi2M5csxOmP9Hia7ujsXpAuRNzDqOGwZ5MHyR1t7Oqt42wz1OS2wFbBhc4qzs9s4MxcC2oezt8MSyTjDdlVIwqM6B+RiHIAPALg/4VVtYuXpAFoDjty1wIoGeYRaQDqwj9/ZiQ2jpB4KYASiWQE2NTkBe8v/tLxF2t/MSnIhtNLDXmmltzltwXUod8NEHPLcn/g5FfaOwvnGcYshMbmRB3B6DKgN3chtbuZ0721nGtVcoFaJgqTKrnAVcO5Oe1wZSI0e3FiLGyQSK4QRq6qS84xHGcuOTzuoq81yWMAfjzI2scBvBgOnR5EKAduOPwAoTDrNxDq0hwv5B2CRDLGsalJ899ffNfxrbW/KDHZt7A5Y+p+v+3k4CPAmM3CoPXWHZ1d9E+e3sUaMOIWJuGwZ7MP9vZ2dnY3IjNQLfJQzoVaBRemVnJ+uptzcn2wy7CnRBIZ0pmLAhH3mbsacG8uFUD0q+IkEkn0MUXgxIu1vyjK6Obdr19zpuDQdGVR/9eThDjzQU9v050dXXOyhBhyFiczfCbU5l4kd7RymtfN2UYV51MFF9rLucBZLfKyGpCVI6BEpAJKJJKI+ETVQzc9kWgjrnSkEhXGvblUgXTkJJIrBl2xz31/8V0nXq76X2dAD6VIEHPbEn/gxH0dnXmLAsZMADMBQDA6ArC1diOlq4kzfLWcKyq4QCkXhcmVXJBWzbnZnXBmIJQSMty0EIlEMnLkzVEUkM7ceeQvlERyhaEr9rk3TrrjxAHz3898vMH+zlqvldLOaWIHsiv+KHJrazk32c3ZrgBsfQVYA4Y/NUJrNjyt8bVeIpFYUK7UIQNjCunMnWfcjCWTSK4WGMylzv31Qft7xKTT+7neM63XcORqTs2nzFeD6gK9uxu60QHNRkJTNUshW4BMLYCgalDQHoSVbEGkCnAaABdI5u9IJHFGD7WelYwG6cyFKXponRXOmZMf5hLJFYAFYTxj23fQVnRMey39/vlLuyuaJ9qeYmV/lj/DtyKlvmAt99jMVBGs7BFmtY1FVwnA+cmqoyVbz2x16Vk9Llu2x6lntqRqLptNSU61FEoKUNDmJ9PnI8PrpYDhJSPopYDlg0F+MhUDps0gy27BSrbATga7QLKLvUQyQmTifhSQztyFGJCNgyWSMY+BYPeT9t3lfjJXO/Os1znZNvNH/DH8wSxvmLXixMJ/TTFOve+VXcoSt17SULjuTG3RB9WAPSObuafJNMqrWs1yozlwMhMcuAbABZWwBAoma65mp54RTNOzA1l6lpikZ1KK5rLb1RSXRloOQNnUryeTgDD9MDv9ZHi8ZHh8MPxeCpheMkwvBeCHAT+ZmkFB3YSVFFIDhZOBdJDsbym5qpFRsSg
gnbkLkc6cRDLG6UWg+Sn77o4giWvsdk9Duz09FQCsCSmNn6n51/Xb6f79v+9p2PjrTa5dD5Kr7Z4XXrdW7319WVBL6a0tura8rnBtiulYtBxEKrNliGDdCWGebRNmjcais4TBE7zBrkJvsKuwyVc1oA0K1ECq5mpy2rLaXXpWr0vPNp16hpKiOZNciitNJa2IiDIiuR4TltcPo9tPZk9YDQx4ETC8ZFg+CrCPTCUAUzMoaAvCSgmHhV0A0mRYWDIOkMpcFJCtSfrh3lzaAmDIFgYSiSQxdJCn6hnbfpWJiwFg2vS9218p2ECv0ab1MKx2+5uNjhQEggftd1Wnkn92k6o23VyYV41uteCBZ62akmasJEAN2FwtNcXXn2zIX5kZ1JLnoZ/KxqKn0TIrqoRRFhBWYwY4MAMjuMlTSfM6tPQmp57VkWbL9rr0rKBDT1eTVUeyTUlKV0jLJSLXSP8tGMwGgj1+Mrt8MDw+Cvi9ZPi9FAh6ybB8CJCfTCWAoG6G8gNTLLAjHBZ2jvRciSTKfGTLli3PxGpzIvoJgGpm/mn48SsAapn58+HHPwJQx8yD9c29eL+NAAxm3h0jk0eEVOYuxEi0ARKJZGAaqOPkS7a3ckHnb7hycytza3FLCwDApmZysrrX57OvfFfg4cyd9nsb8iyr4LXa+rzfpLl2/dvtaXOLWlDzwLNWQ2F796rp5X/dML38r/AlZburSm4ob8pdkidU+yxSnPmafWE+7AsBAMyWycH6k5Z5tkWY1TqLzokAFw1lr8XBlC6zdXKX2TrZ7T0z4BqdbN0OPaPZqWd1ptmyfC49y0rV0rVkzZGiK/YMBWo+ESUP9F4CkR26y866Kw0pw9I3BEQwgGCnj4weHxm9PgT8vWQYvlBoGL5zYWGzf1jYEQ4LD96gWSIZPoGhl4yKXQA+BuCnRKQgJNj0v4laDeD+Yey3EYAHQMTOHBFpzBzTql3pzF2IdOYkkjFIudJ48E39xGwQUvueS0nprFLV4Oxm5PX0PWdNdSrK251oRGbeR40tZ561fctJBMfnurrXvN/T23xzYV7L/Xdoq6c28Nn7n7Xac7uwItnfWjT7zONFs888Dk9qYWVVyY3VLdkLSljRJwMAkaqTXjxH0YvP2cOip8kyKyqFUe4XVkPmSNU7kw1Xh9Hk6jCaLpxu3Q+bktzu1DNaXHpWl8uW7XfpmSJVS9eT1FSHrtgyCUoeEQ0r706BoiXDlp3MtuzhBrmCsHx+mF1hR9DnpYDPC8P0UsDykcF+MhR/KCxsD55zBM+FhWULKMnF+GO8/24APwn/PBfA2wAKwmkQXgCzATARbQfgANAK4DPM3EBE9wK4E6GpVycBbA4/tojoZgD3IDTh6hGcH8v3FWbeRURbAEwFMAVADRGdCa+ZEv77p8z839G6SBlm7Yd7c+lphJuMSiSSscFRtWrXAa18OejC2cmzZu3YlpNbvfEz+HO5SbapAABmy/5qfRsBuQDwEWXHgYf1RxZTPyfiN2mu3f+VkTabiTJm1/DJ+563vJkeLL343C7X5DOVJZsaOzJmTWVFvawSF1LvGs5eqN6JIdW7KMFJqqPFqWe0uvSsbpeeFXDqWUjV0/QkJcWpKbYsAuURUUIdqX5h4e5QWDicH0iBoBeG5SOD/GT0Dwv3VQs7QRhxOFoy5lm3ZcuWnbE8gIgqERrjtwmh4QATAOwB0AXgYYR07Q8wcwsRfRzADcx8OxHVA5jMzAEiSmfmzrCT5mHmh8N7/wnAz5l5JxFNBPAKM88Or3s/gLXM7As/fg+AawE4AZwBkM/MZjSuUSpzFyKVOYlkDFGqndp+Rq1fD7p0OktWdu1EADChF557kkgVWfZTalsgFwD+KtYvm29V7PiM9ur6viWf6+peHVbp9p+aqC2/8x4NC8vFsXteFMLlw7mRYGndlTMXHf/5TAa4PWPW29UlN7Z1pk2dA1JyLrYlpN4VzVH08/4bC09zSL0r8wmrMRPsnw5gwJD
pKCG/5cn1W57cFn/twAtAVrLmbHBqma0uW5YnTc82nHoGUrQ0u11NdmmkZwOU079CN+pGjj4s3OUPh4W9MHyhtjGBoJcM7qsWDlBQNxFMsiBS+oWFZVHb2CbWyhwQUudWh//8GCFnbjVCzlwdQk7Wa+FffxVAQ/h9xwA8TkTPAXhukL3fDWBOv/86LiLqa1X0AjP7+q19iZkDAAJE1AwgD4B79JcnnbmLkc6cRDIGYDD/XT+8o17t2DDQ605nyxlFETPbkdkMotz+rwVnuIrUPS3nHm8Jfmb9XKVqxzLlnXMOXa5l5b5aW5/7aJpz908z0mcfnaos+PxXFKw4LQ7f+TehpQYwv28tAZTVcXpeVsdpMMhqyV74VnXJDd4eR/E8EKUPdg2kOHI1+4Jc2BeErolFkIP1pyyzrFmYVTqLziJATBzs/dGEwao32F3gDXYXNPmrBlyjQDFStLQmp57ZnmbL9rj0LNOpZyjJmjPJriS7VNJziCgrHvYOYJuWDFtWMtuyMoYfFvb7YXb5yejxkuH1keEPVQuHwsI+MpQATNWgoN0MNZFO6RcWlt+RsccbhzN2IeS8zUcozFoL4EEA3QC2AZjAzKsGeN9NANYjpLD9HyKaP8AaBcBKZr7AKQ07dxcnT/TPD7QQRR9M/qJeiHTmJJIEIyDMZ2z79ncq3gEdOQCYWHKsAcBMN4qbEA6p9sEu21RW6QRZPLfvuY8b31qz237PwXzquCCcentXz+r3ebzNNxfk7WvQtRX7ZinX7JulYP1xceDzrwhnkolZ/dcTWM1tPbI4t/UIBKlGY96yAzXF7za8KfmLQJSKy0CkaKQXzVb0otnnbBWeFsusrBBGmV9YDWlg/wwAKRH9Q0UZAWHzBDuKPcGO4gZf+YBrVNJ8qVpauEI3q9elZ4crdJ3JNiUpTQ1V6F4yLi2RaFCTHFCTHJyUN9z8wFBY2OjywegNhYWNgJcCpheBcFjYVAIw9VDbGJFiQTgY7ERomoic9R0Z8XDmdgP4KoAKZrYAtFPoRmwugDsA3ENEq5h5DxHpAGYAOAWgmJnfJKKdAD6BUE5dDy4soHgVody5HwIAES1i5iNxuKYLkM7chcTjl0oikQyCiaDnSfueMz4y1gy+ijkjo2E6AFRhcs9AK6zi1DatynPusYCivivw8MyD9rvOJpMxvf/aXMvKfdV9XqVjoowd85VlO+YRv+ct3nvrGyLbFsS0i89Q2LIVNu5dVti4F5Zi89YXrN5TW3St4k8pntwAAAAgAElEQVTKWgSiiCo+SXHkaPb5ObDPD1+ZCLLVcMoyylqEWaWy6CgCREkke8UDi4PJ3WbbpG6zbVLdIJ+WGtl6HHp6s0vP6nDp2T6XLctyXFihm0dECXFYh4sNmtPGmtM17LAwW4GwGugjw+MNNZEO+ELTRDhcLayGmkgHk4MQyf3CwrEIxY9l4vG9exyhKtY/XfScg5mbieijAP47fCOiAfgpgHcA/DH8HAH473DO3IsAniaiDyDkxN0L4H+I6Fj4vTsQKpKIK7IAoh/uzaVPQg6Jk0gSgheBlifte1qDZM2+3LqMjLrj8+a/MR8Afob7t++ltZcqeKbosr/RYKeLKkyLqblum+1+m0p8Sd4bADSravPNBXmVDbq2ou85Yhbv38d7P75dFOoCk4a6DlNN7nIXbXi7rnC93bC5FoFoVDfNLHpbLLOiUpjlXhGsTw/n3l1WBRzr2JSkDoee0ezSs7rS9OyAU8+0UvV0PVlNTdUVewZByacIHeLxhgUR8MPo8pHZ46OA10uGz4eA0UuG8FFAhJtIh8PCwb5qYSdCjuCVKNCkbNmyxTf0MsnlkM5cP9ybSx9BSHKVSCRxpJN6q5+x7YMgHlKFWrjw5R2utJb1APA1PLyzhiavHWidrbRpt+INrr74+ZXKiRN/1v9zCl1GAfltmnPXTzLS53C/KQ6K4OBHd4q9H9rDk1SBiCpVDd3ZWlP8rpP1BavTg1rKfEShwIB
ZWGw1lllGWZMwK8ecehclOElNbXVoGa0uW1Z3mp7td+qZnKql2ZLUVEe/Ct0r0XmJGQaCnnAT6V4fBXr7wsI+GJaXAuQnk8JhYXs4LJzKofzARIWFA1u2bJEFKlFAOnP9cG8u/S6AryXaDonkaqKROk9ttR3KBmFAtexCRHDtusc7Kdw4+A787qiHnAsHWqk0eA/ZjnUsGei1W9VX9vyH9vuVdJkvsBZVabm5IL+ivp9KBwCaxcYnt4m97z3AM1RG/tA2h/DZMxuqS97zTlPushxLS5oT6fsigYW3zTIry4V51iuCDWlg3wxc4epdBIgU1dnk1DPbXLasHpeebTj1TKSGZug6+1XoypFnQ8BgEQoLmz0+GB4vBXxhR7BftbChhtvGJAXPVwungUaV49m8ZcuWvKhdyFWMdOb64d5c+iBCPWckEkkcqFCa3npDf3t6pOOlcnIqD86avfNcEcMteLJBkFow4GJmYX+1vomAAV//kf6L7R9RSwctsujjdy7n7h9nps9iosz+z+tB9t/2D7Hv3Yd5rjLMMYC9KXnVVSWbqlqyF04Qqu2SfLzRElLvmsos42yTCFYpbLUXAWJStM8Z6xAUM0VzhSp09SyPy5ZtOPUMSlFdSbZQS5YcIpIjHEeBBWGEq4W7Q9XCFzSRFn3VwoHQbOGkICyHADsQCgtXbtmyRfZ2jQLSmeuHe3PpZwE8mmg7JJKrgWNq9e79WtlSECKeXnDN4hd3OhydawEgCM24DX/RcBnlRX+rbZva4t842Ot/s23eOUepGTBM25/BVDoASDLY87lXxKH1b/NCAgZtVTIY3c6JZ6tKNtW1Zc6ZwooWs1YlLLztwqwst8yyXhGsTwP7piNUnXdVo5DqT9XSm1x6ZrtLz+512bKCTi1DSdacyTYlKX0sVuiOF0xYOyY/tHHIGyrJ0Ehnrh/uzaUfwOCNASUSSZTYpZ3efkqtG7AZ8GCQEvSvWfNng8LTAGpRXLWZfjrpsu/xmFX2Xc2DrtEQNPfb7z6RST2LBlvTn9+7nLt/NIBKBwApfu6662/iyPIzvJgwskH27ekzTlSX3NjakT59JkiJOIQ7EphZhHPvGkWwUmGrYwJgTY7lmVcqGukeh57R5NQzO9P0bK/LliVStXQlWXWk2JSkdIXUvH6NYiWR87eih9bdlGgjxgPSmeuHe3PpWgClibZDIhmvMJhf0Y9sd6vtG4f73oKCM3unTd+/su/xLqw99HO6f8CcuP7YX68/RkFeMNjr6ejp2G+/u9NGwYgcmRZVabmlIL+8TtdWDvS608vtX35RHF9UwctohD3jGCRas+Ydry65oavbOWkeBnAeYwELX4cwK8tC6l1dGtg3DSN0TK82dMXe5dAyml22rM40Pdvn1DOFQ0/XklRHqq7YM5XQDF2Z7H8hjxU9tO7WRBsxHpCVQBfSnmgDJJLxioAIPmvbv69D6d04kvdPKDp1wZ1nFaYMMpr+QqwSR7dWPmA7OgBAJ5wZm4zvdb9m+9d2hXhIpynHEjkvu+tzBlPpelIo83sfVzdk9HDzfc9bB2fXYgUBw2qzQWAlp+34wpy24xCkmE25Sw/WFL870JtauABEMXOuSEnOUO1zlqn2UH1GSL1rOmuZZY3CrCS22gvD6p1siHsRpgikdRiNaR1G46Br7EpKq1PPaHHpWd0uW7bfpWdyqpZuS1JTHJpiywy3ZLmavpdbhl4yOojIQqinXB8fBPAnZl5NRJMAbGXmeaPYnwH8mJkfDD/+KkL967Zc5j13AvAy8x9Geu4le0pl7jzuzaX5OD+TTSKRRAkTwd6n7HtOecm4ZKB9JKiq2bNq9V906jdn8yF8c/txWjR0vk1Q9Nhfb1CHUsmuU946+hv94dk0jBy+oVQ6AMjp5Pr7n7PKpzZgFY3yBtpSNH9D/uojtcXXwZeUvQgJUHpY+DqFWVVmmWd7RbDeAfZOx4Ud8SUjRySrjhanntnq0rN6XLbsQLhC1xaq0D3XkmW
8VOhuLnpo3fdjeQAReZh5wBB4lJw5P0J+wzJmbo3EmbvMXhozB0dix9V0BxAJbYk2QCIZb/hgtD5p391skjUiRw4ACgpPHyPCBVMhmpAfmdqlKU52aDvJE7xsocMbYvHCh4Mf2/kv+pNDFkT00afS/cHl3P3wILl0LelU+PXPaIWFbVz9wLOWu7gFqyg0z3HYqCKYVFS/Y2VR/Q4EVXuPe8KGQ3UT1tsCtvRFCI0hijmkJKer9tlLVXuot3NIvWs+a5lnG4VZBbbaCgFrCqR6NxIUn+XJ81mevGZ/zYALCBRM0VwNTj2zzaVnedJs2YZDy6RUzZlkU1NcGmnZAGVTFHoaxoGmRBw6kINHRJ9BSLVLBTAdoc4WNgC3IDRT9b3MPFD0LgjgfwHcD+D/XLTnJISKKrMRUiE/y8w1RLQFgIeZHyaibQCOAFgL4M8AfjSia5LK3IW4N5d2Qd5lSiRRoYu8tX+17bUE8aTR7LN8xdMH7XbfhXNV8fiZACVF1NZAafIdsR1pj6jI4Zf6j7bfoB4adoVda1ilc19GpQOAkiYuf+BZqzm/AyspSg6Poae21xa960R9wWqHqTsWXq7CNx6w8HWKYFW5ZZT1iGCdE+ydBkBWhMYJBWogVU9rdOqZHWmhCl3ToWUoKZoz2aYkp6mk5VC/htgJZFPRQ+tejuUBF4VZK5n5Q33OXH9lLuzMfQPANQhNjikD8G/M/AgR/QRANTP/dID9PQAKARwDsBDAFxBW5vpGfzHz74nodgD/xMwfHMCZO8nMXxrNdUpl7lLcAKLa0FMiuRppoq4zW20HM5iQO5p9dN3fZrP5LnHEArBHXO0pcpMWMqGWGMVDrb3DfGD9G/TV3VOUhkumR1yObEvk/N1dn/OYy7n7h5npM5koa6B11Xk09b47tanT6vjM/c9ZXTndWD6ccwbCZvZmTq18Yd3Uyhfgt6U3VU+8/nRj/opsS0ueO9q9RwIpyemqbfYS1XZOvWO2mssts7xemBVSvYsxApa9x2wv6THbS+pRNuAalfReh5bW7NSzOtJs2V6XnhV06OlqsupMtilJGQqpuRTD/MwwgycYRg8fM0d0IwfgTWbuAdBDRF0AXgw/fxzAoEVUzNxNRH9AaE5r/9FkqwB8OPzzYwB+MMgWT0Ro36BIZ+5SqiGdOYlkVFQpzYf/oR+fChq9yl1UdOIkEdb1f64Hzk4QRd7TjYhEbnK52uQb0pkDiDYZ31t8wH7X2y7yDTuX5pbuntWbentbbinI33s5la5sAs28+24Nc6vFiXufF/6MXgxZmRsJSUZn3syyp/Jmlj0Fb3JObVXJjRXNOYsLhGqbEY39RwIREWl5UxUtbyqSQz4yC3+XCFaVhdU7B7h3OqR6FzcsNlO7zNbJXWbrZLf3zIBrQhW66c0uPavLZcv2ufQsy6Gla0maI1VX7BnhCt1Bx+JFQP0o3hsLAv1+Fv0eCwAaERXjvIP3CDM/0m/9TwG8BeC3Izg3omKuyyGduUsZOFFBIpFExNtqzZ692tklw2kGfDny8ssucQjrUNSAYTboDU53TVOafBxJaDMAW9J1gR/l7bHf49bJimgOa3/6qXR7fpiZPmMwlQ4ATpQoc++4V8HiMnHs7hcFnP7BFYDhkuJrKZ5z+rHiOacfQ4+jqLyqZFNta9a8yaxoCZ/lSkpSmmqbtUS1zQLQp961lFtmWb0wK8FWa0FYvRsvyf5XHKEK3aa0DqNpUHfDrqS0Oc5V6Gb5XXqmSNXS9SQ11aGfr9AdKJ8zgDhUs0YTZq4FMKDKx8ztRPQkgM/h/PCB3QA+gZAq92nEsPWZdOYupTrRBkgkVyp7tHe2n1Br14Gi8wVst3vqNc24xLmpwuTO4e7FqVoRdOUITBFRyKUV6TkfML5d/pLt611EI1OMbunuWRVS6fL2unX9srl0b01TFnzufgWrTolDd/xNJKUYiGqI1OlxT51/4ldTAaAzbeqpqpJNze0ZM2ZgsHFocSak3uV
OVbTci9S76jLLONun3k3DCKZsSGJHQHizAgFvVlugbrAlnKQ6mp16RqtLz+pOC1foJquO7rk//vB4S9r/EYAv93t8D4DfEtG/IFwAEauDZQHERbg3l34KwOOJtkMiudJ4VT+6rUZt3RjNPadN37O9oKDskmKEX+LubTvoumGfpVb07NLPdq8ZeuV53qfsOfQz/WcLiUZ38/tHl3PPD4ZQ6fqz8ag4cPtrwpVkImazKxngtsy5x6tLbujsck2eA1LG9JzSkHrXWinMsnrLrGC22vKB4FRI9e5KZNuDT2y9NtFGjBekMncpMswqkQwDARF8znZgb7vi2RjtvXNzqwYsnqhDsTqS/ayS1Gu0s93dNIyK9a1i1ZJ5VuWOO7Wt60dyZh83d/es2uTpbb2lMG9Pra6vGmr9toXKsm0LiG88xHtvfkPk2CxMHc35A0EAZbefWJDdfgIMsppzFh+qnvgen8cxYT7G4DzSkHqXM0XRcqZoyaF/QuZAtzDPqXcpYM80AGOhUlNyeWQULIpIZ+5S5C+YRBIhJizv0/Y9J3opEHFvtkhJSemsVNXg7IFea0FO6og2VZUUduml1G2uG3rxeR4Kfmr9fKrcvkY9Maqh4FlCZP/N3ZAdsUpHRC8vpZWvLCHxgT2862OlokgTiEm+G4HVvJZDS/JaDkGQFmjIX7G/pvhdQV9y7iIQjWgsWTwgsrtU24zFffUdzMwsWiuFUVZnmZXMVmseEJwGqd6NNeR3bRSRYdaLcG8uVQH4IR1dieSy+GC0P2nfXW+SNeLu6Zdj1uwd23JyqjcO9NqteKLaopEl8Sut/uO2Q23zh/s+ghA77fcdnEBto24lAgBtitJ6S2He2UhUuj4UwcGPlYo9H9jDU1VGYTTsGApLsfXWFa47Wlu0UQvYMxaBKCqFLfGEOdAjzJoyyzzbJUx3KtgzFUBc5t1KBuX2B5/YOpLKT8kASGduANybS6uA2Nz9SiTjgW7yup+27TUE8ZRYnbFm7eOViiImX/y8gGLdgifFaCYe2F+tqyTGJXsPRQr8vQftd9WmUGDWSM++mMddjj3fz8yYzkQR56tpQQ58+k2xb9MhnqXw6Pr4DQdTS+msLbr27brCtSmm7lwIohGFu8cCwmqtEka52zIrBFstferdFXs9VyDvevCJrW8k2ojxgnTmBsC9uXQ7gFHlx0gk45UW6j77gu2AkwkRN+0dLk5X8+lFi14Z0GFqRH7dg/Q/E0azv368fZta79s4kvcWorWh1H4fqcRRu/42RWm9tTDvbM0wVDoAsJns++xrYv+1R3meAkRUWBEtAjZXS03x9Scb8ldmBrXkebgyxkcNCrPhEWZ1mWWWdQqzNiWs3sX13/QqY9qDT2wtT7QR4wXpzA2Ae3PpHxCaxyaRSPpRrbQceU0/NhkjbNURKXPnvb4tM7N+40CvHcTywz+hf7tmVAf4gg32HU15I52RupTOnHrK9h8TiTCy3L1BeNzl2Pv9zIxpw1HpACA5wD1feFm8teYkL6IENN71JWW7q0puKG/KXZInVHvUVMtEI6y2amGUuS2zwpLqXVQxAaQ8+MTWEQ2Vl1yKdOYGwL259P8A+E6i7ZBIxhInVffe3dqZa0CIbMD9iGFeu+7xBiIeMCfsaXx857P0sVEXXNjfbDhIhlg69MqB+YT6xr7vab9eRlHqqdfHSFU6AHD4uPOul8TRpWd5CQGOod8RfTyphZVVJTdWt2QvKGFFH3YoeyzDbPQKs+bsefWuZwpCQ9Qlw+P0g09sHbC4STIyZJL/wJxItAESyVhin3Z2x3G1Zm20mgFfjoyM+uNEPOgUhGpMsqJxTnCy09DPdI34/X+xrlsxnyq2f1p7Y1QVrheTJUT2S+6G7JGodJ5kSv/hR9UNab3ces8L4tD8Kl5OwGjGLQ0bR2/95HknH50MAF2uyWcqSzY1dmTMmsqKOuxJGmMNIluqapu2SLVNO/ecsNqrhVnmtowKi62WXMCcBvndOhS
nEm3AeEMqcwPg3lw6HcA7ibZDIhkLvKYf21attmyM13kLF768w5XWMmjO6lfxX7sbqGj1qA+yOGD/R72PRjlR4FnbN3dco5THJMe2XVHabi3Me6d6BCodAGR2c9N9z1tnZrmxkhCd8WojgQFuz5h1orrkxrbOtKlzQEpOomyJNSH1rvasZZ7tFMHaFIieyQDG7fWOkO89+MTWryfaiPGEdOYGwL25VEFoEl1Som2RSBKFAFsv2A7sblV6htWTbZSnBteue7yTaPDQ1efx2AkfpURl1JVtX8sOpdMYlSOmwgrutX/5aA51LYmGTQPxJ6djz0NZGdOYaEROQW4n193/rFU5pRErKcGqEYOsluyFR6tLbvD2OIrngWjcj+cSVnuNMMtrLbPc4mBLDmBOx9Wt3t324BNb/5BoI8YT0pkbBPfm0sMYZKCuRDLeCcLyPWXfe7yX/FHpqRYpOTmVB2fN3nnZPLab8VQrR2nsFLUHTtoPtM4Z7T5O9HYdsH+pNYnMqE9p6GO0Kh0ATGjl6geeteqKWrFypMUf0USQajTmLTtaU/xuw5uSvwhEUS0oGaswm15h1p4VIfUuiUXPFICvJvVuxYNPbN0fywOIiAE8zsw3hx9rABoA7GPm9w1zr0UACpn5b9G3NDpczXcGQ3EC0pmTXIX4YXY+ad9dY1Awro4cABQXv+2/3Os+JHmi5cgBAGfa57CCMhKYNvTqwelBatoNxvd73rQ92KJQbL6UM4XI2upuWPUXp2Pvd7Mypo5EpavLppIHv6CVTGrk8geetVryOrGCgIS1FFHYshU27l1W2LgXlmLz1hes3lNbdK3iT8paBKIYF9okDiI9RbVNWajazrdpFFZHbVi9MznYnAOYMzA+v6MZ8cmZ6wUwj4iSmdkH4HoAdSPcaxGApQAiduaISGPmuFXrSmVuENybS78G4LuJtkMiiSc95Gt42rbXa5GImcI0GKQE/WvW/NkkgnOwNeWYevZb9IPp0TxXO9GxXXN7o1LEsFY5fvwx/XvTiWKbohFW6c5U6/qocgdnuPn0/c9ZPVk9WBYt26KBqSZ3uYs2vF1XuN5u2FyLEFJVriqYTZ8I1p4VRlm7CNb0qXdxaxAdQyoffGJrzJqN90FEHgD/DeAtZn6aiP6AkEizDsA/ATgDYDUztxCRglCe/CoAGwH8OwALQBeAdwMoQ6iQqA7A9wBsBfAzAPMA6AC2MPPzRPQZAB9GqJJcRWhk2TPM/FzYpscBPMnMz0f7eq+6/yDDQFa0Sq4qWqmn/Hnb/hSm6A90j4T8vPIjRFh5uTU1mNQR7XOD01xzVLc3GI1csp1i/vzvBG/e/Q3tj6uIYqd4hVW61aNR6QDgnSKaddeXNcyvFG/f+4Iw07wYXf++KKFbvrTJ1S+vmVz9Mgzd2VpT/K6T9QWr04NayvwrvTlxpBDpyao+ZYGq91fvOuuEWV4dUu+a+nLvRjwJJUEcj+NZfwHwLSLaCmABgEcBrGNmQUR/BPBpAD9FyGE7GnbsvgXgBmauI6J0ZjbCzy1l5i8DABF9F8AbzHw7hXI+9xPRP8JnLgawgJnbiWgDgPsBPEdEaQBWA7gtFhcqnbnBkc6c5KqhVmk79op+ZCJodJWdo2FC0ckh11Rhii/qB9vVHCSp++G3ohJW/o313tULlPLtH1D3RLVlyUB8osez8j293rbbCvL2VNlGnkt3fLIy7wv3KVj6jjjypZeE4vBj0NYw8cZm9mRPq3hu/bSK5+CzZzZUl7znnabcZTmWljTqXMcrDUVNn6CoSyZoSaFaG+agXwRrTwmjrEMEa+wsuicDnJdgM4cibs4cMx8jokkAPolLQ6SPAngeIWfudgB9c2J3AfgdET0J4JlBtn4PgH8ioq+GHycBmBj++TVmbg+fv52Ifk6hm62PAPhrrEKv0pkbnEoAPsS5R5NEEm9Oq3X7dmqnFyLGocHLoapGd1KSZ8gcVTeKY3J+cIpT6Cc7o7bffeY9G2ZR7a6
ZintN1DYdhEwhsl6sa1j1hNOx9z9HodIBwMEZyqLbZyhYc0Ic+uLLIjnZwJhymJID7QWz3vlLwax3/oLelLzqqpJNlS3Zi4qFqidETU40RFqSqk9eoPbrzSysrjphllULs9wUweZswJiBsaXexVOZA4AXADyMUPj03Hg2Zq4loiYiug7AcoRUOjDznUS0AsBNAA4R0UBV6gTgI8x85oInQ+/rvWjtHwDcDOATAD4blSsaAOnMDULRQ+uEe3PpKYQkU4lkXHJQKy89olatBiV2RFFh4ZljRBhyqkMz8lJicb41IWWJdrKzjaI4i/N9xneXHbDfdTSdehdGa8/L8fGQStd+a0He7irb6HLpds1Vluyaq+C6I2LfZ18TmfYgopqnGA1SvU0lc0/9rgQAup0Tz1aWbKpvz5wzmRVt4lDvHc8oatoERV0yAReod+7TwjjbHlbvJgHRmys8AuLtzD0KoJOZjxPRxote+zWAPwJ4jJktACCiqcy8D8A+ItoEoBhAD3BBLu8rAO4honuYmYnoGmY+PMj5vwOwH0AjMw8dfhgh0pm7PIchnTnJOOUN/fi2CrV5Y6LtAICCwjMRqYI9cGXGxACFdJFhe1vtMKIWGjWh2a4N/Kh4v/3uap2skmjtezkyhMh8sa5hdTRUOgB4Y5Gy4o2FxDcd4D2f2ibydQtjcjyXq6dm+sK3fzkdANrTZ5yoKrmxtTN9+kyQkkinZUwQUu8mzVf1SeeeE1ZXvTDLq4VZbgirKQtszEB8mkr7EeeG/MzsRqgQYiBeQCi8+tt+z/2QiKYjpL69DuAogBoAm4noCEIFEN9GKDx7LFw8UQlgwHYnzNxERKcAPBeFyxkUWc16GdybSz8P4FeJtkMiiSYMFi/aDu5sVrpjMrVguOi6v23FyqfSiC5/c8kA34yn/SCKSeoDdRpn7PtaZkZ735lUU/l32+YMJc75iB2K0n5rQd7p0ap0fSiCrQ/u4b0f3SkmaiJG8e4owiDRmjX/WHXJDT3dzpK5IIrNjcA4gDkYEMG6d86rd10lABfE4Kg9Dz6xNSq/j9GAiJYC+Akzx6wxOhGlIKRGLmbmkc8PHAKpzF2efYk2QCKJJkFY/qdte494FP+YcOQAoKjoxAkiDGlPBzKbQRSz5G5Ot81klc6QxVF16M7wxMl3mvcf/qX+k3lE8ctd6lPpnnQ69n4nCiqdUEh9Zg2teX4lmR/fIUrfv4+nqYxYfOFHBQIrOW3HFuW0HYMgxWzKXXqwpvjdgd7UwgUgGrT9zdUIkWZX9ZL5qn5eQGaru8Eyy6tC6l1jZli9G23vv5g2Ch4ORLQZwF0I58rF6Ix3A/gNQg5jzBw5QCpzlyU81qsTGLzvlURypRCA2fWkfXdVgIJxyeGKlJWrnjiq68aQNh3FouM/oG/Oj6Ut2qnO7VpNb0yqUL+iPV36Fe2ZOI5GO0+HorTfVpB3ujJKKh0AaEEO3PKG2HfDWzxb4Stn9qilaP6G/NVHaouvgy8pexGI5NjGCGAOGiJY944wy9qEWW0Lq3eFw9zmUw8+sfXPMTHwKkc6c0Pg3lz6BoBrE22HRDIaPPA3PGXf47FIjKlEdrvdU79s+bMFkfRkewEf2vUE3Rzb6lDDare/2eiI1VD6R/UfbL9OPRLzliWDES2Vrj82k72fe1Uc2HiMFxCQEa1940FQtfe4J2w4VjdhvS1gS18EorFU9TnmYdHdYBkV1cIsCwirKRMcGEq9m/bgE1vL42Xf1YR05obAvbn0uwC+lmg7JJKR0kY9Fc/ZDtiZeEKibbmY6dP3bM8vKIvIufkZ7t+2l9ZujLFJsO1o3Kv4rMs2Lx45zNtt9+8rUZpjtP/QxEKlA4BkP3ff8XdxeNVpvoYAVzT3jgeGntpeW/SuE/UFa5ymnroAocR2yTBgtgwRrDsrzLJWYVbrLDpLgHOfO20PPrE1aqP4JBcic+aGRubNSa5Y3Erb8Zf1I0WgsamY5ORWRZwDV4+iuKgm1lS
nqrwdvZ5zF0J0g/H9hQftd510kD8hPdwyhMh8oa5h9VPO1H3fzsqcEi2VzpdErp9+SN3way933P2S2L64jJcSkBqNveOBzezNnFr5wrqplS/Ab09vrJ74nnca85ZnWVry3ETbdqVApNpUfeJcVT/fHYZFT6NlVlQJs+atBJo27pHK3BC4N5fmAWhMtB0SyXB5R63fv1CJz2AAACAASURBVEM7NQ+EmPRmGy0pKR2VS5ZujbjVxRfxu2O95Iz9ZAJmy/5qfRsBMZuDmYf25l32e02NRELV0k5F6bitIO9URZRVOgBI83DLfS+Ik3OreQUhcQ2pR4s3Oae2qmRTeXPONROEahtTaQpXGN+4+5Hr/jPRRoxXpDMXAe7NpVUA4tInSiKJBoe0itLDauUqDNHuI5HMmr19W05OzcZI19+CJxsEqXGpntQPtm5T2wIbY3nGAio/+7ztm/lEiS+wetqRuu/b2ZmTBVHUHdisLm64/zmrbHo9VtLYmkQwbHocReWVJZvcbVnzJrGiye+E4XHt3Y9cty3RRoxXZE5AZOxNtAESSaS8qZ/YflirXDeWHTkAyMpyR/xlaEILCChxmzkZnOGKeR+1Yzx1+v3ml84ww4r1WUPxUU/viu01dfoUw9wd7b3b0qjgG7dp6+69Q22qysVORuKvd6Q4Pe6pC078asO1O+4rWXz4x6cy209tB1sNibbrCsCETFmKKeqWLVsSbcOYp/sfNcUAbki0HRLJ5WCw2Go7tKNWbUtYtWSkuFzNpwoKyiIOWdWjqPYfdGP8mr7a1Uy1ynOCOHahVgA4zRMLneTbuVg5m3CVJ4k5+ZM9nuLcYHBfaUqyjYmimu/Wm0yu1xYrE/fOouq51XzK6cMEwtBVzGOVpEBHTn7T/kmTqv/ucHVXH/cnZZ4J2NNTEWoSK7mQQ3c/ct0vEm3EeEYqc5GxI9EGSCSXw4IIPGXbs69J6RrzjhwATJx4rHk466tR0hYrWwbDKk6Ny5nfCd68fp+YNWY+Y/pUuqmGuSsW+7tzaPIDX9RWf+0zanlT2pUf9SCAsttPLFhy+Mfrr91+b8bcE48ecvS4dyLGTWKvMHYm2oDxjnTmIuMwgLh/mUgkkWAg2P0n+85T3YpvVaJtiQzm9IzGYSWSV2OyN1bWDEZwinMhh2ZJxpxPGt9Y08gZB+JxViSkC5HxXF3Dmn9vadunMA/L8Y6UigKafs+XtJXfulk91e7AwVicEW8IrOa1HFqy/ND31m7c8ZWkmWf+tD/Z27QbzHH//R1jxOTGQHIe6cxFQNFD6wSANxNth0RyMb3wN/3JXtoQIHNRom2JlIzMumNEw+scX4uS+Fdq6Uoap2pxaacgoKjvCjw8y8e2uA4hH4qPenpX7Kips001jJh9GZ8uptl33qMt/c7HlePdyTgcq3PijcJB+4SGXctX7f+/qzeUPsDTyp7Zbfe37wezkWjb4gwDKE20EeMd6cxFzmuJNkAi6U87eSr/Yt9tBklEfTh8LJk48fiww0+NKEhIa4vgVOdoZ1FGTC+SndcbP3RYTDFRwkZKmhDpz9U1rvmPlrb9sVLpAODYFGX+57+iXfPwh5UjvXYcj9U5iUAVRupE9+ur1+z95vJ1u/7VO6nqbzt1o/stMF+xxSDD4Ojdj1zXkmgjxjvSmYucfyTaAImkj3ql/cQztn0uJi5KtC3DQwSdztZhN2HtRlp6LKwZCpGffA0T6uN1nptzCj9pfKOVGWMuLPdhT+/yHTV1tmkxVOkAYP9MZdFnH9Dm/+x9ykG/jlOxPCsR6EFv+pSql9au2/21xWv2fL29uPaN7ZrpPY7x2yfs1UQbcDUg+8wNA/fm0goAETc5lUhiQZnScHCbfnI26Mrprt9HTk7lwVmzdy4d7vs+jae7QZSQEVH64bZtarN/YzzPvFl9be+3td+uiGRmbSJ41pG6f0t2Zokgim27GGZcf5j33fq6yLQHMa4b9vqSst1VJTeUN+UuyROqfVai7Yk
i7777keteT7QR4x2pzA0PGWqVJJTDauXObfrJRVeiIwcAxRPfDgz3Pd1wdiTKkQOA4HRX3G/g/mhdv/Ipa8OYqXC9mA+FVDp7rFU6EOG1xcqKW7+qTn3sWmW3qaAqpuclkGR/a9HsM49v2Fj6wKzlB/6zMrf50DYSZmWi7RolPshK1rgglblh4N5c+s8Anky0HZKrk+3ayW1ntYaNibZjpChK0Ld6zZ+Dw514cApzTn6Hvp2QOaZ92F+vP0ZBjv0osYt40fb10vlK1bp4nzsc4qbSAVAEWx/ZJfZ8eDdPUgWusBSDkdHlmnymsmRTY0fGrKmsqFfaNb9y9yPX3ZhoI64GpDI3PN4AIBJthOTqgsH8kn5o+5XsyAFAXn7ZkZGMrqrGpIT367JKHAmx4UPG/13Zxs4xXeEZN5UOgFBIfWqduvaWr6q5LyynHRaN/7nZad2VMxcd//mGjTvunbDw6M/eTu88ux0srpSCApkvFyekMjdM3JtLDwJYkmg7JFcHFoTxjG3fwS7FG/VB6PFm6bJn9yYne1YO932/xN3bd9B1iW2GHBQe++sNRIh/eNsFT9cB+5fa7BScEu+zh0s8VToA0IPsv/V1se/6wzxHYeTE48yxAIOsluyFR6tLbvD2OIrngSghBUIRMOfuR64bd0UsYxGpzA0fmTcniQsGgj1/tu98ezw4cqpqdCcleUbUC8+N4sR/TmmKgx1aQhSybjjSNhkPqYJpzDcu71PpphtGXPKkTI2SfnODuuG2B9SU7fNoOwOd8Tg30RBYzW09snjZoe+v3bjjvpRZpx87kNLbsAvMvYm2rR+V0pGLH4n/kLzyeCnRBkjGP14EWv5kL63zk7k40bZEg8IJp48RYUS94lqRMyaKPYLTXY5EnV3BhSW3m/9Sx4xhF5DEmzQh0p+pa1z77VBfuqZ4nBmwUer/vF/d8Nn7Vdo7k7Yz0BOPc8cCClu2wsa9y1Ye+M6aDaUP0PSzT+1J8rXuA3Oif1di/l1JRExEf+z3WCOiFiLaOsL9fk1Ew87PJaKNIz0zWkhnbvjsBhCXD6j/396dx0dVnf8D/zyzZg8EwpaNyKIgS1hEBcLm0tq01VattWrR1trFtS6t9dvq1Vodt9pW25+tbd1R1Gpd6g4Swr6GBAg7kxBCEsISINss9/n9MZMaKZBt5p57Z57365UXSWbm3E9QkifnnvMcEZ8OUVPla+6lLQHSY6Y9wZDB23rc9LcJKf0jmaWn9MyE8UzYo+r6i/SCcZ7AlZY59uqSL2bpDDvKqTmB0n//bfvM62+1+9efRsUM8/Xriya77kvK2bvo3Kkr7zu7cMldrfm731vqamtcA+aAgjhGTHw0ARhDRInhjy8AsLengzHz9cy8OSLJDCbFXDeFj/Z6R3UOEZv20aHN/3KtSNaJc1VniRSns6XB6Wrp0S1WHbZgEPZuHf0VNUSkD0zcpTLCX4PfmPZBcEqxygzdEZ6lmxaepTNss8LRJMp4+Ar7zB/fbG/alIvFDPPPaEaaM9iSnl/50bTpy++ZPH3Zrw7nVn262OFvKjOoOXEzgEUGXAcAPgBQFH7/SgCvtj9ARBoR3dnh441ENJSIkonoP0S0Ify5K8KPLyKiyeH3v0pE68LPWRD+3BQiWk5E64loGRGZ5vQdKeZ65m3VAUTs2WmrW/sf17pcEEwxExUp2dmbNxPB0ZPX1mPAPhD16LXREBieNoxDZ00q8zP/bTO360MsdXB5aJauOnFkmzFr6dodTqHM+69yzLjxZ/aD2wejhAG/kdc3C5f/aP/hu/49Y8bSX4w7d8W9tUNqSortgdZozkAtuPGZOa1RHL+j1wB8l4gSAIwDsLILr/kqgBpmHs/MYwB81PFBIsoE8CyAS5l5PIDLww9tAVDIzBMA3AvgoQh9Db0mxVzPLACgvF2CiB0b7JVLP3duHAeCsnVZ0TJw0I4e77Tbg7yGSGbpLU52ZMNpK1Wdo8j38ORGTrLU+aXpOqf/q6Z
2+oP7D6w2cpYOABrSafD/XesovO0G+77KTCxlIB7ORD2hxLaDg8/Y9trMmUvuGH32qgcqB9atXmQL+ndG+DKGrR9j5jIAQxGalfugiy8rB3ABET1CRIXMfPzP83MALGbm3eFrHAx/Ph3AG0S0EcCTALp9NGG0SDHXA9meQj9kI4SIkCWOiuLVjh1TQXCqzhJpbvexGofDN7anr/ci/1gk80RCYGiK8nVYPjjdc9qeGOxnu7I1fD118bGms1TM0gHAvn6Ue9f1jml3/dBeWZOB5apnWVVLbq7LO7Pi+VmzSm4bNnntI9v7NZQVkx6o6uWwQRh/9+pdAI+jwy3WsAC+XOckAAAzbwMwEaGi7kEiureL1/ktgM/Ds3nfaB/PDKSY67m3VAcQ1sZg/sC5rniLo2YmTHoGZ2/l5pZv6835opUYaroZlGBe8gQGjqjOcQDp/b/pe9DHbL27BCpn6QCgagCddtuPHefeM9e+fX8aVhl9fTNKO1o1YvzGv86cvfjW3ILSP27qc2hrMVjvyX+bRTc+M8fopsb/BHA/Mx8/W+1FqGgDEU1E+Gx1IhoCoJmZXwbwWPtzOlgBYAYRtT8/I/z5dHyxweLayH4JvSPFXM99BMCoNQEixujQ/f9yrVxWYz+kthlulGUO8A7qzetrMcR8s5V2WxKnOTeojgEAFZw37Eb/rTuZoWK3Yq+1z9KdrmCWDgB2DqGRN97omKJ9z775UDLWqshgRhmHt505ccOfZs4uvmXA2PK/lqYd8Zbgi1uNnTH8yEtmrmbmP53goX8ByCCiTQBuArAt/PmxAFYRUSmA+wA8eNx4+wHcAOAtItoAYH74oUcBPExE64GerQOOFjkBoheq7y55B8A3VecQ1uJH4Njr7uVbW8gX0yeJJCcf2jVx0vu9OrXgery0qYWSTLMupZ2tobXctfZAj28fR9qdjvklNzneMfUZrp15NyVp9W/698vWiQarylCwUy+7+T1dT21Bj3ZfxzKdbP66AZM3VOVc0NaUPHgciE50NF8AwOAbn5ljqrWu8UCKuV6ovrtkLoDnVecQ1tECX8Pr7mX1fgoqPTjeCKNGFS/qn1k1qzdjXI03DjDZ+kUmUWS5P9m7mzh028YMXnQ+XDzDXm7pmd5GGzX+cNDA8q1u13SVOc6p0Nf95APdneQzzwJ3MwnaHK37Bk0t3ZMzBy0J/QsQ2kkKAJ/e+MycC5WGi1NSzPVC9d0lGQBqgdhbuC4i7zA1Vb3lWsk6cZ7qLEaYNv0Vr82mD+3p61uQcOx6esW0u3ud5QcX2WtaZqnO0Y6g64tdt63OsTWcrTpLb5lhlg4AZpbpq3/4iZ6W4Idp+omZTcDuPlqdNbNsb9YMl8+Z+v9+9rcLn1OdKR5JMddL1XeX/BvAxapzCHOro8Nb3nOt7QeKj8PA09LqKsYXfDKqN2PsxPDt99IjIyKVKeJaAvvci+sGkonWHieirXmN+yeVydTWq797MzgSnqXboniWDsz8lXW88poFeqYriGFKs5ibD8CgUVsqDqkOEo9M803Iwl5UHUCY225b/br3XGuz4qWQA4DcvPL63o5RiaHm/qGQ6BgMt22d6hgdtcCddH7b430DbNunOktvpemc/kZN7fSH6xvW2JjVfT1E9PEk2znX3GXPf2WWbVnAhkplWcztAynk1JFirvfeB9DVXT4izmy0Vy1b4CwfA8KJFgvHKOY+fWpH9naUSuS3RCJNNAWGppruRIF96Dfoct99R5hhuh59PfH1pubJJVXVSWco2vHajols75xrm3r1Xfasf02lJUHq+RmgMeoV1QHimRRzvZTtKfThfxsVCoFljq3FKxzbzwXBpTqLkTIy9pYRca/XOu1Brul77wVzkicycFh1juOt5xGn3xX48WZm6KqzRIJpZukA6DZyzJ9pn37NXfbM98+ixTqhTmUek2gE8J7qEPFMirnIeEF1AGEuHztLF212VMdsM+BTycktj0gT23oMTIzEOFFlJzf
3cZWpjnEibwZnTnkxeIHS2axIa5+lG9XmK1GdJWAn14vn22d8/w572mcFVKwDB1RnUujNUVsq2lSHiGdSzEVAtqdwNYAK1TmEejr0wFuulUv22A/MUp1FDT2QmtoQkXYOR5FqypYkx/OPSOuvOsPJ3Be4bsYafeRi1TkiKU3n9Ndrags9JpilAwCfkxL/dpF95rV32N0lo6mY4/Pc7pdVB4h3UsxFjszOxTk/Ak2vuZeWHrQdU7v7TqHMAd5SIvS6CGOAA3AOiUSmaOMM92i2YYfqHCfzHd+90+q5zxrVOSKt6ItZOlPMPra6KOWpi+0zf3CbHatGUDEjNtYsdsEeAMWqQ8Q7KeYi52UgNtaniO5rge/APPcSbzP5JqvOolJOzqaI3Go5iH71HRqRml5wSJJpF8PrsNnntD0+spWdpi04eyo8Szf9kfqGNXbmGtV5AKApkdIfv8w+80e32Ns25FMxA6bfyNNL80ZtqZAeZ4pJMRch2Z7CvQA+U51DGK+RmqtfdS854qdgXHeLt9kCLUlJh8dFYqw9yO11axMjBYanjWaY93zUY0hKu9D3aEKQyegD0A3xtabmySWV1SlmWEvX7kgy9fvdd+0zf3qj/UhFDhZzqA9brGGEDrkXikkxF1lyqzXO1FPjtjddy506sWmOdVJl0KAdGyhCLVgqkX8kEuMYxm3PRILdVD3njlfFA7Ov8f+qnhmtqrNEQypz2us1tYVmmqUDgINpNPC+qx0zbv6pff/OQSgxc9HfAwtHbanY1vnTRLRJMRdZb0N6zsWNStv+0nddawYyYaDqLGaQlRW5PUCVGGq5H3iB01JNv8ximT7mzAcC16xnRszeFmufpRvd1maKtXTt6vtQ1q+ucxT+/Ab73qr+WMqxsSznL6oDiBAp5iIo21PYAuAfqnOI6Ntk37P8U2fZKBDSVWcxA7vd1+hOODYhUuPVIMsRqbGMEsxKmsRAg+ocnXkueNG5/9anxdQO1+OlMqfNr6kz1Vq6djX9KO/OHzmm/fI6++59fbGCYdnCei+Ad1SHECFSzEXeXxAbv3GJk1jh2LZ4uWPb2SC4VWcxiyFZW8opgn8fB9EvJVJjGcZGTj3DtUl1jK74uf/GmRV6rqlmrqKhfZbuzLY206yla+cdRMNu/YnjnF9/376tIRWrVefpgb+N2lIRVB1ChEgxF2HZnkIvQkd8iRj0iXND8UbHnhkg+bfT0ZAhWyPa4LcFSZa8dR0Ykd7rky+M8k3fg1MOcUqp6hzRlsqc9lpNXeGj9Q1rzTZLBwDbs+j0n93kOOuBK22bDifB1OsuO/ADeFZ1CPEF+YEUHU+pDiAiSwcH33atKqmyN8xUncVsnM6W/U5na0GkxvPD0abDZslijvu4RrKdtqjO0RV+OFyz257I87HDqzqLES5qap5k1lk6ANg41HbmDbc6Jj5yma30aAI2qM7TiX+P2lKhvGGz+IIUc1GQ7Sn8DHIiRMwIINgy37107QHb0ULVWcwoO2dTBRHskRpvH7JqQGTZY9CCWUmWOavzMFL7Fvkegs44pDqLEY6bpTNlb8C1I2wFP/y5Y/wfLratbXFhs+o8JyEbH0xGirnoeVp1ANF7rfAdmudesqOJ2qaozmJWAwfu7BPJ8aqQZ+kzLgPDUsdaqafYds4eer3/zipm62TurfAsXapZZ+kAYNlo26S5dzhGP3ORbVWbA2Zq/7Fh1JaKRapDiC+TYi56XgRgrV5Z4kuOUMveee4lh3wUGKs6i1klJBzd63D4Ivr340V+cyTHM5zLnsGJ5u45d7yF+sTxTwQuX6U6h5HaZ+keM/EsHQAsLLBNueZO+4jnzrct99uxS3UeAH9QHUD8LynmoiTbU3gMwPOqc4ie2U9Htr/hWmbTiU9TncXMcnLLtxMhordE9yDPqq0a/iswLDVit52N8nTwW9M/DU6KuzM2v9rUPGlJZXXaGBPP0oGIPjzLdu41d9rzXp1pWxqwoUpRkloA8xRdW5y
CFHPR9TSs20MoblXZGja841o9gAmW2ZmoSmamN+J/R3UYZPmWL/qQpIkMWGbtXLsf+W+fsUsftFx1DqOlMKe+aoFZOt1G9ren2qZdfZd9yNvnUkmQYPTu3L+M2lIRN7fjrUSKuSjK9hRuB/Ch6hyi6yrs1Ss+cW44XZoBdy45+dAuuz14eqTHbUR6RqTHNByRXe/vtsSu1i8jusjnmXCEEzeqTqLCf2fpWk08SwdAt5Hj1Vn2wmvutPf7YDIt1glGnGXcCuAZA64jekCKueh7WHUA0TWrHDsWL3VsnQJCguosVpCbWxaVWz0+uAdFY1yjBUakZavO0BNtcCXMaXtiYIBt1aqzqJDCnPrqvrrCJ+r2rzPzLB0ABBzkfv4C+4zv32FPWTiOivXoHif58qgtFfujOD6I6Ekiuq3Dxx8T0d87fPwEEd0ezQxWJcVclGV7CpcAiOmjc2LBZ86yRWWOSmkG3A0Z/aqHRnrMI0g9CKK0SI+rAqe5hrGDLHEixPEa0CfzW74HWpjjdxPXhc0tE5dUVqeNbW0z/fdvn5OSnimyz7zudrtz6SgqZqAxCpd5MgpjHm8pgKkAQEQ2AP0BnNnh8akAlnU2CBFZ7jjA3pIfXMb4neoA4sR0cPAd1+oSr33/LNVZrCQtvW6zzaYPjfS4e5FTG+kxVQpmJ0dzpiSqyvm0Ebf4b9rGjLg9simFOXXevroZvw/N0pl+prLFTal/vMQ+84e32XnNcCpmoClCQ388akuFET3vlgE4N/z+mQA2AjhKRH2JyA1gFIALiWg1EW0kor9RuCclES0ioj8Q0RoAtxqQ1VSkmDNAtqfwE8CSZ+/FtACCra+7l63ZbzsizYC7KTe3LCq3W7zIj8aMgjKB01LHcWitkSW9p0+d/Ldg0VLVOVS7IDRLl26FWToAOJZIfR693D7zhlvsLeV5VByB/wcfjEiwTnDouLUAEeUiNAu3HMBKhAq8yQDKATzNzGcx8xgAiQC+3mEIFzNPZuYnjMhrJlLMGUdm50ykFf7D89xLth2j1rNVZ7Ee1vv0qRsZjZErkR9bO+WctnROdliq59zxHg5cNWN5cLQliphostosHQA0JlP/337PPvNnN9oPb83C4h42s14wakvFkoiHO7llCBVy7cXc8g4fLwUwm4hWElE5gDn48m3Y+QbmNBUp5ozzLkJTxkKxo2jZ96p7SYOPAuNUZ7GijIzqMiKOStuWvciJue9JgeGplm+18j3/PdNrOCOumgqfzAXNLROXVlb3GdfathjMlmg9dSCNBv3m+44Zt/zYXr97IJYwunXrXItWrpNoXzc3FqGfmSsQmplrXy/3FwCXMfNYAM8CX9qwFqnbypYTc984zSrbU8gAHlKdI9410NGdr7uXcZD04aqzWFVO7saoLYpvQP+UaI2tij4wcQIb3w8sohg22wVtj41uZvdW1VnMIJk55ZV9dTN+X9+w3iqzdABQl0HZv/yBY/od19v3VPfDMgb0Tl5i9KwcECrYvg7gIDMHmfkggD4IFXTtmx8aiCgFwGUGZzMtKeaMNR/AdtUh4lW17UD5v12r+jFhiOosVkWk+1NTG8ZEa/wmpPSP1tjKENn0zAQzna3ZI01ITLmg7dG0IFNMbVLpjQ6zdCVWmaUDgOpMGnr7DY6pd19n31XXBytO8VTNqEwdlCO0i3XFcZ9rZOYGhGbjNgL4GLIW/b/IQv//xYTqu0t+AOAfqnPEm632mlUljoqxICSqzmJlAwbsWn36GUvPisbYOmzBa/A6IwbbCtAxf6V7aX2e6hyRMJm2Vrzhuj+XCMmqs5jJZ0mJ6+8c0D8zSGS5/oKn7+GK294JHut3FB3/bS8YtaXifGWhRLfIzJzxXgJQqTpEPFnj2FlS4qiYJIVc72XnbIzaBoV6DNwXi4UcAHCKM48dVKY6RySs4dNH/Spw/UbmTm/RxZXzm1smLK2s7jPeQmvp2m3NoVE/vclx1m+/aytvTML68Kc1lZlE90g
xZ7BsT6EfwH2qc8SLhc6Ni0od3kIQLHfwudnYbIGWpKTG8dEavwp5Ue0ur1owLyVm2q68Fpxz9qvBOaY+8kqFZOaUl/fVzXiyvqHUSmvp2pXn28b+6FbHhMcutT2vYK2c6AUp5tR4CaE1ACJKGKy/61qzeJe9bpbqLLFi0KAdpUSI2gaFSgw9Fq2xzSAwNGVCBJu4KndP4PqZpfowKehOwMqzdACweqTtL6oziO6RYk6BbE+hDuBu1TliVRB62+uuZavqbY0zVGeJJVlZFRTN8SsxNLZv2zlsKZziWN/5E63jUp92bgOnWbqPXrS0z9L9wXqzdG+Vzy2XjQUWI8WcItmewg8ALFKdI9a0wd84z12y5ait9RzVWWKJ3e5rdCccmxDNa9RiiCua45tBYERaquoMkRSE3TG77YlhbezcqTqLWZ0XnqUrsMYsXRDAr1WHEN0nxZxav1QdIJYcQ2vtPPeS+jYKRG1dV7zKyqooJ0JUm98eQt/0aI5vBnpmwjgm7FGdI5KOIjn9Kz6PS2dqUJ3FrJKZU14Kz9I5mM383/+l8rnlFapDiO6TYk6hbE/hKgBvqs4RCw7Ssd3z3csCQdJHqM4SiwYP2Rb1ncBtSBgY7WsoR0T6wMSYm8Xy8uCcuf5f7mNGm+osZnZec8uEJZXVGSadpfNBdrBalhRz6t0DIKA6hJXV2A5uesu1Mp2JLdffyQqcrub9TmdrQTSv0YLEo0y2ftG8hlkERqQNZ8BsP8h7rUQfN/ahwFVrVecwu2Tm5Jf21c34Y33DBpPN0v25fG65tM2yKCnmFMv2FG5HqKO16IHttn2rP3CuzwchQ3WWWJWTvWkzRbm1y15k74vm+GbCSY5suGylqnNEw7PBoqnvB88pVp3DCuY0txQsrazOmNDaaoZZugYADyjOIHpBijlzuB9ATLdliIZ19t1Lip2bJ4CQpDpLLBs4aGfUC+UqDD0U7WuYSSAvpVl1hmi5yX/LzG161lLVOawgiTn5xX31Zpil+0353PLDCq8vekmKORPI9hTWAXhcdQ4rWeTctGidc9d0EGLyxACzSEg4Wm23+6N2Fms7L/Jbo30NMwnmpUxk4IjqHNHydd9Dkw9zckyceGEExbN0ZZC7Q5YnxZx5Yxby5AAAIABJREFUPArAqzqE2TFYf9+1tniHvXaW6izxICe3fAcRotpfDgD2IDfq1zAVOyVymnOD6hjR4oPTPaft8Sw/22UNVhcdN0tXZeClbyufWx408HoiCqSYM4lsT2ELgFtV5zCzIHTfG67lK2tth2eqzhIvMjO9g424zn4MiLtb5YERaX1UZ4img0jv93Xf73RmxMwxZkYIz9L1m9jaWmzALN1b5XPLP4/yNYQBpJgzkWxP4bsA3ledw4x8CBx51b1k0xFby7mqs8SL5OSDO+324OlGXOsoUuNuA4veP2EsE3arzhFNWzk3/yf+23Yxw686i5UkMSe/sK9+5lPRnaVrAXBHlMYWBpNiznxuQegfmQhrQlv9PHfJvlbyR/UEAvFluXllhizIZoADcA4x4lpmow9OivnbkB/rUyb8KfitFapzWNGsL2bporGWzlM+t9wb4TGFIlLMmUy2p3A3AI/qHGZxiI55X3MvbQuQbsgMkfhCRsbefCOucwD960CUYMS1zMY/PPUMBmL7TFoATwYuL/w8OF5alvRAeJZuRniWLlLF/04Aj0RoLGECUsyZ0yMAdqgOodo+OrT5X66VKUycozpLvElLr9tss+l5RlyrGjn1RlzHlBIdg+C2xcVB9df5fzGjUh8gM3Q9NKu5pWBZZXXmpJaIrKW7pXxuuZzWEUOkmDOhbE9hG4CbVedQaYetds1/XOvyQOivOks8ys0t22/UtSqRf9Soa5lRID81TtaTEX3F98j4Y5ywWXUSq0pkTnq+tn7m03X7y3oxS/dO+dzyDyIaTCgnxZxJZXsKPwLwtuocKpTavUsXOTeNByFZdZb4xHqfPnWG3dauRH5cH2c
XzE6eyEBcNGxthTvxvLbH+wfYVqM6i5XNbGkdH56l6+5auhYAt0Url1BHijlzuw1AzHaKP5ESR8WiNY6dU0Fwqs4SrzL6VZcR8SCjrleDrKgeFWZ6dnJzH1fcNNitQ8aAS33aMWbE9Yxsb4Vn6WZ0c5buN7LpITZJMWdi2Z7CKgD3qc5hBAbzB851xVsdNbNgQJNacXK5OeWG/pA9iIw0I69nRv4RaXG1nGADDx95u/+nW5ghzWp7qX2WbnLns3SrAPzBqFzCWFLMmd/vASxXHSKagtB9b7pWLK+xH5JmwIoR6f6U1ANRP76roxYkDTDyembEGe7RbMN21TmM9LZeeNbzwa/IGa4RkMic9Fxt/Yw/n3yWzgfgB0ad9EBEQSIqJaJNRLSBiO4gIkvUG0RUQERfU52juyzxlxvPsj2FOoDrAMTk2ZU+BI6+6l6ysdHWPFV1FgFkZu4uJUJfo67ng7NVh22gUdczs+CQpLhbR3Z/YO6MVfrp0rIkQmacfJbuofK55ZsMjNLCzAXMfCaACwBcBOvcZSoA0K1ijoiUnxEuxZwFZHsKtwL4teockdaMtv3z3EuqW8k/UXUWEZKds8ln5PVqMaQGRHJbHUBgeNpoBuJuM8h3fb+ZXst916jOEStOMEtXDuAhVXmYuR7ADQBuopAEInqOiMqJaD0RzQYAIrIT0eNEtJGIyojo5vDnvUTUP/z+ZCJaFH5fI6IXiKiEiCqJ6NtE9Gh43I+IyBl+3iQiKiaitUT0MRENDn9+ERE9QkSriGgbERUSkQvAAwCuCM8sXkFEU4hoeTjrMiI6Pfz6a4noXSJaCGABEb1IRJe0f91E9AoRXWzU37MUc9bxJICYuSVxmJoqX3MvbQ5QcJTqLCLEZgu0JCU1jjfympXIO2jk9UzNbc9Egj0ues51pMNmP6/t8dNb2BVXt5mjLTxL13dOU/Pc8rnlStvfMPMuAHYAAwDcGPoUjwVwJYAXKNQ0/AYAQwEUMPM4AK90YehhAOYA+CaAlwF8Hh63BUBRuKB7CsBlzDwJwD8B/K7D6x3MPAWhzYb3MbMPwL0A5odnFucD2AKgkJknhB/rWBhPDI89E8A/AFwLAESUDmAqgP90/W+pd6SYs4gOt1stf9RXLR2ueNO1IkknNqQpreiaQYO2lxIhxchrenFaXO3W7kzgtNRoH6xuSk1ITL3Q92hSkCl+G0hHQSLzH/74s53rVec4znSECi8w8xYAlQBGAjgfwF+ZORB+rCu/6H3IzH6EZh/tAD4Kf74cocLwdABjAHxKRKUI3eHK7vD6t8J/rg0//0TSAbxBRBsRmlQ5s8Njn7bnZOZiACOIKBOhIvVf7V+LEaSYs5BsT+F2APeoztEbu2x1a993rc0BIVN1FvFlWdkVhn8/2IPcuCxeTiaYlTSRgQbVOVTYwwOyvuf7vwZm6//CahLrADyoOgQAENFpAIIAelKsB/BFrXL8sX9tAMDMOgA/f7FOUAfgAEAANoVn2QqYeSwzX3j868PZTrbu7bcIzfiNAfCN4zI0HffcFwFcjdDEyz+78sVFihRz1vNHAItVh+iJMnvl0oXOjeNg8OyP6Jzd7mt0u5sKjL5uPQbF5ZmsJ2Ujp57hMnKhuqms5NGj7w1cu4EZUuT3ThuA70NrVH66SHim6hkAT4eLrRIAV4UfGwkgF8BWAJ8C+HH7ZgIiyggP4QUwKfz+pd28/FYAmUR0bnhMJxGd2clrjgJI7fBxOoC94fev7eS1zyPclJmZDT3pRIo5i8n2FDKAH8BizYSXOrYUr3LskGbAJpWVVVFOBLfR121EH8N2zlpFYET6YNUZVHopeOE5bwZnWPIXVhP5NbRGlb8UJLa3JgHwGYBPANwffuwvAGxEVA5gPoBrmbkNwN8BVAEoI6INAL4Xfv79AP5IRGuA7vUlDK+BuwzAI+ExSxFay3YqnwMY3b4BAsCjAB4movU4+exd+/XqAFQAeK47OSOBen9er1Ch+u6SGwE
8rTpHZxjMHzlLF++1H5QeciZ29jlvrHO5Wg3fVXwV3jwKotTOnxlf3J/VbKEgn6E6h0rvu+4pGWPzFqrOYUElAGZBa9RVB4k3RJSE0Hq9iczcaOS1ZWbOorI9hX8G8J7qHKeiQ/e/5Vq5TAo5c3O6mvc7na2G7mIFgCNIOyiF3IkFs5LqVGdQ7RLfA+cc4FSzLd43u2MArpVCznhEdD5Cs3JPGV3IAVLMWd21APaoDnEifgSaXnUv3XDI1jRNdRZxajk5mzYTwfDzUauRU2v0Na0iMCx1LIe69setABzOOW1P5LexY7fqLBZyC7TGXapDxCNm/oyZ85hZyZFpUsxZWLan8CBCW6BN1Wi0Bb6Gee4llS3km6w6i+jcwIE7Mzp/VuR5kW/4b6+W4bJncKJ9reoYqjUipc9FPo9NZ5J+hJ37J7RGw9dqCXOQYs7isj2FS2GiY1IaqXnPq+4lx/wUHK06i+hcQsKRaofDP1bFtSuR39b5s+JXYFia8iOCzGAXD8n7of/OPczxPVPZifUINeMVcUqKudjwMEK7hZSqo8atb7qWu3XioaqziK7JzS3foerae5Ft+K1dK9GHJE5kIO7XzgHA5/qE8Y8FrlitOodJHQZwGbTGmDy/W3SNFHMxINyu5BoAytYgeW31699zrRnMhAGqMoju659ZOUTVtQ8gU/oNngqRXe/vrlAdwyz+Erx42kfBycWqc5gMI9RPTtbJxTkp5mJEtqewHqFGjIbvYtpor1r2mbP8TBDSjL626Lnk5IM77fbgSFXXb0Jyf1XXtorAiLQc1RnM5Cf+n8/YqQ9epjqHiTwCrdHUXQ2EMaSYiyHZnsKF+PIhwlG3zLG1eIVj+7kguIy8rui93LwyZTuhg7AFgrDHdXPcruA01zB2UNyeCPG/iL7me3jiEU4qV53EBBYidNaoEFLMxaD7ARhyK+JjZ+mizY7qmSCQEdcTkZWRsTdf1bXrMXAfwsf2iFML5iQfUJ3BTNrgSpjT9vggP9tN2ZbJIDUAroTW2K0TEUTskmIuxmR7CoMArkAU+8/p0ANvuVYu2WM/MCta1xDRlZ5eu9lm0/NUXX8P8varurbVBPJTxzPk8PmOGtAn82Lfb33MiMf2NgEA34HW2JND60WMkmIuBmV7CusAXIIo/ADwI9j8mnvZ+oO2Y9MjPbYwTm5uudJiyov8JpXXtxSnLZ2THXISwnE289BhN/lv2cFsrj6bBvgFtMalqkMIc5FiLkZlewrXAbgukmO2wHdwnrtkdzO1nRXJcYXRWE/vU3e6ygRVGCrHDXVDYHiqW3UGM/qPfs6kvwS/uVx1DgO9Ca3xSdUhhPlIMRfDsj2F8wE8FImxjlBz9avuJY1+Cp4ZifGEOv367Skj4kEqM+zDENkw0w36wMQJTKhRncOMHgt8t7AkOCYeWpZsBfAD1SGEOUkxF/t+DeDd3gywn45sf8O13KETK1swLyInJ3fjUdUZDqNPuuoMlkJk0zMTtqmOYVbf999dWM39V6rOEUVNAC6F1qj8364wJynmYly4ofDVAHrU3qDStr/0HdfqAUxQOpMjIoNI96ekHBijOkcbEuT/p24KjEiTX6ZOgmGzXdj26JgmjskmywzgemiN0qJGnJQUc3Eg21N4FMDFALp1WPVme/WKT51lo0CQWZQYkTlg13oi9FWZoRlJR5hsGSozWBGnOPPYSRtU5zCrZiQkn9/2eN8g2/apzhJh90JrfE11CGFuUszFiWxP4U4AlwNd2/m10rF98TLH1ikgyMLrGJKTvdmvOkMNsmLth61hArkpcpvtFPah36DLfPcdYUas7Jb+B7TGB1WHEOYnxVwcCZ8Q8fPOnveps2xRuaNqBkj+/4glNlugOTGpsUB1jkoMPaw6g1UFh6YUMGKmUImK9Tzi9F8EbtjEbPzRhhH2KYCfqA4hrEF+WMeZbE/h0wB+f6LHdHDw365VJZX2/bOMTSWMMGjw9lIiJKvO4cVpraozWJbDlsI
p0nOuM28EZ015JXheieocvVAO4DJojfHWQ0/0kBRz8elOAPM7fiKAYMt897K1DbajhYoyiSjLyqqwq84AANXIkePfeiEwIi1FdQYr+HXghzPX6cMXq87RAzUAiqA1HlEdRFiHFHNxKLzDdS7CZ7i2wndonnvJjiZqnaI2mYgWh6Ot0e1umqA6BwDsx4Ak1RmsTM9MGM+EKtU5rOBy331T6zl9reoc3XAMwNehNcbzubOiB6SYi1PZnsI2AJccpqbPX3UvPeijwFjVmUT0DMnaUkYEUzTqPYq0fqozWBoR6QMTd6uOYQVB2B3ntT0+vJWdO1Rn6QIfgEugNcptdNFtUszFsWxP4eE33SuuCZLuUJ1FRNfgwduUr5UDAAY4AMcQ1TmsLjAibRiH+o+JThxFcvpXfI8k6ExKzyPuhA7gGmiNC1QHEdYkxVyc0zRtL4ALAZj5G53oBaereb/T2TpedQ4AOID+tSCSdje9xEmObLhsMoPTRZU8KPsa/6/qmGHWzTe3Qmt8XXUIYV1SzAlomrYNwEUApIdVDMrJ2VRBBFNsftiDXPmlIUICeSlmLUxMaak+ZswDgWvWM5tuRvNBaI1Pqw4hrE2KOQEA0DRtLUKnRLSpziIia+DAnaY5baES+fILQ4QE81ImMCA7HrvhueBF576jTzXTDte/QWv8jeoQwvqkmBP/pWna5wCuBBBUnUVERkLikT0Oh1/5WaztKjFU+QkUMcNOiZzmlOO9uuk2/00zt+g5S1TnAPAWgJ+pDiFigxRz4ks0TXsbwPWQxdUxITe3fKfqDB3VIMupOkMsCYxI66M6gxV9w/e7KYc4RWUh/G8AV0JrlF+cRURIMSf+h6ZpzwO4DrD8cThxr3//yizVGTo6hH5pqjPEEr1/wlgmSJuSbvLD4Zrd9kSujx1eBZd/E8Dl0Bp9Cq4tYpQUc+KENE17AcA1kFuulpWccmCH3R4coTpHR81IGqA6Q6zRBydVqs5gRYeR2vfrvt+xzjhk4GVfBfBdOaZLRJoUc+KkNE2bh9AaOvnGY0F5uWXVqjN05IOzlUFSzEWYf0TqGSy/dPXINs7J/5H/jkpmGLGW8yWEesnJfysRcVLMiVPSNO0NAN8BDPlmJyKob0ZNvuoMHe1DVg2I5FzWSEtwDIJbes711AJ9UsGTgctWRvky/wRwrRRyIlqkmBOdCm+K+DakbYllpKfXbrLZ9DzVOTqqQt4B1RliVSA/VX7Z6oU/Bb89fUFwQnGUhv8rgOuhNRq+BpmI/o+INhFRGRGVEtHZJ3neZCL600keW0REW8OvLyWiy4jo70Q0OrrpRXdIMSe6RNO09wFcApi2g7roIDevrEF1huN5kd+sOkOsCmYnT2TgsOocVvZD/50zvPrA5REe9mkAP4XWaHh3ACI6F8DXAUxk5nEAzgew50TPZeY1zHzLKYa7ipkLwm9vMvP1zLw5CrFFD0kxJ7pM07SPAHwDQIvqLOJUWE9Prz9DdYrj7YGpJgpji53c3MdVpjqGtRF91ecpOMqJmyI04JPQGm9WUciFDQbQwMxtAMDMDcxcQ0RnEdEyItpARKuIKJWIZhHR+10dODxbNzlqyUW3STEnukXTtM8AfA1Ak+os4sT69duzgYgHqs5xvDoMSlSdIZb5R6Rlqs5gda1wJ57X9viAANt6u3noUWiNt0ckVM99AiCHiLYR0V+IaCYRuQDMB3ArM49HaLauK7+cv9LhNmu/aIYWPSPFnOg2TdMWAfgq5CxXU8rJ3XhMdYYTOYJ0aXAbRZzhHsU2bFedw+rq0TfzW74HWph7fFTag9AafxnRUD3AzMcATAJwA4D9CBVxPwawj5lXh59zhJm70q2g421WWftqQlLMiR7RNG0JgAsBNKrOIr5AFPSlpBwwzfFdHfngGqw6Q6wLDkmqUZ0hFpTzaSNu89+4jbnbLV/uM9NZq8wcZOZFzHwfgJsQ2sh2SkT0cXgG7u/RTygiRYo50WOapq0AMAOAqfq
ZxbMBA3aXEqGv6hzHa0T6ARClqs4R6wLD00az9IWMiHf0aZOfDX5taRefzgB+Ca3xgWhm6g4iOp2IOjYNLwBQAWAwEZ0Vfk4qETk6vo6ZvxKegbvewLiil6SYE72iaVoZgLMBSJ8rE8jO2WTKH+TVyKlVnSEuuO2ZSLCvUx0jVjwUuHrGCn3U4k6e5gNwFbTGR43I1A0pAF4gos1EVAZgNIB7AVwB4Cki2gDgUwAJCjOKCCFmOU9d9J6maSkAXgNQpDpLvLLZAs1Tp73KREhWneV4H+Aby16ha6eqzhEP7HuaVjo3Hz5hPzHRfQRdX+q+Zc0QOjjlBA8fAnAJtMbOCj4hokpm5kREaJp2DMDFCPVVEgoMHryt1IyFHABUYqg0tTVIMCtpIgOm6zNoVQyb7YK2x0Y3s2vrcQ/tBjBVCjlhBlLMiYjRNC2oadrNAG4HYHi383g3JGuLo/NnqbEXOXbVGeKGjZx6hitSvdIEgCYkplzQ9lhakKku/KnVAM6F1rhFZS4h2kkxJyJO07QnAVwKQDr+G8ThaDvsdjcVqM5xMg3ob8oZw1gVGJEuO4cjbC8yB3/X95uDfra/AWAWtMa6Tl8khEGkmBNRoWnavwHMAiDf8AyQlVVRTgSX6hwn04xkaWhrIO7jGsl2qlCdI9as5jM+HNH20nehNcovqsJUpJgTUaNp2mqEdrrKLZ8oGzR4u2lnvoKwBYKwy0yRwYLZSfWqM8QQP4AfeT1Fd3g9RbKERJiOFHMiqjRNqwQwDcBnqrPEKperud7pbDXtLdZ6DKoBkayZM1jgtNRxHGqbIXrnIIALvZ4iaaIrTEuKORF1mqY1InSe6z9UZ4lF2TkbK4jM+2+5Cnly/I8KLntfTrSvVR3D4soBTPF6ihapDiLEqZj2B4CILZqm+TVNux6hswFbVeeJJQMH7jL1wdde5JvyrNh4EBieZtodzhbwMoBzvJ6inaqDCNEZKeaEoTRN+xuAcwHsUJ0lFiQkHtnjcPhNeRZruyoMlTVGiuiDEyeybELqLj+Am7yeomu8niLZ6CAsQYo5YThN00oBTALwpuosVpebW2b6WYNaDDbtLtuYR2TX+7tlV2vX7QUw0+sp+rPqIEJ0hxRzQglN045omnY5gFsgi7R7rH//qizVGTpzGH3TVWeIZ4GR6bmqM1jEIgATvZ6i5aqDCNFdUswJpTRNewpAIYBK1VmsJiXlwHa7PThCdY7OtMItbUkU4lTnaeygjapzmNxjAM73eoqknYuwJCnmhHKapq0CMAHAe6qzWEluXtle1Rk604ykIyBbX9U54l0wJ/mQ6gwmdRjApV5P0S+8nqKg6jBC9JQUc8IUNE07BOBiAL8EEFAcxxL69t17muoMndmL7FrVGQQQyE8dx0CL6hwmsxjAeK+n6C3VQYToLSnmhGlomsaapj0KYDZCC5HFSaT32bfJZmPTr4WqxFCZETIDpy2dkx3rVMcwiQCAXwOY7fUUVakOI0QkSDEnTEfTtCUI3Xb9RHUWs8rNLW9QnaErKpEvPQVNIjA8LVF1BhPYAWCa11P0OzmWS8QSKeaEKWmath/AVxHa7Sq9nr6E9fT0+jNUp+iKauSS6gwiRB+YUMAU1zPezwGY4PUUrVIdRIhIk+7gwrQ0TWMAT2ma9hGAFxBqNhz3+vXbs4GIJ6jO0RX1GJCkOoMII7LpmQnb7fWtpm9nE2GHANzg9RRJX0sRs2RmTpiepmnbEWpfcjeANsVxlMvJLbfM8VjHkGrqo8biTWBEWj4DrDqHgRYAGCeFnIh1xBxP/66F1WmaNgahWbqJqrOoQBT0TZs+r5kIfVRn6QwDfDXe9IHIrTqL+IJ7YU0Z+Xmc6hxRdhjAnV5P0T9UBxHCCDIzJyxF07SNAM4B8BvE4SzdgAG7S61QyAHAAfSvk0LOfAK5KUdUZ4iyfwMYLYWciCcyMycsS9O0UQD+gThaSzdp8jvLkpKOTFWdoyvWY1LZ43RPrM8AWU9AP+Z
esI8ISFYdJcLqAdzs9RS9rjqIEEaTmTlhWZqmVQCYDuBWAE2K40SdzeZvSkw8UqA6R1d5kR/rM0DW5LClcKpzveoYEfYSgFFSyIl4JcWcsDRN03RN0/4EYAxivC/d4MHbNxDBMrtDqzBUjkcyqcDw1FTVGSJkD4CveT1F3/d6ig6qDiOEKlLMiZigaZpX07SvALgKMXp6xJCsCku1EqpBlqXyxhM9M2EcE6x8+kEAwO8BnOn1FH2oOowQqkkxJ2KKpmnzAJwO4LeIobMoHY62w253s2VusQLAQWTEyuxP7CEifWDiLtUxemgRgAKvp+gOr6foqOowQpiBbIAQMUvTtDwAjwL4juosvZWXV1qSm1deqDpHd1yNN+qYbANV5xAnRs2Bva6SuiEEWOWUjn0ItRuZpzqIEGYjM3MiZmmaVqlp2hUAZgCw9ILvQYO3pajO0B1tcLUwaIDqHOLkOMmRBZfNCv8u2gA8DGCkFHJCnJgUcyLmaZpWAmAygBsQal9gKS5Xc53T2TZedY7u2IchNSCyyoxP3AoMTTH7UoR3EFoXd4/XU2SZk0+EMJrcZhVxRdO0NAD3ArgFgFNxnC4ZNmxV8ZCsrTNV5+iOxZi1+q9081mqc4hOBLnF/VmNj4B01VGOsx7AL7yeos9UBxHCCmRmTsQVTdOOaJp2J0KtTN5XnacrBgzc1V91hu6qRL7ZZ3wEANgpkdOcZapjdLADwJUAJkkhJ0TXSesAEZc0TdsG4BvhdiaPI1TcmU5iYmOVw+E/U3WO7qpCnkz5W0RgRFof19oDqmPUAngAwN+9niK/6jBCWI3MzIm4pmnaxwDGAbgMgJlmKAAAObnllmwfUY+BCaoziK7R+yeMZRtU/X/WCODXAIZ7PUX/Two5IXpG1swJEaZpGgG4BKE1dabo6TZ12rwddntwuOoc3XUd5m33kXuE6hyiaxzlhxY5appnGXjJVgB/BvCw11OkfFpQCKuTYk6I44SLum8CuA/ABFU5UlIatk+Y+KElC6Kr8OYxEFmqnUpcaw3UuovrMgmwR/tKAP4O4FGvp2hPlK8lRNyQYk78FxFdAuBtAKOYecspnncbgL8xc3P44w8AfI+ZDxuT1Diapn0DoaJuktHXHn3mwkX9+u2dZfR1e+sw+jTcSP+w3KaNeOdetG8NtemTozT8UQD/D8DvvZ6iuihdQ4i4JcWc+C8img9gCICFzHzfKZ7nBTCZmRuMyqaapmlFCBV1hrXbmDb95T02G+cYdb1I2YQxmx6i+y23aSPe2SuPLXduaTw3wsMeBPBHAE95PUWHIjy2ECJMNkAIAACFbolNB/BDAN8Nf85ORI8T0UYiKiOim4noFoQKvs+J6PPw87xE1D/8/u3h528Mz+CBiIYSUQURPUtEm4joEyJKVPKF9pCmaf/RNG0KgK8BWBnt6/Xps2+jFQs5APAiv1F1BtF9wezkiQxEquDaB+AuAHleT9EDUsgJEV3SmkS0uxjAR8y8jYgOENEkAFMADAVQwMwBIspg5oNEdDuA2cfPzIVfcx2AsxE673ElERUj9ANiBIArmflHRPQ6gEsBvGzYVxchmqZ9COBDTdMuBHAbgK8iCmdb5uSWW3ZReCXyZUeiFdnJzX1cK+mwb0YvRtkJ4AkA//R6itoilEwI0Qkp5kS7KxG6HQIAr4U/zgfwDDMHAICZD3YyxnQAbzNzEwAQ0VsACgG8C2A3M5eGn7cWoSLRsjRN+wTAJ5qmDQPwM4SK2L6RGV0PpqfXjYrMWMbbi2yZ8bco/8i0TPeqbq+eYACfAvgTgA+9niI94sGEEKckxZwAEWUAmANgLBExQjvaGMDqCF6m42/pQQCWus16Mpqm7QRwh6ZpvwHwPQA3opdtTfr137OBCBMjkU+FA+ifqjqD6Bnu6x7FNtpOOndlF/UxAC8AeNrrKTrphikhRPRJMSeAUMPcl5j5x+2fCN8e3QDgx0T0ecfbrAjtTEsFcPyv8CUAniciD0K3Hr8
F4BpDvgLFNE1rRqjlwt81TZsG4CaEbiV3+/zXnJyNTRGOZ6gmJGeqziB6LpiVVOPY03SqYm4HgKdX2JtEAAAJ3klEQVQBPOf1FB0xKJYQ4hSkmBNA6JbqI8d97l8ARgGoAlBGRH4AzyL0TfxvAD4iohpmnt3+AmZeR0TPA1gV/tTfmXk9EQ2Nbnxz0TRtKYClmqYNAnADgB8jtGmkU0RBX0rKwXHRzBdNQdgCOuyDVOcQPRcYljravqcpQF/++aAD+Bihf/8fej1F0gZBCBOR1iRCRJmmaQ6EZilvAnDKxeUDB21fNXLkiimGBIuCGgypuoueylWdQ/SOu7h2JbUGzwawG8BzAJ63apNfIjrGzCkdPr4WodZKN6lLJURkycycEFGmaVoAwBsA3tA0bTSA7yO0vu5/Wo9kZ28OGBwvoqowtAGAFHPW1uwfnrbEtfHQPQA+j/dZOCJytG8CO9HHXX2dENEkxZwQBtI0bTOAuzVN+xWAWQCuRmhtXbrN5m9KTDxiijNhe8qLfEuv94tjOoBFAF4C8GbN1VOPqY1jjPASkH8C6A9gP4DrmLkqvFykFaHj/JaGN4l1/PhFAM8ASEKoHcsPmPkQES0CUIrQzv5XEWrTIkTUSTEnhAKapjGAzwF8rmnajQC+MXDQzplEuF5xtF6pQl5QdQbRLRUIFXAv184usORt1C5IJKLSDh9nINQuCQCeAvACM79ARD9AqL3KJeHHsgFMZeZguLjr+HEZgJuZuZiIHkDodJjbwq9zMXO0jkUT4oSkmBNCMU3TWhG+Dbtg4bD/Q6iB8xUALkAPdsOqVIvBbtUZRKfWA3gHwDu1swtKO3tyDGhh5v/OeLevmQt/eC6Ab4fffwnAox1e9wYzB4//mIjSAfRh5uLw519A6N9vu/mRDC9EV0gxJ4SJnDdnZyOAFwG8uGDhsL4IbZy4HKFbsgkKo3VJI/r2UZ1B/A8/gGKECrh3a2cXVCnOYxXHLxno6hICWWogDCfFnBAmdd6cnYcQWs/zzwULhyUBOA+hs2GLcILNE2bQigRpS2IORwF8iFAB90Ht7ILDivOY1TKEzqJ+CcBVCPXKPCVmbiSiQ0RUyMwlCPXSLO7sdUJEkxRzQljAeXN2NgN4L/yGBQuHjcUXhd1UhE7tUKoJSY0gitCRZqIH9iK0FuwdAJ/Xzi7wKc5jBTcDeI6I7kJ4A0QXXzcXwDNElARgVzdeJ0RUSJ85ISxuwcJhfQB8BaHC7iKEduYZbhtO33o/PXS6imvHqRYAKxCaFfoPgLW1swvkG7oQcUhm5oSwuPPm7DyM0KLr+QsWDrMhdDbszPBbIUK796KuCkMPGXGdOHYEwFIAi8Nvq2tnF/jVRhJCmIEUc0LEkPPm7NQBrAu/Pblg4TACMBZfFHZTAWRF49pe5LdFY9w4dgChNVztxVtp7ewCaf0ihPgfUswJEcPOm7OTAZSF354CgAULh+UhVNRNC/85FhH4XlCNHOrtGHFuL4AlCN02XQxgs9w2FUJ0hayZEyLOLVg4zA1gNIDxx7116/bsjXh27WHKmBT5hDGnBcAmhArs8vCfZbWzCxqUphJCWJYUc0KIE1qwcFg2/rfAGwHAdqLnz8VruwPkzDcuoekxAC++mBltL962184u0BXmEkLEGCnmhBBdFu53NxLA8I5vDJx2Nd7MBJHpGxtHwWGEirb2ty0IFW4ba2cXHFWWSggRN6SYE0JExKDPS90INTPOO8FbNkK3bfvgJDN7JtWC0Fq2498qAewG4K2dXdCoLp4QQkgxJ4Qw0KDPSwmhgi4DQN/wnx3fjv9cXwBJCBWA3X0jAM0ItfRofzvaxY8PAaipnV0g7VaEEKYnxZwQQgghhIVZ6XaHEEIIIYQ4jhRzQgghhBAWJsWcEEIIIYSFSTEnhBBCCGFhUswJIYQQQliYFHNCCCGEEBYmxZwQQgghhIVJMSeEEEIIYWFSzAkhhBBCWJgUc0IIIYQQFibFnBBCCCGEhUk
xJ0SUEdEgInqNiHYS0Voi+oCIRp7gectU5BNCCGFtxMyqMwgRs4iIACwD8AIzPxP+3HgAacxcEv7YwcwBhTGFEEJYmMzMCRFdswH42ws5AGDmDQDsRFRCRO8C2AwARHQs/OcsIiomoneIaBcReYjoKiJaRUTlRDQs/LxMIvoXEa0Ov01T8PUJIYRQzKE6gBAxbgyAtSd5bCKAMcy8+wSPjQcwCsBBALsA/J2ZpxDRrQBuBnAbgD8CeJKZlxBRLoCPw68RQggRR6SYE0KdVScp5ABgNTPvAwAi2gngk/DnyxGa7QOA8wGMDt3JBQCkEVEKMx+LVmAhhBDmI8WcENG1CcBlJ3ms6RSva+vwvt7hYx1f/Lu1ATiHmVt7lVAIIYSlyZo5IaJrIQA3Ed3Q/gkiGgegMAJjf4LQLdf2cQsiMKboBiLqR0Sl4bdaItobfv8wEW3u4hg/IaLvh99/noguC7+/iIgmRzO/ECI2SDEnRBRxaLv4twCcH25NsgnAwwBqIzD8LQAmE1FZuHD4SQTGFN3AzAeYuYCZCwA8g9AaxgIABQjNop5SeCfzM8z8Ym+zEJG9t2MIIaxJbrMKEWXMXAPgOyd46NnjnpcS/nMRgEUdPj+rw/v/fYyZGwBcEdm0IoLsRPQsgKkA9gK4mJlbiGgRgFIA0wG8SkSpAI4x8+MnG4iILgRwPwA3gJ0ArmPmY0TkBTAfwAUAHgXwWhS/HiGEScnMnBBCRMcIAH9m5jMBHAZwaYfHXMw8mZmf6GwQIuoP4NcAzmfmiQDWALi9w1MOMPNEZpZCTog4JTNzQggRHbuZuTT8/loAQzs8Nr8b45wDYDSApeGdyy4Ay3s4lhAiBkkxJ4QQ0dFxR3IQQGKHj0+1k/l4BOBTZr7yJI93ZywhRAyS26xCCGFuKwBMI6LhAEBEySc621cIEb+kmBNCCBNj5v0ArkVos0QZQrdYz1AaSghhKhTqnCCEEEIIIaxIZuaEEEIIISxMijkhhBBCCAuTYk4IIYQQwsKkmBNCCCGEsDAp5oQQQgghLEyKOSGEEEIIC5NiTgghhBDCwqSYE0IIIYSwMCnmhBBCCCEsTIo5IYQQQggLk2JOCCGEEMLCpJgTQgghhLAwKeaEEEIIISxMijkhhBBCCAuTYk4IIYQQwsKkmBNCCCGEsDAp5oQQQgghLEyKOSGEEEIIC5NiTgghhBDCwqSYE0IIIYSwMCnmhBBCCCEsTIo5IYQQQggLk2JOCCGEEMLCpJgTQgghhLAwKeaEEEIIISxMijkhhBBCCAuTYk4IIYQQwsKkmBNCCCGEsDAp5oQQQgghLEyKOSGEEEIIC5NiTgghhBDCwqSYE0IIIYSwMCnmhBBCCCEsTIo5IYQQQggLk2JOCCGEEMLCpJgTQgghhLAwKeaEEEIIISxMijkhhBBCCAuTYk4IIYQQwsKkmBNCCCGEsLD/D7ToHMJJ0DAuAAAAAElFTkSuQmCC\n", + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAnMAAAJ0CAYAAAB5taW2AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90\nbGliIHZlcnNpb24zLjYuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/av/WaAAAACXBIWXMAAAsT\nAAALEwEAmpwYAADKuUlEQVR4nOzdd3hc5ZU/8O+509R7t2S597EFtnEvdAgiCWlKAUxJAoQAIWQT\nJb/srrPpxSmbzYZsErJKQgKkQAgQCAvYlnHvwjbGliVZY6v3Nu3e8/vjjrBsq4xGM3NnRufzPH6Q\nZt573zPCHp05byNmhhBCCCGEiE6K0QEIIYQQQojASTInhBBCCBHFJJkTQgghhIhikswJIYQQQkQx\nSeaEEEIIIaKYJHNCCCGEEFFMkjkhJjkiWkdEJ42OIxLIz0IIEY0kmRMiwhFRLRG5iSjrkscPExET\n0bSJ3J+ZK5l5rp+xbCai30+kv4mIpJ/FCPEtI6IXiKiDiDqJ6DgRfZOI0icSlxBCjEaSOSGiQw2A\njw1+Q0R2APHGhWOoiPxZENFqAFsBvAlgHjOnAbgJgBfAkhD0Zw72PYUQ0UmSOSGiw+8A3Dnk+00A\nfju0ARGlEtFviaiFiOqI6KtEpBCRzVclWjSkbTYRDRBRDhFtJCLHkOcKiOgvvvvUENHD/gRIRCuJ\naKevryNEtHHIc1uJ6OtE9CYR9RDRPwera0QUR0S/J6I237X7iCg3Cn8W3wPwG2b+NjM3AQAzn2Xm\nf2fmrUPueQ8RnfBV714houIhzzER3U9Ep3zP/4yIyPfcXb6f34+IqB3AZt/r+QERnSWiJiJ6nIji\nfe2zfFXCTiJqJ6JKIpL3fCFikPzDFiI67AaQQkTzicgEoAzApcOdPwWQCmAGgA3QE567mdkF4K8Y\nUs0C8BEA25i5eegNfL/s/w7gCIApAK4F8DkiunG04IhoCoAXAXwDQAaALwD4CxFlD2n2cQB3A8gB\nYPW1AfRkLBVAEYBMAPcDGIimnwURJQJYBeAvo8QNIno/gK8A+ACAbACVAP54SbNSAMuhV/M+AmBo\nfysAnIH+M/wmgO8CmAOgBMAsX5z/5mv7GACHr59cX79yfqMQMUiSOSGix2BF6noAbwM4N/jEkKTm\ny8zcw8y1ALYAuMPX5A+4OIH5uO+xSy0HkM3M/8HMbmY+A+CXAD46Rmy3A3iJmV9iZo2ZXwWwH8B7\nhrT5DTO/w8wDAJ6BnoAAgAd6EjeLmVVmPsDM3WP0F2k/i3To76eNQ+L4nq8q1kdEX/U9fB+AbzPz\nCWb2AvgWgJKh1TkA32HmTmY+C+ANXPg5AcB5Zv6p71ongE8BeJSZ25m5x3e/wfg8APIBFDOzxzcf\nUJI5IWKQzLkQInr8DsB2ANNxybAigCzo1a66IY/VQa/UAMDrAOKJaAX0hKMEwLPD9FEMoICIOoc8\nZoJeQRpNMYAPE9GtQx6zQE9GBjUO+bofQJLv699Br8o9RURp0Kts/4+ZPaP0F2k/iw4AGvTk6W0A\nYOYvAvgi6QtGBt9riwH8hIi2DLmWfLENxjvSzwkA6od8nQ0gAcAB30js4L1Mvq+/D2AzgH/6nv8f\nZv7OMLELIaKcJHNCRAlmriOiGujVrnsveboVeiWmGMBx32NT4atYMbNGRM9Ar0g1AXjBV8m5VD2A\nGmaePc7w6gH8jpk/Nc7r4Evavgbga6SvRn0JwEkAvx7lmoj6WTBzHxHtgT58+sYoTesBfJOZnxzr\nniN1NeTrVujD0QuZ+dxlDfXX9BiAx4hoIYA3iGgfM78WYN9CiAglw6xCRJd7AVzDzH1DH2RmFfrQ\n5TeJKNk3bPd5XDyX7A/Qhx8/geGHFQFgL4BuIvoSEcUTkYmIFhHR8iFtFN+ihcE/Nl8/txLRjb5r\n4nyLCQrHekFEdDUR2X3Do93QEzE1Sn4WQ30RwD1EVE5EOb7XV
gi9ejjocQBf9iVXgws1PuzHa70M\nM2vQh31/NKS/KYNz+oiolIhm+RZQdEP/mfrzcxVCRBlJ5oSIIsxczcz7R3j6IQB90CfI74CepDwx\n5No9vucLAPxjhPurAG6FPvRYA7368yvoiwkGfQx6RWjwTzUz1wN4H/RJ9i3QK1D/Av/eY/IA/Bl6\nwnECwDZcvqBhuFgj4WcxtP0OANcAWA/gHd/w7MvQtyv5qa/Ns9AXLTxFRN0A3gJw81ivdRRfAnAa\nwG7f/f4PwOA+ebN93/cC2AXgv4euqhVCxA6S+bBCCCGEENFLKnNCCCGEEFFMkjkhhBBCiCgmyZwQ\nQgghRBSTZE4IIYQQIopJMieEEEIIEcUkmRNCCCGEiGKSzAkhhBBCRDFJ5oQQQgghopgkc0IIIYQQ\nUUySOSGEEEKIKCbJnBBCCCFEFJNkTgghhBAiikkyJ4QQQggRxSSZE0IIIYSIYpLMCSGEEEJEMUnm\nhBBCCCGimCRzQgghhBBRTJI5IYQQQogoJsmcEEIIIUQUk2ROCCGEECKKSTInhBBCCBHFJJkTQggh\nhIhikswJIYQQQkQxSeaEEEIIIaKYJHNCCCGEEFFMkjkhhBBCiCgmyZwQQgghRBSTZE4IIYQQIopJ\nMieEEEIIEcUkmRNCCCGEiGKSzAkhhBBCRDFJ5oQQQgghopgkc0IIIYQQUUySOSGEEEKIKCbJnBBC\nCCFEFJNkTgghhBAiikkyJ4QQQggRxSSZE0IIIYSIYpLMCREhiOg2ImIimjfC81uJaFmQ+3w/ES0I\n5j2FEEKElyRzQkSOjwHYAeCjYezz/QDGlcwRkTk0oQghhAiEJHNCRAAiSgKwBsC98CVzRBRPRE8R\n0VEiehpAvO/xB4joe0OuvYuIfur7+nYi2ktEh4noF0Rk8j3eS0TfJKIjRLSbiHKJaDWA9wL4vq/9\nzKHVPyLKIqLaIX38iYj+DuCfRJRIRE8Q0T4iOkRE7wvXz0oIIcTFJJkTIjK8H8DLzPwOgHYiuhLA\nAwD6mXkxgG8CWOpr+2cAHxhybRmAp4lovu/rNcxcAkAF8Alfm0QAu5l5CYDtAD7FzDsBPA/gX5i5\nhJmrx4hxFYBNzHwNgP8H4HVmXg7gaugJYWLgL18IIUSgJJkTIjJ8DMBTvq+f8n2/HsDvAYCZjwI4\n6vu6BcAZIlpJRJkA5gJ4E8C10BO+fUR02Pf9DN893QBe8H19AMC0AGJ8lZnbfV/fAKDc189WAHEA\npgZwTyGEEBMkc1+EMJgvIbsGwCIiYgAmAAzgkO+/w3kawEcAvA3gWWZmIiIAFcz85WHae5h58F4q\nRv6378WFD3lxlzzXNzRsAB9k5pMjvzIhhBDhIJU5IYz3IQC/ZeZiZp7GzEUAagAchG+YlIgWAVg8\n5Jq/Qh+a/Rj0xA4AXgPwISLK8V2TQUTFY/TdAyB5yPe1uDCc+6FRrnsFwEO+BBJEdMUY/QghhAgR\nSeaEMN7HADx7yWN/gT4UmkRERwF8EcDewSeZuQPAcQDFzLzX99hxAF+FvkDhKIBXAeSP0fdTAP7F\nt4hhJoAfAHiAiHYCyBrluq8DsAA4SkRv+b4XQghhALow8iKEEEIIIaKNVOaEAEBEqm97jsE/0wK4\nx11EVBCC8IQQQogRyQIIIXQDvu08JuIuAG8BOD/haIQQQgg/SWVOiGEQURIRvUZEB4moanBTXCKa\nRkQniOiXRHSMiP7p29z3QwCWAXjSV9mLJ6J/822q+xYR/c+QxQIPE9Fx32bATxGRQkSniCjb97xC\nRKeJaLQ5a0IIIQQAmTMnBAB9mBVAle/bGgAfBpDAzN2+pGo3gNkAigGcBrCMmQ8T0TMAnmfm3xPR\nVgBfYOb9vntmDO7LRkS/A/AMM/+diM4DmM7MLiJKY+ZOIvp3AF3M/GMiugHAfcz8wfD9BIQQQkQr\nqcwJoRvwnYJQwsy3Qd9H7
Vu+VaH/B2AKgFxf2xpmPuz7erQNeK8moj1EVAV9H7mFvsePQq/g3Q59\nXzcAeALAnb6v7wHwm+C8LCGEELFOkjkhhvcJANkAlvrm0jXhwia6riHtht2Al4jiAPw3gA8xsx3A\nL4dcfwuAn0Hfz+0AEZmZuR5AExFdA2AFgH8E/RUJIYSISZLMCTG8VADNzOwhoquhD6+OZegGvIOJ\nWysRJcG3AS8RKQCKmPkN6HvHpQFI8rX9FfTju55hZjUor0IIIUTMk9WsQgzvSQB/J6L9AA5DPzZr\nLP8L4HEiGoB+KP0voc/DqwWwz9fGBOD3RJQKfSj3R8zc6XvueejDqzLEKoQQwm+yAEKICEFEy6An\nd+uMjkUIIUT0kMqcEBGAiMoBPADfWaxCCCGEv6QyJ4QQQggRxWQBhBBCCCFEFJNkTgghhBAiikky\nJ4QQQggRxSSZE0IIIYSIYpLMCSGEEEJEMUnmhBBCCCGimOwzJ4SISCfmzbcBSACQ6Pvv0K+HPqYA\n8ALw+P5c9PWf11DfM+tNHgB9l/wZqNpUJXszCSGiniRzQoiQOjFvfiaAAgD5vv8Ofp0PIAMjJ2qm\nYPSfNIDtANYP8xTbK+z90BO7LgANAM6N8Od81aYqTzDiEUKIYJNkTggRsBPz5psBFAGYBqDY99/B\nP1OhJ242Q4LzcVlBIzxF0BPHRAA5AGaPchu2V9hb4EvsMELSV7Wpqj1YcQshhL8kmRNCjMmXtM0F\nUAJgie/PPABTEKQKWqg4LRSMucEEPeHLAXDFSI3sFfZWAFUAjg7581bVpipnEGIQQohhSTInhLjI\niXnzU3EhYSvx/XchgDgDwwqY0xrWhV5ZAK72/Rmk2ivsp3Bxgne0alNVXRjjEkLEMEnmhJjETsyb\nPx0XJ20l0IdIY4bTYviqfRP0KuY8AB8ZfNBeYe/E5VW8qqpNVX0GxCiEiGKSzAkxSZyYNz8JwFoA\nGwCshp68pRoaVBi4LBE7DJwGYJ3vzyC2V9hPANgK4HUAb8g8PCHEWCSZEyJGnZg3PwV68rYRegJ3\nJSbhv3mXNapeMwFY4PvzGejJ3RHoid3rALZXbarqMTA+IUQEImbZZkmIWPCz+1+PB7AOrF199bZH\nNhK05YjwxQnh8PWPKm9VTVcWGR1HkHgBHMCF5O7Nqk1VA8aGJIQwWjR9YhVCXOJn97++EMAtAG4E\nsAaADaSgI23WsYzOdyZ9IgcALgtZjI4hiMwAVvj+fBmAy15h3w3gDejJ3W7ZD0+IyUeSOSGiyM/u\nf90G4BoAtwJ4D/S93S7TmLeyJaPznXCGFrFclph+n7NBH0LfAGAzgD57hf1NAC8D+HPVpqp6A2MT\nQoRJLL/JCRETfnb/65nQk7f3Abge+ia3o2rLWJAS6riihdsMq9ExhFEigBt8f7b4qnZPA/hT1aaq\n84ZGJoQIGUnmJgkiUqFvg2AGUAPgDmbuNDQoMaKf3f96DoDbAHwYetVlXP9WPZakBRqZXQp7DT19\nIRK4LJMqmRuKAKzy/fmRr2L3NPSKXaOhkQkhgkoWQEwSRNTLzEm+rysAvMPM3zQ4LDHElrLSNAAf\nBPAxS+L7M0zWGSOeNOAP+1u/OJTdenRC94gF9zxi6uxNoDSj44ggGoDtAJ4B8JeqTVXNBscjhJgg\nqcxNTrsALAYAIioB8Dj0g82rAdzDzB1EtBXAIQBLAWQDuBP6hGs7gKeZ+au+65+DfjZnHICfMPP/\n+B7vBfATAKUABgC8j5mbiCjX198MXywPMPNOIrodwMMArAD2APgMM6sh/BlEhC1lpYkA3gvgY9AX\nMVgBwOvcu91knTHapWNqyFvZld16dMIxRju3xdizYSOQAn27mo0AfmqvsG/FhcSuzbiwhBCBMnpn\ndBFmRGQCcC2A530P/RbAl5h5MfRh2H8f0tzNzOuhJ19/A/AggEUA7iKiTF+be5h5KYBlAB4
e8ngi\ngN3MvAR6FeBTvsf/E8A23+NXAjhGRPMBlAFYw8wlAFQAnwjuK48sW8pKr95SVvoHAM0A/gB9Tty7\nw4GsNsxnZm0ifXSkzcmaWJSxwWOWZG4Ug+8HvwDQaK+wv2KvsN9rr7CnGxyXEGIcpDI3ecQT0WHo\nRzUdAPAqEaUCSGPmbb42FQD+NOSawYSvCsAxZm4AACI6A70a1wY9gbvN164IwGzf424AL/gePwB9\n4j6gr8S8EwB8lbcuIroDegVwHxEBQDz0JCembCkrzQRwF4BPA5gzemvO1jzVh03WWSWB9qea4uZ5\nTbYes+pKDvQe0Y4BLxPJ+5x/zLiweOLn9gr7cwD+q2pT1XZDoxJCjEne5CaPAWYu8SVwL0CvslWM\ncY3L919tyNeD35uJaCOA6wCsYuZ+39Ds4GHsHr4wIVPF6H/XCEAFM3/Zz9cSVbaUlW4AcB+ADwD+\nV4m8zn09JuuswDsmMrdkX3Eiv3H3VYHfJOo5ASQZHUQUskBffPNhe4W9CsDPAPyualNVv7FhCSGG\nI8Oskwwzd0Gfm/YFAP0AOoho8GzIOwBsG+naYaQC6PAlcvMArPTjmtcAPADoQ75ElOJ77ENElON7\nPIOIht0/LVpsKSvN2FJW+uiWstLBczY/hnEkcgDAasM8Zm1C8wYbc6+a1KcD8MUfQkRg7NCnWpyz\nV9h/ZK+wT+AThhAiFKQyNwkx8yEiOgLgowA2AXiciBIAnAFw9zhu9TKA+4noKICTAHb7cc0jAP6H\niO6FXrF7gJl3EdFXAfyTiBQAHuiVw7pxxBIRtpSVroNehfsQxpm8DSNb81QfMllnB7witStlev4E\nY4hqTHAbHUMMSQPwOQCP2CvsrwD4LwAvVW2qki0RhDCYbE0ixARtKSuNB3AP9IPRFwTz3mTKq7Sl\nfHzd2C1HwMxrd365zerpmZSLIbwK6j7+JXNUV3kjXDWAnwN4ompTVYfRwQgxWckwqxAB2lJWmr6l\nrPSr0CuI/4UgJ3IAwGrj/AkNtRJRU87SU0EMKapoCuSc0tCaCeAHABz2Cvsv7RX2xUYHFGpElEdE\nTxFRNREdJ6KXiGiMBU1B67uWiCblBzMxOhlmFWKctpSVFgD4PPRVqaFeKZqleU4fMlnnBDzU2pi7\n3Ft0bmsQQ4oeKkkyFyYJAD4J4JP2CvsO6B9u/lK1qcprbFjBRfpy+2ehL9j6qO+xEgC5AOQwZGEY\nSeaE8NOWstLZAL4IfaFI2PYu8zr39ZqsgX/w700qnLTDjKoJMZVMRIm1vj819gr7f0BfBRsrG4Bf\nDX2l/uODDzDzYdJ9H8DNABjAN5j5ad+K/68BaAJQAuCv0Ld6egT6FkzvZ+ZqIsqGvshkqu+2n2Pm\nN337dv4R+sbte6Gv/AcRfR1AKzP/xPf9NwE0MfN/hvLFi8glw6xCjGFLWekVW8pKnwbwNvTqQ1g3\noWW1aQGzFnBSwop56kBc5rlgxhQtVEWSOQNNB/AbAMftFfaP2yvssfD7ZhH0fTMv9QHoydoS6Ns1\nfZ+IBhcfLYGevNmhfxCcw8xXAfgVgId8bX4C4EfMvBz6kX6/8j3+7wB2MPMV0Pf9HEz2fg198Rp8\ni8Y+CuDJ4LxEEY2kMifECLaUlW4EUA79mC0jZWqeUwdN1rlXBnqDxtwVNdPrXpoSzKCigVcqc5Fg\nDvRE4yv2Cvtm6MOvsbbybi2AP/o2Qm8iom0AlgPoBrBvyIbr1QD+6bumCnqlD9ATwAW+TdMBIIWI\nkgGsh54ogplfJKIO39e1RNRGRFdAH+I9xMxyFNskFguflIQIqi1lpddvKSvdCeANGJ/IAQC8zn0T\n2qy1OefKSflv3WtCrAzvxYKF0E+YOWSvsL/P6GACdAz6aTWXomEeG3TphutDN2MfLKgo0DdfL/H9\nmcLMPb7nRkp8fwX9RJm7ATzhR+wihk3KN3ghhrOlrLR
kS1npP6F/cl5ldDxDsdq8cCJDrX0JuTOD\nGU+08JgwofNtRUgsAfCcvcK+z15hv9noYMbpdQA2Iho8axpEtBxAB4Ay30bo2dAranvHcd9/Avjs\nkHuW+L7cDt851UR0M4ChZ+Y+C+Am6BXAV8b9SkRMkWROTHpbykqLt5SV/g7AQVw4QzbSpGuek0cC\nvpqU3J7EKdVBjCcquM2SzEWwZQBeslfYd9or7NcZHYw/fEcU3gbget/WJMcAbAbwBwBHARyBnvB9\nkZkbx3HrhwEsI6KjRHQcwP2+x78GYD0RHYR+Zu7ZIbG4oY8ePOMb3hWTmGwaLCatLWWlGQC+Av0T\ncVgXNQSCTNk7bCl3rA30+qL617fPrv7L+mDGFOnenoLt/3aneVK95ii2DcC/Vm2qqjQ6kGjgW/hw\nEMCHmXnS7iUpdFKZE5POlrLSuC1lpV+Evnv9Y4iCRA4AWG1ZxKwGvG9aS9ZiazDjiQZuC8mn1eix\nAcB2e4X9VXuF3Z9znictIloA4DSA1ySRE4CsZhWTyJayUgX61gBfB1BkcDiBSNPc7+w32eYvC+Ri\nZ1zmPAZpBJ40H+Lc8g4Xja4DcJ29wv4HAI9Vbaoaz3DlpMDMxwHMMDoOETkmzZu6mNy2lJXeBOAQ\ngP9FdCZyAACva78z4IuJ0rpSZ54MYjgRz2UxOgIxAR8H8La9wv6wvcJuMjoYISKZJHMipm0pK521\npaz0HwD+ASDqz41ktcU+kaHWhrwVTcGMJ9JJMhf1UqFvqLvPXmFfYXQwQkQqGYQQMWlLWWkcgC8D\n+BKiZE6cn1I199v7TLaFywO5uC1jYajPko0oTuuo+3+J6HEFgF32CvuvAXypalNVu9EBCRFJpDIn\nYo5vSPUtAP+G2ErkAABe1wF3oNe6rSnzNTIFfH20cVkkmYshBP04vZP2Cvu99gq7/L8VwkeSOREz\ntpSVFm4pK/0L9CHVmN0kl9XWRcxqYAkZUUJ7xvwTQQ4pYrksJO9xsScL+ukHb9or7EuMDkaISCBv\ndCLqbSkrVbaUlX4WwHH4zjGMcamq+0TAGwg35K3sCGYwkcxpkfe4GLYKwAF7hf3H9gp7itHBCGEk\neaMTUW1LWekiAG8C+CmASTMfTHUGPtTakTY3M5ixRDKnFbIKMraZADwCfdXrx4wORgijyAIIEZW2\nlJXaAPwrgC8CmHRrFllrszN7XUTmcc8J9Jrj56mKtc+kuRNDEVskkWRu0sgH8Ad7hf2TAB6s2lT1\nttEBCRFOUpkTUWdLWekq6Ocg/j9MwkTOJyXgoVYiS0vWkkkxb85lkWRukrkGwBF7hf0rsjedmEwk\nmRNRY0tZqXlLWel/AKgEMMfoeIymOg96A722Me+qvmDGEqlclkmb7E9mVgDfBLDNXmGXUxLEpCDJ\nnIgKW8pKZwHYAX1oVT5x492h1oBOhOhMnZkb7HgikctCksxNXmugV+k+aXQgQoSaJHMi4m0pK/0k\n9KO4ZAf4iyWr7uNHA7lQU6xzPebEmF/VKpW5SS8JwC/tFfa/2SvsOUYHI0SoSDInItaWstLMLWWl\nfwXwS+hvyuISAQ+1ElFTztKYP6fVZYHV6BhERHgvgCp7hf1WowMRIhQkmRMRaUtZ6Q0AqgDcZnQs\nkYy19sWBDrU25i6P+ZMg3GapzIl35QB43l5hf9xeYY83OhghgkmSORFRtpSVxm0pK/0xgJehbzcg\nRpekuo8FtKq1J3lqUbCDiTQec+wd5yYm7D4Ae+0V9gVGByJEsEgyJyLGlrJSO4B90DcBlXMX/aQ6\nD2qBXMeKebrTlt4Q7HgiiVuSOTG8RQD22yvsnzY6ECGCQZI5ERG2lJU+BD2RW2R0LNGGtY7FzJ6B\nQK5tzL2qOtjxRBJJ5sQo4gH8wl5hf9peYU81OhghJkKSOWGoLWWl8VvKSn8P4D8B+cUboETVFdhQ\na1PO0pitgDLgBlH
Mvj4RNB8BcMheYZfV8iJqSTInDLOlrHQa9HNVP2FwKFFPdR3kQK7rS8yL5U1V\nA1oYIial6QAq7RX2R40ORIhASDInDLGlrPRaAPsBXGF0LLGAtc7FzJ7+cV9IpvzehPyaEIRkOAZi\nfrWuCCoLgB/aK+xP2CvssqWNiCqSzImw21JW+gUArwDINDqWGJKout4KaAPhxrwVjmAHEwmYJJkT\nAbkbwP/ZK+xZRgcihL8kmRNhs6WsNGFLWekfAXwfciRX0KmuQwFd15JdYg5yKBFBUySZEwFbB337\nkoVGByKEPySZE2Gxpax0OoBdAD5qdCyxSh9qdfeN97qBuKy5DAQ05y6SaQSP0TGIqDYdwC57hf0W\nowMRYiySzImQ853msB/AYqNjiXEJAQ21EmV0p0x/JwTxGEpTJJkTE5YM/dSILxgdiBCjkWROhNSW\nstJyAP8AkGF0LJOB6joU0FYcDXkrG4Mdi9FUBYGdWyvExRQA35eFESKSxeRcGWG8LWWlFgC/gWw7\nElasdS1hdvcRWRPHc11r5qJxtY8GXknmRHDdDWCWvcL+gapNVa1GByPEUFKZE0G3paw0GcBLkETO\nCPGqq2rcQ61ua+o8jZSYGpb0mqAaHYOIObIwQkQkSeZEUG0pK80FsA3AdUbHMlmprkPj/3dNlNSR\nNvdECMIxjNcklTkRErIwQkQcSeZE0GwpK50NfcWqbARsINa6lzC7e8d7XUPeyo5QxGMUjzn2VuiK\niCELI0REkWROBMWWstKroB/NNd3oWATiVNfRcQ+1dmTMi6nDxt1maEbHIGLa0IURFqODEZObJHNi\nwraUld4M4HUA2UbHInRe1+FxL27ymBMXqIp1/EeCRShJ5kSY3A3gL/YKu83oQMTkJcmcmJAtZaV3\nAXgeQMythoxqWvcSZlfPuK4hsrZmLno7RBGFndtCMswqwuVW6MOu8UYHIiYnSeZEwLaUlX4F+vYj\nssVN5LGpriNV472oMW9FdyiCMYJb5syJ8LoBwEv2CnuS0YGIyUeSOTFuW8pKlS1lpf8F4JtGxyJG\n5nUdGXeS3Zk6KycUsRjBZUFAGygLMQEbAbxir7CnGB2ImFwkmRPj4tsM+CkADxodixiD1rOE2TWu\nSptqss3zmOO7QhVSOLlkSrowxmoAr9kr7OlGByImD0nmhN+2lJVaAfwJwIeNjkX4xaY6xznUSqQ0\nZ18ZE/PmnFKZE8ZZBuANe4VdFoWJsJBkTvjFl8j9BcD7jI5F+M/rOjLusyQbc69yhiKWcHNZJZkT\nhloCYKu9wp5vdCAi9kkyJ8a0pazUBuBZAKVGxyLGiXuWsOYc17BpT0pxYajCCSenheT9TRhtAYBt\n9gp7kdGBiNgmb3ZiVFvKSuMAPAfgPQaHIgJjVV2H3xrPBZpimem0pjWFKqBwcVnk/U1EhNkAttsr\n7LKhuggZebMTI3KUV9qWZd74QwA3GR2LCJzXdXTcQ61NuctOhyKWcHJaYTI6BiF8pkFP6GYbHYiI\nTZLMiWE5yiutAP4yM6XkgVXZ791qdDxiAri3ZLxDrU05y6J+jzaXRfY/FBGlEHpCt8DoQETskWRO\nXMZRXmkB8AyAWwBgatL8jatz3rfN2KjEBFhU16FxrWrtSyyYFqJYwsZplWRORJw86IsilhgdiIgt\nksyJizjKK00A/oBLVq0WJc7bsCbn/VsNCUpMmNd1NG487VkxFfbF59aFKp5wcFlkmFVEpGzo+9DN\nMToQETskmRPvcpRXEoD/BfCh4Z4vTJy7cU3ObVvDGZMIEu5bwtpA53guacxbcTZE0YSFy0yybbCI\nVJnQT4rIMzoQERskmRNDbQFw+2gNChPnbFyb+8GtYYlGBJPF6zo0rlWtzdklUV3ZclkgyZyIZNOg\nn+WabHQgIvpJMicAAI7yyn8B8Kg/backzNq4LvdDW0MbkQg21VUVP572A/HZsxnRe
1i924Jxr+IV\nIsyuAPBXe4VdPniICZFkTsBRXnkHgO+O55qChJkb1+d+eGtoIhIhoQ+1dvjdnpTsnuSpUbtFicsC\nm9ExCOGH6wA8Ya+wy4klImCSzE1yjvLKmwA8AYz/6KP8hBkbN+R9RFa5Rg+z13Xw2HguaMhbeT5U\nwYSa2yzDrCJq3A7gO0YHIaKXJHOTmKO8cjmAPwOBb+GQFz99w8a8j0pCFyVUV1XCeNq3ZtrH1T6S\neMwY1wpeIQz2RXuF/WGjgxDRSZK5ScpRXjkbwIsAEid6r9z44g1X531MErpowP2LWetv87e5y5Y+\nl0FqKEMKFY9JhllF1PmRvcL+YaODENFHkrlJyFFemQfgFej7HQVFTvzUDdfkf1wSushn9roOHve7\nNVFKR9qcEyGMJyQYcIFkCpKIOgqA39kr7OuNDkREF0nmJhlHeWUKgH8ACPqhz9lxRRuuzf/EdkTx\nCsjJQHW9Na5qbEPeSr8reRHEZXQAQgTIBuBv9gr7IqMDEdFDkrlJxFFeqQB4CkBJqPrIiitcf23+\n7ZWQhC5ycf+S8Qy1tmfMTwllOKHAJMmciGppAP5hr7AXGh2IiA6SzE0u3wFwc6g7yYqbsv66/Dsk\noYtcJq/zgN9DrR5L0nxVMTtDGVCwMeA2OgYhJqgQwMv2Cnua0YGIyCfJ3CThKK+8HcC/hKu/zLiC\n9dcV3LkDktBFJNX9VpLfjYni2jIWRdW8OU2RZE7EhIXQh1xlMY8YlSRzk4BvC5JfhrvfTFv+uusL\nNklCF4l4YDFrfS3+Nm/MW9EVynCCTSN4jI5BiCBZDwPev0V0kWQuxjnKK/MBPAcYs+dWhi1v3Q0F\nd70JQDOifzEik9d54G1/G3ekzQ7ayudwUBVJ5kRMucNeYb/P6CBE5JJkLoY5yittAJ4FUGBkHOm2\n3LU3FNy1E5LQRRTVfczvA75VU9w8rymuO5TxBJOqwGt0DEIE2U/sFfalRgchIpMkc7HtfwCsMDoI\nQE/obpxytyR0kYQHFrPW699QK5GpObvE70qe0VQTonKjYyFGYQPwZ3uFPcPoQETkkWQuRjnKKx8D\ncKfRcQyVZs1Ze9OUe3ZBErpIoXidB/xe2NCYu2IglMEEk8cklTkRk6YB+K29wi47YouLSDIXgxzl\nlTcC+K7RcQwn1Zq95qYp9+6iKD0iKtao7mN+7yHXnTLN0OH68fBKZU7ErlsAfNnoIERkkWQuxjjK\nK6cAeBKAyehYRpJqzVpz05R790hCFwHYuZi13mZ/mmqKZZbLkuz3ClgjecxS/RUx7T/sFfZrjA5C\nRA5J5mKIo7zSBOCPADKNjmUsKdbM1TcXfnIvgWQ4zFiK17nfv7lwRNSUu/x0iOMJCrckcyK2mQD8\n0V5hj5pquQgtSeZiy9cArDM6CH8lWzJW3Vz4yX2S0BlLdR9P87dtU86yqPh/5THL3oYi5uUAeMZe\nYTcbHYgwniRzMcJRXnkdonAeRbIlY9V7Cj+1XxI6A7FzEWs9jf407U2aUhzqcILBZSZJ5sRksAbA\n94wOQhhPkrkY4CivzAXwe0Tp/88kS/rK9xR+ej9BkY1ejaF4nfvf8achK+ap/fFZjlAHNFEui9ER\nCBE2j9or7B80OghhrKj85S8ucJRXKtAXPOQaHctEJFnSVt5S+KmDktAZQ3UfT/e3bWPuVTWhjCUY\nJJkTk8wT9gr7bKODEMaRZC76fQXAtUYHEQyJlrQVtxR9+pACRQ5JDzd2LWKtu8Gfps3ZV0b8+4Yk\nc2KSSQHwF3uFPcHoQIQxIv5NWYzMUV65DsBmo+MIpkRz6lXvKbrvsCR0YUde575T/jTsT8iN+AqA\nywLZVFVMNnYAPzc6CGEMSeailKO8MhPAHxDB+8kFKtGcctUtRfcdUWByGR3LZKK63/ZvqJWUnJ6k\nwuoQhzMhLou8t4lJ6U57hf3jRgchwk/e8KLXz
wEUGh1EqCSYU5bfUnTfUUnowohdi1j1b6i1IW9l\nRC+CcFpJ3tvEZPWf9gp7jtFBiPCSN7wo5Civ/DCADxsdR6glmJOXlxbdV6WQJHRhQl7XPr9WtbZm\nLY4PdTAT4bTKe5uYtDIB/MzoIER4yRtelHGUV+YA+G+j4wiXeHPystLC+99SyOQ0OpbJQHWf8Ov0\nEKctYy5H8HFsMswqJrkP2SvsHzI6CBE+8oYXfX4OIMvoIMIp3py0tLTwgWMmMg8YHUvMY/dCTe06\nN2Y7otTO1FknwxBRQFwWyK74YrL7mb3CHvFHO4rgkGQuijjKKz8G4ANGx2GEeHPi0tKi+09IQhdy\npDr3+nX+akPeypZQBxMopyRzQuQA+E+jgxDhIclclHCUV+YB+C+j4zBSnCnxytKiB942kbnf6Fhi\nmeo+6Vfltz1zQVKoYwmUy0qSzAkBfNxeYb/V6CBE6EkyFz1+ASDD6CCMFmdKuOLWogfekYQulNwL\nNbVzzNWqbkvyAo3MEbk4xWmBbBsshO5xe4U9zeggRGhJMhcFHOWVtwN4r9FxRAqbKaFET+gsfUbH\nEqtU596x95Ejim/LmP92GMIZN7dZkjkhfAoA/NDoIERoSTIX4RzllfmQeQ+X8SV0p81k6TU6llik\nuk9m+9OuMW9lR6hjCYTbAqvRMQgRQe62V9hvMDoIETqSzEW+nwHw+xD0ycRmil9ya9EDZyShCwXP\nAk3tGHOotSNtTkSulnOZJZkT4hK/tFfYk40OQoSGJHMRzFFe+R4AtxkdRySzmuIX31r0mTNmsvYY\nHUusUZ37xhxq9Zrj53lNtohLpt0W2IyOQYgIMxXA94wOQoSGJHMRylFeGQfgp0bHEQ2sprjFtxY9\nUGsha7fRscQS1X1y7COBiCwtWUtOhCGccfGYEWd0DEJEoPvsFfaNRgchgk+SuchVDmCG0UFEC6sp\nzl469YGzktAFk2e+prafHatVY+6KiFpZzAB7TSQLIIS4HAH4lb3CnmB0ICK4JJmLQI7yyhkAvmR0\nHNHGqsQtunXqZ+otiq3L6FhihercWzNWm67U6XnhiGUcInK7FCEixEwA3zQ6CBFcksxFpp8CMkwU\nCItiW3hr0QPnJKELDtV9KnesNppineO2JLWFIx4/STInxOgeslfYFxgdhAgeSeYijKO88v0A3mN0\nHNHMotgW3Fr0mfNWJa7T6Fiin2eeprbVjdqEiJpylp4KU0BjYpJkTogxmAB83+ggRPBIMhdBHOWV\nCQB+bHQcscCiWOeXFj3QIAndxHmde2vHatOUs9wdhlD8ohEiJhYhIth77BX2a40OQgSHJHOR5f8B\nKDY6iFhhUazzby16oNGqxEXkxrbRQnOfGnNOXE9y0dRwxOIPJniMjkGIKLHFXmGXPCAGyP/ECOEo\nr5wD4AtGxxFrzIp13q1FDzRblfh2o2OJXt65mtpWO1oLVszTBuIyzocpoFGpiiRzQvhpCYA7jQ5C\nTJwkc5Hj+4DsWh8KZsU699aiB1ptSkIkTdKPKl7n3tHnzQFozL3qTDhiGYsmlTkhxuObslVJ9JNk\nLgI4yivXAniv0XHEMrNimVNadH+7TUloNTqWaKS5T+WP1aY5eymFI5axeE3wGh2DEFGkADIqFPUk\nmYsMcsRKGJgVy+zSovs7JKELhHeOpraMuudcX2JeRGxyLcOsQozbv9gr7JG2X6QYB0nmDOYor7wN\nwCqj45gszIpl9q1F93fGmRJbjI4l2ngH9o5+GgQp+b2JBWNuMhxqXhM0o2MQIsokAfi60UGIwEky\nZyBHeaUZwLeNjmOyMSmWWaWF93dLQjc+mqe6YKw2DXkr6sMRy2g8ZqhGxyBEFLrbXmFfZHQQIjCS\nzBnrXgBzjQ5iMjIp5pl6QpfUbHQs0cM7W/M2V4/WoiVrieGLeDxSmRMiECYAPzA6CBEYSeYM4iiv\nTASw2eg4J
jOTYp5ZWnRfX7wpqcnoWKKF17nXMdrzzrisuQwyNJlymyWZEyJAN9or7DcaHYQYP0nm\njPMoAJlwajATmaffUnRff7wpudHoWKKB5qmeMmoDovSulOknwxTOsCSZE2JCvi8bCUcf+R9mAEd5\nZTaALxodh9D5Ejpngim5wehYIp86a6yh1oa8lYYOXbstEbFDihDRyg7gHqODEOMjyZwxvgog2egg\nxAUmMk17T9F97gRziiR0YxhrqLUtc2FiuGIZjttiZO9CxIT/sFfYDf13LMZHkrkwc5RXFgC4z+g4\nxOVMZCq+pfDT7gRzakQcSxWpNE914WjPu62p8zVSDNvrzSnJnBATlQ/gEaODEP6TZC78vgDAZnQQ\nYngKmYpvKfyUN9Gces7oWCKXOlPzNp0e8WmixI70eSfCGNBFXJLMCREMD9sr7HFGByH8I8lcGDnK\nK7MgVbmIp5Bp6nsKP6UlmtNGHU6czLzOPaMmuw15K9vDFculXBbIpDkhJi4XwCajgxD+kWQuvB4F\nIAcaRwGFTEXvKfwkkiShG5bmqSka7fn29HkZ4YrlUi6LvK8JESRfkJWt0UH+J4WJo7wyFcCDRsch\n/KeQqfDmwk8iyZxu+KkGkUedoXkbTo30rNecME9VrP3hjGiQ00ryviZEcMwC8EGjgxBjkze98HkI\nQKrRQYjx8SV0SrI5ffRzSSchr3PvyAtFiKytWfbjYQznXU6pzAkRTLKNVhSQN70w8J328Dmj4xCB\nUUiZclPhJ83Jlow6o2OJJJqnZupozzfkrugLVyxDOa0wGdGvEDFqmb3Cfo3RQYjRSTIXHvcDyDQ6\nCBE4hZSCm6bca022ZEpC9y5tuuZteGekZ7tSZ+aEM5pBLoskc0IE2ZeMDkCMTpK5EHOUV8YBeMzo\nOMTEKaTk3zTlHmuKJbPW6Fgihde5d8RNllWTba7HnNAZxnAAAC4LZHMSIYLrBnuF/QqjgxAjk2Qu\n9O6BvgGjiAEKKfk3TrknLtWSVWN0LJFA89QUj/gkkdKUszTs57S6LGQOd59CTAIydy6CSTIXQo7y\nSoK+HYmIIQopeTdOuTsh1ZJ9xuhYjKdN07znR0zYmnKXu8IZDSCVOSFC5MP2Cvt0o4MQw5NkLrRu\nhr60W8QYIiX3xil3JaVZc0Y9dH4y8Dr3NI70XHfy1FGP/goFlwXWcPcpxCRggkwZiliSzIXWQ0YH\nIEKHSMm5oWBTymRP6DRP3bSRnmPFMsNpSxsx2QsFqcwJETL32Cvs2UYHIS4nyVyIOMor5wC40eg4\nRGgRKdk3FGxKSbfmjnxWaczTijWP4+2Rnm3KWR7WZNdjlsqcECESDylSRCRJ5kLnQUDOiJwMiJTs\n6ws2paVbc0c8ESHWeZ17m0Z6ril3GYczFrcZtnD2J8Qk86C9wp5odBDiYpLMhYCjvDIJwF1GxyHC\nh4iyri/YlJFhzR9x37VYpnnPzhjpub6E/LBOmnabERfO/oSYZDIAfNLoIMTFJJkLjU0AUowOQoQX\nEWVeV3BHZoYtP+zbcRhPK9I8jhPDPcOKaUpfQm5YNltmQFVNsjWJECH2OXuFXUaeIogkc0Hm247k\ns0bHIYxBRJnX5d+RnWkrmHQJnde5t3mk5xpzV4TrbNuwb4UixCQ0DcB1RgchLpBkLviuAzDP6CCE\ncYgo49r823MybVNGXBQQizTv2ZkjPdecfUW4qmXuMPUjxGQnQ60RRJK54JOVPgJElH5t/ifysmxT\nhh16jE1aoeqpPz7cMwPxWXMYCPlCCCapzAkRJu+3V9jlzPEIIclcEDnKK6cCuMXoOERkIKK0a/I/\nkZ8dVzhsghOLVOfelmGfICWzO3layFf7aiSVOSHCxArgTqODEDpJ5oLrTsjPVAxBRGlX5318Sk7c\n1GNGxxIOmvfsiCeeNOataAh5/5LMCRFO9xodgNBJ4hFc8ilFXIaIUjfmfbR
wciR0PEX11A37Olsz\n7Qmh7l1T4Al1H0KIdy20V9hXGR2EkGQuaBzllasBzDY6DhGZfAldUW5c8VtGxxJqqnNf63CPu2xp\n8zRSvCHtW0FI7y+EuIwshIgAkswFzyajAxCRjYhSNuSVFefGT6syOpZQ0rz1s5j58sUORMmdaXNC\nuiBEkjkhwohZLfR4ZmNzapLRoUx2kswFgaO8Mg5AmdFxiMhHRMkbcj8yLS9++lGjYwkdnqJ5zw47\n1NqQt7ItlD17TZLMCRFqVo2rP9jdu+21+vNt/3A0rAPwAaNjmuwkmQuO9wFINToIER2IKHl97odn\n5MXPiNmETnXuHTZpa0+fnxbKfr0mqKG8vxCTFTF3LnG6tj/R0HT8QF39zM1t7RtyVDXH97TMFzeY\nJHPBIUOsYlyIKGl97odm5MfPPGJ0LKGgeR1zhhtq9VgS56uKZSBU/XokmRMieJjVbK93/5faOnbt\nr62P/31D0/rlTteCYVpejc2pU8Ien3iXJHMT5CivzAdwg9FxiOhDREnrcj84qyBh1mGjYwk+zte8\ndZcv9iCytWUuCtm8OY859BsTCxHrLMw17+vp3fbP+vMtr9efX3Z7d88qK2Ab5RIFwCfCFZ+4nCRz\nE/cJACajgxDRiYgS1+Z8YM6UhNmHjI4l2LzOvR3DPd6Qu6InVH16zNBCdW8hYhpz1yKnq/J/Gprf\nOlhbP/0bre0b8lU1bxx3kKFWA0kyN3EyxComhIgS1uTcNndKwpyYSujYe27YodbOtFlZoerTZSap\nzAnhL2Yt06se+Hx7x859dQ7rHxua1q1yOhcFeLeF2Jx6RVDjE36TZG4CHOWVCwAE+hdfiHfpCd37\n5xUmzDlodCzBw3mat+aybVhUU9w8jym+KxQ9ui0yzCrEWMzMdbf09m192XG+cWv9uaV3d/WsjmOO\nD8Ktbw/CPUQAJJmbGFmOLYKGiOJX57x/flHivANGxxIsXue+y4daiUwtOVecDEV/LknmhBgec898\nl7vyvxubqw7V1hd/p6Vt4xSvWhDkXm4L8v2EnySZmxj5iyuCiojiV2W/d+HUxPn7jY4lGNh7fi4z\nXzaPrTH3qpCsaHWbQaG4rxBRiZnTVfXww+2db+6tc5ieOd+4bt2A0x7CHqdjc2oo7y9GYDY6gGjl\nKK8sBnCl0XGI2ENEcSuzb7UTaH9d3/FlRsczMZyneWqOmqwzFg99tDu5OCTbGDitobirENHFzFx/\nbV//mYc7umZO9XpLwtz9+wDE9Ck3kUiSucDJEKsIGSKyrcgutYNoX13vseVGxzMRXtfeTpN1xkWP\naSbrLJc1pcXm7s4OZl9Oi4w2iEmKuW+O23P4oc6uxA39A0sIKDIokvcC+IZBfU9a8sYXOBliFSFF\nRLYVWbcsmZa0aK/RsUwEexvmDTfU2pSz/FSw+3JZSYZZxeTBzKmqeuQzHZ07dtc5+C/nG9ds7B8o\nIRg63WAZNqcGey6eGIMkcwFwlFfmAFhjdBwi9hGR9aqs95RMT7JHcULHOZqn+rJhl6bcZUHfE04q\nc2IyMDE7ruvr3/b8uYazO86eW/JAZ/faROZIOeyeANxqdBCTjQyzBuZ9kERYhAkRWZdn3VxCoD1n\neo+uMDqeQHid+7pM1lkXPdabOKU42P24LLKBt4hRzP2zPJ5Dn+noSrhOr74VGh3SKN4L4BdGBzGZ\nSEISGJkvJ8KKiKzLsm66ckbykj1GxxIIVhvnM2sXnZvKiqmoPz67Ppj9OCWZEzEmWdWOfrqja8fO\nOof32XONa67vH7jC4GFUf1yDzamJRgcxmUgyN06O8soUANcYHYeYfIjIsizzxitnJpfsNjqW8ePs\n4YZaG3NX1AWzF5dVRhtE9FOYGzb29W99znG+dudZx+KHOrvWJjOnGB3XOMQBuNHoICYTeeMbv/cA\nkA0QhCGIyLI084ZlBGXX6Z6Dq4yOZzy
8zv3dJuvsix5rzr5CmVH7QtD6cFrkPU1EKWbndI/34Gc6\nu+Ju6OsvUYB8o0OaoPcC+KvRQUwW8sY3fjcZHYCY3IjIfGXmdcuJsOtUd/QkdKw2zGfWVCLl3aHQ\n/oSc2aNdM14uC8kwq4gqSZp27CPdPW33dnUvSdF4tdHxBNEt2JxqwuYudeymYqJkmHX8rjM6ACGI\nyHxFxnXL56Qs22l0LOOQrXmqj170CCnZ3UlFp4PVgdsiVXMR+RTmprX9A9v+dK6heledY+GjHV3r\nUzRONTquIMsCENbklIjyiOgpIqomouNE9BIRfZqIhi3/E9GviGiB7+taIsoaps1mIvpCkOLbSET/\nG4x7XUoqc+PgKK+cDyAkO9cLMV5EZC7JuGYFgXae7N4XFZ/ovc59vZcOtTbmrTyXcrp+1giXjIvL\nAksw7iNE0DG7pnq9Bx/o6LLc3Nd/hQnYYHRIYfBeAJXh6IiICMCzACqY+aO+x0owyjYpzPzJCfRn\nZmZvoNcHm1TmxkeqciKiEJFpScbVK+alrnjT6Fj8wWrjgktXtbZkLY4P1v1dZqnMiciSoGnH7+zq\n3l559lz/i46GVaV9/ctMmDSrrt8bxr6uBuBh5scHH2Dmw9CTySQi+jMRvU1ET/oSPxDRViK67MhE\nIvp/RHSSiP4PwNwhj28lom8R0TYAjxDRUiLaRkQHiOgVIsof0u67RLSXiN4honW+W7gBdPnabCCi\nw74/h4goeSIvXipz43O90QEIcSkiMi1O37CKQG+e6Nod6ZtZZ2qeUwdN1rnvnmvssqXPZZBK4An/\ngnNbYJvoPYSYKGJuWTngPP5IR1fBQrd7gdHxGGgONqfOwOauM2HoaxGAAyM8dwWAhQDOA3gT+qb/\nO4ZrSERLAXzUd40ZwMFL7pvGzBuIyAJgG4D3MXMLEZUB+CaAe3ztzMx8FRG9B8C/A7iOmXcCGJwa\n8wUADzLzm0SUBMAZyIseJMmcnxzllWZMjrK4iEJEpNjT168CaMeJrl1rjY5nNF7n/j6Tde6FB4hS\nO9NmHUvvPLVwovd2S2VOGIXZU+hVD3y6s8t0a2/fFWb5fTFoI4BwJHOj2cvMDgAgosMApmGEZA7A\nOgDPMnO/r/3zlzz/tO+/c6EnkK/6Cn0mAA1D2g2u5D3g6+9SbwL4IRE9CeCvg/EFSpI5/60AEE37\n/IhJRk/o1q0moh3HO3dGbELHatNCZs1LpLz7/tOQt6olvXPiR7V6zIib8E2EGId4TTt5W09f432d\nXYsyNG2l0fFEoI0AnghDP8cAfGiE51xDvlYxdu7DozzX5/svATjGzCPtKDDY57D9MfN3iOhF6Nud\n7Sai65j57THiGpHMmfOfzJcTEY+IlEVpa1cvTFsTlknHAcrQPO8cGfpAW8b8Ca/kY0DVFNmaRIQe\nMbcuH3Buf/J848m9dY65X27v2JChaZlGxxWhwlWhfB2AjYg+NfgAES0PoP/tAG4jonjfPLaRFlCc\nBJBNRKt8fVmIyO/RBSKaycxVzPxdAPsBzBtnnBeRZM5/Ml9ORAUiUhamrVm7KG1dxCZ0Xuf+/qHf\neyzJ8zQyu0Zq76cJzTkRYlTM3nyvd++/tbbtOVBbn/pEY/P6xS733LEvnPSmYnPq9FB3wswM4DYA\n1/u2JjkGYDP0eXLjuc9B6EOphwH8BSOsxmVmN/RK4HeJ6Iiv/Xh2FfgcEb3lu3YAwD/GE+elSH/9\nYjSO8spkAO2QYWkRRZiZT3Tt2lHVUblu7NZh12FL+1zy0KFW+1u/OJTdevSKQG+oAe0f/bI5Izjh\nCaGzadqp9/X2nX+gs2tBlqplGx1PlLoHm7t+Y3QQsUwqc/5ZD0nkRJQhIpqfumrt4vQN242OZRjp\nmufkRUOtDbkruydyQya4JxaSEDpibr/S6dz+v+eb3t5f55j9r20dGySRmxBZDBJikqD4Jyo2ZBXi\nUkR
E81JXrCPQ9iMdW9cbHc9QXuf+AZN1/rvfd6bPnlBVTZI5MSHMaq6qHry3s1v9YE/vlVb9Q7wI\nDknmQkySOf+sMDoAIQJFRDQvbcV6Itp+uP2NiPkFxWrLImbVQ2SyAIDXFD/fa7L1mFVXQJtnagRP\ncCMUk4FV4+rSvr76Bzu6FuSo6nKj44lR07A5tQCbu8Y1f034T4ZZx+Aor1QAyD9wEfXmpl61/oqM\na7cZHccQaZp7yFArkbklqyTgpfmqIsmc8BNz1xKna/uvG5qOH6irn/m11vaNOaqaY3RYMU5GuEJI\nkrmxLYDsLydixJzUZRuuzLwuYhI6r2v/RStYG/NW9I/UdiyaJHNiNMxallc98MW2jp0Hauvjft/Q\ntP4qp2syn84QbpLMhZAMs45NhlhFTJmdsnQDgbYdaHvV8HksrLYuYlbdRCYrAHSlTM8P9F6qAnXs\nVmKysTDX3Nzbd/azHV1z8lV1qdHxTGKSzIWQVObGJjt6i5gzK+XKDcsyb4yECl2q6n773aFWTbHM\ndluSWwO5kdcEb/DCElGNuXuRy1X5i4bmqoO19dO/2dq+IV9VA/6gIILiCmxOlRNaQkSSubFJMidi\n0syUkg3Ls27aanQcquvAhaFWImrKWRrQuV4ek1TmJjVmLUNVD36+vePNfXUOyx/PN61b7XTajQ5L\nvMsKYJnRQcQqGWYdhW+zYJlTIWLWjOQlGwHauq/1HxuNioHVVvvQodam3OXeonNbx30fj1mSucnI\nzFx3Q19/7UMdnbMKveqVRscjRrUaIx9wLyZAkrnRLYdUL0WMm5G8eCNB2bq39cWNBoWQqrpP7DPb\nFi0HgJ6kwqmB3MRjhhbcsETEYu6Z7/Yc+WxHZ8r6AediAMVGhyT8Isl2iEgyNzoZYhWTwvTkRRuJ\naOuelhc2GtG/6jzgNtsWAQBYMRcPxGWei3e2TRnPPTwmyNmEsYyZ0zXt8B1dPf2f6O4pSWBea3RI\nYtwWGR1ArJKq0+iuMjoAIcJlWtLCjSuzb91qRN+stdmZve/OnWvMvapmvPdwW0gqczHIzFx/Y2/f\n1hcdDee2nz13xae6utckMCcaHZcIyBxsTrUYHUQskmRudDJ5VkwqxUkLNq7Kfu9WA7pOUd0n3l3V\n2pxzJY33Bi75FRE7mPvmuNw7/rOp5cjB2vrCH7S0bZzq9RYaHZaYMAuAuUYHEYtkmHUEjvLKBADT\njY5DiHCbmjR/IxFt29n8t7DuQ6c6D3jNNv3zU19C3qzxXu+Sd7Ool6qqRz/R3dN9Z1dPSaIMo8aq\nRQDeMjqIWCNvfyObD2Dc1QEhYkFR4rwNa3Jo65vNz20MV5+stduZvU4icxxIye1JnHImue/cDH+v\nd1lkzlw0MjGf29g/cPqRjs5p0z3exUbHI0JuodEBxCIZZh2Z/IUTk1ph4tyNa3Ju2xrGLpNV97F3\nh1ob81Y4xnOxyyLvZ1GDeWCm273zh00tBw/V1hf8uLl1w3SPV1akTg6yCCIE5M1vZJLMiUmvMHHO\nxnW5H9warv5U56F394pryVpiHc+1LqtU0iNdsqpVfaqzq3JnncPz3LnG1df3D1xJMgIy2UgyFwKS\nzI1MkjkhABQkzNq4PvdDW8PRF2vti5m9TgBwxmXOY/i/QtVpIXk/i0AKc8OG/v5tzzoaanaeddgf\n7uhal8ycYnRcwjAzsDk13uggYo28+Y1MPj0I4ZOfMHPj+tyPhOMs16R3h1qJ0rpSZ5z090KnVd7P\nIgazc5rbs/O7za0HDtXW5/5XU+uGWR6PLCgTgJ53yMlKQSZvfsNwlFcmAQhoF3ohYlV+wvQNG/LK\nQp7Qqc6D71bjGvJWNvt7nVPmzBkuUdOO3d3ZvX3HWYfr7+caVr+nr3+pIr9nxOWkWBJkspp1eAsg\n8ziEuExe/LQNG/M+um1r41Mh27aEtY7FzJ4BIkt8W8ZCvzeHdVlgC
lVMYmQKc9PqAeeJRzo6i+a5\nPTI9RfhDkrkgk09Mw5M3JCFGkBtfvOHqvI+FskKXqLr0oVa3NWWBRia3Pxe5rPLhNGyYXUUez+5v\ntrTuO1hbn/XzppaN89yemUaHJaKGJHNBJsnc8GQ8X4hR5MRP3XBN/sdDltCproP6nnFECe3p89/2\n5xqnRZK5UEvQtBN3dnVvrzx7rv8lR8PK9/b2LzdBKqJi3CSZCzJJ5oYnnzCFGEN2XNGGa/M/sR0I\n/ma9rHUuYfb0A0BD3op2f65xWUgO9AoBYm5ZOTCw7Y/nGk/tqXPM/5f2zvVpmpZudFwiqhVic2qq\n0UHEEknmhieLH4TwQ1Zc4fpr82+vRPATugTVVXUEADrS52b4c4FLKnPBw+yZ4vHu+VpL296DtfXp\nv2xs2bDI7Z5tdFgipkjRJIgkmRue7EQuhJ+y4qasvy7/jqAndKrrEADAa06YryrWvrHau80Y1ybD\n4nJxmnbyY10927edPdf9suP8ig/09l1lloVyIjSkaBJEksxdwlFemQAgy+g4hIgmmXEF668ruHMH\ngpjQsda1hNndByJLS9biE2O1d1kkmQsEMbctH3Bu+/35xnf21TnmfqW9Y32GpmUaHZeIeZLMBZEk\nc5eTv2BCBCDTlr/u+oJNwUzoElTXW0cBoDFvhT+VOVuQ+o19zN58r3fvv7a27zlQW5/yRGPzhiUu\n9xyjwxKTivyuDSJJ5i4nQ6xCBCjDlrfuhoK73gTg9zFco1FdhwgAOlNn5o7V1m2RZG4sNk079eHu\nnm1v1J/r+Gf9+as+0tO7wgLIwhFhBPldG0QyF+Jy8hdMiAlIt+WuvbHg7h2vnP/NakzwA6NvqLVX\nU6xzPObEDou3b8RVlB6pzA2LmDtKXK6qR9q7spe6XPMByEIGEQmkMhdEUpm7nPwFE2KC0mw5a2+c\ncs8uTLxCF6+6jh4FkdKUs3TEc1oZ8DKRvJ8NYlZzvN59X25t37W/tj7xtw3N632JnBCRQn7XBpG8\n+V1OKnNCBEGaNXvNTVPu3UUgdSL3UV2HTQDQmLt8tJMgnBPpI1ZYNa6+rad32//Vn299rf788o/3\n9K6yQhaGCGMxw+lh09kOTjpyWivYuU1dvPU33hvfnlb+ogzxB4kMs15OkjkhgiTVmrXmxin3vPnK\nuSdWMjigkwJY617C7OrpSS4qGrEN4AKQFHCg0Yy5a7HLffThjs6MFU7XQsj+XSKMNEanG5aWLiR2\nN3Nafz3naGc4X6nWCuJqOD+1jnOyOpCSAb0Sd2k1Lg9Affijjj2SzF1OSr9CBFGqNWvNTVPu3fny\nuV+vCDChi1NdRw9S3PLVTlt6Q5yrI//SBkzw6/zWmMGsZanawbu7uj1lPT1X2BjrjA5JxBZmqBqU\nln7Y2jo4qacRGe5aLVc7wwWWM1yQWMN5aQ7OzhmALQ1AWoDd5EOSuaCQZO5yl/2iEEJMTIo1c/XN\nhZ/c9Q/Hr5YzeNzvO17XEZM5bjkac686M+3sK5M2mbMw19zU21f3UEfX3HxVXWZ0PCI6MWPAA1Nz\nL+I7Wzm19xxneWo5T6nmAusZzk+u03IzG5CZrUHJg149CxX5fRskkswN4SivTILMLxEiJJItGasC\nTui07hJmV09zzpU87ewrlz+twBOsOCMOc/dCt/vIwx1d6asHnIsATDc6JBG5NEaHC9bWLiR2NXPa\nwJBhz/gznJ96lnOyOpGcDn1KkdHTiiSZCxJJ5i7m1xmQQojAJFsyVr2n8FO7X3L8ctk4Ezqb6jpy\noDdx6bDzwVSKsWSOmTM07fCmru7+j3f3XhnHLMOokxwzVBVK8wBs7e2c3NOATFedlsNnuMB6mgsS\najkv4xxnZTthSwcw4hY+EUaSuSCRZO5ikswJEWJJlvSV7yn89O6XHL9cytD8Xs3mdR0xm+Ouyu9N\nyK9N6m+YNvQ51QRv0AM1gJm57
vq+/tqHOzpnFXrVK4yOR4QHM/o9MDf3IL6jlVP7z3GWt4bzUc35\ntjOcn1Kn5WU0IT1bg5KP2EqAYum1GEqSuYvJeYRChEGSJW3lLYWf2vOi45dX+p3QaT1LWHN2Neat\nODvrzHPThj6lKlGczDH3znN7Dj/U0Zm8fsC5BMYPfYkg0pjanbC0dCOxu4nTnWc5Rz3D+ebT2pS4\nWs5LPcs5OV1ISgUwzfdnMgnb71wiug3AXwHMZ+a3x2j7KwA/ZObjE+xzGoDVzPwH3/fLANzJzA9P\n5L7DkWTuYlKZEyJMEi1pK24p+vTel+r/p0SD5s9cVZvqOrK/JWuJddaZ5y56whttlTlmTtO0I7d3\n9fTd0d1TksC81uiQxPgww6tCae6Hrb2dU3oaOMNdx3mo5gJLNecn1XBe2nnOynHBmgH53TKStDD2\n9TEAOwB8FMDm0Roy8yeD1Oc0AB8H8AffffcD2B+ke19EkrmLyT84IcIo0Zx61S1F9+19sf4XfiV0\nXtcR60Dq8hkMMAH07uMmTGhj4nAxMTuu7R+ofri9c0ax11tidDxieMzo8w17drZwWr+Dszy1nK9U\nc35ctVaQVMe5Wc1Iy2IoBQAKjI43iqWFoxMiSgKwBsDVAJ4HsJmINkJP6loBLAJwAMDtzMxEtBXA\nF5h5PxH1AvgZgOsAdAD4CoDvQd/G7HPM/LyvAvc7AIm+Lj/LzDsBfAfAfCI6DKACwCHffUuJKAPA\nEwBmAOgH8GlmPkpEm333nuH774+Z+T/Heo2SzF1MhlmFCLMEc8pVtxTdt+/F+v9ZrEEd/XxV7l3C\n7B7oTpn+Tmp3zdzBhz2mCR8bFjrM/bM9noMPdnQlXdM/sISAQqNDmqyYwQxqc8La1oXE7kZOH1zt\naT6tTYmv4by0s5yT1YPEVMiq4XBIDVM/7wfwMjO/Q0TtRHSl7/ErACwEcB7Am9ATvh2XXJsIYCsz\nf4mIngXwDQDXA1gAPUF7HkAzgOuZ2UlEswH8EcAyAOXwJW8A4EsgB30NwCFmfj8RXQPgtwBKfM/N\ng554JgM4SUQ/Z+ZRF3lJMncxqcwJYYAEc8ryW4ru2/9i/S/sYyR0VtV1eF9D3grv0GTObY68ZC5F\nVY9+vLu3a1NXd0mSDKOGHDM8KpTmPsS1t3NybwNnumt9w55n9GHP9POcleOGJQtAltHxCgDhG2b9\nGIAf+75+yvf9iwD2MrMDAHzVs2m4PJlzA3jZ93UVABcze4ioChfmOFoA/BcRlQBQAczxI6a1AD4I\nAMz8OhFlEtFgcvsiM7sAuIioGUAuAMdoN5Nk7mJSmRPCIAnm5GWlRffvf8Hx+CKN1biR2nldR61t\nmR+x6u/JukhJ5kzM5zf0D7zzSEfn9Bke72Kj44kVzOh1w9zcg4TOFk4dcHC2p4bzldNcEFej5SfX\ncU5WM9KzAJoCYIrR8Qq/hbwyR0SZAK4BsIiIGIAJAAN4CfoxgINUDJ8TeZiZfV9rg9cws0ZEg+0f\nBdAEYAn0M+/9OSuahnlssB9/4rqIJHMXk8qcEAaKNyctKy28/8ALjscXjpjQcW+J02xr1kjxKKyv\nhHVbiIdtGw7MAzM83kMPdnTGXdc/UKLIHCq/+YY9WwdgbetE0uBqTz6jFZiquSDhDOen1nN2di8S\nUjBZz96NbeZp5S8m1H7nlv4Q9vEhAL9l5vsGHyCibdArY8GSCsDhS/A2QU8YAaAH+lDpcLYD+ASA\nr/uGX1uZuZtouBxvbJLMXSzN6ACEmOzizUlLSwsfOPCi4/EFKnvjh2liUd2HazrS5rZndpywA4Db\ngHeyZFV766M9PR13d3YvSWZeHf4IItuQYc+2Nk7pO89Z7lrOhW+T28Q6zs08z1nZHpizAWQbHa8w\nTCL0BQCh8jHoCxGG+guABwBUB6mP/wbwFyL6MIA3APT5Hj8KwEtERwD8L/QFEIM2A/gNER2F/vo
3\nTSQAulA9HKMhkQp9vNgCwAt94t+PmTkihjeCwVFeuRfAcqPjEEIATrXv4Av1j88fNqGjxP1TXUt7\nF534zUYAeHM+bfvJ+00bQh2TwtywdsD5ziPtnVPneDyTdoI8M7rdsLR0I6GrhVP76zlbreF8quaC\nuDNafkod52a2IjULCLDMICaTGbXfuaXG6CCi3Xg+zw4wcwkAEFEO9H1TUgH8+9BGRGRm5uja8+mC\n4aoAQggDxJkSrywteuDQC/U/n6uyN+GiJ7lvSXvqlHf3a3L5fY5EAJid0zzeQ/d3dllv6uu/whTD\nu9YzgzVQqxO21k4k9jRyhrOOc7UzWr7lNBfE13J+Wj1nZ/chPgVAitHxipiQOHYTMZaABieYuZmI\nPg1gn29PlE0AbgEQByCRiN4L4G/Qz4ezAPgqM//NtxfLy9BXi6wEcATAb6Av0c0B8Alm3ktEV0Ff\neRIPYADA3cx8MtAXOQ4jTroWQoRfnCnhiluLHjj89/qfz7kkobM4tdNuVbEMmDRPvNM67GTiCUnU\ntGMf7u5t/2RX1+JUjVcF+/7hxgyXF6bmPsR1tHFK73nO8tRyLk5zgfUMFyTVcW7Gec7M8cqwpwgv\nmQsZBAHPNGHmM0SkQE/CAGAVgMXM3O5b4XGbbzJfFoDdRPS8r90sAB8G8GkA+6DvjrwWwHuhb8b3\nfgBvA1jPzF4iug7At+BbwhtikswJEWFspoQSPaF7fLbKnnc/xXvdx5JaM+3Hc1sOLnVZgpPMKczN\nqwacxx/p6Cya7/YsDMY9w4EZXS5YWruR0NnCaQP1nOOt4Tylmgviq7WClLOck9mGlEyAigAUGR2v\nEENIZS4IJjpteOgb6KvM3D7k8W8R0XroS3mnQN8nBQBqmLkKAIjoGIDXfDsuD92zJRVAhW/zPYZe\n3QsHSeaEiEC+hO7IC/U/n+llj/5JnvuXNGYufyO35SBcFlICvjmzu8jrPXB/Z7f5lt6+K03AxiCF\nPWHM0DRQywBsbZ1I6m3gDOfZC8OeCbWcl1bPOTn9iEtF+DZgFSKYEsZuIsYScDJHRDOg73/S7Huo\nb8jTn4Bepl/q21yvFhcSpaH7p2hDvteGxPN1AG8w822+odmtgcY5Tv6cDymEMIDNFL/k1qIHjv69\n/uczfAmducPW5gEApwXjTubiNe3tD/b0Nt3X2b04TdPCPozqG/Zs6kV8Rxun9J3jLI++yW2+9QwX\nJNdyXkYDZ+SoMOXiwodhIWKN7KoRBAH9EIkoG8DjAP7LV1W7tEkqgGZfInc1gOJxdpEK4Jzv67sC\niTFA4aoACiECYDXFL7616DNVf6//+TQvu5P7qSHXY47vclpdprGvBoi55Sqn6/jn2jsLFrnd86Af\nmxN0GqPLDUtzFxK7mzmtv55z1BrON53WCuJqOD/lLOdktyMlA/rZi1NDEYMQUSLwqrp413iSuXjf\ncReDW5P8DsAPR2j7JIC/E9F+AIehz4Ebj+9BH2b9PIDXx3ntRMgnBCEinNUUZ7+16IGqF+p/Xuzh\ngSXN6TP3Oq3HRz4CjNlT4FUP3tfZRe/t7bvSDAS8hYlv2LN5ALb2dk7ubkSGq07LRTUXmM9wQWIN\n56U5ODtnADYZ9hTCP5LMBYHf+8xNBo7ySg3DH7EhhIgwHs117O/1Py9M7+adr11xMv/AbKVk6PNx\nmnby/b19Tfd3dC3I1LQxz+JkxoAHpuY+xHe0cmq/g7PctZxHZ7jAVs35yXVabmYDMrM1KH5VAYUQ\nfvlo7XduedroIKKdVKJ8HOWVCiSREyJqWBTbwluLHjj2cu3/JLss+hQJYm5b6nQde6SjM7fE5Z4L\nYC4AaIwOF6yt3UjoauL0gXrO0c5wvlKtFcTXcH5qHedkdSI5HfqUkPFOCxFCBE4+HAWBJHMXyF8o\nIaKMRbEtvGnafcf2ef79ZFmD7Z21/WpCO6dq25F15vdaTn0
958Q7OCvFBevgAqxhhz/NhNYs9LaG\nN3ohhAolWg8ZiCiSzF0QM8eSCTFZMJgrk/ee99pu0Ka9vZfP987qcyflmJPNA4rd5DUttnR3w+Lu\ngNlKmtlkVhWyushjdsFrcpPX5oUar0JL1MCpAFJAMn9HiDCz6FvPiomQZM6n8DvrVN+cOXkzFyIK\nqNDcf7Xu2W8tPGp+Ne1R+7LuM81TrX9iZW+mM31gRcL5/LXcY/Ukat6aHs1TZ2WtqxjgvHhTUkuW\nJaM1xZLZk2LN6k22ZLQkmlOsViU+UVUozkVeq5M8AwPk7u8nl7uf3N5+cqkDcJOTPIobHqubVJsK\nNV4FJzM4BSS72AsRIJm4HwSSzF3MDdk4WIiI54a3+xnbzmoneVYn56qvcbx17hb+CH7rqW6Yt+LY\nki8muE+UvvKmstRhKW4oWHeyvvD9JpctPYu5p8njrq5t9VS7m13HM8CuKwBctBKWQN54c0pzsiXd\nm2rJcmVaMrVplgxKMKfYbKaEFDOZswHKoiF7MmnQPE54Op3k7u0nd+8A3M5+cnn6ye3pJxeccMNJ\nHrObvBYP1Di9GqglM5AGkv0txaQmo2JBIMncxSSZEyLC9cHV/Cfbzg4vaVfYbL0N7ba0RABQpyQ0\n3nX2i+u30aN7K3oaNv7q5pQ3H6OUtoeef01dvfu15V5zQl994dXV5wrWJniSSq4CkYlZdWvec8c0\nz6k2zXPWzFpnMYOn9Hu7Cvq9XQVNA7XDxqDA5Eo0pzQlWzPbUyyZfSmWLE+yJV1JMCfHpSgpqSYy\nFxJRuj+vxwO13wl3t5M8Pb5qoKsfLnc/udUBcvEAeRQXPGY3ea1eqAm+YeEUAKkyLCxigFTmgkC2\nJhnCUV7ZAmDMLQyEEMbooN7av1r3mpi4CABmzd697ZX8DfQq3bwebrXd9kZjUgJc3v22B+oSyTm/\nyWRqur0gtw7dpvzPP6ueLW7GSgJMLmtKy9mi64835K3M8JrjF2FIlY21nkbVc6ZWc592aWpjOtg1\nBwF8yDORuT/JnNaUbMnsSLVm9adYMr1JljRTvCkp3qrEpSlkziGilEB/FgxmN7w9TvJ0DcDdO0Au\nZz+5nf3k8vaTWx2Ai5zkUVzwWjz6/MAEFZzkGxZODrRfIYLsg5s3b/5rqG5ORD8CUMfMP/Z9/wqA\nemb+pO/7LQDOMfNI++Zeer+NANzMvDMkAQdIKnMXcxsdgBBieA3UcfxF68Ec0IUPXDk5NTn1uKMF\nAGA1ZXC8affAgG3lta4fZOywPdyQq6r5r9afz/11asqbX7ondWFhC85+/lm1oaC9e9Xs6r9smF39\nFwzEZTlqi2+sbspZmquZbPNISc4z25bkwbYEAMCseth7/rjqOdWieeosrHVOBbhwrHhV9iZ0eVqn\nd3lapzv6Tw7bxkLW7iRLenOyJbMz1Zo5kGLJVBPNaeZ4c1KCRbGlKzDlEVH8cNcSiGywpNjYkpKK\nhHHVNzRoXhe8nQPk7hkgd98AXM4+crsH9KFhDLw7LOwZOiyc5BsWHnmDZiHGzzV2kwnZCX2FxY+J\nSIFesBn6IWo1gM+N434bAfT67usXIjIzc0hX7UoydzFJ5oSIQNVK4/43LMfmg5A4+FhCQmetyeSd\n34zcnsHH1JnJivJWJxqRkfsh9+aTz1r/LZkISfd2da+5tbev+faC3JZH7zOvntnApx59Vm3P6cKK\neGdr4fyTTxbOP/kkehMLamqLb6pryVpczIplOgAQmSxkKVqgWIrejYe1nibVc6ZGc1c7NbUhI9Dq\nnYfdKR3uppQOd9PFp1sPYVXi25Mt6S0plsyuFGuWM8WSoSWa0yxxpsQki2LNICi5RDSueXcKFHM8\nrFnxbM0a7yCXF+qAE54uXyI40E+ugX64Pf3kUgfIzU5yK059WNjmfTcRfHdYWLaAEpdyhvj+bwL4\nke/rhQDeApDvmwbRD2A
+ABDRNgBJAFoB3MXMDUT0MID7oZ96dRxAue97lYhuB/AQ9BOuHseFY/k+\nx8xvEtFmAAUApgFoJaJ3fG1m+P77Y2b+z2C9SBlmHcJRXvk2fJuMCiEiwxFT7Zv7zNVXgS4+O3ne\nvO1bs3PqNt6FP1Z7yDoTAMCs2v55vo2AHAD4oLJ93w8sj19JQ5KIX6em7PxJeup8Jkqff5aPP/I3\ntT+jF8su7bcrZfrJmuKbGzvS581kxTRqJU6v3jWcurh6p41ZvQsSjjMltSRb0ltTLJndKZZMV7Il\nE4mWVEuckpBsVqyZBMolIkMTqSHDwt36sLBvfiC5vP1wqwPkJie5hw4LD64WTgYh4OFoEfHWbd68\neUcoOyCiWgDrAdwM/XCAKQB2AegC8APode33MXMLEZUBuJGZ7yGi8wCmM7OLiNKYudOXpPUy8w98\n9/4DgP9m5h1ENBXAK8w839fuVgBrmXnA9/0NAK4GkAzgJIA8ZvYE4zVKZe5iUpkTIoJUmk9sO2k6\nvx50+eksmVn1UwHAA0vBuw8SmbRM2wlTmysHAP6irV9uV89sv8v8z/WDTe7t6l7tq9LtPTHVfNX9\nD5mxpFo7+tDfNS1lACWD7VK7a+aWVP33XAa4PX3eW3XFN7V1ps5cAFKyL41Fr94VLlAsF/I31nqb\n9erd6QFNbcwAO2cDGHbIdILIqfbmONXenBZn/fANQGq8Obkh2ZzRmmLN7E21ZLmTLelIMKfabKb4\nFDNZsgDKHrpCN+hBTnxYuMvpGxbuh3tA3zbG5e0nNw+uFnaR1+KBN06FljBkWFgWtUW2UFfmAL06\nt9r354fQk7nV0JO5c9CTrFd9f/1NABp81x0F8CQRPQfguRHufR2ABUP+6aQQ0eCc1OeZeWBI2xeZ\n2QXARUTNAHIBOCb64gBJ5i4lyZwQEYDB/A/Loe3nTR0bhns+ObnlpKJoc9uR0QyinKHPeeekFJp2\ntbz7/WbvXesXKrXblyvvvJvQ5ahqzj/rz+c8kZq888fpafOPzFQWf/JzCla8rR26/yXNnOiCfbAt\nAZTZ8faizI63wSC1JWvJwbriG/t7kooWgShtpNdASlKO2bY4B7bF+mtizcve8ydUz+lmzVNrYa2z\nENCmjnR9MDHY1O/tzu/3duc3OWuHbaNAcSeYU5uSLRntqdas3hRLpifZkq7Em5PjbEp8ioks2USU\nGY54h4nNHA9rZjxbM9PHPyzsdMLT5SR3Tz+5+wfI7dRXC+vDwgPkVlzwmNzktXn0TaQThgwLy+/I\n0OsPQx87oSdvdujDrPUAHgPQDeB1AFOYedUw190CvaL3XgD/SkQLh2mjAFh1SdI2uKbq0skTQ+cH\nqghiDiZ/US8myZwQBtOgef5q3bO3U+kfNpEDgKnFRxsAzHWgqAm+IdVBnGKdySY6Riq/+8Zb5v63\nNTttD+3Po46LhlPv6epZXdrb33x7fu6eBot5xZ55yhV75ilYX6Xt++QrWnKcB/OGtiewKaf18JU5\nrYehkcndmLt839mi69z9CXklIErEKIgUM1kK5yuWwvnvxqr1tqiemjOa+7RTUxtSwc45ABL8+kEF\nmQbN2uvtKOr1dhQ1DFQP28ZE5oFEc6pvhW5mX4oly7dCNzneqsSlmvQVupcdl2YkM0xxSTDFJXFc\n7njnB+rDwu6uAbj79GFht6ufXJ5+uHzDwh7FBY9F3zZGS1ChJTE4GfppInLWt3/Ckcy9CT15O8PM\nKoB20j+ILQRwH4BHiGgVM+8iIguAOQBOAChi5jeIaAeAj0OfU9eDixdQ/BPAZwF8HwCIqISZD4fh\nNV1EkrmLheMvlRBiBB54e5+x7To5QO41I7diTk9vmA0AtZjeM1wLtSixzVzb++73GhTTta4fzN1v\ne+BUPLlnD22bo6o5/3RcqNIxUfp2u7J8+yLiGw7y7jtf17KsXsy6tA+FVWtB4+7lBY27o
SrW/vP5\nq3fVF16tOOMyS0Dk14pPUpKyzTZ7Nmx23yvTvKw2nFDdp1s0T62JtY5CQCv2517hoLI3vtvTNq3b\n0zbt3Ajvlmay9iRZ0ppTLJkdKZasgRRrppp08QrdXCIyJGEdLyvMyVY2J6eMe1iYVZevGjhA7t5+\nfRNp14B+mgj7Vgub9E2kvfFeaPFDhoVDMRQfycLxe7cK+irWP1zyWBIzNxPRhwD8p++DiBnAjwG8\nA+D3vscIwI98c+b+DuDPRPQ+6AsgHgbwMyI66rt2O/RFEmElCyCGcJRXPgM5JE4IQ/TD1fKMbVer\nl9T5o7VLTz9Xtcj+uh0AfopHt+2mtZdX8Dxal+31BhtdssK0iJrPbbU+ajURXzbvDQCaTabm2/Nz\naxos5hWDjxGzduse3l22TSuwaJg21uvwmOK7HIUb3jpXsN7mtqaUgGhCH5pZ62tRPWdqNE91v+Y9\nn+abezdqFTDSWZW4jiRLenOKJbMr1ZLlSrZkqImWNEu8KTHRotjSCUoe+ZkQxxoVmssJd9cAeXoG\nyNXfT+6BAbjcfeTWBsil+TaR9g0LewdXCydDTwSjsUCTsHnz5oGxm4nRSDI3hKO88nHoJVchRBh1\nUl/dX617oBGPWYVasuTl7SmpLesB4Mv4wY6zNH3tcO2slU07lX7v6ksfX6kcO/ZHyzdn0CgVkN+k\nJr/5o/S0BTzkFAdFY++Hdmi7b9vF00wa/Fqp6rYkt54tuvb4+fzVaV5zgh1BWGDArKmsNp5W3aeb\nNE9NxFXvgoTjTImtSeb01hRrZneqJcuZbMngRHOqNc6UmDRkhW40Ji8h44a317eJdN8AufoGh4UH\n4Fb7yUVO8pBvWNjmGxZOZH1+oFHDwq7NmzfLApUgkGRuCEd55bcAfNnoOISYTBqp88QL1gNZIAxb\nLbuY5l277slO8m0cfB/+90gvJS8ZrqXS0H/AerRj6XDP3Wl6ZdfXzBUraZRfYC0mpeX2/Lwz54dU\n6QDArLL7Y1u13e/Zx3NMjLyxY9YN2DIa6opveKcpZ3m2ao5b4O91/mCtv0311FRrnlP9mrchFTww\nB1FevfODlmBKbkq2ZLSlWDN7UixZ7mRLBhL1M3STh6zQlSPPxsBgTR8W9vQMwN3bT64BXyI4ZLWw\n2+TbNibOe2G1cCpoQnM8mzdv3pwbtBcyiUkyN4SjvPIx6HvOCCHC4IzSdPB1y1uz/T1eKju7Zv+8\n+TveXcRwB55p0MiUP2xjZs32z/NNBAz7/BbLz7d90FQ54iKLQf+bkrzzhxlp85goY+jjFi87N/2f\ntue6Q7xQGecxgH0JuXW1xTfXtmQtmaKZrJfNx5sovXrXdFp1n2rSvLUKq+2FgDYt2P1EOoLiSTCn\n6Ct0LZm9KdYsd7IlnRJMKXFWfUuWbCKSIxwnQIXm9q0W7tZXC1+0ibQ2uFrYpZ8tHOeFmqSBk6AP\nC9ds3rxZ9nYNAknmhnCUV94N4Amj4xBiMjhqqtu513x6GQh+n15wxZV/35GU1LkWALwwuzfhKTNG\nqbxYDrZtNbU4N470/EvW8h0LlLPDDtMONVKVDgDi3Nx77yvagfVv8RIC0vx7JRd0J089VVt887m2\njAUzWDGHbKsS1vrbNU9Nteo53ad5z6eCB2ZDX503qSlkciaa05pSLBntKZasvhRrpjfZnK7Em5Pj\nrUpcWiSu0I0VHqjbp39n45gfqMTYJJkbwlFe+T6MvDGgECJI3jS/ve2E6dywmwGPhBSvc82aP7rJ\ndxpAPYpqy+nH00a9ptdTa3uzecQ2Zng9e20PHsugnhJ/YqhISd65ZZgqHQAkOLnrgZe0w1ed5CsJ\ngR1k354251hd8U2tHWmz54IUv4dwA8HMmm/uXaPmrVFY7ZgCqNND2We0MpOlN8mS3pRsyehMtWT1\np1gztURzmhJvSkqwKnFpCplyiWjSJ8YBeKnwO+tuM
TqIWCDJ3BCO8sq1ACqNjkOIWMVgfsVyeJvD\n1L5xvNfm55/cPWv23pWD37+JtQf+mx4ddk7cULbXzh8lLy8e6fk09HTstT3YaSWvX4lMi0lpuSM/\nr/qcxbxyuOeT+7n9s3/XqkrO8HIKcM84BmmtmYuq6opv7OpOnrYIwySPocDaQIfmqTmtV+/OpYIH\nZiHAxHSysSi2riRzenOKNbMz1ZI1kGzJ0JIsaeY4U1KiRbFlKPoZujLZ/2K/K/zOujuNDiIWyEqg\ni7UbHYAQsUqD5n3WundPh9K3MZDrpxSeuOiTZy1mjHA0/cXU4qRuc/Ww29EBADqRnH6z+9vdr1q/\n2K4Qj5k0Zata9suO89kjVel6Eijj22WmDek93PzI39T98+uxgoBxbbNBYCW7rWpJdlsVNFI8TTnL\n9p8tus7Vl1iwGBeOCgo6UuLTTbYFy002fX2GXr1rOqV6Tjdqnhpitb3AV72TDXEv4dFcqR3uxtQO\nd+OIbWxKQmuyJb0lxZLZnWLNcqZYMjjRnGaNMyUkmRVrhm9Llsn0e7ll7CYTQ0Qq9D3lBr0fwB+Y\neTURTQPwAjMvmsD9GcAPmfkx3/dfgL5/3eZRrrkfQD8z/zbQfi+7p1TmLnCUV+bhwplsQogg8cDb\n9yfbrhP95L7sQHt/mEyenlWrn7LQkHM2v4N/3VZFJWPPt/FqPbbXGkxjVcmuUQ4e+bXlB/NpHHP4\nxqrSAUB2J59/9Dm1emYDVtEEP0CritnZkLf6cH3RNRiIyyqBAZUe1gY6NU/tadVzqk/znk8C98/G\nxTvii8Bp8aaklmRLRmuKJbMnxZrl8q3QteordN/dkiVWVuiWF35n3XdD2QER9TLzsEPgQUrmnNDz\nhuXM3OpPMjfKvczM7A0kjsn0CcAfbUYHIESsGYC79RnbzmYPqQElcgCQX/D2USJcdCpEE/L8q3aZ\nlWROMu+gXu+oCx1e165c8gPvR3b8i+WZMRdEDBqs0v02JXnnD0aYS9eSRgVfuctcUNDGdZ9/VnUU\ntWAV6ec5jptJ88YVnt++svD8dnhNth7HlA0Hzk1Zb3VZ00qgH0MUcqTEp5ls85eZbPreznr1rvmU\n6jnVqHlqwWpbAaDOgFTvAqEMqL25A2pvbrPz7LANCORNMKc0JFsy2lIsmb2p1ix3kjmDEs3JcVZT\nQoqZzFkAZVEQ9jQMgyYjOh0uwSOiu6BX7UwAFgHYAsAK4A7oZ6q+h5mHG73zAvgfAI8C+H+X3LMY\n+qLKbOhVyLuZ+SwRbQbQy8w/IKKt0M+OXQPgeV+/439NUpm7mKO8sgvyKVOIoOii/vq/WHerGvG0\nidznqhV/3m+zDVx8riqePOmiOL+2NVCaBg5bD7eX+NP2F5Yt2240HRj3CrtWX5XOMUqVDgCKm7j6\n88+qzXkdWElBSnjclsT2+sJrj53PX53ksSQtGW2FbziwNtCpeWurVffpHs17LhncPwuArAgNEwUm\nV6IltTHZktGRqq/Q9SSZ05UEc3K8VYlPNZE5m4ZsiG2gmwu/s+7lUHZwyTBrDTPfNpjMDa3M+ZK5\nrwK4AvrJMacBfImZHyeiHwGoY+YfD3P/XgAFAI4CWALgU/BV5gaP/mLmCiK6B8B7mfn9wyRzx5n5\nMxN5nVKZu5wDQFA39BRiMmqirpMvWPenMyFnIvexWJxtVutAyaWPu2Dze7WnlhO3hAn1xCgaq+19\nns+vf52+sHOG0nDZ6RGjyVK17H84zmf/LiV55/cz0uYyUeZw7epyaeYj95tnzjrHJx99Tu3K7sZV\n4+lnOFZPX8bMmufXzax5Hk5rWlPd1OvfbsxbkaWa4xdO9N6BICU+zWSdv9Rkfbd6x6w2V6ue6vOa\n54xU70JMg2rr8bQX93jai8/j9LBtTGTpSzKnNidbMjtSrVn9KZZMb5IlzRRvSo63KnHpCplyKITz\nM31GnmAYPAPMX
OJn2zeYuQdADxF1Afi77/EqACMuomLmbiL6LfRzWoceTbYKwAd8X/8OwPdGuMXT\nfsY3IknmLlcHSeaEmJBapfnQ/1mqZoImXuUuLDx2nAjrhj7Wg+ROEKX5fRMi0nLiq01NA2MmcwDR\nze5vX7nP9sBbKTQw7rk0d3T3rL65r6/ljvy83aNV6U5PobkPPmjGwjrt2MN/05zpfRhzZa4/4tyd\nuXNP/yl37uk/oT8+u762+KYzzdlX5msm65xg3D8QRERkzp2pmHNnIl7PkVlzdmne2tO+6l0SuG82\npHoXNip7Ers8rdO7PK3THf0nh22jr9BNa06xZHalWLMGUiyZapI5zRxnTkq0KLZ03wrdEY/F88P5\nCVwbCq4hX2tDvtcAmImoCBcSvMeZ+fEh7X8M4CCA34xy/5GGQv1azDUaSeYuN/xEBSGEX94ynd21\n23xq6Xg2Ax5Nbt7pyxLCcyhswDg36PXOTpmlNA2wP0ObLljjrnFtyd1le8hhIdWvc1iHGlKl2/X9\njLQ5I1XpAOBYsbLwvocVXHlaO/rg3zUkO0euAIxXwkBL0YK3f1e04O3foSepsLq2+Ob61sxF01kx\nG36WKylxqSbrvKUm6zwAg9W7lmrVc/q85qkBq635vupdrEz2jzr6Ct2m1A5304jphk1JaEt6d4Vu\npjPFkqElmtMscabEJMuFFbrDzed0IQyrWYOJmesBlIzwXDsRPQPgXlw4fGAngI9Cr8p9AsCOUMUm\nydzl6owOQIhotcv8zrZjpvp1oOD8ArbZes+bze7LkptaTO8c77040VwIi3IYHq3En/atSMt+n/vr\n1S9av9JFFFjF6I7unlV6lS53t8NiGXUu3cFZyuJ7H1Ww6oR24L6XtLgEN4I6RJrc65hpP/bLmQDQ\nmTrzRG3xzc3t6XPmYKTj0MJMr97lzFTMOZdU7+pOq+5Tg9W7WQjglA0ROi6tP9Pl6s9sc50bqQnH\nmZKaky3prSmWzO5U3wrdeFNS98IffiDWJu1vAfDZId8/DOAJIvoX+BZAhKpjWQBxCUd55ccBPGl0\nHEJEm39ajmw9a2rdGMx7zpq9a1t+/unLFiP8Ag9u3U7XjLsv05meNy2nuteM3fKCUmXXgZ9afrqE\naGIffn+fkrzre2NU6YbaeETbd8+rWkqcByE7u5IBbstYWFVXfGNnV8r0BSAlos8p1at3rTWa5/R5\n1XOGWW3LA7wzIdW7aLT1sadfuNroIGKFVOYuJ8OsQoyDBs37nHXf7nald2Ow752TUzvs4olzKDIF\ncj+1OPEK86nubhrHivUXtFVLF6k12+83v7A+kD4H3d7ds+rm3r7WOwpyd9VbLKvGar91ibJ862Li\nmw7w7ttf17KtKmZOpP/hEEBZ7ccWZ7UfA4PU5uwrD9RNvWGgN2mKHRF4HqlevcueoZizZ5jj9R8h\ns6tb87xbvUsA984CEAkrNcXoZBQsiCSZu5z8BRPCTx6o/X+27TrWRy6/92bzV0JCZ43J5J0/3HMt\nyE4M6KYmJYFTLJXU7Vk3duMLvuP9+Ho71WxbYzo2oUPBMzUt6yVHQ5bfVToienkZrXxlKWnv28Vv\nfqRSKzRrCMl8NwKbclsOLM1tOQCNzK6GvBV7zxZd6x2IzykBUUDHkoUDkS3FZJ1z5eD6DmZm1lpr\nNPfpc6qnhlltzQW8syDVu0gjv2uDSIZZL+EorzQBcEISXSFGNQB3+zO2nec9pAa8e/po5s3fvjU7\nu27jcM/diafrVApsEr/S6qyyHmizj/c6gqbtsD2yfwq1TXgrEQBoU5TWOwpyT/lTpRukaOz9SKW2\n6327eKaJURCMOMaiKta+cwXrjtQXbjS7bOklIArKwpZwYnb1aJ6zp1XPqS7N40gE984EEJbzbsWI\n7nns6RdGW/kpxkGSuWE4yitrgdB8+hUiFnRTv+PP1t1ujXhGqPpYs/bJGkXRpl/
6uAZFvQPPaBM5\n8cD2z3M1xLjs3mNJgLNvv+2B+gRyzQu070s9mZK067sZ6bOZyO/5amYvuz7xhrbn5gM8T+GJ7eM3\nHh5zQmd94dVvnStYm+CxJC8BUUDD3ZFAU1trNXe1Q/Wc0VhtGazeRe3riULXPvb0C68bHUSskGRu\nGI7yym0AJjQ/RohY1ULdp5637ktmgt+b9o5Xckrz2yUlrwybMDUi79xj9LMpE7m/pap9q+n8wMZA\nri1Aa0Ol7REyEQft9bcpSuudBbmnzo6jSgcAVg8P3P2qtvfqI7xIAfxaWBEsLmtKy9mi64835K3M\n8JrjFyE6jo8aEbO7V/PUnVY9pzs1T32Cr3oX1p/pJDPrsadfqDY6iFghydwwHOWVv4V+HpsQYog6\npeXwq5aj0xHgVh3+Wrjota0ZGec3Dvfcflx16Ef0pSsm1MGAt8G2vSk30DNSl9HJE3+yfm0qEQKb\nuzeCJ1OSdn83I33WeKp0ABDv4p5PvawdXHOcS8iAjXcH4rIctcU3VjflLM3VTLagVS2NpqltdZr7\ntEP1nFGlehdUHgAJjz39QkCHyovLSTI3DEd55f8D8A2j4xAikhw3OXbvNJ+8AgT/DrgPGPPadU82\nEPGwc8L+jLIdz9JHJrzgwvZGw35ya8vGbjm8j5pe3/Nt86+WU5D21BsUaJUOAJIGuPOBF7Ujy07x\nUgKSxr4i+HoTC2pqi2+qa8laXMyKZdxD2ZGM2d2nec6eulC965kBIKK3c4lQbz/29AvDLm4SgZFJ\n/sM7ZnQAQkSSPeZT26tMZ9cGazPg0aSnn68i4hFPQajDNDUY/XinJ7stJ7sCvv4p9ZoVdjqz7RPm\n1ye0wvVSmZqW9aKjISuQKl1vPKV9/0OmDal93PrQ89oBey1fRcBEjlsat6S+89MXHX9iOgB0pUw/\nWVN8c2NH+ryZrJjGfZJGpCGyJpqss0pM1lnvPqap7XWa57RDdZ9RWW3JATyzIL9bx3LC6ABijVTm\nhuEor5wN4B2j4xAiErxqObq1ztSyMVz9LVny8vaU1JYR56x+AT/Z2UCFqyfckcou2/+dH6AJnijw\nrPVft1+hVIdkjm27orTdWZD7Tl0AVToAyOjmpkf+pp6c58BKQnCOVwsEA9yePu9YXfFNbZ2pMxeA\nlGyjYgk1vXpXf0r1nOrUvPUJ0HqmA4jZ1xugbz/29AtfMTqIWCLJ3DAc5ZUK9JPo4oyORQijaGD1\neeu+na1Kz7j2ZJtgr961657sJBp56OqT+N2xAUoIylFX1j0t25VO94QSMRNU727bZ49kU9fSYMQ0\nnD8kJ+36Tmb6LCYKKCnI6eRzjz6r1sxoxEoyuGrEILUla8mRuuIb+3uSihaBKM3IeMJBU9vPap7q\netVTrbK3JRvwzMbkrt5teuzpF35rdBCxRJK5ETjKKw9hhAN1hYh1XqgDf7LtruojZ1D2VPNXdnbN\n/nnzd4w6j+12/KmVg3TsFLW7jtv2tS6Y6H2S0de1z/aZ1jjyBP2UhkETrdIBwJRWrvv8s+q5wlas\nDHTxRzBpZHI35i4/crboOnd/Ql4JiIK6oCRSMXv6NU/9KU2v3sWx1jMD4MlUvVvx2NMv7A1lB0TE\nAH7PzHf4vjcDaACwh5lLx3mvEgAFzPxS0AMNksn8yWAsxyDJnJiEnPB0PmPbedZN3rAmcgBQVPSW\nc7TnBxDXG6xEDgA4w7aAFZwmDbPGbj2yHiSm3uj+bs8b1sdaFArNL+UMTct8wdGw6qnkpN3fykyf\nGUiV7lwWFT/2KXPxtEau/vyzaktuJ1YQYNiWIgqr1oLG3csLGndDVaz95/NX76ovvFpxxmWWgCjE\nC22MQ2RJMFlnLDFZL2zTqKkd9b7qnYe9zdmAZw5i83c0Izxz5voALCKieGYeAHA9gHMB3qsEwDIA\nfidzRGRm5rCt1pXK3Agc5ZVfBvAto+MQIpx
6aKDhz9bd/SppIaswjYQUr3PNmj96iJA8UptqzDz1\nb/S92cHs13ysY5vZ0R+URQxrlaqq31m+PZsotFM0fFW6k3UWy4TmDs5x8NuPPqf2ZPZgebBiCwaP\nKb7LUbjhrXMF621ua0oJ9KrKpMLsGdC89ac09+l2zXt2sHoXtg2iQ6jmsadfCNlm44OIqBfAfwI4\nyMx/JqLfQi/SrAPwXgAnAaxm5hYiUqDPk18J4GoA/w5ABdAF4DoAp6EvJDoH4NsAXgDwUwB26An3\nZmb+GxHdBeAW6FO0En3t/8zMf/PF9CSAp5n5+WC/3kn3D2QcZEWrmFRaqaf6b9a9CUzBP9DdH3m5\n1YeJsHK0NmcxrSPY/XpnpSwwOfq9wZhLtkOz27/hvX3nV82/X0UUuoqXr0q3eiJVOgB4p5DmPfBZ\nM+w12lsPP695Uvsxsf37gsSiDqROr3t5zfS6l+G2JLeeLbr2+Pn81Wlec4I92jcn9heRJd5kmbHY\nZBlaves8p3mq6/TqXdPg3LuAT0IxSFUY+3oKwL8R0QsAFgN4AsA6ZtaI6PcAPgHgx9ATtiPM3EpE\n/wbgRmY+R0RpzOz2PbaMmT8LAET0LQCvM/M9pM/53EtE/+frcxWAxczcTkQbADwK4G9ElApgNYBN\noXihksyNTJI5MWnUK21HX7Ecngqa2MrOiZhSeHzMNrWYMRD0jm2mbMSZ9sKpBmVY+dfqe1YvVqq3\nvc+0K6hblgznoz29K2/o62/blJ+7q9Ya+Fy6qunKok89omDZO9rhz7yoKUlOjLg1TLhZPT1Zs848\nt37WmecwYMtoqCu+4Z2mnOXZqjluwnMdo41iSpuimJZOMcfpa22YvU7NW39Cc5/u0Lxnbax1Twc4\n1+AwxxK2ZI6ZjxLRNAAfw+VDpE8A+Bv0ZO4eAIPnxL4J4H+J6BkAfx3h1jcAeC8RfcH3fRyAqb6v\nX2Xmdl//24joZ0SUA+ADAP4SqqFXSeZGVgNgAGHeo0mIcHvbdG7PDvPbSxDiocHRmEzu7ri43pKx\n2jlQFJL+vTOSNcvxzqDd7xHPQxvmUf2bcxXHmqDddAQZmpb593MNq55OTtr9zQlU6QBg/xyl5J45\nCtYc0w58+mUtPt6NiEqY4l3t+fPeeSp/3jtPoS8ht662+OaalqySIs1kMaSabDQic5zJMn2xacje\nzJradU7znK7TPNUezducBbjnILKqd+GszAHA8wB+AGAjhhzPxsz1RNRERNcAWAG9Sgdmvp+IVkAf\nLj3sW/xwKQLwQWY+edGD+nV9l7T9ne/eH4WeNIaEzJkbhaO88gCAK42OQ4hQ2W+urjxsql0NMvaI\noqKiqh3Tph8e81SHh/CLfe2UFfz5XRp7bK+e76YgnsVpgde9z/bAiTTqWxKse46lQ1Ha78zPfbvW\nOrG5dIOuOaztuftVLcPmRVDnKQZbd/LUUzXFN59vz1gwnRXz1LGvmDz06p3jlOY+1e6r3k0Dgneu\ncAAWPvb0C2OX4SeIiHqZOYmICqEnXj8hoo0AvjC4mpWIPgh97tvvmPlLvsdmMnO17+tDAO4GMBPA\ne5l5k+/xbwFIAfAQMzMRXcHMh3xz5t4djvW1zQWwF0AjM68I1euVytzoDkGSORGjXrdUbT1jat5o\ndBwAkF9w0q+qYA9SMkISgEIWLd36lqnDHbShUQ/M1qtdW4r22h6ss5BaHKz7jiZd0zL+fq5hdTCq\ndADweomy4vUlxLfs410f36rlWVRE5PFcKT1nZy956xezAaA9bc6x2uKbWjvTZs8FKUYmLRFBr95N\ns5ss0959TFO7zmue6jrNU+3W1KZMsHsOwrOptBNh3pCfmR0AfjLC089DH179zZDHvk9Es6FX314D\ncATAWQDlRHQY+gKIr0Mfnj1K+hzOWgDDbnfCzE1EdALAcxN8KaOSytwoHOWVnwTwS6PjECKYGKz9\n3bp/R7P
[... base64-encoded PNG figure data omitted ...]",
      "text/plain": [
-      "[... figure repr omitted ...]"
+      "[... figure repr omitted ...]"
" ] }, - "metadata": {}, + "metadata": { + }, "output_type": "display_data" } ], @@ -554,7 +851,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 10, "metadata": { "id": "EfAeZ7f0GlSo" }, @@ -626,11 +923,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 11, "metadata": { "id": "DHwb2AsvtIwO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def create_tf_datasets(ratings_df: pd.DataFrame,\n", " batch_size: int = 1,\n", @@ -690,11 +989,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 12, "metadata": { "id": "T6pJVpHfns9q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# We limit the number of clients to speed up dataset creation. Feel free to pass\n", "# max_clients=None to load all clients' data.\n", @@ -722,7 +1023,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 13, "metadata": { "id": "9D2rCgcwFP4E" }, @@ -760,20 +1061,19 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 14, "metadata": { "id": "98VwSFBe1GPM" }, "outputs": [ { "data": { - "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYUAAAEGCAYAAACKB4k+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAATW0lEQVR4nO3dfbBlVX3m8e8jKjpiBEKniwEqjdipyGQSwjQMRitDJCGKmYCJEqjMQAyTjgZMnJlY1Y5VvqSSCsQhkzJlMKhEmBCVKAQyOCISfEkqAg1BaCCEljQjDNAtGJWQQYHf/LHXXZxc7+17++Wcc1++n6pTZ5+19zn7t3rDfe5eZ9+1U1VIkgTwrGkXIElaOgwFSVJnKEiSOkNBktQZCpKk7tnTLmBPHHTQQbVu3bpplyFJy8rNN9/81apaM9e6ZR0K69atY/PmzdMuQ5KWlST3zbfO4SNJUmcoSJI6Q0GS1BkKkqTOUJAkdYaCJKkzFCRJnaEgSeoMBUlSt6z/onlPrNt09dT2ve3c10xt35K0M54pSJI6Q0GS1BkKkqTOUJAkdYaCJKkzFCRJnaEgSeoMBUlSZyhIkjpDQZLUGQqSpM5QkCR1hoIkqTMUJEmdoSBJ6gwFSVJnKEiSOkNBktQZCpKkzlCQJHWGgiSpMxQkSZ2hIEnqDAVJUje2UEhyWJLrk9yZ5I4kv9baD0xybZJ72vMBrT1J3ptka5Lbkhw9rtokSXMb55nCk8B/raojgeOAs5McCWwCrquq9cB17TXAq4H17bERuGCMtUmS5jC2UKiqB6vqlrb8TeAu4BDgZODittnFwClt+WTgkhp8Edg/ycHjqk+S9J0m8p1CknXADwM3AGur6sG26iFgbVs+BPjKyNvub22zP2tjks1JNu/YsWNsNUvSajT2UEiyH/AJ4C1V9Y3RdVVVQO3K51XVhVW1oao2rFmzZi9WKkkaaygkeQ5DIFxaVZe35odnhoXa8/bW/gBw2MjbD21tkqQJGefVRwE+BNxVVb87suoq4My2fCZw5Uj7Ge0qpOOAr48MM0mSJuDZY/zslwP/Ebg9ya2t7b8B5wKXJTkLuA84ta37JHASsBV4HHjDGGuTJM1hbKFQVX8JZJ7VJ8yxfQFnj6seSdLC/ItmSVJnKEiSOkNBktQZCpKkzlCQJHWGgiSpMxQkSZ2hIEnqDAVJUmcoSJI6Q0GS1BkKkqTOUJAkdYaCJKkzFCRJnaEgSeoMBUlSZyhIkjpDQZLUGQqSpM5QkCR1hoIkqTMUJEmdoSBJ6gwFSVJnKEiSOkNBktQZCpKkzlCQJHWGgiSpMxQkSZ2hIEnqDAVJUmcoSJI6Q0GS1BkKkqRubKGQ5KIk25NsGWl7V5IHktzaHieNrHtbkq1J7k7yk+OqS5I0v3GeKXwYeNUc7f+jqo5qj08CJDkSOA34V+09f5BknzHWJkmaw9hCoao+Dzy6yM1PBj5aVU9U1d8DW4Fjx1WbJGlu0/hO4Zwkt7XhpQNa2yHAV0a2ub+1fYckG5NsTrJ5x44d465VklaVSYfCBcARwFHAg8D5u/oBVXVhVW2oqg1r1qzZ2/VJ0qo20VCoqoer6qmqehr4AM8MET0AHDay6aGtTZI0QRMNhSQHj7x8LTBzZdJVwGlJ9k1yOLAeuHGStUmS4Nnj+uAkHwGOBw5Kcj/wTuD4JEcBBWwDfhmgqu5IchlwJ/AkcHZVPTWu2iRJcxtbKFTV6XM0f2gn2/8W8FvjqkeStDD/olmS1BkKkqTOUJAkdYaCJKkzFCRJnaEgSerGdkmqJK106zZdPbV9bzv3NWP53EWdKSR5+WLaJEnL22KHj35/kW2SpGVsp8NHSV4G/AiwJsl/GVn1XYA3wZGkFWah7xSeC+zXtnvhSPs3gNeNqyhJ0nTsNBSq6nPA55J8uKrum1BNkqQpWezVR/smuRBYN/qeqnrlOIqSJE3HYkPhT4H3Ax8EnNJaklaoxYb
Ck1V1wVgrkSRN3WIvSf3zJL+S5OAkB848xlqZJGniFnumcGZ7futIWwEv3rvlSJKmaVGhUFWHj7sQSdL0LSoUkpwxV3tVXbJ3y5EkTdNih4+OGVl+HnACcAtgKEjSCrLY4aM3j75Osj/w0bFUJEmamt2dOvsfAb9n2E3Tmm53XFPtSlo5Fvudwp8zXG0Ew0R4LwUuG1dRkqTpWOyZwn8fWX4SuK+q7h9DPZKkKVrUH6+1ifH+lmGm1AOAb42zKEnSdCz2zmunAjcCrwdOBW5I4tTZkrTCLHb46O3AMVW1HSDJGuAzwMfHVZgkafIWO/fRs2YCoXlkF94rSVomFnum8Kkk1wAfaa9/DvjkeEqSJE3LQvdofgmwtqremuRngFe0VX8NXDru4iRJk7XQmcLvAW8DqKrLgcsBkvzrtu7fj7U6SdJELfS9wNqqun12Y2tbN5aKJElTs1Ao7L+Tdc/fm4VIkqZvoVDYnOSXZjcm+U/AzeMpSZI0LQt9p/AW4IokP88zIbABeC7w2nEWJkmavJ2eKVTVw1X1I8C7gW3t8e6qellVPbSz9ya5KMn2JFtG2g5Mcm2Se9rzAa09Sd6bZGuS25IcvacdkyTtusXOfXR9Vf1+e/zFIj/7w8CrZrVtAq6rqvXAde01wKuB9e2xEbhgkfuQJO1FY/ur5Kr6PPDorOaTgYvb8sXAKSPtl9Tgi8D+SQ4eV22SpLlNeqqKtVX1YFt+CFjblg8BvjKy3f2tTZI0QVObv6iqimdu3LNoSTYm2Zxk844dO8ZQmSStXpMOhYdnhoXa88wkew8Ah41sd2hr+w5VdWFVbaiqDWvWrBlrsZK02kw6FK4CzmzLZwJXjrSf0a5COg74+sgwkyRpQhY7S+ouS/IR4HjgoCT3A+8EzgUuS3IWcB/DDXtgmHH1JGAr8DjwhnHVJUma39hCoapOn2fVCXNsW8DZ46pFkrQ43ihHktQZCpKkzlCQJHWGgiSpMxQkSZ2hIEnqDAVJUmcoSJI6Q0GS1I3tL5q19KzbdPXU9r3t3NdMbd+SFs8zBUlSZyhIkjpDQZLUGQqSpM5QkCR1hoIkqTMUJEmdoSBJ6gwFSVJnKEiSOkNBktQZCpKkzlCQJHWGgiSpMxQkSZ2hIEnqDAVJUmcoSJI6b8epiZjWrUC9Dai0azxTkCR1hoIkqTMUJEmdoSBJ6gwFSVLn1Uda0aZ11RN45ZOWJ88UJEndVM4UkmwDvgk8BTxZVRuSHAh8DFgHbANOraqvTaM+SVqtpnmm8GNVdVRVbWivNwHXVdV64Lr2WpI0QUtp+Ohk4OK2fDFwyhRrkaRVaVqhUMCnk9ycZGNrW1tVD7blh4C1c70xycYkm5Ns3rFjxyRqlaRVY1pXH72iqh5I8j3AtUn+dnRlVVWSmuuNVXUhcCHAhg0b5txGkrR7pnKmUFUPtOftwBXAscDDSQ4GaM/bp1GbJK1mEw+FJC9I8sKZZeBEYAtwFXBm2+xM4MpJ1yZJq900ho/WAlckmdn/n1TVp5LcBFyW5CzgPuDUKdQmSavaxEOhqu4FfmiO9keAEyZdjyTpGUvpklRJ0pQZCpKkzlCQJHWGgiSpMxQkSZ33U5DGZFr3cvA+DtoTnilIkjpDQZLUGQqSpM7vFKQVxu8ytCc8U5AkdYaCJKkzFCRJnaEgSeoMBUlSZyhIkjpDQZLUGQqSpM5QkCR1hoIkqXOaC0l7xbSm1wCn2NibPFOQJHWGgiSpMxQkSZ3fKUha9qb5fcZK45mCJKkzFCRJnaEgSeoMBUlSZyhIkjpDQZLUGQqSpM5QkCR1hoIkqTMUJEmdoSBJ6pZcKCR5VZK7k2xNsmna9UjSarKkQiHJPsD7gFcDRwKnJzlyulVJ0uqxpEIBOBbYWlX3VtW3gI8CJ0+5JklaNZba1NmHAF8ZeX0/8G9HN0iyEdjYXj6W5O4J1TYOBwFfnXYRY7BS+wUrt2/2a5nJeXvUt++db8VSC4UFVdWFwIXTrmN
[...base64 PNG image data omitted...]\n",
     "image/png": "[...base64 PNG image data omitted...]\n",
     "text/plain": [
-     "
" + "
" ] }, "metadata": { - "needs_background": "light" }, "output_type": "display_data" } @@ -844,16 +1144,18 @@ "다음을 정의합니다.\n", "\n", "- `UserEmbedding`: 단일 `num_latent_factors` 차원 사용자 임베딩을 나타내는 간단한 Keras 레이어입니다.\n", - "- `get_matrix_factorization_model`: 서버에서 전역적으로 집계되는 레이어와 로컬로 유지되는 레이어를 포함하여 모델 논리가 포함된 [`tff.learning.reconstruction.Model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/reconstruction/Model)을 반환하는 함수입니다. 페더레이션 재구성 훈련 프로세스를 초기화하려면 이 추가 정보가 필요합니다. 여기에서는 [`tff.learning.reconstruction.from_keras_model`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/reconstruction/from_keras_model)을 사용하여 Keras 모델에서 `tff.learning.reconstruction.Model`을 생성합니다. `tff.learning.Model`과 유사하게 클래스 인터페이스를 구현하여 사용자 정의 `tff.learning.reconstruction.Model`을 구현할 수도 있습니다." + "- `get_matrix_factorization_model`: 서버에서 전역적으로 집계되는 레이어와 로컬로 유지되는 레이어 등 모델 로직이 포함된 [`tff.learning.models.ReconstructionModel`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/models/ReconstructionModel)을 반환하는 함수입니다. 페더레이션 재구성 훈련 프로세스를 초기화하려면 이 추가 정보가 필요합니다. 여기서는 [`tff.learning.models.ReconstructionModel.from_keras_model_and_layers`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/models/ReconstructionModel#from_keras_model_and_layers)를 사용하여 Keras 모델에서 `tff.learning.models.ReconstructionModel`을 생성합니다. `tff.learning.models.VariableModel`과 유사하게 클래스 인터페이스를 구현하여 사용자 정의 `tff.learning.models.ReconstructionModel`을 구현할 수도 있습니다." 
] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 15, "metadata": { "id": "nSLMxPDP3D72" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class UserEmbedding(tf.keras.layers.Layer):\n", " \"\"\"Keras layer representing an embedding for a single user, used below.\"\"\"\n", @@ -879,10 +1181,10 @@ "\n", "def get_matrix_factorization_model(\n", " num_items: int,\n", - " num_latent_factors: int) -> tff.learning.reconstruction.Model:\n", + " num_latent_factors: int) -> tff.learning.models.ReconstructionModel:\n", " \"\"\"Defines a Keras matrix factorization model.\"\"\"\n", " # Layers with variables will be partitioned into global and local layers.\n", - " # We'll pass this to `tff.learning.reconstruction.from_keras_model`.\n", + " # We'll pass this to `tff.learning.models.ReconstructionModel.from_keras_model_and_layers`.\n", " global_layers = []\n", " local_layers = []\n", "\n", @@ -916,7 +1218,7 @@ "\n", " model = tf.keras.Model(inputs=item_input, outputs=pred)\n", "\n", - " return tff.learning.reconstruction.from_keras_model(\n", + " return tff.learning.models.ReconstructionModel.from_keras_model_and_layers(\n", " keras_model=model,\n", " global_layers=global_layers,\n", " local_layers=local_layers,\n", @@ -929,16 +1231,18 @@ "id": "-B3FPaRiwY3n" }, "source": [ - "페더레이션 평균화를 위한 인터페이스와 유사하게 페더레이션 재구성을 위한 인터페이스는 `tff.learning.reconstruction.Model`을 반환하는 인수가 없는 `model_fn`을 예상합니다." + "페더레이션 평균화를 위한 인터페이스와 유사하게 페더레이션 재구성을 위한 인터페이스는 `tff.learning.models.ReconstructionModel`을 반환하는 인수가 없는 `model_fn`을 예상합니다." 
] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 16, "metadata": { "id": "vNBRQW9EwneZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# This will be used to produce our training process.\n", "# User and item embeddings will be 50-dimensional.\n", @@ -961,11 +1265,13 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 17, "metadata": { "id": "FDJUfeSNwxIL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class RatingAccuracy(tf.keras.metrics.Mean):\n", " \"\"\"Keras metric computing accuracy of reconstructed ratings.\"\"\"\n", @@ -1007,16 +1313,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 18, "metadata": { "id": "YQsX0FgtwsoE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# We'll use this by doing:\n", "# state = training_process.initialize()\n", "# state, metrics = training_process.next(state, federated_train_data)\n", - "training_process = tff.learning.reconstruction.build_training_process(\n", + "training_process = tff.learning.algorithms.build_fed_recon(\n", " model_fn=model_fn,\n", " loss_fn=loss_fn,\n", " metrics_fn=metrics_fn,\n", @@ -1036,16 +1344,15 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 19, "metadata": { "id": "KHi7J330PtxO" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "# We'll use this by doing:\n", - "# eval_metrics = evaluation_computation(state.model, tf_val_datasets)\n", - "# where `state` is the state from the training process above.\n", - "evaluation_computation = tff.learning.reconstruction.build_federated_evaluation(\n", + "evaluation_process = tff.learning.algorithms.build_fed_recon_eval(\n", " model_fn,\n", " loss_fn=loss_fn,\n", " metrics_fn=metrics_fn,\n", @@ -1064,7 +1371,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 22, "metadata": { "id": "I_kOjFVKQoNX" }, @@ -1073,27 +1380,28 @@ "name": "stdout", "output_type": "stream", "text": [ - 
"ModelWeights(trainable=[array([[-0.02840446, 0.01196523, -0.01864688, ..., 0.03020107,\n", - " 0.00121176, 0.00146852],\n", - " [ 0.01330637, 0.04741272, -0.01487445, ..., -0.03352419,\n", - " 0.0104811 , 0.03506917],\n", - " [-0.04132779, 0.04883525, -0.04799002, ..., 0.00246904,\n", - " 0.00586842, 0.01506213],\n", + "ModelWeights(trainable=[array([[-0.01839826, 0.04044249, -0.04871846, ..., 0.01967763,\n", + " -0.03034571, -0.01698984],\n", + " [-0.03716197, 0.0374358 , 0.00968184, ..., -0.04857936,\n", + " -0.0385102 , -0.01883975],\n", + " [-0.01227728, -0.04690691, 0.00250578, ..., 0.01141983,\n", + " 0.01773251, 0.03525344],\n", " ...,\n", - " [ 0.0216659 , 0.00734354, 0.00471039, ..., 0.01596491,\n", - " -0.00220431, -0.01559857],\n", - " [-0.00319657, -0.01740328, 0.02808609, ..., -0.00501985,\n", - " -0.03850871, -0.03844522],\n", - " [ 0.03791947, -0.00035037, 0.04217024, ..., 0.00365371,\n", - " 0.00283421, 0.00897921]], dtype=float32)], non_trainable=[])\n", + " [ 0.03374172, 0.02467764, 0.00621947, ..., -0.01521915,\n", + " -0.01185555, 0.0295455 ],\n", + " [-0.04029766, -0.02826073, 0.0358924 , ..., -0.02519268,\n", + " -0.03909808, -0.01965014],\n", + " [-0.04007702, -0.04353172, 0.04063287, ..., 0.01851353,\n", + " -0.00767929, -0.00816654]], dtype=float32)], non_trainable=[])\n", "Item variables shape: (3706, 50)\n" ] } ], "source": [ "state = training_process.initialize()\n", - "print(state.model)\n", - "print('Item variables shape:', state.model.trainable[0].shape)" + "model = training_process.get_model_weights(state)\n", + "print(model)\n", + "print('Item variables shape:', model.trainable[0].shape)" ] }, { @@ -1116,7 +1424,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 25, "metadata": { "id": "JiBOGFsWWBiU" }, @@ -1125,15 +1433,19 @@ "name": "stdout", "output_type": "stream", "text": [ - "Initial Eval: OrderedDict([('loss', 14.340279), ('rating_accuracy', 0.0)])\n" + "Initial Eval: 
OrderedDict([('rating_accuracy', 0.0), ('loss', 14.365454)])\n" ] } ], "source": [ "# We shouldn't expect good evaluation results here, since we haven't trained\n", "# yet!\n", - "eval_metrics = evaluation_computation(state.model, tf_val_datasets)\n", - "print('Initial Eval:', eval_metrics['eval'])" + "eval_state = evaluation_process.initialize()\n", + "eval_state = evaluation_process.set_model_weights(\n", + " eval_state, training_process.get_model_weights(state)\n", + ")\n", + "_, eval_metrics = evaluation_process.next(eval_state, tf_val_datasets)\n", + "print('Initial Eval:', eval_metrics['client_work']['eval'])" ] }, { @@ -1147,7 +1459,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 26, "metadata": { "id": "lOTfqrVcVfJf" }, @@ -1156,14 +1468,14 @@ "name": "stdout", "output_type": "stream", "text": [ - "Train metrics: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.317455)])\n" + "Train metrics: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.183293)])\n" ] } ], "source": [ "federated_train_data = np.random.choice(tf_train_datasets, size=50, replace=False).tolist()\n", "state, metrics = training_process.next(state, federated_train_data)\n", - "print(f'Train metrics:', metrics['train'])" + "print(f'Train metrics:', metrics['client_work']['train'])" ] }, { @@ -1177,7 +1489,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 28, "metadata": { "id": "VJBzOPNYwp9q" }, @@ -1186,27 +1498,27 @@ "name": "stdout", "output_type": "stream", "text": [ - "Train round 0: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.7013445)])\n", - "Train round 1: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.459233)])\n", - "Train round 2: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.52466)])\n", - "Train round 3: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.087793)])\n", - "Train round 4: OrderedDict([('rating_accuracy', 0.011243612), ('loss', 11.110232)])\n", - "Train round 5: 
OrderedDict([('rating_accuracy', 0.06366048), ('loss', 8.267054)])\n", - "Train round 6: OrderedDict([('rating_accuracy', 0.12331288), ('loss', 5.2693872)])\n", - "Train round 7: OrderedDict([('rating_accuracy', 0.14264487), ('loss', 5.1511016)])\n", - "Train round 8: OrderedDict([('rating_accuracy', 0.21046545), ('loss', 3.8246362)])\n", - "Train round 9: OrderedDict([('rating_accuracy', 0.21320973), ('loss', 3.303812)])\n", - "Train round 10: OrderedDict([('rating_accuracy', 0.21651311), ('loss', 3.4864292)])\n", - "Train round 11: OrderedDict([('rating_accuracy', 0.23476052), ('loss', 3.0105433)])\n", - "Train round 12: OrderedDict([('rating_accuracy', 0.21981856), ('loss', 3.1807854)])\n", - "Train round 13: OrderedDict([('rating_accuracy', 0.27683082), ('loss', 2.3382564)])\n", - "Train round 14: OrderedDict([('rating_accuracy', 0.26080742), ('loss', 2.7009728)])\n", - "Train round 15: OrderedDict([('rating_accuracy', 0.2733109), ('loss', 2.2993557)])\n", - "Train round 16: OrderedDict([('rating_accuracy', 0.29282996), ('loss', 2.5278995)])\n", - "Train round 17: OrderedDict([('rating_accuracy', 0.30204678), ('loss', 2.060092)])\n", - "Train round 18: OrderedDict([('rating_accuracy', 0.2940266), ('loss', 2.0976772)])\n", - "Train round 19: OrderedDict([('rating_accuracy', 0.3086304), ('loss', 2.0626144)])\n", - "Final Eval: OrderedDict([('loss', 1.9961331), ('rating_accuracy', 0.30322924)])\n" + "Train round 0: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.523704)])\n", + "Train round 1: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.552873)])\n", + "Train round 2: OrderedDict([('rating_accuracy', 0.0), ('loss', 14.480412)])\n", + "Train round 3: OrderedDict([('rating_accuracy', 0.0051107327), ('loss', 12.155375)])\n", + "Train round 4: OrderedDict([('rating_accuracy', 0.042440318), ('loss', 9.201913)])\n", + "Train round 5: OrderedDict([('rating_accuracy', 0.11840491), ('loss', 5.5969186)])\n", + "Train round 6: OrderedDict([('rating_accuracy', 
0.12890044), ('loss', 5.5303264)])\n", + "Train round 7: OrderedDict([('rating_accuracy', 0.19774501), ('loss', 3.9932375)])\n", + "Train round 8: OrderedDict([('rating_accuracy', 0.21234067), ('loss', 3.5070496)])\n", + "Train round 9: OrderedDict([('rating_accuracy', 0.21757619), ('loss', 3.5754187)])\n", + "Train round 10: OrderedDict([('rating_accuracy', 0.24020319), ('loss', 3.0558898)])\n", + "Train round 11: OrderedDict([('rating_accuracy', 0.2337753), ('loss', 3.1659348)])\n", + "Train round 12: OrderedDict([('rating_accuracy', 0.2638889), ('loss', 2.413888)])\n", + "Train round 13: OrderedDict([('rating_accuracy', 0.2622365), ('loss', 2.760038)])\n", + "Train round 14: OrderedDict([('rating_accuracy', 0.27820238), ('loss', 2.195349)])\n", + "Train round 15: OrderedDict([('rating_accuracy', 0.29124364), ('loss', 2.447856)])\n", + "Train round 16: OrderedDict([('rating_accuracy', 0.30438596), ('loss', 2.096729)])\n", + "Train round 17: OrderedDict([('rating_accuracy', 0.29557413), ('loss', 2.0750825)])\n", + "Train round 18: OrderedDict([('rating_accuracy', 0.31832394), ('loss', 1.99085)])\n", + "Train round 19: OrderedDict([('rating_accuracy', 0.3162333), ('loss', 2.0302613)])\n", + "Final Eval: OrderedDict([('rating_accuracy', 0.3126193), ('loss', 2.0305126)])\n" ] } ], @@ -1222,13 +1534,15 @@ "for i in range(NUM_ROUNDS):\n", " federated_train_data = np.random.choice(tf_train_datasets, size=50, replace=False).tolist()\n", " state, metrics = training_process.next(state, federated_train_data)\n", - " print(f'Train round {i}:', metrics['train'])\n", - " train_losses.append(metrics['train']['loss'])\n", - " train_accs.append(metrics['train']['rating_accuracy'])\n", + " print(f'Train round {i}:', metrics['client_work']['train'])\n", + " train_losses.append(metrics['client_work']['train']['loss'])\n", + " train_accs.append(metrics['client_work']['train']['rating_accuracy'])\n", "\n", "\n", - "eval_metrics = evaluation_computation(state.model, 
tf_val_datasets)\n", - "print('Final Eval:', eval_metrics['eval'])" + "eval_state = evaluation_process.set_model_weights(\n", + " eval_state, training_process.get_model_weights(state))\n", + "_, eval_metrics = evaluation_process.next(eval_state, tf_val_datasets)\n", + "print('Final Eval:', eval_metrics['client_work']['eval'])" ] }, { @@ -1242,32 +1556,30 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 29, "metadata": { "id": "h6w702JmR-3V" }, "outputs": [ { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX4AAAEWCAYAAABhffzLAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3dd3xcd5nv8c+jLluyZFmyVVxkOy6Ji1yU3iCNFDtxgCXhUhJaFnZDZynLXtrdvXcpy2ZZdoEESIEkhLIJsdNITwikuNtJ3OK4qrrIkmz1ee4fcwyyYtmypZmjmfm+X695zZlzzsx5dDT6ztHvnPn9zN0REZHUkRZ2ASIiEl8KfhGRFKPgFxFJMQp+EZEUo+AXEUkxCn4RkRSj4Bc5CjN7xMxuCLsOkVgwXccvycLMWns9HAF0AD3B479197vjVMc24KPu/kQ8tidyojLCLkBkqLh73uHpY4WvmWW4e3c8axMZTtTUI0nPzN5mZrvM7EtmVgfcbmajzWyZmTWa2f5genyv5zxjZh8Npm80sz+a2feCdd80sytOoo5sM7vFzGqC2y1mlh0sKw5qaDKzfWb2vJmlBcu+ZGa7zazFzDaa2cVDtGskRSn4JVWUAkXAJOAmou/924PHE4E24IfHeP6ZwEagGPgO8DMzsxOs4avAWcA8oAo4A/inYNnngV1ACTAO+EfAzWwGcDNwurvnA+8Atp3gdkWOoOCXVBEBvu7uHe7e5u573f137n7I3VuAfwEuPMbzt7v7be7eA9wJlBEN6BPxPuBb7t7g7o3AN4EPBMu6gtec5O5d7v68R0/A9QDZwGlmlunu29z9jRPcrsgRFPySKhrdvf3wAzMbYWY/MbPtZtYMPAcUmll6P8+vOzzh7oeCybx+1u1PObC91+PtwTyA7wJbgD+Y2VYz+3KwrS3AZ4BvAA1m9iszK0dkEBT8kir6Xr72eWAGcKa7jwIuCOafaPPNiagh2rR02MRgHu7e4u6fd/cpwNXA5w635bv7Pe5+XvBcB74dwxolBSj4JVXlE23XbzKzIuDrQ/z6mWaW0+uWAdwL/JOZlZhZMfA14JcAZrbIzE4JzhscINrEEzGzGWZ2UXASuD2oOTLEtUqKUfBLqroFyAX2AC8Cjw7x6z9MNKQP374B/DOwHFgLrANWBvMApgFPAK3An4H/dvenibbv/2tQZx0wFvjKENcqKUZf4BIRSTE64hcRSTEKfhGRFKPgFxFJMQp+EZEUkxCdtBUXF3tlZWXYZYiIJJQVK1bscfeSvvMTIvgrKytZvnx52GWIiCQUM9t+tPlq6hERSTEKfhGRFKPgFxFJMQp+EZEUo+AXEUkxCn4RkRSj4BcRSTEJcR3/yXpqQz1bGlo5vbKI2RUFZKbrc05EJKmD/5mNjdz15+j3F3Iy05g/YTSnV47m9MlFzJ84mrzspP7xRUSOKiH646+urvaT/eZuQ3M
7y7fv55Vt+3hl2z5eq2km4pCeZpxWNorqytGcUVlEdWURJfnZQ1y5iEh4zGyFu1e/ZX6yB39frR3drNy+n+Xb9vHytn2s3tlEe1d0JLvJxSM5vXI01ZVFnFFZxKQxI4iOhPdXkYjT0tFNc1sXB9q6aG7rorm9i+a27ujj9q6/LmvvZmR2BtefPoFzpo55y2uJiMSSgr8fnd0R1tcciH4QvLmf5dv30XSoC4DivGymlIyktb2b5vZomLd2dHOsXWYGo3IyKcjNZFRuBjVN7ew72Mn0cXl88OxK3rmgghFZamISkdhT8A9QJOK80djKK9uizUO797cxKjeDUTmZjMoNbjkZQbBnHhHyBbmZjMzKIC3tr0.29.09LFtbyx1/epP1u5vJz8ngPdUT+ODZk5g0ZmRcfiYRSU0K/pC5Oyt37OeOP23nkXW19Lhz0Yyx3HBOJeedUnzEh4WIyFBQ8A8j9c3t3P3idu55eQd7WjuZUjKSG86u5F0Lx+tKIxEZMgr+Yaiju4eH19Vyx5+2s2ZnE3nZGbx74Xg+ePYkppTkhV2eiCS4uAe/mf0cWAQ0uPvsPss+D3wPKHH3Pcd7rWQN/t5W7djPnX/axkPraunqcS6cXsKN51Ry4fQSNQOJyEkJI/gvAFqBu3oHv5lNAH4KzAQWKviP1NDSzr0v7eTul7bT0NLB/ImF/O7j5yj8ReSE9Rf8MevDwN2fA/YdZdG/A18Ehn8bUwjG5ufw6Uum8ccvXcQXLpvOqh1NvLLtaLtRROTkxLXzGjO7Btjt7msGsO5NZrbczJY3NjbGobrhJSsjjQ+fN5nczHSWrq0JuxwRSSJxC34zGwH8I/C1gazv7re6e7W7V5eUvGWQ+JQwIiuDS04bx8Pr6ujqiYRdjogkiXge8U8FJgNrzGwbMB5YaWalcawh4SyeW8a+g5386Y29YZciIkkibsHv7uvcfay7V7p7JbALWODudfGqIRFdOKOE/JwMlq5Rc4+IDI2YBb+Z3Qv8GZhhZrvM7COx2lYyy85I5x2zSnlsfR0d3T1hlyMiSSCWV/W8193L3D3T3ce7+8/6LK8cyKWcAourymnp6ObZjal3kltEhp6GpEoA50wdQ9HILJaurQ27FBFJAgr+BJCZnsYVs0t54rV6DnV2h12OiCQ4BX+CWFxVTltXD0++3hB2KSKS4BT8CeL0yiLGjcrW1T0iMmgK/gSRnmZcNaecZzY20tzeFXY5IpLAFPwJZHFVGZ09Ef7wan3YpYhIAlPwJ5B5EwqZUJTLg2ruEZFBUPAnEDNj8dxyXtiyh72tHWGXIyIJSsGfYBZXldMTcR5Zr54uROTkKPgTzMzSfE4Zm6ere0TkpCn4E8zh5p6Xt+2j7kB72OWISAJS8CegRVVluMND69SFg4icOAV/Appakses8lFq7hGRk6LgT1CLq8pZvbOJnfsOhV2KiCQYBX+CumpOGYDG4xWRE6bgT1ATikawYGIhS9eonV9EToyCP4Etrirn9dpmtjS0hF2KiCQQBX8Cu2pOGWkGD+qoX0ROgII/gY0dlcNZU8awbE0N7h52OSKSIBT8CW5xVTlb9xzk1ZrmsEsRkQSh4E9wl88qJSPNdHWPiAyYgj/BjR6ZxfnTilm2plbNPSIyIDELfjP7uZk1mNn6XvO+a2YbzGytmd1vZoWx2n4qWVxVzu6mNlbuaAq7FBFJALE84r8DuLzPvMeB2e4+F9gEfCWG208Zl542jqyMNHXhICIDErPgd/fngH195v3B3buDhy8C42O1/VSSn5PJRTPG8tC6Wnoiau4RkWMLs43/w8Aj/S00s5vMbLmZLW9sbIxjWYlpcVU5jS0dvPTm3rBLEZFhLpTgN7OvAt3A3f2t4+63unu1u1eXlJTEr7gEddHMsYzISlcXDiJyXHEPfjO7EVgEvM91GcqQyc1K59LTxvHI+lq6eiJhlyMiw1hcg9/MLge+CFzt7upPeIgtnltO06Eu/rh5T9iliMgwFsv
LOe8F/gzMMLNdZvYR4IdAPvC4ma02sx/Havup6ILpJYzKydDVPSJyTBmxemF3f+9RZv8sVtsTyMpI44rZZTy0rpb2rh5yMtPDLklEhiF9czfJLK4qp7Wjm2c2NoRdiogMUwr+JHPWlCKK87J0dY+I9EvBn2Qy0tO4ck4ZT26op7Wj+/hPEJGUo+BPQourymnvivDk6/VhlyIiw5CCPwktnDiasoIcXd0jIkel4E9CaWnGorllPLupkQOHusIuR0SGGQV/klpcVU5Xj/PYq3VhlyIiw4yCP0nNqShg0pgRGplLRN5CwZ+kzIzFc8t5YcseGlrawy5HRIYRBX8SWzK/nIija/pF5AgK/iR2yth85lQUcP+qXWGXIiLDiII/yS2ZX8H63c1srm8JuxQRGSYU/Enu6qpy0tOMB1bvDrsUERkmFPxJriQ/m/NOKeaBVTVENB6viKDgTwnXzq9gd1Mby7fvD7sUERkGFPwp4LJZ4xiRla6TvCICKPhTwoisDN4xq5Rla6MDtIhIalPwp4hr51fQ0q4BWkREwZ8yzpk6hpL8bO5fpat7RFKdgj9FZKSncXVVOU9taKDpUGfY5YhIiBT8KeTa+RV09TgPrVMXDiKpTMGfQmaVj2La2DweUHOPSEpT8KcQM2PJ/Ape2bafnfsOhV2OiIQkZsFvZj83swYzW99rXpGZPW5mm4P70bHavhzdNfPKAXTUL5LCYnnEfwdweZ95XwaedPdpwJPBY4mj8aNHcMbkIu5fvRt3deEgkopiFvzu/hywr8/sa4A7g+k7gSWx2r70753zK9jaeJB1uw+EXYqIhCDebfzj3P3wJSV1wLj+VjSzm8xsuZktb2xsjE91KeKKOWVkpafpmn6RFBXayV2PtjP029bg7re6e7W7V5eUlMSxsuRXkJvJxaeOZemaGrp7ImGXIyJxdtzgN7OpZpYdTL/NzD5lZoUnub16MysLXqsMUP8BIVkyv4I9rZ08v2VP2KWISJwN5Ij/d0CPmZ0C3ApMAO45ye09CNwQTN8A/P4kX0cG6W0zSijIzdTVPSIpaCDBH3H3buBa4D/d/R+AsuM9yczuBf4MzDCzXWb2EeBfgUvNbDNwSfBYQpCdkc6iuWU89modrR3dYZcjInGUMYB1uszsvUSP0BcH8zKP9yR3f28/iy4eYG0SY9fOr+Dul3bw2Po63rVwfNjliEicDOSI/0PA2cC/uPubZjYZ+EVsy5J4WDhpNONH52o8XpEUc9zgd/fX3P1T7n5v8E3bfHf/dhxqkxgzM66dX8ELW/ZQ39wedjkiEicDuarnGTMbZWZFwErgNjP7fuxLk3hYMr+CiMPSNTVhlyIicTKQpp4Cd28G3gnc5e5nEj0xK0lgakkeVeML9GUukRQykODPCK65fw+wLMb1SAiWzK/g1ZpmNtW3hF2KiMTBQIL/W8BjwBvu/oqZTQE2x7YsiadFc8tJTzMd9YukiIGc3P2Nu891908Ej7e6+7tiX5rES0l+NudPK+b3q3YTiajHTpFkN5CTu+PN7P6gb/0GM/udmemi7yRz7fwKag608/K2vh2qikiyGUhTz+1Eu1ooD25Lg3mSRC47rZSRWencv1LNPSLJbiDBX+Lut7t7d3C7A1B3mUkmNyudd8wu5eF1tbR39YRdjojE0ECCf6+Zvd/M0oPb+4G9sS5M4u/a+RW0dHTz1AZ1miqSzAYS/B8meilnHVALvBu4MYY1SUjOmVrM2PxsXd0jkuQGclXPdne/2t1L3H2suy8BPh2H2iTO0tOMa+aV88zGBvYf7Ay7HBGJkZMdges9Q1qFDBtL5lfQ1eMsW1d7/JVFJCGdbPDbkFYhw8ZpZaOYPi5PA7SIJLF+g9/Mivq5jUHBn7TMjCXzK1ixfT879h4KuxwRiYFjHfGvAJYH971vywE1ACexJfMqAHSSVyRJ9TsCl7tPjmchMnyUF+Zy1pQiHli9m09dfApm+gdPJJmcbBu/JLlr51fw5p6DrNl
1IOxSRGSIKfjlqC6fXUZWRppO8ookIQW/HFVBbiaXnjqOB9fU0NUTCbscERlCAwr+oKuGcjObePgW68IkfFfPK2ffwU7+9IZ66BBJJgPplvmTQD3wOPBQcBvUSFxm9lkze9XM1pvZvWaWM5jXk9i4cHoJ+dkZLNN4vCJJZSBH/J8GZrj7LHefE9zmnuwGzawC+BRQ7e6zgXTg+pN9PYmdnMx0Lp01jkdfraOjWz12iiSLgQT/TmCoL+3IAHLNLAMYAeiQcphaPLeclvZunt+0J+xSRGSI9Hsdfy9bgWfM7CGg4/BMd//+yWzQ3Xeb2feAHUAb8Ad3/0Pf9czsJuAmgIkTdUohLOeeUkzhiEyWrq3hktPGhV2OiAyBgRzx7yDavp8F5Pe6nRQzGw1cA0wmOqLXyKCP/yO4+63uXu3u1SUlGvclLFkZaVw+q5QnXqunrVPNPSLJ4LhH/O7+zSHe5iXAm+7eCGBm/wOcA/xyiLcjQ2RxVTm/emUnT29s4Mo5ZWGXIyKD1G/wm9kt7v4ZM1sKeN/l7n71SW5zB3CWmY0g2tRzMdH+f2SYOnNyEcV5WSxbW6PgF0kCxzri/0Vw/72h3KC7v2RmvwVWAt3AKuDWodyGDK2M9DSunFPGfa/spLWjm7zsgZwaEpHh6lidtK0I7p8d6o26+9eBrw/160rsLJpbzl1/3s6Tr9dzTdB7p4gkpoF8gWuamf3WzF4zs62Hb/EoToaP6kmjKR2Vw9I1GplLJNEN5Kqe24EfEW2WeTtwFzoRm3LS0oyr5pbx7KYGDrR1hV2OiAzCQII/192fBCwYeP0bwFWxLUuGo8VV5XT1OH94tS7sUkRkEAYS/B1mlgZsNrObzexaIC/GdckwVDW+gAlFuSxdq+YekUQ20L56RhDtX2ch8H7ghlgWJcOTmbFobjkvbNnDvoMafVMkUR0z+M0sHbjO3VvdfZe7f8jd3+XuL8apPhlmFs0toyfiPLJeR/0iiarf4DezDHfvAc6LYz0yzJ1WNoopJSNZpqt7RBLWsY74Xw7uV5nZg2b2ATN75+FbPIqT4edwc8+Lb+6lobk97HJE5CQMpI0/B9gLXAQsAhYH95KiFs8twx0eXqejfpFEdKzv3o81s88B64n21WO9lr2l7x5JHdPG5TOzNJ+la2u58dzJYZcjIifoWEf86UQv28wj2g1zXp+bpLDFVeWs2L6f3U1tYZciIifoWEf8te7+rbhVIgll0dwyvvvYRh5aW8NNF0wNuxwROQHHOuK3YyyTFDdpzEjmji9gmb7MJZJwjhX8F8etCklIi+aWsXbXAbbtORh2KSJyAvoNfnffF89CJPFcNbccgId0dY9IQhnI5ZwiR1VRmMvCSaNZuqYm7FJE5AQo+GVQFs8tY0NdC5vrW8IuRUQGSMEvg3LlnDLMUI+dIglEwS+DMnZUDmdNHsOytTW463t9IolAwS+DtqiqjK2NB3mttjnsUkRkABT8MmhXzC4jPc10Tb9IglDwy6AVjczi3FOKWbpGzT0iiSCU4DezQjP7rZltMLPXzezsMOqQobN4bhm79rexZteBsEsRkeMI64j/P4BH3X0mUAW8HlIdMkQum1VKVnqarukXSQBxD34zKwAuAH4G4O6d7t4U7zpkaBXkZnLB9BIeWltLJKLmHpHhLIwj/slAI3C7ma0ys5+a2ci+K5nZTWa23MyWNzY2xr9KOWGLq8qoa25nxY79YZciIscQRvBnAAuAH7n7fOAg8OW+K7n7re5e7e7VJSUl8a5RTsIlp44jJ1PNPSLDXRjBvwvY5e4vBY9/S/SDQBLcyOwMLpo5lofX1dLdEwm7HBHpR9yD393rgJ1mNiOYdTHwWrzrkNhYPLecPa2dvPSmOncVGa7Cuqrnk8DdZrYWmAf835DqkCH29pljGZmVzrK1au4RGa5CCX53Xx2038919yXurrOBSSInM51LTxvHI+vr6FJzj8iwpG/uypBbNLecpkNd/HHLnrBLEZG
jUPDLkDt/ejGjcjJ0dY/IMKXglyGXnZHOO2aV8vir9bR39YRdjoj0oeCXmFhcVU5LRzfPbtKX70SGGwW/xMQ5U8dQNDJLXTWLDEMZYRcgySkjPY0rZpdyz8s7WLFtH2WFuZQX5lJekEN5YS5lwX15YS6jR2RiZmGXLJIyFPwSM5+8aBpjRmaxu6mdmqY21u1q4rFX2+nsPvIyz5zMNMoLgg+GwhzKCnKpKMylrDCHGePyGTsqJ6SfQCQ5KfglZkoLcvjcZTOOmOfu7D3YSU1TW3CLfijUHmhnd1Mbz25qpKGlg8PjuWRlpHHfTWcxf+LoEH4CkeSk4Je4MjOK87Ipzstm7vjCo67T2R2hvrmdXfvb+IffruHv717Jsk+dT9HIrDhXK5KcdHJXhp2sjDQmFI3g7Klj+NH7FrLnYCef/tUqetTPv8iQUPDLsDZnfAHfvHoWz2/eww+e3Bx2OSJJQcEvw971p0/g3QvH84OnNvP0xoawyxFJeAp+GfbMjP9zzWxmlo7is/etZtf+Q2GXJJLQFPySEHKz0vnR+xbQ0+P83d0r6ehWVxAiJ0vBLwmjsngk//aeKtbuOsC3lmrsHpGTpeCXhHLZrFI+fuFU7n5pB79bsSvsckQSkoJfEs4XLpvOWVOK+OoD69hQ1xx2OSIJR8EvCScjPY3/fO8CRuVk8olfrqS5vSvskkQSioJfElJJfjb/9b4F7Nh3iH/4zRrc9eUukYFS8EvCOr2yiK9cMZPHXq3ntue3hl2OSMJQ8EtC+8h5k7lyTinffnQjL23dG3Y5IglBwS8Jzcz49rvmMqloBDffu4qG5vawSxIZ9kILfjNLN7NVZrYsrBokOeTnZPKj9y+ktb2bm+9ZRVdP5PhPEklhYR7xfxp4PcTtSxKZUZrP/3vnHF7eto/vPrYx7HJEhrVQgt/MxgNXAT8NY/uSnJbMr+ADZ03i1ue28uh6jfUr0p+wjvhvAb4I6H9yGVL/tOhUqiYU8oXfrGVrY2vY5YgMS3EPfjNbBDS4+4rjrHeTmS03s+WNjY1xqk4SXXZGOv/9vgVkphuf+OVKDnV2h12SyLATxhH/ucDVZrYN+BVwkZn9su9K7n6ru1e7e3VJSUm8a5QEVlGYyy3Xz2dTQwtf/t062rvUk6dIb3EPfnf/iruPd/dK4HrgKXd/f7zrkOR24fQSPnfJdB5cU8PbvvsMv3hxu7pyFgnoOn5JWjdfdAr3fPRMxo/O5X8/sJ6Lvvcs9768Q5d7SsqzROjjpLq62pcvXx52GZKg3J3nN+/h+49vYvXOJiYU5fLJi6bxzvkVZKTr2EeSl5mtcPfqt8xX8EuqcHee2djI9x/fxLrdB6gcM4JPXzKNq6sqSE+zsMsTGXL9Bb8OdyRlmBlvnzmWB28+l9s+WE1uVgafvW8Nl/37szy4poZIZGgPgjq7I+w/2DmkrykyFHTELykrEnEee7WOf39iE5vqW5k+Lo/PXDKdy2eVknYC/wG4O/XNHbxe18yG2hY21jWzoa6FNxpb6epxLj1tHB+/cCoLJ42O4U8j8lZq6hHpRyTiPLSullue2MQbjQc5tWwUn71kGpeeNg6zIz8ADnZ0s6m+hQ11LWysa+H12mjIH2j762Aw5QU5zCwbxczSfBy49+UdNB3q4ozKIj7+tim8fcbYt7yuSCwo+EWOoyfiLF1Tw388uZk39xxkTkUB/+vMidQdaGdDcBS/Y98hDv/JjMxKZ0Zp/l9CfmbpKGaU5lOQm3nE6x7s6Oa+V3by0+e3UnOgnRnj8vnbC6ewuKqcTJ1clhhS8IsMUHdPhPtX7eYHT21m57420gwqi0dyahDsM0vzObVsFBWFuSfUJNTVE2Hpmhp+8uxWNta3UFGYy0fOm8z1Z0xgRFZGDH8iSVUKfpET1NUT4c09B5lYNIKczPQhe1135+mNDfz4ma28vG0fhSMy+eDZldx4TiVFI7OGZBvN7V1srm/lUGc3504tPqEPKEkeCn6RYWj
[notebook image-output diff omitted: base64-encoded PNG plots — previous outputs rendered with matplotlib 3.2.2, regenerated with matplotlib 3.6.1]
gJQkREIlKCEBGRiJQgREQkIiUIERGJ\nSAlCREQiUoIQEZGIutQ4CBGRzmbNjkpe2VBOv17dGJCWSkZadzLSupGR1p1+PbuRlHRcz7U6LkoQ\nIiJxsnR1GXOfWElNXUPE7UkG/Xul0j8tlYy01CCB/Ps0IK07A3qncvKQvm0enxKEiEgc3P/PLdy1\nZB2Th/fnt9dMpsGhouoIe6tqqag6wp6qGvZU1VBRVcPe4HXjroPsqaph76Eawu+zOiAtlRXfmdHm\nMSpBiIi0o4YG5/tL3+b+f25h5vjB/PzqU+nRLRmAwek9otpHfYNTebiWPVVHqDhYw5FmzkCOlxKE\niEg7qa6t52tPrmLJ6jJu+NBIvnPpOJKP4RpDcpK918R0wsAYBBpQghARaQf7DtXwuUcKyC/ey7cv\nOZmbzh6FWfwuQEdDCUJEJMa27znEDQ8uZ/uew/zqU6cxa9LQeIcUFSUIEZEYWrOjkhsfyudIbT2P\n3jSFqaMHxDukqClBiIjEyMvrd/GFx96kf69UHr95KrmD+sQ7pKMS05HUZjbTzNab2SYzuz3C9tlm\nVmhmK82swMzOjrauiMjxKio/yFefXMmnf7+MR14vpvzAkTbb95P527np4QJGDkhjwRc+1OmSA4B5\neGfattyxWTKwAZgBlAD5wKfcfV1Ymd5Albu7mU0EnnT3k6KpG0leXp7riXIi0pptFYf45d83suDN\nErqnJDO0Xw82l1eRZDBt9ABmTRrKzPGD6Z+WetT7dnd+/reN/OLFjZyTm8lvP3M6vbt33MYaM1vh\n7nmRtsUy6inAJncvCoKYD8wG3vuRd/eDYeXTAI+2rojI0SrZe4hf/30TT60oITnJuPGsUdzy4TFk\n9enO+p0HWFxYytOrSvnmgtV85y9rODs3k0snDuWC8YPo26Nbq/uvrW/gvxas5k8rSrji9Gx+8IkJ\ndEvuvLe8i2WCGAZsD1suAaY2LWRmHwd+AAwELjmauiIi0dhZWc2vX9rIE/nbMYzPTBvB/ztvDIP6\nvj8wbezgPowdPJavzjiRtaX7ebqwlMWryrjtT6tIXZDEh8dmMWvSUKafPJBeqR/86Tx4pI4vPPYm\nr24o58sfzWXu9NwO3421NbFMEJGOzAfas9x9IbDQzM4F/geYHm1dADObA8wBGD58+DEHKyJdz64D\n1fzmpc08vnwb7s5VeTl88fwTGNqvZ7N1zIxThqVzyrB0bp95Em9t38fTq0pZUljGC+vepUe3JD56\n8iBmTRzCeWMH0qNbMrv2V3PjQ/m8s/MAP7p8Ap88o2v8FsUyQZQAOWHL2UBpc4Xd/VUzG2NmmUdT\n193nAfMgdA3ieIMWkc6v4uAR7n1lM48u20ptvXPF5Gxu/cgJ5GT0Oqr9mBmTh/dn8vD+fPuSceQX\n72FxYSlLV+9kSWEZvbunMGPcIJZv2cPeQzXcd30e54+N4dDmdhbLBJEP5JrZKGAHcDXw6fACZnYC\nsDm4SD0ZSAUqgH2t1RURaWpvVQ3z/lHEw/8qprq2no+dNowvfSSXkZlpx73v5CRj2ugBTBs9gDtn\njef1ogqeXlXKs2t20qNbMk/MOZMJ2elt8Ck6jpglCHevM7NbgeeAZOABd19rZrcE2+8FLgeuM7Na\n4DDwSQ91q4pYN1axikjnVnm4lvv/UcQDrxVTVVPHrIlD+fL0XMZk9Y7J+6UkJ3FObhbn5Gbx/Y9P\nINksrs9tiJWYdXONB3VzFUks+6trefCfxdz3zyIOVNdx8YTBzJ1+Iid2wjEH8RKvbq4iIjFx8Egd\nD722hd//YwuVh2uZMW4QX5l+IuOGtv1DcxKZEoSIdBpVR+p4+PVifv9qEXsP1fLRkwYyd/qJXa7t\nv6NQghCRDu9QTR2Pvr6V371axJ6qGs4bm8Xc6Sdyak6/eIfWpSlBiEiHdbimnsfe2Mq9r2xm98Ea\nzj0xi7nTc5k8vH+8Q0
sIShAi0uFU19bz+Bvb+O0rmyk/cISzT8jkKzNyOX1ERrxDSyhKECLSYVTX\n1jN/+TZ+8/Jmdh04wpmjB3DPpyczZZQSQzwoQYhI3B2pq+fJghLu+fsmdu6vZsqoDH5x9WmcOabz\nPFynK1KCEJG4cXeeWlHCz17YQGllNXkj+vPTqyZx5pgBnf5Gd12BEoSIxMXGdw/wrYVrWF68h0k5\n/fjRFRM5+4RMJYYORAlCRNrV4Zp6fvX3jcx7tYjePVL40eUTuPL0nC55q4rOTglCRNrNS+t3ccdf\n17B9z2Eun5zNf118EgN6d493WNIMJQgRibl391fzvafXsnT1TsZkpfHHz03TBehOQAlCRGKmvsF5\n9PVi7n5+A7X1Ddx2wYl87tzRdE9JjndoEgUlCBGJicKSfXxr4RpW76jknNxM7vrYKYwYcPzPZZD2\nowQhIm1qf3UtP3luPY8s20pm7+78+tOnccmEIeqd1AkpQYhIm3B3lq7eyfeeXkv5wSNcN20EX7tw\nLH17dIt3aHKMlCBEEtz85dt4eX05fXum0LdHN9J7dqNvz2707ZkSmg9bl96zG91Tkj5wNrCt4hDf\n+esaXtlQzvihffn9dXlM0p1WOz0lCJEEtvCtEm5fsJqh6T2od2f/4ToO19a3WCc1OSmUTILk0adH\nCsu37CElybjj0nFcd+YIUpKT2ukTSCwpQYgkqDeKKvj6U4WcOXoAD392CqkpoR/1mroG9lfXsv9w\nLfur66g83DhfG8wH6xrLHK7l4glD+MbMkxic3iPOn0rakhKESALasruKz/9hBTkZvbj3M6e/lxwA\nUlOSyOzdnUwNYEt4Og8USTB7q2q48cHlJJnx4A1nkN5LF5ElMp1BiCSQI3X1zHm0gNLKav74uaka\nlyAt0hmESIJwd27/82ryi/dy95WT9HQ2aVVME4SZzTSz9Wa2ycxuj7D9GjMrDKZ/mdmksG3FZrba\nzFaaWUEs4xRJBL94cSML39rBbRecyGWThsY7HOkEYtbEZGbJwD3ADKAEyDezRe6+LqzYFuDD7r7X\nzC4C5gFTw7af7+67YxWjSKJY+FYJP//bRi6fnM0Xzz8h3uFIJxHLM4gpwCZ3L3L3GmA+MDu8gLv/\ny933BovLgOwYxiOSkJZv2cM3nlrNtNEZ/OATE3TLC4laLBPEMGB72HJJsK45NwHPhC078LyZrTCz\nOc1VMrM5ZlZgZgXl5eXHFbBIV7NldxVzHi0gO6PnB7qzirQmlr2YIv2Z4hELmp1PKEGcHbb6LHcv\nNbOBwAtm9o67v/qBHbrPI9Q0RV5eXsT9iySipt1Z+/VKjXdI0snE8s+JEiAnbDkbKG1ayMwmAvcB\ns929onG9u5cGr7uAhYSarEQkCkfq6vn8oyso3VfNvGtPV3dWOSaxTBD5QK6ZjTKzVOBqYFF4ATMb\nDiwArnX3DWHr08ysT+M8cAGwJoaxinQZjd1Zlxfv4f+unEjeSHVnlWMTsyYmd68zs1uB54Bk4AF3\nX2tmtwTb7wXuAAYAvwkunNW5ex4wCFgYrEsBHnf3Z2MVq0hX8ssXN7HwrR18bcaJzD61pct+Ii0z\n967TbJ+Xl+cFBRoyIZ3H7oNH+NIf32LL7iqmjspg6ugBTBs9gJEDeh1Tb6O/vLWDuU+s5PLJ2dx9\n5UT1WJJWmdmK4A/zD9CtNkTipKj8IDc8mM+uA9Wcd+JA/rmpgr+sDF2mG9S3O9OCZDF1VAajMtNa\n/bFfvmUPX3+qkKmj1J1V2oYShEgcrNi6h5sfLsDM+OPnpnHa8P64O0W7q1hWVMGyoj38a3MFfw0S\nxsA+7yeMaaM/mDC27K7i848WkN2/J7+7Vt1ZpW0oQYi0s2dWlzH3iZUMSe/BQzdOYWRmqIeRmTEm\nqzdjsnpzzdQRuDtbdlexrGhPkDQqWLQqlDCy3ksYGUwYls6X568E4MEb1Z1V2o4ShEg7
uv+fW7hr\nyTpOy+nHfdefQUZa8z/mZsborN6MzurNp6cOx90prjj0XrJYVlTB00HCSE1O4jHdnVXamBKESDuo\nb3DuWrKOB18r5sLxg/jF1afRo1vyUe3DzBiVmcaozDQ+NSWUMLZWHOKNLRUMz0jjDHVnlTamBCES\nY9W19cydv5Jn1+7kxrNG8u1LxpGcdPwXkM2MkZlp7zVRibQ1JQiRGNpTVcPND+fz1vZ9fOfScdx0\n9qh4hyQSNSUIkRjZWlHFDQ/ms2PfYX7z6clcNGFIvEMSOSpKECIx8Na2vdz8cAH17jx+81Td7kI6\nJSUIkTb2/NqdfGn+Wwzs04OHbjyD0Vm94x2SyDFRghBpQw//q5g7n17LxGHp3H/DGWT27h7vkESO\nmRKESBtoaHB++Ow7zHu1iOknD+KXnzqVXqn67yWdm77BIsepuraer/1pFUsKy7h22gjuvGx8m3Rj\nFYk3JQiRo1R5uJa1Oyop3FHJ6pJK3ty2l7LKar550UnMOXe0bpInXYYShEgLDh6pY02QCEIJYR/F\nFYfe2z48oxeTR/Tn46cOY/q4QXGMVKTttZoggof+PObue9shHpG4OVRTx7rS/RSWVLJ6RyWFJfso\n2l1F4yNThvXryYRh6VyZl8PE7HROGZpO/xbupSTS2UVzBjEYyDezN4EHgOe8Kz1lSBJa5aFafvvK\nZl56Zxcbdx2gIfhmD+rbnQnD+jH71GFMyE5nwrB09UiShNNqgnD3b5vZdwg9F/pG4Ndm9iRwv7tv\njnWAIrFQW9/A429s4+d/28C+w7Wck5vFhacMZuKwdCZkpzOob494hygSd1Fdg3B3N7OdwE6gDugP\nPGVmL7j712MZoEhbcndeWr+L7y95m83lVXxozAC+fck4xg3tG+/QRDqcaK5BfAm4HtgN3Af8p7vX\nmlkSsBFQgpBOYf3OA9y1ZB3/2LibUZlp/P66PKafPFC9jkSaEc0ZRCbwCXffGr7S3RvM7NLYhCXS\ndnYfPMJPX9jA/OXb6NOjG3dcOo7PTBuhx3KKtCKaBLEU2NO4YGZ9gHHu/oa7vx2zyESOU3VtPQ/9\nq5h7/r6Jw7X1XHfmSOZOz9UjOUWiFM2fUL8FDoYtVwXrWmVmM81svZltMrPbI2y/xswKg+lfZjYp\n2roizXF3lhSWMeNnr/DDZ95h6ugMnvvKudx52XglB5GjEM0ZhIV3aw2alqK5dpEM3APMAEoIdZVd\n5O7rwoptAT7s7nvN7CJgHjA1yroiH7Bq+z7uWrKO/OK9nDS4D3+4aSpn52bGOyyRTimaBFEUXKhu\nPGv4AlAURb0pwCZ3LwIws/nAbOC9H3l3/1dY+WVAdrR1RcKVVR7m/55dz4K3dpDZO5UffGICV+Xl\n6J5IIschmgRxC/BL4NuAAy8Cc6KoNwzYHrZcAkxtofxNwDNHW9fM5jTGM3z48CjCkq7mD8u2cteS\ndTQ4/L/zxvCF88bQp0e3eIcl0ulFM1BuF3D1Mew70p9uEUdgm9n5hBLE2Udb193nEWqaIi8vTyO8\nE8z+6lruWrKOSdn9uPvKSeRk9Ip3SCJdRjTXEnoQ+vEeD7w3vNTdP9tK1RIgJ2w5GyiNsP+JhMZX\nXOTuFUdTV2RpYRnVtQ188+KTlRxE2lg0vZgeJXQ/pguBVwj9WB+Iol4+kGtmo8wsldBZyKLwAmY2\nHFgAXOvuG46mrgjAUytKOGFgbyZlp8c7FJEuJ5oEcYK7fweocveHgUuACa1Vcvc64FbgOeBt4El3\nX2tmt5jZLUGxO4ABwG/MbKWZFbRU9yg/m3RxW3ZXUbB1L1ecnq3R0CIxEM1F6trgdZ+ZnULofkwj\no9m5uy8lNNAufN29YfM3AzdHW1ck3J9XlJBk8PHThsU7FJEuKZoEMc/M+hPqxbQI6A18J6ZRibSi\nvsH585slnHtilu68KhIjLSaI4IZ8+4OHBb0KjG6X
qERa8frmCsoqq/nWJSfHOxSRLqvFaxDu3kDo\nWoBIh/LUiu307ZHC9JP1mE+RWInmIvULZnabmeWYWUbjFPPIRJpxoLqWZ9fuZNakofTolhzvcES6\nrGiuQTSOd/hi2DpHzU0SJ0tXh8Y+XHF6duuFReSYRTOSelR7BCISradWlDAmK41Tc/rFOxSRLi2a\nkdTXRVrv7o+0fTgiLSveXUV+8V6+MfMkjX0QibFompjOCJvvAXwUeBNQgpB29+c3NfZBpL1E08T0\nH+HLZpZO6PYbIu2qocH584oSzsnNYnC6xj6IxNqxPJT3EJDb1oGItOb1ogpKK6t1cVqknURzDeJp\n3r/VdhIwDngylkGJRPLUihL69EhhxjiNfRBpD9Fcg7g7bL4O2OruJTGKRySiA9W1PLOmjMsnZ2vs\ng0g7iSZBbAPK3L0awMx6mtlIdy+OaWQiYTT2QaT9RXMN4k9AQ9hyfbBOpN1o7INI+4smQaS4e03j\nQjCfGruQRP5d49iHK07P0dgHkXYUTYIoN7PLGhfMbDawO3Yhifw7jX0QiY9orkHcAjxmZr8OlkuA\niKOrRdqaxj6IxE80A+U2A9PMrDdg7h7N86hF2kTj2IdvXqznPoi0t1abmMzsf82sn7sfdPcDZtbf\nzO5qj+BENPZBJH6iuQZxkbvva1wIni53ccwiEgk0jn3Qcx9E4iOaBJFsZt0bF8ysJ9C9hfIibUJj\nH0TiK5qL1H8AXjSzBwndcuOz6E6u0g6eWlHC6Kw0TtPYB5G4aPUMwt1/DNwFnAyMB/7H3X8Uzc7N\nbKaZrTezTWZ2e4TtJ5nZ62Z2xMxua7Kt2MxWm9lKMyuI7uNIV/H+2IdsjX0QiZNoziBw92eBZ80s\nDfi4mS1x90taqmNmycA9wAxCXWPzzWyRu68LK7YH+BLwsWZ2c767a8xFAloQjH34xGlqXhKJl2h6\nMaWa2cfM7EmgjNADg+6NYt9TgE3uXhSMvp4PzA4v4O673D0fqD360KWramhw/vzmDs7W2AeRuGo2\nQZjZDDN7ANgCXEHoIUF73P1Gd386in0PA7aHLZcE66LlwPNmtsLM5rQQ5xwzKzCzgvLy8qPYvXRU\ny4oq2LHvsC5Oi8RZS2cQzwFjgLPd/TNBUmhooXxTkRqOPcK65pzl7pOBi4Avmtm5kQq5+zx3z3P3\nvKysrKPYvXRUjWMfLtDYB5G4ailBnA4sA/5mZi+Y2U3A0XRGLwFywpazgdJoK7t7afC6C1hIqMlK\nurgD1bUs1dgHkQ6h2QTh7m+5+zfcfQxwJ3AakGpmz7TU5BMmH8g1s1FmlgpcDSyKJigzSzOzPo3z\nwAXAmmjqSuf2zOqdGvsg0kFE24vpNeA1M/sSoV5JVwPzWqlTZ2a3EmqqSgYecPe1ZnZLsP1eMxsM\nFAB9gQYzm0vokaaZwMKge2MK8HjQk0q6OI19EOk4okoQjdy9gdAP/nNRll8KLG2y7t6w+Z2Emp6a\n2g9MOprYpPMr3l3F8uI9fH3mWI19EOkAornVhki70NgHkY5FCUI6BI19EOl4okoQZpZsZkPNbHjj\nFOvAJLFo7INIx9PqNQgz+w/gu8C7vD8OwoGJMYxLEozGPoh0PNFcpP4yMNbdK2IdjCSmxrEPn5ic\nrbEPIh1INE1M24HKWAciiatx7MPlk9W8JNKRRHMGUQS8bGZLgCONK939pzGLShJCdW09SwrL+NVL\nGxmdmcbk4f3iHZKIhIkmQWwLptRgEjkum8sP8vgb23hqRQmVh2sZnZXG9y4br7EPIh1MqwnC3b/X\nHoFI11ZT18Dz63by2LJtvF5UQUqSceEpg7lm6nDOHD1AyUGkA2o2QZjZz919rpk9TYS7sLr7ZTGN\nTLqE7XsOMT9/G0/kl7D74BGG9evJf144lqvycsjqo0ebi3RkLZ1BPBq83t0egUjXUd/gvPTOLh57\nYysvbyjHgI+c
NIhrpg3n3NwskpN0tiDSGTSbINx9RfD6SvuFI53Zrv3VPJG/nT8u30ZpZTUD+3Tn\nPz6Sy9Vn5DC0X894hyciRymagXK5wA8I3WX1vXsguPvoGMYlncjqkkp+8/ImXlj3LnUNzjm5mdwx\naxwfPXkQ3ZJ1NxeRziqaXkwPEhpJ/TPgfOBGIj8tThJQbX0D1z+4HHfnprNH8akpwxmZmRbvsESk\nDUSTIHq6+4tmZu6+FbjTzP5BKGlIgntt0272VNVw33V5TNdtMkS6lGgSRLWZJQEbgwcA7QAGxjYs\n6SwWF5bRp0cK55yYGe9QRKSNRdNAPBfoBXyJ0HOqPwNcH8OYpJM4UlfPc2t3csG4wXRP0T2URLqa\nFs8gzCwZuMrd/xM4SOj6gwgA/9iwmwPVdVw6aUi8QxGRGGj2DMLMUty9HjjdNMxVIlhcWEq/Xt04\n+wQ1L4l0RS2dQSwHJgNvAX81sz8BVY0b3X1BjGOTDqy6tp4X1r3LrElD1ZVVpIuK5iJ1BlABfITQ\nLTcseFWCSGAvr99FVU09l04cGu9QRCRGWkoQA83sq8Aa3k8MjT5wbyZJLE8XljEgLZVpozPiHYqI\nxEhLbQPJQO9g6hM23zi1ysxmmtl6M9tkZrdH2H6Smb1uZkfM7LajqSvxc6imjr+/vYuLJgwmRc1L\nIl1WS2cQZe7+38e646AH1D3ADKAEyDezRe6+LqzYHkLdZz92DHUlTl58exeHa9W8JNLVtfTn3/H2\nXJoCbHL3InevAeYDs8MLuPsud88Hao+2rsTP4sJSBvbpzhkj1bwk0pW1lCA+epz7HkboedaNSoJ1\nbVrXzOaYWYGZFZSXlx9ToBK9A9W1vLS+nIsnDNFtu0W6uGYThLvvOc59R/r1iPbidtR13X2eu+e5\ne15WVlbUwcmx+dvb71JT18AsDY4T6fJieYWxBMgJW84GStuhrsTQ4lVlDE3vwWk5/eMdiojEWCwT\nRD6Qa2ajzCwVuBpY1A51JUYqD9Xy6sZyLpk4hCQ1L4l0edEMlDsm7l4X3P31OUJdZh9w97Vmdkuw\n/V4zGwwUAH2BBjObC4xz9/2R6sYqVonOc+t2Ulvv6r0kkiBiliAA3H0psLTJunvD5ncSaj6Kqq7E\n1+LCMnIyejIxOz3eoYhIO9AoJ4nKnqoaXtu0m0smDEX3bhRJDEoQEpVn1+ykvsG5dKJ6L4kkCiUI\nicqS1aWMykxj/NC+8Q5FRNqJEoS0qvzAEV7fXMGlE4eoeUkkgShBSKueXVNGg6PeSyIJRglCWvV0\nYRm5A3szdnCfeIciIu1ICUJa9O7+avKL9+jsQSQBKUFIi5YUluEOl+reSyIJRwlCWrS4sJSTh/Rl\nTFZUz4gSkS5ECUKatWPfYd7ctk9jH0QSlBKENGtJYegGurN0/UEkISlBSLMWF5YxMTud4QN6xTsU\nEYkDJQiJaGtFFYUllWpeEklgShAS0eLCMgAuUfOSSMJSgpCIFheWMXl4P4b16xnvUEQkTpQg5AM2\nlx/k7bL9OnsQSXBKEPIBi1eVYQaXTND1B5FEpgQhH7C4sJQzRmQwOL1HvEMRkThSgpB/s37nATbu\nOqhba4iIEoT8u8WFpSQZXHSKEoRIolOCkPe4O4sLy5g2egBZfbrHOxwRiTMlCHnP2tL9bNldpVt7\niwgQ4wRhZjPNbL2ZbTKz2yNsNzP7ZbC90Mwmh20rNrPVZrbSzApiGaeELFldRnKSMfOUwfEORUQ6\ngJRY7djMkoF7gBlACZBvZovcfV1YsYuA3GCaCvw2eG10vrvvjlWM8r5Q81IpZ52QSUZaarzDEZEO\nIJZnEFOATe5e5O41wHxgdpMys4FHPGQZ0M/MdHU0DgpLKtm+57DuvSQi74llghgGbA9bLgnWRVvG\ngefNbIWZzWnuTcxsjpkVmFlBeXl5G4SdmBYXltIt2bhwnJqXRCQklgnCIqzzoy
hzlrtPJtQM9UUz\nOzfSm7j7PHfPc/e8rKysY482gTU0OEsKyzg3N4v0Xt3iHY6IdBCxTBAlQE7YcjZQGm0Zd2983QUs\nJNRkJTHw1va9lFZWa3CciPybWCaIfCDXzEaZWSpwNbCoSZlFwHVBb6ZpQKW7l5lZmpn1ATCzNOAC\nYE0MY01oT68qIzUlieknD4p3KCLSgcSsF5O715nZrcBzQDLwgLuvNbNbgu33AkuBi4FNwCHgxqD6\nIGChmTXG+Li7PxurWBNZ+YEj/GXlDs4fm0WfHmpeEpH3xSxBALj7UkJJIHzdvWHzDnwxQr0iYFIs\nY5NQ19ZvLijkUE09X7tgbLzDEZEORiOpE9iTBdv529u7+PqFYzlxUJ94hyMiHYwSRILaVnGI/356\nHdNGZ/DZs0bFOxwR6YCUIBJQfYPztT+tJMmMu6+cRFJSpN7GIpLolCAS0H3/KCK/eC/fvWw82f17\nxTscEemglCASzNtl+/nJ8xu4cPwgLp/cdGC7iMj7lCASyJG6er7yxEr69uzG/358AkE3YhGRiGLa\nzVU6lp+9sJF3dh7g/uvzGNBbDwQSkZbpDCJB5Bfv4XevbubqM3L4qEZMi0gUlCASwMEjdXz1yZVk\n9+/Jty8dF+9wRKSTUBNTArhr8TpK9h7myc+fSe/u+icXkejoDKKL+9u6d5mfv53PnzuGM0ZmxDsc\nEelElCC6sIqDR7h9QSEnDe7DV2bkxjscEelk1N7QRbk7/7VwNfsP1/HoTVPpnpIc75BEpJPRGUQX\nteDNHTy39l2+esGJnDykb7zDEZFOSAmiC9qx7zB3LlrLlJEZfO6c0fEOR0Q6KSWILqahwbntyVU0\nuPOTqyaRrBvxicgxUoLoYh54bQuvF1Vwx6xx5GToRnwicuyUILqQDe8e4MfPrWf6yQO5Ki8n3uGI\nSCenBNFF1NQ18JUnVtKnewo/+MRE3YhPRI6burl2Eb98cSNrS/fzu2tPJ6uPbsQnIsdPZxBdwIqt\ne/nNy5u44vRsLhw/ON7hiEgXoTOITqa+wSkqP8ia0kpWl+xnzY5KVu+oZEh6T747SzfiE5G2E9ME\nYWYzgV8AycB97v7DJtst2H4xcAi4wd3fjKZuIqirb2BzeRWrd1SyJpjWle3nUE09AD26JTFuSF+u\nysvm2jNH0KdHtzhHLCJdScwShJklA/cAM4ASIN/MFrn7urBiFwG5wTQV+C0wNcq6XUptfQObdh18\nLxms3lHJ22X7qa5tAKBXajLjh/blqrwcJgxLZ0J2OqMz00hJViuhiMRGLM8gpgCb3L0IwMzmA7OB\n8B/52cAj7u7AMjPrZ2ZDgJFR1G0zs371T6pr62Ox66g0uFOy9zBH6kLJIC01mfHD0rlm6ghOGdaX\nCcPSGZXZW4PeRKRdxTJBDAO2hy2XEDpLaK3MsCjrAmBmc4A5AMOHDz+mQMdkpVFT33BMddvKR04a\nyCnD0jllWDqjBqSRpGQgInEWywQR6RfOoywTTd3QSvd5wDyAvLy8iGVa8/OrTzuWaiIiXVosE0QJ\nED6cNxsojbJMahR1RUQkhmJ5hTMfyDWzUWaWClwNLGpSZhFwnYVMAyrdvSzKuiIiEkMxO4Nw9zoz\nuxV4jlBX1Qfcfa2Z3RJsvxdYSqiL6yZC3VxvbKlurGIVEZEPslAHoq4hLy/PCwoK4h2GiEinYWYr\n3D0v0jZ1ohcRkYiUIEREJCIlCBERiUgJQkREIupSF6nNrBzYeozVM4HdbRhOW1N8x0fxHR/Fd3w6\ncnwj3D0r0oYulSCOh5kVNHclvyNQfMdH8R0fxXd8Onp8zVETk4iIRKQEISIiESlBvG9evANoheI7\nPorv+Ci+49PR44tI1yBERCQinUGIiEhEShAiIhJRQiUIM5tpZuvNbJOZ3R5hu5nZL4PthWY2uZ3j\nyzGzl8zsbTNba2ZfjlDmPDOrNLOVwXRHO8
dYbGarg/f+wJ0R43kMzWxs2HFZaWb7zWxukzLtevzM\n7AEz22Vma8LWZZjZC2a2MXjt30zdFr+vMYzv/8zsneDfb6GZ9WumbovfhRjGd6eZ7Qj7N7y4mbrx\nOn5PhMVWbGYrm6kb8+N33Nw9ISZCtw3fDIwm9ECiVcC4JmUuBp4h9ES7acAb7RzjEGByMN8H2BAh\nxvOAxXE8jsVAZgvb43oMm/x77yQ0CChuxw84F5gMrAlb92Pg9mD+duBHzcTf4vc1hvFdAKQE8z+K\nFF8034UYxncncFsU//5xOX5Ntv8EuCNex+94p0Q6g5gCbHL3InevAeYDs5uUmQ084iHLgH5mNqS9\nAnT3Mnd/M5g/ALxN6PncnUlcj2GYjwKb3f1YR9a3CXd/FdjTZPVs4OFg/mHgYxGqRvN9jUl87v68\nu9cFi8sIPdExLpo5ftGI2/FrZGYGXAX8sa3ft70kUoIYBmwPWy7hgz++0ZRpF2Y2EjgNeCPC5jPN\nbJWZPWNm49s3Mhx43sxWmNmcCNs7yjG8mub/Y8bz+AEM8tCTEwleB0Yo01GO42cJnRFG0tp3IZZu\nDZrAHmimia4jHL9zgHfdfWMz2+N5/KKSSAnCIqxr2sc3mjIxZ2a9gT8Dc919f5PNbxJqNpkE/Ar4\nSzuHd5a7TwYuAr5oZuc22R73Y2ihx9ReBvwpwuZ4H79odYTj+C2gDnismSKtfRdi5bfAGOBUoIxQ\nM05TcT9+wKdo+ewhXscvaomUIEqAnLDlbKD0GMrElJl1I5QcHnP3BU23u/t+dz8YzC8FuplZZnvF\n5+6lwesuYCGhU/lwcT+GhP7Dvenu7zbdEO/jF3i3sdkteN0VoUxcj6OZXQ9cClzjQYN5U1F8F2LC\n3d9193p3bwB+38z7xvv4pQCfAJ5orky8jt/RSKQEkQ/kmtmo4C/Mq4FFTcosAq4LeuJMAyobmwLa\nQ9BmeT/wtrv/tJkyg4NymNkUQv+GFe0UX5qZ9WmcJ3Qxc02TYnE9hoFm/3KL5/ELswi4Ppi/Hvhr\nhDLRfF9jwsxmAt8ALnP3Q82Uiea7EKv4wq9pfbyZ943b8QtMB95x95JIG+N5/I5KvK+St+dEqIfN\nBkK9G74VrLsFuCWYN+CeYPtqIK+d4zub0GlwIbAymC5uEuOtwFpCvTKWAR9qx/hGB++7KoihIx7D\nXoR+8NPD1sXt+BFKVGVALaG/am8CBgAvAhuD14yg7FBgaUvf13aKbxOh9vvG7+C9TeNr7rvQTvE9\nGny3Cgn96A/pSMcvWP9Q43curGy7H7/jnXSrDRERiSiRmphEROQoKEGIiEhEShAiIhKREoSIiESk\nBCEiIhEpQYhEyczqgztvrjGzp5u7y2kbvt8NZvbrWL6HSEuUIESid9jdT3X3UwjdoO2L8Q5IJJaU\nIESOzesEN38zs1PNbFnY8xP6B+tfNrO8YD7TzIqD+RvMbIGZPWuhZ0L8uHGnZnajmW0ws1eAs9r9\nU4mEUYIQOUpmlkzoduKNt254BPiGu08kNML3u1Hs5lTgk8AE4JMWeljUEOB7hBLDDGBcG4cuclSU\nIESi1zN4OlgFkAG8YGbpQD93fyUo8zChh8i05kV3r3T3amAdMAKYCrzs7uUeeoZBszd6E2kPShAi\n0Tvs7qcS+jFPpfVrEHW8/3+sR5NtR8Lm64GUYF73vpEOQwlC5Ci5eyXwJeA24BCw18zOCTZfCzSe\nTRQDpwfzV0Sx6zeA88xsQHDb9yvbLGiRY5DSehERacrd3zKzVYRuI309cK+Z9QKKgBuDYncDT5rZ\ntcDfo9hnmZndSegCeBmhhxslxyB8kajobq4iIhKRmphERCQiJQgREYlICUJERCJSghARkYiUIERE\nJCIlCBERiUgJQkREIvr/+9eQRarOwPQAAAAASUVORK5CYII=\n", "text/plain": [ - "
" + "
" ] }, "metadata": { - "needs_background": "light" }, "output_type": "display_data" } @@ -1297,7 +1609,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 32, "metadata": { "id": "Iq0UxEBBJcR-" }, @@ -1306,13 +1618,16 @@ "name": "stdout", "output_type": "stream", "text": [ - "Final Test: OrderedDict([('loss', 1.9566978), ('rating_accuracy', 0.30792442)])\n" + "Final Test: OrderedDict([('rating_accuracy', 0.3129535), ('loss', 1.9429641)])\n" ] } ], "source": [ - "eval_metrics = evaluation_computation(state.model, tf_test_datasets)\n", - "print('Final Test:', eval_metrics['eval'])" + "eval_state = evaluation_process.set_model_weights(\n", + " eval_state, training_process.get_model_weights(state)\n", + ")\n", + "_, eval_metrics = evaluation_process.next(eval_state, tf_test_datasets)\n", + "print('Final Test:', eval_metrics['client_work']['eval'])" ] }, { @@ -1325,11 +1640,11 @@ "\n", "이 노트북을 끝까지 완수하느라 수고하셨습니다. 대략적으로 난이도를 높여가며 부분적 로컬 페더레이션 학습을 추가로 탐구해볼 수 있는 다음 연습을 추천합니다.\n", "\n", - "- 페더레이션 평균화의 일반적인 구현 방식은 데이터에 대해 여러 로컬 패스(epoch)를 수행하는 것입니다(여러 배치에서 데이터를 한 번 전달하는 것 외에도). 페더레이션 재구성의 경우 재구성 및 재구성 후 훈련에 대해 별도로 단계 수를 제어해야 할 수 있습니다. `dataset_split_fn` 인수를 훈련 및 평가 계산 빌더에 전달하면 재구성 및 재구성 후 데이터세트 모두에 대한 단계 및 epoch 수를 제어할 수 있습니다. 연습으로 50단계로 제한되는 재구성 훈련 3개 로컬 epoch와 50단계로 제한되는 재구성 후 훈련 1개 로컬 epoch를 수행해 보세요. 힌트: [`tff.learning.reconstruction.build_dataset_split_fn`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/reconstruction/build_dataset_split_fn)이 도움이 될 것입니다. 이 작업을 수행한 후에는 이러한 하이퍼파라미터와 학습률 및 배치 크기와 같은 기타 관련 매개변수를 조정하여 결과를 개선해 보세요.\n", + "- 페더레이션 평균화의 일반적인 구현 방식은 데이터에 대해 여러 로컬 패스(epoch)를 수행하는 것입니다(여러 배치에서 데이터를 한 번 전달하는 것 외에도). 페더레이션 재구성의 경우 재구성 및 재구성 후 훈련에 대해 별도로 단계 수를 제어해야 할 수 있습니다. `dataset_split_fn` 인수를 훈련 및 평가 계산 빌더에 전달하면 재구성 및 재구성 후 데이터세트 모두에 대한 단계 및 epoch 수를 제어할 수 있습니다. 연습으로 50단계로 제한되는 재구성 훈련 3개 로컬 epoch와 50단계로 제한되는 재구성 후 훈련 1개 로컬 epoch를 수행해 보세요. 
힌트: [`tff.learning.models.ReconstructionModel.build_dataset_split_fn`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/models/ReconstructionModel#build_dataset_split_fn)이 도움이 될 것입니다. 이 작업을 수행한 후에는 이러한 하이퍼파라미터와 학습률 및 배치 크기와 같은 기타 관련 매개변수를 조정하여 결과를 개선해 보세요.\n", "\n", - "- 페더레이션 재구성 훈련 및 평가의 기본적인 동작은 재구성 및 재구성 후 각각에 대해 클라이언트의 로컬 데이터를 반으로 나누는 것입니다. 클라이언트에 로컬 데이터가 매우 적은 경우 훈련 과정에서만 재구성 및 재구성 후에 데이터를 재사용하는 것이 합리적일 수 있습니다(평가에 대해서는 그렇게 하지 않아야 하는데, 불공정한 평가로 이어지기 때문). 평가를 위한 `dataset_split_fn`은 여전히 재구성 및 재구성 후 데이터 분리를 유지하도록 하면서 훈련 프로세스에 대해 이 변경을 시도해 보세요. 힌트: [`tff.learning.reconstruction.simple_dataset_split_fn`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/reconstruction/simple_dataset_split_fn)이 유용할 수 있습니다.\n", + "- 페더레이션 재구성 훈련 및 평가의 기본적인 동작은 재구성 및 재구성 후 각각에 대해 클라이언트의 로컬 데이터를 반으로 나누는 것입니다. 클라이언트에 로컬 데이터가 매우 적은 경우 훈련 과정에서만 재구성 및 재구성 후에 데이터를 재사용하는 것이 합리적일 수 있습니다(평가에 대해서는 그렇게 하지 않아야 하는데, 불공정한 평가로 이어지기 때문). 평가를 위한 `dataset_split_fn`은 여전히 재구성 및 재구성 후 데이터 분리를 유지하도록 하면서 훈련 프로세스에 대해 이 변경을 시도해 보세요. 힌트: [`tff.learning.models.ReconstructionModel.simple_dataset_split_fn`](https://www.tensorflow.org/federated/api_docs/python/tff/learning/models/ReconstructionModel#simple_dataset_split_fn)이 유용할 수 있습니다.\n", "\n", - "- 위에서 `tff.learning.reconstruction.from_keras_model`을 사용하여 Keras 모델에서 `tff.learning.Model`을 생성했습니다. 또한 [모델 인터페이스를 구현](https://www.tensorflow.org/federated/api_docs/python/tff/learning/reconstruction/Model)하여 순수 TensorFlow 2.0으로 사용자 정의 모델을 구현할 수도 있습니다. `get_matrix_factorization_model`을 수정하여 해당 메서드를 구현하는 `tff.learning.reconstruction.Model`을 확장하는 클래스를 빌드하고 반환합니다. 힌트: `tff.learning.reconstruction.from_keras_model`의 소스 코드는 `tff.learning.reconstruction.Model` 클래스를 확장하는 예를 제공합니다. 
`tff.learning.Model` 확장에 대한 유사한 연습은 [EMNIST 이미지 분류에서 사용자 정의 모델 구현 튜토리얼](https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification#customizing_the_model_implementation)도 참조하세요.\n", + "- 위에서는 `tff.learning.models.ReconstructionModel.from_keras_model_and_layers`을 사용해 Keras 모델에서 `tff.learning.models.VariableModel`을 생성했습니다. 또한 [모델 인터페이스 구현](https://www.tensorflow.org/federated/api_docs/python/tff/learning/models/ReconstructionModel)으로 순수 TensorFlow 2.0을 사용해 사용자 정의 모델을 구현할 수도 있습니다. `get_matrix_factorization_model`을 수정해 `tff.learning.models.ReconstructionModel`을 확장하고 해당 메서드를 구현하는 클래스를 빌드하고 반환해 보세요. 힌트: `tff.learning.models.ReconstructionModel.from_keras_model_and_layers`의 소스 코드에는 `tff.learning.models.ReconstructionModel` 클래스를 확장하는 예제가 있습니다. 또한 `tff.learning.models.VariableModel`을 확장하는 유사한 예제는 [EMNIST 이미지 분류에서 사용자 정의 모델을 구현하는 튜토리얼](https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification#customizing_the_model_implementation)도 참조하세요.\n", "\n", "- 이 튜토리얼에서 우리는 매트릭스 분해의 맥락에서 부분적 로컬 페더레이션 학습에 동기를 부여했습니다. 여기서 사용자 임베딩을 서버로 보내면 사용자 선호도가 쉽게 누출됩니다. 또한 (로컬 매개변수가 서버로 전송되지 않기 때문에) 통신을 줄이면서 (모델의 일부가 각 사용자에게 완전히 로컬이기 때문에) 더 많은 개별 모델을 훈련하는 방법으로 다른 환경에서 페더레이션 재구성을 적용할 수도 있습니다. 일반적으로, 여기에 제공된 인터페이스를 사용하여 통상 완전히 전역적으로 훈련되는 페더레이션 모델을 사용하고, 대신 해당 변수를 전역 변수와 지역 변수로 분할할 수 있습니다. [페더레이션 재구성 백서](https://arxiv.org/abs/2102.03448)에서 탐구한 예는 개인의 다음 단어 예측입니다. 여기에서 각 사용자는 어휘에 없는 단어의 고유한 로컬 단어 임베딩 세트를 가지고 있어 모델이 사용자의 속어를 캡처하고 추가 통신 없이 개인화를 달성할 수 있습니다. 연습으로, 페더레이션 재구성과 함께 사용할 다른 모델(Keras 모델 또는 사용자 정의 TensorFlow 2.0 모델로)을 구현해 보세요. 제안: 개인 사용자 임베딩을 사용하여 EMNIST 분류 모델을 구현합니다. 여기서 개인 사용자 임베딩은 모델의 마지막 밀집 레이어 이전에 CNN 이미지 요소에 연결됩니다. 
이 튜토리얼의 코드(예: `UserEmbedding` 클래스)와 [이미지 분류 튜토리얼](https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification)의 코드 중 많은 부분을 재사용할 수 있습니다.\n",
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운로드
" ] @@ -74,14 +76,12 @@ "metadata": { "id": "ke7EyuvG0Zyn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@test {\"skip\": true}\n", - "!pip install --quiet --upgrade tensorflow-federated\n", - "!pip install --quiet --upgrade nest-asyncio\n", - "\n", - "import nest_asyncio\n", - "nest_asyncio.apply()" + "!pip install --quiet --upgrade tensorflow-federated" ] }, { @@ -90,7 +90,9 @@ "metadata": { "id": "rtgStTrNIId-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import numpy as np\n", "import tensorflow as tf\n", @@ -120,7 +122,8 @@ ] }, "execution_count": 43, - "metadata": {}, + "metadata": { + }, "output_type": "execute_result" } ], @@ -175,7 +178,7 @@ ], "source": [ "# Set to use 10 clients.\n", - "tff.backends.native.set_local_python_execution_context(default_num_clients=10)\n", + "tff.backends.native.set_sync_local_cpp_execution_context(default_num_clients=10)\n", "\n", "@tff.tf_computation\n", "def noise_from_seed(seed):\n", @@ -527,7 +530,7 @@ "id": "TnyhlV0fIxYR" }, "source": [ - "TFF에서 일반적인 권장 사항은 무작위 노이즈 생성에 기능적인 `tf.random.stateless_*` 함수를 사용하는 것입니다. 이러한 함수는 무작위 노이즈를 생성하기 위한 명시적 입력 인수로 `seed`(형상이 `[2]`인 텐서 또는 두 스칼라 텐서의 `tuple`)를 취합니다. 먼저 시드를 의사 상태로 유지하기 위해 헬퍼 클래스를 정의합니다. 헬퍼 `RandomSeedGenerator`에는 state-in-state-out 방식의 기능 연산자가 있습니다. 카운터를 `tf.random.stateless_*`에 대한 의사 상태로 사용하는 것은 합리적인데, 이러한 함수는 통계적으로 상관 관계가 없는 상관된 시드에 의해 노이즈가 만들어지기 전에 시드를 [스크램블](https://github.com/tensorflow/tensorflow/blob/919f693420e35d00c8d0.43.00837ae3718f7927/tensorflow/core/kernels/stateless_random_ops.cc#L50-L64)하기 때문입니다." + "TFF에서 일반적인 권장 사항은 무작위 노이즈 생성에 기능적인 `tf.random.stateless_*` 함수를 사용하는 것입니다. 이러한 함수는 무작위 노이즈를 생성하기 위한 명시적 입력 인수로 `seed`(형상이 `[2]`인 텐서 또는 두 스칼라 텐서의 `tuple`)를 취합니다. 먼저 시드를 의사 상태로 유지하기 위해 헬퍼 클래스를 정의합니다. 헬퍼 `RandomSeedGenerator`에는 state-in-state-out 방식의 기능 연산자가 있습니다. 
카운터를 `tf.random.stateless_*`에 대한 의사 상태로 사용하는 것은 합리적인데, 이러한 함수는 노이즈를 생성하기 전에 시드를 [스크램블](https://github.com/tensorflow/tensorflow/blob/919f693420e35d00c8d0a42100837ae3718f7927/tensorflow/core/kernels/stateless_random_ops.cc#L50-L64)하므로 상관된 시드로 만들어진 노이즈가 통계적으로 상관 관계가 없게 되기 때문입니다."
TensorFlow.org에서 보기\n", + " TensorFlow.org에서 보기 Google Colab에서 실행하기\n", " Google Colab에서 실행하기\n", - "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운론드하기
" ] @@ -71,7 +72,7 @@ "id": "yiq_MY4LopET" }, "source": [ - "## Before we begin\n", + "## 시작하기 전에\n", "\n", "먼저, 관련 구성 요소(다중 시스템 시나리오에 대한 gRPC 종속성 포함)가 컴파일된 백엔드에 노트북이 연결되어 있는지 확인합니다." ] @@ -102,11 +103,7 @@ ], "source": [ "#@test {\"skip\": true}\n", - "!pip install --quiet --upgrade tensorflow-federated\n", - "!pip install --quiet --upgrade nest-asyncio\n", - "\n", - "import nest_asyncio\n", - "nest_asyncio.apply()" + "!pip install --quiet --upgrade tensorflow-federated" ] }, { @@ -115,7 +112,9 @@ "metadata": { "id": "2dVPgxN0MdG2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import collections\n", "import time\n", @@ -147,7 +146,7 @@ " tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),\n", " tf.keras.layers.Softmax(),\n", " ])\n", - " return tff.learning.from_keras_model(\n", + " return tff.learning.models.from_keras_model(\n", " model,\n", " input_spec=element_spec,\n", " loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n", @@ -198,7 +197,7 @@ "train metrics OrderedDict([('sparse_categorical_accuracy', 0.27160493), ('loss', 2.340346), ('num_examples', 9720), ('num_batches', 490)]), round time 2.25 seconds\n", "train metrics OrderedDict([('sparse_categorical_accuracy', 0.34115225), ('loss', 2.0537064), ('num_examples', 9720), ('num_batches', 490)]), round time 2.27 seconds\n", "train metrics OrderedDict([('sparse_categorical_accuracy', 0.3745885), ('loss', 1.9158486), ('num_examples', 9720), ('num_batches', 490)]), round time 2.21 seconds\n", - "train metrics OrderedDict([('sparse_categorical_accuracy', 0.43.02059), ('loss', 1.7523248), ('num_examples', 9720), ('num_batches', 490)]), round time 2.19 seconds\n", + "train metrics OrderedDict([('sparse_categorical_accuracy', 0.41502059), ('loss', 1.7523248), ('num_examples', 9720), ('num_batches', 490)]), round time 2.19 seconds\n", "train metrics OrderedDict([('sparse_categorical_accuracy', 0.47644034), ('loss', 1.6085855), ('num_examples', 9720), ('num_batches', 490)]), round time 2.20 
seconds\n", "train metrics OrderedDict([('sparse_categorical_accuracy', 0.5126543), ('loss', 1.5272282), ('num_examples', 9720), ('num_batches', 490)]), round time 2.27 seconds\n", "train metrics OrderedDict([('sparse_categorical_accuracy', 0.5576132), ('loss', 1.393721), ('num_examples', 9720), ('num_batches', 490)]), round time 2.16 seconds\n" @@ -223,7 +222,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "simulations.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/advanced_autodiff.ipynb b/site/ko/guide/advanced_autodiff.ipynb index 18536ff985..483c3f69b2 100644 --- a/site/ko/guide/advanced_autodiff.ipynb +++ b/site/ko/guide/advanced_autodiff.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -78,7 +80,9 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "\n", @@ -120,7 +124,9 @@ "metadata": { "id": "mhFSYf7uQWxR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -154,7 +160,9 @@ "metadata": { "id": "lsMHsmrh4pqM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -190,7 +198,9 @@ "metadata": { "id": "30qnZMe48BkB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -240,7 +250,9 @@ "metadata": { "id": "Mjj01w4NYtwd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Establish an identity operation, but clip during the gradient pass.\n", "@tf.custom_gradient\n", @@ -285,7 +297,9 @@ "metadata": { "id": "Q5JBgIBYjN1I" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MyModule(tf.Module):\n", "\n", @@ -302,7 +316,9 @@ 
"metadata": { "id": "xZTrgy2q-9pq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.saved_model.save(\n", " model,\n", @@ -345,7 +361,9 @@ "metadata": { "id": "BJ0HdMvte0VZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x0 = tf.constant(0.0)\n", "x1 = tf.constant(0.0)\n", @@ -368,7 +386,9 @@ "metadata": { "id": "6ApAoMNFfNz6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tape0.gradient(ys, x0).numpy() # cos(x) => 1.0" ] @@ -379,7 +399,9 @@ "metadata": { "id": "rF1jrAJsfYW_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tape1.gradient(ys, x1).numpy() # sigmoid(x1)*(1-sigmoid(x1)) => 0.25" ] @@ -403,7 +425,9 @@ "metadata": { "id": "cPQgthZ7ugRJ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0\n", "\n", @@ -456,7 +480,9 @@ "metadata": { "id": "tH3ZFuUfDLrR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -469,7 +495,9 @@ "metadata": { "id": "E6yOFsjEDR9u" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with tf.GradientTape() as t2:\n", " # The inner tape only takes the gradient with respect to the input,\n", @@ -493,7 +521,9 @@ "metadata": { "id": "123QMq6PqK_d" }, - "outputs": [], + "outputs": [ + + ], "source": [ "[var.shape for var in dg1_mag]" ] @@ -556,7 +586,9 @@ "metadata": { "id": "bAFeIE8EuVIq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.linspace(-10.0, 10.0, 200+1)\n", "delta = tf.Variable(0.0)\n", @@ -582,7 +614,9 @@ "metadata": { "id": "iZ6awnDzr_BA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(y.shape)\n", "print(dy_dx.shape)" @@ -594,7 +628,9 @@ "metadata": { "id": "siNZaklc0_-e" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(x.numpy(), y, label='y')\n", "plt.plot(x.numpy(), dy_dx, label='dy/dx')\n", @@ -628,7 +664,9 @@ "metadata": { "id": "39YXItgLxMBk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.random.normal([7, 
5])\n", "layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)\n", @@ -654,7 +692,9 @@ "metadata": { "id": "CigTWyfPvPuv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layer.kernel.shape" ] @@ -674,7 +714,9 @@ "metadata": { "id": "pRLzTTbvEimH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "j = tape.jacobian(y, layer.kernel)\n", "j.shape" @@ -695,7 +737,9 @@ "metadata": { "id": "FJjZpYRnDjVa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "g = tape.gradient(y, layer.kernel)\n", "print('g.shape:', g.shape)\n", @@ -734,7 +778,9 @@ "metadata": { "id": "ELGTaell_j81" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.random.normal([7, 5])\n", "layer1 = tf.keras.layers.Dense(8, activation=tf.nn.relu)\n", @@ -757,7 +803,9 @@ "metadata": { "id": "FVqQuZj4XGjm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(f'layer.kernel.shape: {layer1.kernel.shape}')\n", "print(f'h.shape: {h.shape}')" @@ -778,7 +826,9 @@ "metadata": { "id": "6te7N6wVXwXX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "n_params = tf.reduce_prod(layer1.kernel.shape)\n", "\n", @@ -801,7 +851,9 @@ "metadata": { "id": "8TCHc7Vrf52S" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def imshow_zero_center(image, **kwargs):\n", " lim = tf.reduce_max(abs(image))\n", @@ -815,7 +867,9 @@ "metadata": { "id": "DExOxd7Ok2H0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imshow_zero_center(h_mat)" ] @@ -835,7 +889,9 @@ "metadata": { "id": "3DdnbynBdSor" }, - "outputs": [], + "outputs": [ + + ], "source": [ "eps = 1e-3\n", "eye_eps = tf.eye(h_mat.shape[0])*eps" @@ -856,7 +912,9 @@ "metadata": { "id": "k1LYftgmswOO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# X(k+1) = X(k) - (∇²f(X(k)))^-1 @ ∇f(X(k))\n", "# h_mat = ∇²f(X(k))\n", @@ -902,7 +960,9 @@ "metadata": { "id": "tQMndhIUHMes" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -932,7 +992,9 @@ "metadata": { "id": "1zSl2A5-HhMH" }, - 
"outputs": [], + "outputs": [ + + ], "source": [ "j = tape.jacobian(y, x)\n", "j.shape" @@ -953,7 +1015,9 @@ "metadata": { "id": "ZFl9uj3ueVSH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imshow_zero_center(j[:, 0, :, 0])\n", "_ = plt.title('A (batch, batch) slice')" @@ -965,7 +1029,9 @@ "metadata": { "id": "g4ZoRJcJNmy5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def plot_as_patches(j):\n", " # Reorder axes so the diagonals will each form a contiguous patch.\n", @@ -998,7 +1064,9 @@ "metadata": { "id": "v65OAjEgLQwl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "j_sum = tf.reduce_sum(j, axis=2)\n", "print(j_sum.shape)\n", @@ -1021,7 +1089,9 @@ "metadata": { "id": "YJLIl9WpHqYq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "jb = tape.batch_jacobian(y, x)\n", "jb.shape" @@ -1033,7 +1103,9 @@ "metadata": { "id": "-5t_q5SfHw7T" }, - "outputs": [], + "outputs": [ + + ], "source": [ "error = tf.reduce_max(abs(jb - j_sum))\n", "assert error < 1e-3\n", @@ -1055,7 +1127,9 @@ "metadata": { "id": "tnDugVc-L4fj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -1079,7 +1153,9 @@ "metadata": { "id": "SNyZ1WhJMVLm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plot_as_patches(j)\n", "\n", @@ -1102,7 +1178,9 @@ "metadata": { "id": "k8_mICHoasCi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "jb = tape.batch_jacobian(y, x)\n", "print(f'jb.shape: {jb.shape}')" @@ -1111,7 +1189,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "advanced_autodiff.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/autodiff.ipynb b/site/ko/guide/autodiff.ipynb index d98cd6a0c1..45d003c10c 100644 --- a/site/ko/guide/autodiff.ipynb +++ b/site/ko/guide/autodiff.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the 
\"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -82,7 +84,9 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", @@ -120,7 +124,9 @@ "metadata": { "id": "Xq9GgTCP7a4A" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(3.0)\n", "\n", @@ -143,7 +149,9 @@ "metadata": { "id": "LsvrwF6bHroC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# dy = 2x * dx\n", "dy_dx = tape.gradient(y, x)\n", @@ -165,7 +173,9 @@ "metadata": { "id": "vacZ3-Ws5VdV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "w = tf.Variable(tf.random.normal((3, 2)), name='w')\n", "b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')\n", @@ -191,7 +201,9 @@ "metadata": { "id": "luOtK1Da_BR0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "[dl_dw, dl_db] = tape.gradient(loss, [w, b])" ] @@ -211,7 +223,9 @@ "metadata": { "id": "aYbWRFPZqk4U" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(w.shape)\n", "print(dl_dw.shape)" @@ -232,7 +246,9 @@ "metadata": { "id": "d73cY6NOuaMd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "my_vars = {\n", " 'w': w,\n", @@ -262,7 +278,9 @@ "metadata": { "id": "JvesHtbQESc-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layer = tf.keras.layers.Dense(2, activation='relu')\n", "x = tf.constant([[1., 2., 3.]])\n", @@ -282,7 +300,9 @@ "metadata": { "id": "PR_ezr6UFrpI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for var, g in zip(layer.trainable_variables, grad):\n", " print(f'{var.name}, shape: {g.shape}')" @@ -320,7 +340,9 @@ "metadata": { "id": "Kj9gPckdB37a" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# A trainable variable\n", "x0 = tf.Variable(3.0, name='x0')\n", @@ -355,7 +377,9 @@ "metadata": { "id": "hwNwjW1eAkib" }, - "outputs": [], + "outputs": [ + + ], "source": [ "[var.name for var in tape.watched_variables()]" ] @@ -377,7 +401,9 
@@ "metadata": { "id": "tVN1QqFRDHBK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(3.0)\n", "with tf.GradientTape() as tape:\n", @@ -404,7 +430,9 @@ "metadata": { "id": "7QPzwWvSEwIp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x0 = tf.Variable(0.0)\n", "x1 = tf.Variable(10.0)\n", @@ -432,7 +460,9 @@ "metadata": { "id": "e6GM-3evH1Sz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)\n", "grad = tape.gradient(ys, {'x0': x0, 'x1': x1})\n", @@ -458,7 +488,9 @@ "metadata": { "id": "7XaPRAwUyYms" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(3.0)\n", "\n", @@ -488,7 +520,9 @@ "metadata": { "id": "zZaCm3-9zVCi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant([1, 3.0])\n", "with tf.GradientTape(persistent=True) as tape:\n", @@ -506,7 +540,9 @@ "metadata": { "id": "j8bv_jQFg6CN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "del tape # Drop the reference to the tape" ] @@ -550,7 +586,9 @@ "metadata": { "id": "qI0sDV_WeXBb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "with tf.GradientTape(persistent=True) as tape:\n", @@ -579,7 +617,9 @@ "metadata": { "id": "o4a6_YOcfWKS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "with tf.GradientTape() as tape:\n", @@ -604,7 +644,9 @@ "metadata": { "id": "DArPWqsSh5un" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.)\n", "\n", @@ -640,7 +682,9 @@ "metadata": { "id": "JQvk_jnMmTDS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.linspace(-10.0, 10.0, 200+1)\n", "\n", @@ -657,7 +701,9 @@ "metadata": { "id": "e_f2QgDPmcPE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(x, y, label='y')\n", "plt.plot(x, dy_dx, label='dy/dx')\n", @@ -684,7 +730,9 @@ "metadata": { "id": "ciFLizhrrjy7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(1.0)\n", "\n", 
@@ -721,7 +769,9 @@ "metadata": { "id": "8k05WmuAwPm7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dx = tape.gradient(result, x)\n", "\n", @@ -745,7 +795,9 @@ "metadata": { "id": "CU185WDM81Ut" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.)\n", "y = tf.Variable(3.)\n", @@ -783,7 +835,9 @@ "metadata": { "id": "QPKY4Tn9zX7_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable(2.0)\n", "\n", @@ -812,7 +866,9 @@ "metadata": { "id": "jmoLCDJb_yw1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable([[1.0, 2.0],\n", " [3.0, 4.0]], dtype=tf.float32)\n", @@ -849,7 +905,9 @@ "metadata": { "id": "9jlHXHqfASU3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(10)\n", "\n", @@ -890,7 +948,9 @@ "metadata": { "id": "C1tLeeRFE479" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x0 = tf.Variable(3.0)\n", "x1 = tf.Variable(0.0)\n", @@ -944,7 +1004,9 @@ "metadata": { "id": "HSb20FXc_V0U" }, - "outputs": [], + "outputs": [ + + ], "source": [ "image = tf.Variable([[[0.5, 0.0, 0.0]]])\n", "delta = tf.Variable(0.1)\n", @@ -992,7 +1054,9 @@ "metadata": { "id": "U6zxk1sf9Ixx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.Variable([2., 2.])\n", "y = tf.Variable(3.)\n", diff --git a/site/ko/guide/basic_training_loops.ipynb b/site/ko/guide/basic_training_loops.ipynb index 8dd1b7380c..4e6bcf0d20 100644 --- a/site/ko/guide/basic_training_loops.ipynb +++ b/site/ko/guide/basic_training_loops.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "m8y3rGtQsYP2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -80,7 +82,9 @@ "metadata": { "id": "NiolgWMPgpwI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "\n", @@ -132,7 +136,9 @@ "metadata": { "id": "NzivK2ATByOz" }, - "outputs": [], + 
"outputs": [ + + ], "source": [ "# The actual line\n", "TRUE_W = 3.0\n", @@ -160,7 +166,9 @@ "metadata": { "id": "IlFd_HVBFGIF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Plot all the data\n", "plt.plot(x, y, '.')\n", @@ -197,7 +205,9 @@ "metadata": { "id": "_WRu7Pze7wk8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MyModel(tf.Module):\n", " def __init__(self, **kwargs):\n", @@ -245,7 +255,9 @@ "metadata": { "id": "Y0ysUFGY924U" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# This computes a single loss value for an entire batch\n", "def loss(target_y, predicted_y):\n", @@ -267,7 +279,9 @@ "metadata": { "id": "_eb83LtrB4nt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(x, y, '.', label=\"Data\")\n", "plt.plot(x, f(x), label=\"Ground truth\")\n", @@ -304,7 +318,9 @@ "metadata": { "id": "MBIACgdnA55X" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Given a callable model, inputs, outputs, and a learning rate...\n", "def train(model, x, y, learning_rate):\n", @@ -336,7 +352,9 @@ "metadata": { "id": "XdfkR223D9dW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "model = MyModel()\n", "\n", @@ -380,7 +398,9 @@ "metadata": { "id": "iRuNUghs1lHY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "current_loss = loss(y, model(x))\n", "\n", @@ -405,7 +425,9 @@ "metadata": { "id": "ND1fQw8sbTNr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(epochs, weights, label='Weights', color=colors[0])\n", "plt.plot(epochs, [TRUE_W] * len(epochs), '--',\n", @@ -434,7 +456,9 @@ "metadata": { "id": "tpTEjWWex568" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(x, y, '.', label=\"Data\")\n", "plt.plot(x, f(x), label=\"Ground truth\")\n", @@ -464,7 +488,9 @@ "metadata": { "id": "Z86hCI0x1YX3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MyModelKeras(tf.keras.Model):\n", " def __init__(self, **kwargs):\n", @@ -503,7 +529,9 @@ "metadata": { "id": "-nbLLfPE2pEl" }, - 
"outputs": [], + "outputs": [ + + ], "source": [ "keras_model = MyModelKeras()\n", "\n", @@ -540,7 +568,9 @@ "metadata": { "id": "zfAYqtu136PO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(x.shape[0])\n", "keras_model.fit(x, y, epochs=10, batch_size=1000)" diff --git a/site/ko/guide/core/logistic_regression_core.ipynb b/site/ko/guide/core/logistic_regression_core.ipynb index df17d249e9..d637e4c73d 100644 --- a/site/ko/guide/core/logistic_regression_core.ipynb +++ b/site/ko/guide/core/logistic_regression_core.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -47,10 +49,8 @@ }, "source": [ "\n", - " \n", - " \n", + " \n", + " \n", " \n", " \n", "
TensorFlow.org에서 보기\n", - " Google Colab에서 실행하기\n", - " TensorFlow.org에서 보기 Google Colab에서 실행하기 GitHub에서 소스 보기노트북 다운로드하기
" @@ -84,7 +84,9 @@ "metadata": { "id": "5lZoUK6AVTos" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install -q seaborn" ] @@ -95,7 +97,9 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "import pandas as pd\n", @@ -131,7 +135,9 @@ "metadata": { "id": "CiX2FI4gZtTt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'\n", "\n", @@ -159,7 +165,9 @@ "metadata": { "id": "uvR2Bzb691lJ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = pd.read_csv(url, names=column_names)" ] @@ -170,7 +178,9 @@ "metadata": { "id": "YB9eq6Zq-IZ4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset.info()" ] @@ -190,7 +200,9 @@ "metadata": { "id": "hWxktwbv-KPp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset.head()" ] @@ -210,7 +222,9 @@ "metadata": { "id": "m2O60B-IVG9Q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_dataset = dataset.sample(frac=0.75, random_state=1)" ] @@ -221,7 +235,9 @@ "metadata": { "id": "i06vHFv_QB24" }, - "outputs": [], + "outputs": [ + + ], "source": [ "len(train_dataset)" ] @@ -232,7 +248,9 @@ "metadata": { "id": "19JaochhaQ3m" }, - "outputs": [], + "outputs": [ + + ], "source": [ "test_dataset = dataset.drop(train_dataset.index)" ] @@ -243,7 +261,9 @@ "metadata": { "id": "LmHRcbAfaSag" }, - "outputs": [], + "outputs": [ + + ], "source": [ "len(test_dataset)" ] @@ -254,7 +274,9 @@ "metadata": { "id": "w6JxBhBc_wwO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# The `id` column can be dropped since each row is unique\n", "x_train, y_train = train_dataset.iloc[:, 2:], train_dataset.iloc[:, 1]\n", @@ -282,7 +304,9 @@ "metadata": { "id": "JEJHhN65a2VV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "y_train, y_test = y_train.map({'B': 0, 'M': 1}), y_test.map({'B': 0, 'M': 1})\n", "x_train, y_train = 
tf.convert_to_tensor(x_train, dtype=tf.float32), tf.convert_to_tensor(y_train, dtype=tf.float32)\n", @@ -304,7 +328,9 @@ "metadata": { "id": "oRKO_x8gWKv-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "sns.pairplot(train_dataset.iloc[:, 1:6], hue = 'diagnosis', diag_kind='kde');" ] @@ -326,7 +352,9 @@ "metadata": { "id": "yi2FzC3T21jR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_dataset.describe().transpose()[:10]" ] @@ -346,7 +374,9 @@ "metadata": { "id": "FrzKNFNjLQDl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Normalize(tf.Module):\n", " def __init__(self, x):\n", @@ -384,21 +414,27 @@ "\n", "$Y$는 클래스 `1`에 속할 확률을 나타냅니다(종양은 악성임). 선형 회귀 출력을 클래스 `0`이 아닌 클래스 `1`에 있는 [로그 오즈(log odds)](https://developers.google.com/machine-learning/glossary#log-odds){:.external} 비율로 해석하여 원하는 매핑 결과를 얻을 수 있습니다.\n", "\n", "$$\ln(\frac{Y}{1-Y}) = wX + b$$\n", "\n", "$wX + b = z$를 설정하면 $Y$에 대한 이 방정식을 풀 수 있습니다.\n", "\n", "$$Y = \frac{e^{z}}{1 + e^{z}} = \frac{1}{1 + e^{-z}}$$\n", "\n", "$\frac{1}{1 + e^{-z}}$ 표현식은 [시그모이드 함수](https://developers.google.com/machine-learning/glossary#sigmoid_function){:.external} $\sigma(z)$로 알려져 있습니다. 따라서 로지스틱 회귀 방정식은 $Y = \sigma(wX + b)$로 작성할 수 있습니다.\n", "\n", "이 튜토리얼의 데이터세트는 고차원 특성 행렬을 처리합니다. 따라서 위의 수식은 다음과 같이 행렬 벡터 형식으로 다시 작성해야 합니다.\n", "\n", "$${\mathrm{Y}} = \sigma({\mathrm{X}}w + b)$$\n", "\n", - "여기서,\n", + "여기서:\n", "\n", - "- $\underset{m\times 1}{\mathrm{Y}}$: 대상 벡터\n", + "- $\underset{m\times 1}{\mathrm{Y}}$: 목표 벡터\n", "- $\underset{m\times n}{\mathrm{X}}$: 특성 행렬\n", "- $\underset{n\times 1}w$: 가중치 벡터\n", "- $b$: 바이어스\n", @@ -413,7 +449,9 @@ "metadata": { "id": "ThHaV_RmucZl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.linspace(-10, 10, 500)\n", "x = tf.cast(x, tf.float32)\n", @@ -433,12 +471,14 @@ "\n", "[로그 손실](https://developers.google.com/machine-learning/glossary#Log_Loss){:.external} 또는 바이너리 교차 엔트로피 손실은 로지스틱 회귀가 있는 바이너리 분류 문제를 다루는 이상적인 손실 함수입니다. 각 예제에서 로그 손실은 예측 확률과 예제의 실제 값 사이의 유사성을 수량화합니다. 이는 다음 수식에 의해 결정됩니다.\n", "\n", "$$L = -\frac{1}{m}\sum_{i=1}^{m}y_i\cdot\log(\hat{y}_i) + (1- y_i)\cdot\log(1 - \hat{y}_i)$$\n", "\n", "여기서,\n", "\n", "- $\hat{y}$: 예측 확률의 벡터\n", - "- $y$: 실제 대상의 벡터\n", + "- $y$: 실제 목표의 벡터\n", "\n", "`tf.nn.sigmoid_cross_entropy_with_logits` 함수를 사용하여 로그 손실을 계산할 수 있습니다. 이 함수는 시그모이드 활성화를 회귀 출력에 자동으로 적용합니다."
] @@ -449,7 +489,9 @@ "metadata": { "id": "JVBInnSqS36W" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def log_loss(y_pred, y):\n", " # Compute the log loss function\n", @@ -471,15 +513,21 @@ "\n", "목표는 로그 손실을 최소화하는 $w^*$와 $b^*$를 찾는 것입니다.\n", "\n", "$$L = -\frac{1}{m}\sum_{i=1}^{m}y_i\cdot\log(\sigma({\mathrm{X_i}}w + b)) + (1- y_i)\cdot\log(1 - \sigma({\mathrm{X_i}}w + b))$$\n", "\n", "$w$에 대해 그래디언트 $L$를 선택하면 다음을 얻습니다.\n", "\n", "$$\frac{\partial L}{\partial w} = \frac{1}{m}(\sigma({\mathrm{X}}w + b) - y)X$$\n", "\n", "$b$에 대해 그래디언트 $L$를 선택하면 다음을 얻습니다.\n", "\n", "$$\frac{\partial L}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\sigma({\mathrm{X_i}}w + b) - y_i$$" ] }, { @@ -497,7 +545,9 @@ "metadata": { "id": "c0sXM7qLlKfZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class LogisticRegression(tf.Module):\n", "\n", @@ -536,7 +586,9 @@ "metadata": { "id": "5bIovC0Z4QHJ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "log_reg = LogisticRegression()" ] @@ -547,7 +599,9 @@ "metadata": { "id": "QJ2ievISyf0p" }, - "outputs": [], + "outputs": [ + + ], "source": [ "y_pred = log_reg(x_train_norm[:5], train=False)\n", "y_pred.numpy()" @@ -568,7 +622,9 @@ "metadata": { "id": "ssnVcKg7oMe6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def predict_class(y_pred, thresh=0.5):\n", " # Return a tensor with `1` if `y_pred` > `0.5`, and `0` otherwise\n", @@ -600,7 +656,9 @@ "metadata": { "id": "vJD7-4U0etqa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batch_size = 64\n", "train_dataset = tf.data.Dataset.from_tensor_slices((x_train_norm, y_train))\n", @@ -624,7 +682,9 @@ "metadata": { "id": "jNC3D1DGsGgK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Set training parameters\n", "epochs = 
200\n", @@ -688,7 +748,9 @@ "metadata": { "id": "mv3oCQPvWhr0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(range(epochs), train_losses, label = \"Training loss\")\n", "plt.plot(range(epochs), test_losses, label = \"Testing loss\")\n", @@ -704,7 +766,9 @@ "metadata": { "id": "D2HDVGLPODIE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.plot(range(epochs), train_accs, label = \"Training accuracy\")\n", "plt.plot(range(epochs), test_accs, label = \"Testing accuracy\")\n", @@ -720,7 +784,9 @@ "metadata": { "id": "jonKhUzuPyfa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(f\"Final training log loss: {train_losses[-1]:.3f}\")\n", "print(f\"Final testing log Loss: {test_losses[-1]:.3f}\")" @@ -732,7 +798,9 @@ "metadata": { "id": "d3DF4qyrPyke" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(f\"Final training accuracy: {train_accs[-1]:.3f}\")\n", "print(f\"Final testing accuracy: {test_accs[-1]:.3f}\")" @@ -757,13 +825,15 @@ "metadata": { "id": "OJO7YkA8ZDMU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def show_confusion_matrix(y, y_classes, typ):\n", " # Compute the confusion matrix and normalize it\n", " plt.figure(figsize=(10,10))\n", " confusion = sk_metrics.confusion_matrix(y.numpy(), y_classes.numpy())\n", - " confusion_normalized = confusion / confusion.sum(axis=1)\n", + " confusion_normalized = confusion / confusion.sum(axis=1, keepdims=True)\n", " axis_labels = range(2)\n", " ax = sns.heatmap(\n", " confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,\n", @@ -782,7 +852,9 @@ "metadata": { "id": "OQ5DFcleiDFm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "show_confusion_matrix(y_train, train_classes, 'Training')" ] @@ -793,7 +865,9 @@ "metadata": { "id": "gtfcsAp_iCNR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "show_confusion_matrix(y_test, test_classes, 'Testing')" ] @@ -830,7 +904,9 @@ "metadata": { "id": "6KPRHCzg4ZxH" }, - "outputs": [], + "outputs": 
[ + + ], "source": [ "class ExportModule(tf.Module):\n", " def __init__(self, model, norm_x, class_pred):\n", @@ -854,7 +930,9 @@ "metadata": { "id": "2YzRclo5-yjO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "log_reg_export = ExportModule(model=log_reg,\n", " norm_x=norm_x,\n", @@ -876,7 +954,9 @@ "metadata": { "id": "a4Qum1Ts_pmF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "models = tempfile.mkdtemp()\n", "save_path = os.path.join(models, 'log_reg_export')\n", @@ -889,7 +969,9 @@ "metadata": { "id": "3KPILr1i_M_c" }, - "outputs": [], + "outputs": [ + + ], "source": [ "log_reg_loaded = tf.saved_model.load(save_path)\n", "test_preds = log_reg_loaded(x_test)\n", @@ -910,13 +992,15 @@ "- 오류율 분석은 전체 정확성 점수 이면의 분류 모델의 성능에 대한 더 많은 인사이트를 얻을 수 있는 좋은 방법입니다.\n", "- 과대적합은 로지스틱 회귀 모델의 또 다른 일반적인 문제이지만 이 튜토리얼에서는 문제가 되지 않았습니다. 이에 대한 추가적인 도움이 필요하면 [과대적합 및 과소적합](../../tutorials/keras/overfit_and_underfit.ipynb) 튜토리얼을 참조하세요.\n", "\n", - "TensorFlow Core API를 사용하는 더 많은 예제는 [가이드](https://www.tensorflow.org/guide/core)를 확인하세요. 데이터 로드 및 준비에 대해 자세히 알아보려면 [이미지 데이터 로드](../../load_data/images.ipynb) 또는 [CSV 데이터 로드](../../load_data/csv.ipynb) 튜토리얼을 참고하세요." + "TensorFlow Core API를 사용하는 더 많은 예제는 [가이드](https://www.tensorflow.org/guide/core)를 확인하세요. 데이터 로드 및 준비에 대해 자세히 알아보려면 [이미지 데이터 로드](../../tutorials/load_data/images.ipynb) 또는 [CSV 데이터 로드](../../tutorials/load_data/csv.ipynb) 튜토리얼을 참고하세요." 
] } ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "logistic_regression_core.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/core/matrix_core.ipynb b/site/ko/guide/core/matrix_core.ipynb index 72f2254200..8529286654 100644 --- a/site/ko/guide/core/matrix_core.ipynb +++ b/site/ko/guide/core/matrix_core.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -91,7 +93,9 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import matplotlib\n", "from matplotlib.image import imread\n", @@ -107,7 +111,9 @@ "metadata": { "id": "9xQKvCJ85kCQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "print(tf.__version__)" @@ -123,7 +129,9 @@ "\n", "행렬 ${\mathrm{A}}$의 특잇값 분해는 다음 인수분해에 의해 결정됩니다.\n", "\n", + "$${\mathrm{A}} = {\mathrm{U}} \Sigma {\mathrm{V}}^T$$\n", + "\n", "여기서\n", "\n", "$m < n$일 때 ${\mathrm{U}}$ 및 $\Sigma$은 모두 $(m \times m)$ 차원이며 ${\mathrm{V}}^T$는 $(m \times n)$ 차원을 가집니다.\n", "\n", @@ -159,7 +167,9 @@ "metadata": { "id": "C3QAcgyoeIpv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "A = tf.random.uniform(shape=[40,30])\n", "# Compute the SVD factorization\n", @@ -191,7 +201,9 @@ "metadata": { "id": "TPE6QeMtADUn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "A_svd = tf.einsum('s,us,vs -> uv',s,U,V)\n", "print('\\nReconstructed Matrix, A_svd', A_svd)" @@ -209,7 +221,9 @@ "\n", "SVD의 관점에서 ${\mathrm{A}}$의 rank-r 근삿값은 다음 공식으로 정의합니다.\n", "\n", - "어디\n", + "$${\mathrm{A_r}} = {\mathrm{U_r}} \Sigma_r {\mathrm{V_r}}^T$$\n", + "\n", + "여기서\n", "\n", - "어디\n", "\n", @@ -242,7 +256,9 @@ "metadata": { "id": "2oY3pMPagJrO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def rank_r_approx(s, U, V, r, verbose=False):\n", " # Compute the matrices necessary for a rank-r approximation\n", @@ -272,7 +288,9 @@ "metadata": {
"id": "O3ZRkYCkX2FQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(f\"Original Size of A: {tf.size(A)}\")\n", "s, U, V = tf.linalg.svd(A)" @@ -284,7 +302,9 @@ "metadata": { "id": "S1DR83VMX4cM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Rank-15 approximation\n", "A_15, A_15_size = rank_r_approx(s, U, V, 15, verbose = True)\n", @@ -297,7 +317,9 @@ "metadata": { "id": "KgFT70XFX57E" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Rank-3 approximation\n", "A_3, A_3_size = rank_r_approx(s, U, V, 3, verbose = True)\n", @@ -332,7 +354,9 @@ "metadata": { "id": "OVsZOQUAZ2C7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "img_link = \"https://imagen.research.google/main_gallery_images/a-photo-of-a-corgi-dog-riding-a-bike-in-times-square.jpg\"\n", "img_path = requests.get(img_link, stream=True).raw\n", @@ -346,7 +370,9 @@ "metadata": { "id": "Qvs7uftcZ54x" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def show_img(I):\n", " # Display the image in matplotlib\n", @@ -361,7 +387,9 @@ "metadata": { "id": "ZbesXO3HZ6Qs" }, - "outputs": [], + "outputs": [ + + ], "source": [ "show_img(I)" ] @@ -383,7 +411,9 @@ "metadata": { "id": "i7DDp0h7oSIk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def compress_image(I, r, verbose=False):\n", " # Compress an image with the SVD given a rank \n", @@ -426,7 +456,9 @@ "metadata": { "id": "7GlKkVLGDjre" }, - "outputs": [], + "outputs": [ + + ], "source": [ "I_100, I_100_prop = compress_image(I, 100, verbose=True)" ] @@ -437,7 +469,9 @@ "metadata": { "id": "XdvUkF5_E75D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "I_50, I_50_prop = compress_image(I, 50, verbose=True)" ] @@ -448,7 +482,9 @@ "metadata": { "id": "MsCNZ8416Sbk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "I_10, I_10_prop = compress_image(I, 10, verbose=True)" ] @@ -481,7 +517,9 @@ "metadata": { "id": "O1ariNQe6Wbl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plt.figure(figsize=(11,6))\n", 
"plt.plot([100, 50, 10], [I_100_prop, I_50_prop, I_10_prop])\n", @@ -498,9 +536,9 @@ "source": [ "이 플롯을 기반으로 근삿값 이미지의 압축 인자와 순위 사이에는 선형 관계가 있습니다. 더 자세히 알아보기 위해 근삿값 행렬 ${\\mathrm{A}}_r$의 데이터 크기를 계산에 필요한 총 요소 수로 정의합니다. 다음 수식을 사용하여 압축 인자와 순위 사이의 관계를 찾을 수 있습니다.\n", "\n", - "여기서\n", + "$$x = (m \\times r) + r + (r \\times n) = r \\times (m + n + 1)$$\n", "\n", - "각 RGB 근삿값은 서로 영향을 미치지 않으므로 이 수식은 색상 채널 차원과 무관합니다. 이제 원하는 압축 인자가 주어질 경우 입력 이미지를 압축하는 함수를 작성합니다.\n", + "$$c = \\large \\frac{x}{y} = \\frac{r \\times (m + n + 1)}{m \\times n}$$\n", "\n", "어디\n", "\n", @@ -523,7 +561,9 @@ "metadata": { "id": "viVO-I60QynI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def compress_image_with_factor(I, compression_factor, verbose=False):\n", " # Returns a compressed image based on a desired compression factor\n", @@ -548,7 +588,9 @@ "metadata": { "id": "HVeeloIwQ1b6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "compression_factor = 0.15\n", "I_r_img = compress_image_with_factor(I, compression_factor, verbose=True)" @@ -571,7 +613,9 @@ "metadata": { "id": "CteJ6VbKlndu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def viz_energy(I):\n", " # Visualize the energy captured based on rank\n", @@ -595,7 +639,9 @@ "metadata": { "id": "Vl9PKow-GgCp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "viz_energy(I)" ] @@ -615,7 +661,9 @@ "metadata": { "id": "fum5Cvm7R5vH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def compress_image_with_energy(I, energy_factor, verbose=False):\n", " # Returns a compressed image based on a desired energy factor\n", @@ -649,7 +697,9 @@ "metadata": { "id": "xDXBaZQ4c5jF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "energy_factor = 0.75\n", "I_r_img = compress_image_with_energy(I, energy_factor, verbose=True)" @@ -676,7 +726,9 @@ "metadata": { "id": "hctOvN8BckiS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "s, U, V = tf.linalg.svd(A)\n", "A_10, A_10_size = rank_r_approx(s, U, V, 10)\n", @@ -706,7 
+758,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "matrix_core.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/core/mlp_core.ipynb b/site/ko/guide/core/mlp_core.ipynb index eb0f358de1..54081db1f4 100644 --- a/site/ko/guide/core/mlp_core.ipynb +++ b/site/ko/guide/core/mlp_core.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -37,7 +39,7 @@ "id": "EIdT9iu_Z4Rb" }, "source": [ - "# Core API를 사용하여 숫자 인식을 수행하는 멀티레이어 퍼셉트론" + "# Core API를 이용한 숫자 인식 다층 퍼셉트론" ] }, { @@ -47,10 +49,13 @@ }, "source": [ "\n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기 GitHub에서 소스 보기노트북 다운로드하기 TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + " GitHub에서 소스 보기\n", + " 노트북 다운로드하기
" ] }, @@ -60,7 +65,7 @@ "id": "SjAxxRpBzVYg" }, "source": [ - "이 노트북에서는 [TensorFlow Core 하위 수준 API](https://www.tensorflow.org/guide/core)를 사용하여 [멀티레이어 퍼셉트론](https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy) 및 [MNIST 데이터세트](http://yann.lecun.com/exdb/mnist)[로 손으로 작성한 숫자를 분류하는 엔드 투 엔드 머신러닝 워크플로를 구축합니다. TensorFlow Core 및 기본 사용 사례에 대한 자세한 내용은 ](http://yann.lecun.com/exdb/mnist)[Core API 개요](https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy)를 방문하여 확인하세요." + "이 노트북에서는 [다층 퍼셉트론](https://developers.google.com/machine-learning/crash-course/introduction-to-neural-networks/anatomy) 및 [MNIST 데이터세트](http://yann.lecun.com/exdb/mnist)[로 손으로 작성한 숫자를 분류하는 엔드 투 엔드 머신러닝 워크플로를 구축하기 위해 ](https://www.tensorflow.org/guide/core)[TensorFlow Core 로우 레벨 API](https://www.tensorflow.org/guide/core)를 사용합니다. TensorFlow Core 및 기본 사용 사례에 대한 자세한 내용은 Core API 개요를 방문하여 확인하세요." ] }, { @@ -77,7 +82,7 @@ "\n", "$$Z = \\vec{w}⋅\\mathrm{X} + b$$\n", "\n", - "여기서\n", + "여기서 각 기호는 다음을 의미합니다.\n", "\n", "- $Z$: 퍼셉트론 출력\n", "- $\\mathrm{X}$: 특성 행렬\n", @@ -86,9 +91,9 @@ "\n", "이러한 퍼셉트론이 쌓이면 밀집 레이어라는 구조를 형성하고, 이를 연결하여 신경망을 구축할 수 있습니다. 밀집 레이어의 수식은 퍼셉트론의 수식과 유사하지만 대신 가중치 행렬과 바이어스 벡터를 사용합니다.\n", "\n", - "$$Y = \\mathrm{W}⋅\\mathrm{X} + \\vec{b}$$\n", + "$$Z = \\mathrm{W}⋅\\mathrm{X} + \\vec{b}$$\n", "\n", - "여기서\n", + "여기서 각 기호는 다음을 의미합니다.\n", "\n", "- $Z$: 밀집 레이어 출력\n", "- $\\mathrm{X}$: 특성 행렬\n", @@ -104,7 +109,7 @@ "id": "nchsZfwEVtVs" }, "source": [ - "## 설치\n", + "## 설치하기\n", "\n", "시작하려면 TensorFlow, [pandas](https://pandas.pydata.org), [Matplotlib](https://matplotlib.org), [seaborn](https://seaborn.pydata.org)을 가져옵니다." 
] @@ -115,7 +120,9 @@ "metadata": { "id": "mSfgqmwBagw_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Use seaborn for countplot.\n", "!pip install -q seaborn" @@ -127,7 +134,9 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import pandas as pd\n", "import matplotlib\n", @@ -145,7 +154,9 @@ "metadata": { "id": "9xQKvCJ85kCQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "import tensorflow_datasets as tfds\n", @@ -173,7 +184,9 @@ "metadata": { "id": "Uiuh0B098_3p" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_data, val_data, test_data = tfds.load(\"mnist\", \n", " split=['train[10000:]', 'train[0:10000]', 'test'],\n", @@ -195,7 +208,9 @@ "metadata": { "id": "6V8hSqJ7AMjQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x_viz, y_viz = tfds.load(\"mnist\", split=['train[:1500]'], batch_size=-1, as_supervised=True)[0]\n", "x_viz = tf.squeeze(x_viz, axis=3)\n", @@ -223,9 +238,11 @@ "metadata": { "id": "Rj3K4XgQE7qR" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "sns.countplot(y_viz.numpy());\n", + "sns.countplot(x=y_viz.numpy());\n", "plt.xlabel('Digits')\n", "plt.title(\"MNIST Digit Distribution\");" ] @@ -247,7 +264,9 @@ "metadata": { "id": "JSyCm2V2_AvI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def preprocess(x, y):\n", " # Reshaping the data\n", @@ -278,7 +297,9 @@ "metadata": { "id": "hYunzt3UyT9G" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.linspace(-2, 2, 201)\n", "x = tf.cast(x, tf.float32)\n", @@ -305,7 +326,9 @@ "metadata": { "id": "fVM8pvhWwuwI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.linspace(-4, 4, 201)\n", "x = tf.cast(x, tf.float32)\n", @@ -336,7 +359,9 @@ "metadata": { "id": "re1SSFyBdMrS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def xavier_init(shape):\n", " # Computes the xavier initialization values for a weight matrix\n", @@ -362,7 +387,9 @@ "metadata": { "id": 
"IM0yJos25FG5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class DenseLayer(tf.Module):\n", "\n", @@ -377,8 +404,8 @@ " if not self.built:\n", " # Infer the input dimension based on first call\n", " self.in_dim = x.shape[1]\n", - " # Initialize the weights and biases using Xavier scheme\n", - " self.w = tf.Variable(xavier_init(shape=(self.in_dim, self.out_dim)))\n", + " # Initialize the weights and biases\n", + " self.w = tf.Variable(self.weight_init(shape=(self.in_dim, self.out_dim)))\n", " self.b = tf.Variable(tf.zeros(shape=(self.out_dim,)))\n", " self.built = True\n", " # Compute the forward pass\n", @@ -401,7 +428,9 @@ "metadata": { "id": "6XisRWiCyHAb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MLP(tf.Module):\n", "\n", @@ -435,7 +464,9 @@ "metadata": { "id": "VmlACuki3oPi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "hidden_layer_1_size = 700\n", "hidden_layer_2_size = 500\n", @@ -459,7 +490,7 @@ "\n", "$$L = -\\frac{1}{n}\\sum_{i=1}^{n}\\sum_{i=j}^{n} {y_j}^{[i]}⋅\\log(\\hat{{y_j}}^{[i]})$$\n", "\n", - "여기서\n", + "여기서 각 기호는 다음을 의미합니다.\n", "\n", "- $\\underset{n\\times m}{\\hat{y}}$: 예측 클래스 분포의 행렬\n", "- $\\underset{n\\times m}{y}$: 실제 클래스의 원-핫 인코딩된 행렬\n", @@ -473,7 +504,9 @@ "metadata": { "id": "rskOYA7FVCwg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def cross_entropy_loss(y_pred, y):\n", " # Compute cross entropy loss with a sparse operation\n", @@ -496,7 +529,9 @@ "metadata": { "id": "jPJMWx2UgiBm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def accuracy(y_pred, y):\n", " # Compute accuracy after extracting class predictions\n", @@ -511,7 +546,7 @@ "id": "JSiNRhTOnKZr" }, "source": [ - "### 모델 훈련\n", + "### 모델 학습\n", "\n", "옵티마이저를 사용하면 표준 경사 하강에 비해 훨씬 더 빠른 수렴 결과를 얻을 수 있습니다. Adam 옵티마이저는 아래에 구현되어 있습니다. TensorFlow Core로 사용자 정의 옵티마이저 프로그램을 설계하는 방법에 대한 자세한 내용은 [옵티마이저](https://www.tensorflow.org/guide/core/optimizers_core) 가이드를 방문하여 확인하세요." 
] @@ -522,7 +557,9 @@ "metadata": { "id": "iGIBDk3cAv6a" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Adam:\n", "\n", @@ -571,7 +608,9 @@ "metadata": { "id": "CJLeY2ao1aw6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def train_step(x_batch, y_batch, loss, acc, model, optimizer):\n", " # Update the model state given a batch of data\n", @@ -597,7 +636,9 @@ "metadata": { "id": "oC85kuZgmh3q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def train_model(mlp, train_data, val_data, loss, acc, optimizer, epochs):\n", " # Initialize data structures\n", @@ -651,7 +692,9 @@ "metadata": { "id": "zPlT8QfxptYl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train_losses, train_accs, val_losses, val_accs = train_model(mlp_model, train_data, val_data, \n", " loss=cross_entropy_loss, acc=accuracy,\n", @@ -666,7 +709,7 @@ "source": [ "### 성능 평가\n", "\n", - "먼저 훈련하는 동안 모델의 손실과 정확성을 시각화하는 플로팅 함수를 작성합니다. " + "훈련하는 동안 모델의 손실과 정확성을 시각화하는 플로팅 함수를 작성하는 것으로 시작합니다. " ] }, { @@ -675,7 +718,9 @@ "metadata": { "id": "VXTCYVtNDjAM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def plot_metrics(train_metric, val_metric, metric_type):\n", " # Visualize metrics vs training Epochs\n", @@ -694,7 +739,9 @@ "metadata": { "id": "DC-qIvZbHo0G" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plot_metrics(train_losses, val_losses, \"cross entropy loss\")" ] @@ -705,7 +752,9 @@ "metadata": { "id": "P-w2xk2PIDve" }, - "outputs": [], + "outputs": [ + + ], "source": [ "plot_metrics(train_accs, val_accs, \"accuracy\")" ] @@ -716,7 +765,7 @@ "id": "tbrJJaFrD_XR" }, "source": [ - "## 모델 저장하기 및 로드하기\n", + "## 모델 저장 및 로드하기\n", "\n", "먼저 원시 데이터를 선택하고 다음 연산을 수행하는 내보내기 모듈을 만들어 봅니다.\n", "\n", @@ -731,7 +780,9 @@ "metadata": { "id": "1sszfWuJJZoo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class ExportModule(tf.Module):\n", " def __init__(self, model, preprocess, class_pred):\n", @@ -755,7 +806,9 @@ "metadata": { "id": "p8x6gjTDVi5d" }, - 
"outputs": [], + "outputs": [ + + ], "source": [ "def preprocess_test(x):\n", " # The export module takes in unprocessed and unlabeled data\n", @@ -783,7 +836,9 @@ "metadata": { "id": "fN9pPBQTKTe3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mlp_model_export = ExportModule(model=mlp_model,\n", " preprocess=preprocess_test,\n", @@ -796,7 +851,9 @@ "metadata": { "id": "idS7rQKbKwRS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "models = tempfile.mkdtemp()\n", "save_path = os.path.join(models, 'mlp_model_export')\n", @@ -818,7 +875,9 @@ "metadata": { "id": "W5cwBTUqxldW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mlp_loaded = tf.saved_model.load(save_path)" ] @@ -829,7 +888,9 @@ "metadata": { "id": "bmv0u6j_b5OC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def accuracy_score(y_pred, y):\n", " # Generic accuracy function\n", @@ -857,7 +918,9 @@ "metadata": { "id": "UD8YiC1Vfeyp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Accuracy breakdown by digit:\")\n", "print(\"---------------------------\")\n", @@ -866,9 +929,9 @@ " label_ind = (y_test == label)\n", " # extract predictions for specific true label\n", " pred_label = test_classes[label_ind]\n", - " label_filled = tf.cast(tf.fill(pred_label.shape[0], label), tf.int64)\n", + " labels = y_test[label_ind]\n", " # compute class-wise accuracy\n", - " label_accs[accuracy_score(pred_label, label_filled).numpy()] = label\n", + " label_accs[accuracy_score(pred_label, labels).numpy()] = label\n", "for key in sorted(label_accs):\n", " print(f\"Digit {label_accs[key]}: {key:.3f}\")" ] @@ -888,7 +951,9 @@ "metadata": { "id": "JqCaqPwwh1tN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import sklearn.metrics as sk_metrics\n", "\n", @@ -897,7 +962,7 @@ " plt.figure(figsize=(10,10))\n", " confusion = sk_metrics.confusion_matrix(test_labels.numpy(), \n", " test_classes.numpy())\n", - " confusion_normalized = confusion / confusion.sum(axis=1)\n", + " 
confusion_normalized = confusion / confusion.sum(axis=1, keepdims=True)\n", " axis_labels = range(10)\n", " ax = sns.heatmap(\n", " confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,\n", diff --git a/site/ko/guide/create_op.md b/site/ko/guide/create_op.md index ba74a36952..2a2076670e 100644 --- a/site/ko/guide/create_op.md +++ b/site/ko/guide/create_op.md @@ -21,7 +21,7 @@ ### 전제 조건 - C++에 어느 정도 익숙해야 합니다. -- [TensorFlow 바이너리](../../install)를 설치했거나 [TensorFlow 소스](../../install/source.md)를 다운로드하여 빌드할 수 있어야 합니다. +- [TensorFlow 바이너리](https://www.tensorflow.org/install)를 설치했거나 [TensorFlow 소스](https://www.tensorflow.org/install/source)를 다운로드하여 빌드할 수 있어야 합니다. ## op 인터페이스 정의하기 @@ -373,7 +373,7 @@ if __name__ == "__main__": $ python zero_out_op_test.py ``` -## Op에 고급 특성 빌드하기 +## op에 고급 특성 빌드하기 기본 (그리고, 다소 제한적인) op 및 구현을 빌드하는 방법을 살펴보았으므로 일반적으로 op에 빌드하는 데 필요한 조금 더 복잡한 항목을 살펴보겠습니다. 여기에는 다음이 포함됩니다. @@ -412,7 +412,7 @@ $ python zero_out_op_test.py 일부 함수에서 반환된 `Status` 객체가 오류인지 테스트하려는 경우, [`OP_REQUIRES_OK`](https://www.tensorflow.org/code/tensorflow/core/platform/errors.h)를 사용합니다. 이 두 매크로는 모두 오류 시 함수로부터 반환합니다. -### Op 등록 +### op 등록 #### Attrs @@ -1083,6 +1083,8 @@ def _zero_out_grad(op, grad): 그래디언트 함수가 호출될 때 텐서 데이터 자체가 아니라 ops의 데이터 흐름 그래프만 사용할 수 있습니다. 따라서, 모든 계산은 그래프 실행 시간에 실행되도록 다른 tensorflow ops를 사용하여 수행해야 합니다. +op 유형에 대한 사용자 정의 그래디언트를 등록할 때 유형 힌트를 추가하면 데이터 유효성 검사를 통해 코드의 가독성, 디버깅 가능성, 유지 관리 용이성 및 견고성을 높일 수 있습니다. 예를 들어, 함수에서 `op`를 매개변수로 사용할 때 그래디언트 함수가 tf.Operation을 매개변수 유형으로 사용하도록 지정합니다. + ### C++의 형상 함수 TensorFlow API에 "도형 유추"라는 특성이 있어 그래프를 실행하지 않고도 텐서 도형에 대한 정보를 제공합니다. 도형 유추는 C++ `REGISTER_OP` 선언에서 각 op 유형에 등록된 "도형 함수"에 의해 지원되며 두 가지 역할을 수행합니다. 입력의 도형이 그래프 생성 중에 호환되는지 확인하고 출력의 도형을 지정합니다. 
diff --git a/site/ko/guide/data.ipynb b/site/ko/guide/data.ipynb index 536b9ba38e..242da3339a 100644 --- a/site/ko/guide/data.ipynb +++ b/site/ko/guide/data.ipynb @@ -18,7 +18,9 @@ "cellView": "form", "id": "llMNufAK7nfK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n", "# you may not use this file except in compliance with the License.\n", @@ -79,7 +81,9 @@ "metadata": { "id": "UJIEjEIBdf-h" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf" ] @@ -90,7 +94,9 @@ "metadata": { "id": "7Y0JtWBNR9E5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import pathlib\n", "import os\n", @@ -124,7 +130,9 @@ "metadata": { "id": "0F-FDnjB6t6J" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])\n", "dataset" @@ -136,7 +144,9 @@ "metadata": { "id": "pwJsRJ-FbDcJ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for elem in dataset:\n", " print(elem.numpy())" @@ -157,7 +167,9 @@ "metadata": { "id": "03w9oxFfbDcM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "it = iter(dataset)\n", "\n", @@ -179,7 +191,9 @@ "metadata": { "id": "C2bHAeNxbDcO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(dataset.reduce(0, lambda state, value: state + value).numpy())" ] @@ -209,7 +223,9 @@ "metadata": { "id": "Mg0m1beIhXGn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))\n", "\n", @@ -222,7 +238,9 @@ "metadata": { "id": "cwyemaghhXaG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset2 = tf.data.Dataset.from_tensor_slices(\n", " (tf.random.uniform([4]),\n", @@ -237,7 +255,9 @@ "metadata": { "id": "1CL7aB0ahXn_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n", "\n", @@ -250,7 +270,9 @@ "metadata": { "id": 
"m5bz7R1xhX1f" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Dataset containing a sparse tensor.\n", "dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))\n", @@ -264,7 +286,9 @@ "metadata": { "id": "lVOPHur_hYQv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Use value_type to see the type of value represented by the element spec\n", "dataset4.element_spec.value_type" @@ -285,7 +309,9 @@ "metadata": { "id": "2myAr3Pxd-zF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset1 = tf.data.Dataset.from_tensor_slices(\n", " tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))\n", @@ -299,7 +325,9 @@ "metadata": { "id": "woPXMP14gUTg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for z in dataset1:\n", " print(z.numpy())" @@ -311,7 +339,9 @@ "metadata": { "id": "53PA4x6XgLar" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset2 = tf.data.Dataset.from_tensor_slices(\n", " (tf.random.uniform([4]),\n", @@ -326,7 +356,9 @@ "metadata": { "id": "2ju4sNSebDcR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n", "\n", @@ -339,7 +371,9 @@ "metadata": { "id": "BgxsfAS2g6gk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for a, (b,c) in dataset3:\n", " print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))" @@ -373,7 +407,9 @@ "metadata": { "id": "NmaE6PjjhQ47" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train, test = tf.keras.datasets.fashion_mnist.load_data()" ] @@ -384,7 +420,9 @@ "metadata": { "id": "J6cNiuDBbDcU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "images, labels = train\n", "images = images/255\n", @@ -421,7 +459,9 @@ "metadata": { "id": "9njpME-jmDza" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def count(stop):\n", " i = 0\n", @@ -436,7 +476,9 @@ "metadata": { "id": "xwqLrjnTpD8Y" }, - "outputs": [], + "outputs": [ + + ], 
"source": [ "for n in count(5):\n", " print(n)" @@ -461,7 +503,9 @@ "metadata": { "id": "GFga_OTwm0Je" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )" ] @@ -472,7 +516,9 @@ "metadata": { "id": "fel1SUuBnDUE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for count_batch in ds_counter.repeat().batch(10).take(10):\n", " print(count_batch.numpy())" @@ -497,7 +543,9 @@ "metadata": { "id": "allFX1g8rGKe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def gen_series():\n", " i = 0\n", @@ -513,7 +561,9 @@ "metadata": { "id": "6Ku26Yb9rcJX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for i, series in gen_series():\n", " print(i, \":\", str(series))\n", @@ -538,7 +588,9 @@ "metadata": { "id": "zDTfhEzhsliM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds_series = tf.data.Dataset.from_generator(\n", " gen_series, \n", @@ -563,7 +615,9 @@ "metadata": { "id": "A7jEpj3As1lO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds_series_batch = ds_series.shuffle(20).padded_batch(10)\n", "\n", @@ -590,7 +644,9 @@ "metadata": { "id": "g-_JCFRQ1CXM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "flowers = tf.keras.utils.get_file(\n", " 'flower_photos',\n", @@ -613,7 +669,9 @@ "metadata": { "id": "vPCZeBQE5DfH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)" ] @@ -624,7 +682,9 @@ "metadata": { "id": "my4PxqfH26p6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "images, labels = next(img_gen.flow_from_directory(flowers))" ] @@ -635,7 +695,9 @@ "metadata": { "id": "Hd96nH1w3eKH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(images.dtype, images.shape)\n", "print(labels.dtype, labels.shape)" @@ -647,7 +709,9 @@ "metadata": { "id": "KvRwvt5E2rTH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = 
tf.data.Dataset.from_generator(\n", " lambda: img_gen.flow_from_directory(flowers), \n", @@ -664,7 +728,9 @@ "metadata": { "id": "LcaULBCXj_2_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for images, labels in ds.take(1):\n", " print('images.shape: ', images.shape)\n", @@ -699,7 +765,9 @@ "metadata": { "id": "jZo_4fzdbDcW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Creates a dataset that reads all of the examples from two files.\n", "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")" @@ -720,7 +788,9 @@ "metadata": { "id": "e2WV5d7DRUA-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n", "dataset" @@ -741,7 +811,9 @@ "metadata": { "id": "3tk29nlMl5P3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "raw_example = next(iter(dataset))\n", "parsed = tf.train.Example.FromString(raw_example.numpy())\n", @@ -768,7 +840,9 @@ "metadata": { "id": "hQMoFu2TbDcc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'\n", "file_names = ['cowper.txt', 'derby.txt', 'butler.txt']\n", @@ -785,7 +859,9 @@ "metadata": { "id": "il4cOjiVwj95" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.TextLineDataset(file_paths)" ] @@ -805,7 +881,9 @@ "metadata": { "id": "vpEHKyvHxu8A" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for line in dataset.take(5):\n", " print(line.numpy())" @@ -826,7 +904,9 @@ "metadata": { "id": "1UCveWOt7fDE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "files_ds = tf.data.Dataset.from_tensor_slices(file_paths)\n", "lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)\n", @@ -852,7 +932,9 @@ "metadata": { "id": "X6b20Gua2jPO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_file = 
tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n", "titanic_lines = tf.data.TextLineDataset(titanic_file)" @@ -864,7 +946,9 @@ "metadata": { "id": "5M1pauNT68B2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for line in titanic_lines.take(10):\n", " print(line.numpy())" @@ -876,7 +960,9 @@ "metadata": { "id": "dEIP95cibDcf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def survived(line):\n", " return tf.not_equal(tf.strings.substr(line, 0, 1), \"0\")\n", @@ -890,7 +976,9 @@ "metadata": { "id": "odQ4618h1XqD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for line in survivors.take(10):\n", " print(line.numpy())" @@ -924,7 +1012,9 @@ "metadata": { "id": "kj28j5u49Bjm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")" ] @@ -935,7 +1025,9 @@ "metadata": { "id": "ghvtmW40LM0B" }, - "outputs": [], + "outputs": [ + + ], "source": [ "df = pd.read_csv(titanic_file)\n", "df.head()" @@ -956,7 +1048,9 @@ "metadata": { "id": "JmAMCiPJA0qO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))\n", "\n", @@ -984,7 +1078,9 @@ "metadata": { "id": "zHUDrM_s_brq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_batches = tf.data.experimental.make_csv_dataset(\n", " titanic_file, batch_size=4,\n", @@ -997,7 +1093,9 @@ "metadata": { "id": "TsZfhz79_Wlg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for feature_batch, label_batch in titanic_batches.take(1):\n", " print(\"'survived': {}\".format(label_batch))\n", @@ -1021,7 +1119,9 @@ "metadata": { "id": "H9KNHyDwF2Sc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_batches = tf.data.experimental.make_csv_dataset(\n", " titanic_file, batch_size=4,\n", @@ -1034,7 +1134,9 @@ "metadata": { "id": "7C2uosFnGIT8" }, - "outputs": [], + 
"outputs": [ + + ], "source": [ "for feature_batch, label_batch in titanic_batches.take(1):\n", " print(\"'survived': {}\".format(label_batch))\n", @@ -1057,7 +1159,9 @@ "metadata": { "id": "wP1Y_NXA8bYl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]\n", "dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)\n", @@ -1081,7 +1185,9 @@ "metadata": { "id": "Qry-g90FMo2I" }, - "outputs": [], + "outputs": [ + + ], "source": [ "%%writefile missing.csv\n", "1,2,3,4\n", @@ -1098,7 +1204,9 @@ "metadata": { "id": "d5_hbiE9bDck" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Creates a dataset that reads all of the records from two CSV files, each with\n", "# four float columns which may have missing values.\n", @@ -1115,7 +1223,9 @@ "metadata": { "id": "__jc7iD9M9FC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for line in dataset:\n", " print(line.numpy())" @@ -1136,7 +1246,9 @@ "metadata": { "id": "p2IF_K0obDcm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Creates a dataset that reads all of the records from two CSV files with\n", "# headers, extracting float data from columns 2 and 4.\n", @@ -1152,7 +1264,9 @@ "metadata": { "id": "-5aLprDeRNb0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for line in dataset:\n", " print(line.numpy())" @@ -1182,7 +1296,9 @@ "metadata": { "id": "1dZwN3CS-jV2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "flowers_root = tf.keras.utils.get_file(\n", " 'flower_photos',\n", @@ -1215,7 +1331,9 @@ "metadata": { "id": "_2iCXsHu6jJH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for item in flowers_root.glob(\"*\"):\n", " print(item.name)" @@ -1236,7 +1354,9 @@ "metadata": { "id": "lAkQp5uxoINu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))\n", "\n", @@ 
-1259,7 +1379,9 @@ "metadata": { "id": "-xhBRgvNqRRe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def process_path(file_path):\n", " label = tf.strings.split(file_path, os.sep)[-2]\n", @@ -1274,7 +1396,9 @@ "metadata": { "id": "kxrl0lGdnpRz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for image_raw, label_text in labeled_ds.take(1):\n", " print(repr(image_raw.numpy()[:100]))\n", @@ -1314,7 +1438,9 @@ "metadata": { "id": "xB7KeceLoSH0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "inc_dataset = tf.data.Dataset.range(100)\n", "dec_dataset = tf.data.Dataset.range(0, -100, -1)\n", @@ -1340,7 +1466,9 @@ "metadata": { "id": "yN7hn7OBoSHx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batched_dataset" ] @@ -1360,7 +1488,9 @@ "metadata": { "id": "BycWC7WCoSHt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batched_dataset = dataset.batch(7, drop_remainder=True)\n", "batched_dataset" @@ -1383,7 +1513,9 @@ "metadata": { "id": "kycwO0JooSHn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.range(100)\n", "dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\n", @@ -1437,7 +1569,9 @@ "metadata": { "id": "0tODHZzRoSHg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n", "titanic_lines = tf.data.TextLineDataset(titanic_file)" @@ -1449,7 +1583,9 @@ "metadata": { "id": "LMO6mlXxoSHc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def plot_batch_sizes(ds):\n", " batch_sizes = [batch.shape[0] for batch in ds]\n", @@ -1475,7 +1611,9 @@ "metadata": { "id": "nZ0G1cztoSHX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_batches = titanic_lines.repeat(3).batch(128)\n", "plot_batch_sizes(titanic_batches)" @@ -1496,7 +1634,9 @@ "metadata": { "id": "wmbmdK1qoSHS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "titanic_batches = 
titanic_lines.batch(128).repeat(3)\n", "\n", @@ -1518,7 +1658,9 @@ "metadata": { "id": "YyekyeY7oSHO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "epochs = 3\n", "dataset = titanic_lines.batch(128)\n", @@ -1557,7 +1699,9 @@ "metadata": { "id": "Io4iJH1toSHI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "lines = tf.data.TextLineDataset(titanic_file)\n", "counter = tf.data.experimental.Counter()\n", @@ -1583,7 +1727,9 @@ "metadata": { "id": "ayM3FFFAoSHC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "n,line_batch = next(iter(dataset))\n", "print(n.numpy())" @@ -1606,7 +1752,9 @@ "metadata": { "id": "YX3pe7zZoSG6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.zip((counter, lines))\n", "shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)\n", @@ -1622,7 +1770,9 @@ "metadata": { "id": "H9hlE-lGoSGz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]\n", "plt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\n", @@ -1645,7 +1795,9 @@ "metadata": { "id": "Bhxb5YGZoSGm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.zip((counter, lines))\n", "shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)\n", @@ -1661,7 +1813,9 @@ "metadata": { "id": "VAM4cbpZoSGL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]\n", "\n", @@ -1705,7 +1859,9 @@ "metadata": { "id": "rMGlj8V-u-NH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))" ] @@ -1725,7 +1881,9 @@ "metadata": { "id": "fZObC0debDcr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Reads an image from a file, decodes it into a dense tensor, and resizes it\n", "# to a fixed shape.\n", @@ -1755,7 +1913,9 @@ "metadata": { "id": "y8xuN_HBzGup" }, - "outputs": [], + "outputs": [ + + ], "source": [ 
"file_path = next(iter(list_ds))\n", "image, label = parse_image(file_path)\n", @@ -1784,7 +1944,9 @@ "metadata": { "id": "SzO8LI_H5Sk_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "images_ds = list_ds.map(parse_image)\n", "\n", @@ -1822,7 +1984,9 @@ "metadata": { "id": "tBUmbERt7Czz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import scipy.ndimage as ndimage\n", "\n", @@ -1837,7 +2001,9 @@ "metadata": { "id": "_wEyL7bS9S6t" }, - "outputs": [], + "outputs": [ + + ], "source": [ "image, label = next(iter(images_ds))\n", "image = random_rotate_image(image)\n", @@ -1859,7 +2025,9 @@ "metadata": { "id": "Cn2nIu92BMp0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def tf_random_rotate_image(image, label):\n", " im_shape = image.shape\n", @@ -1874,7 +2042,9 @@ "metadata": { "id": "bWPqKbTnbDct" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rot_ds = images_ds.map(tf_random_rotate_image)\n", "\n", @@ -1899,7 +2069,9 @@ "metadata": { "id": "6wnE134b32KY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")\n", "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n", @@ -1921,7 +2093,9 @@ "metadata": { "id": "4znsVNqnF73C" }, - "outputs": [], + "outputs": [ + + ], "source": [ "raw_example = next(iter(dataset))\n", "parsed = tf.train.Example.FromString(raw_example.numpy())\n", @@ -1940,7 +2114,9 @@ "metadata": { "id": "cwzqp8IGC_vQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "raw_example = next(iter(dataset))" ] @@ -1951,7 +2127,9 @@ "metadata": { "id": "y2X1dQNfC8Lu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def tf_parse(eg):\n", " example = tf.io.parse_example(\n", @@ -1968,7 +2146,9 @@ "metadata": { "id": "lGJhKDp_61A_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "img, txt = tf_parse(raw_example)\n", "print(txt.numpy())\n", 
@@ -1981,7 +2161,9 @@ "metadata": { "id": "8vFIUFzD5qIC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "decoded = dataset.map(tf_parse)\n", "decoded" @@ -1993,7 +2175,9 @@ "metadata": { "id": "vRYNYkEej7Ix" }, - "outputs": [], + "outputs": [ + + ], "source": [ "image_batch, text_batch = next(iter(decoded.batch(10)))\n", "image_batch.shape" @@ -2036,7 +2220,9 @@ "metadata": { "id": "kTQgo49skjuY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "range_ds = tf.data.Dataset.range(100000)" ] @@ -2067,7 +2253,9 @@ "metadata": { "id": "pSs9XqwQpvIN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batches = range_ds.batch(10, drop_remainder=True)\n", "\n", @@ -2090,7 +2278,9 @@ "metadata": { "id": "47XfwPhetkIN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def dense_1_step(batch):\n", " # Shift features and labels one step relative to each other.\n", @@ -2117,7 +2307,9 @@ "metadata": { "id": "FMmkQB1Gqo6x" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batches = range_ds.batch(15, drop_remainder=True)\n", "\n", @@ -2146,7 +2338,9 @@ "metadata": { "id": "11dF3wyFjk2J" }, - "outputs": [], + "outputs": [ + + ], "source": [ "feature_length = 10\n", "label_length = 3\n", @@ -2184,7 +2378,9 @@ "metadata": { "id": "ZEI2W_EBw2OX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "window_size = 5\n", "\n", @@ -2208,7 +2404,9 @@ "metadata": { "id": "SB8AI03mnF8u" }, - "outputs": [], + "outputs": [ + + ], "source": [ " for x in windows.flat_map(lambda x: x).take(30):\n", " print(x.numpy(), end=' ')" @@ -2229,7 +2427,9 @@ "metadata": { "id": "5j_y84rmyVQa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def sub_to_batch(sub):\n", " return sub.batch(window_size, drop_remainder=True)\n", @@ -2255,7 +2455,9 @@ "metadata": { "id": "LdFRv_0D4FqW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def make_window_dataset(ds, window_size=5, shift=1, stride=1):\n", " windows = ds.window(window_size, shift=shift, stride=stride)\n", @@ 
-2273,7 +2475,9 @@ "metadata": { "id": "-iVxcVfEdf5b" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)\n", "\n", @@ -2296,7 +2500,9 @@ "metadata": { "id": "F0fPfZkZw6j_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dense_labels_ds = ds.map(dense_1_step)\n", "\n", @@ -2323,7 +2529,9 @@ "metadata": { "id": "U2e8dxVUlFHO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "zip_path = tf.keras.utils.get_file(\n", " origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',\n", @@ -2339,7 +2547,9 @@ "metadata": { "id": "EhkkM4Wx75S_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "creditcard_ds = tf.data.experimental.make_csv_dataset(\n", " csv_path, batch_size=1024, label_name=\"Class\",\n", @@ -2362,7 +2572,9 @@ "metadata": { "id": "a8-Ss69XlzXD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def count(counts, batch):\n", " features, labels = batch\n", @@ -2384,7 +2596,9 @@ "metadata": { "id": "O1a3t_B4l_f6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "counts = creditcard_ds.take(10).reduce(\n", " initial_state={'class_0': 0, 'class_1': 0},\n", @@ -2432,7 +2646,9 @@ "metadata": { "id": "6YKfCPa-nioA" }, - "outputs": [], + "outputs": [ + + ], "source": [ "negative_ds = (\n", " creditcard_ds\n", @@ -2452,7 +2668,9 @@ "metadata": { "id": "8FNd3sQjzl9-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for features, label in positive_ds.batch(10).take(1):\n", " print(label.numpy())" @@ -2473,7 +2691,9 @@ "metadata": { "id": "vjdPVIFCngOb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "balanced_ds = tf.data.Dataset.sample_from_datasets(\n", " [negative_ds, positive_ds], [0.5, 0.5]).batch(10)" @@ -2494,7 +2714,9 @@ "metadata": { "id": "Myvkw21Rz-fH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for features, labels in balanced_ds.take(10):\n", " print(labels.numpy())" @@ -2530,7 +2752,9 @@ "metadata": { "id": 
"zC_Cuzw8lhI5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def class_func(features, label):\n", " return label" @@ -2553,7 +2777,9 @@ "metadata": { "id": "fY6VIhr3oGHG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "resample_ds = (\n", " creditcard_ds\n", @@ -2578,7 +2804,9 @@ "metadata": { "id": "KpfCGU6BiaZq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)" ] @@ -2598,7 +2826,9 @@ "metadata": { "id": "XGLYChBQwkDV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for features, labels in balanced_ds.take(10):\n", " print(labels.numpy())" @@ -2630,7 +2860,9 @@ "metadata": { "id": "3Fsm9wvKUsNC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "range_ds = tf.data.Dataset.range(20)\n", "\n", @@ -2682,7 +2914,9 @@ "metadata": { "id": "-bfjqm0hOfES" }, - "outputs": [], + "outputs": [ + + ], "source": [ "train, test = tf.keras.datasets.fashion_mnist.load_data()\n", "\n", @@ -2697,7 +2931,9 @@ "metadata": { "id": "wDhF3rGnbDdD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))\n", "fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)\n", @@ -2727,7 +2963,9 @@ "metadata": { "id": "9cu4kPzOHnlt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "model.fit(fmnist_train_ds, epochs=2)" ] @@ -2747,7 +2985,9 @@ "metadata": { "id": "Bp1BpzlyJinb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)" ] @@ -2767,7 +3007,9 @@ "metadata": { "id": "TnlRHlaL-XUI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loss, accuracy = model.evaluate(fmnist_train_ds)\n", "print(\"Loss :\", loss)\n", @@ -2789,7 +3031,9 @@ "metadata": { "id": "uVgamf9HKDon" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)\n", "print(\"Loss :\", loss)\n", @@ 
-2811,7 +3055,9 @@ "metadata": { "id": "343lXJ-pIqWD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)\n", "result = model.predict(predict_ds, steps = 10)\n", @@ -2833,7 +3079,9 @@ "metadata": { "id": "mgQJTPrT-2WF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "result = model.predict(fmnist_train_ds, steps = 10)\n", "print(result.shape)" @@ -2842,7 +3090,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "data.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/distributed_training.ipynb b/site/ko/guide/distributed_training.ipynb index 13ced9d548..6ee4584ae1 100644 --- a/site/ko/guide/distributed_training.ipynb +++ b/site/ko/guide/distributed_training.ipynb @@ -6,7 +6,7 @@ "id": "Tce3stUlHN0L" }, "source": [ - "##### Copyright 2018 The TensorFlow Authors.\n" + "##### Copyright 2018 The TensorFlow Authors.\n" ] }, { @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -37,7 +39,7 @@ "id": "MfBg1C5NB3X0" }, "source": [ - "# TensorFlow를 사용한 분산 훈련" + "# TensorFlow를 사용하여 분산 훈련하기" ] }, { @@ -47,10 +49,12 @@ }, "source": [ "\n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기Google Colab에서 실행하기GitHub에서 소스 보기노트북 다운로드하기 TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + "GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -60,14 +64,14 @@ "id": "xHxb-dlhMIzW" }, "source": [ - "## 개요\n", + "## Overview\n", "\n", "`tf.distribute.Strategy`는 여러 GPU, 여러 머신 또는 TPU에 훈련을 분산하는 TensorFlow API입니다. 이 API를 사용하면 코드를 최소한으로 변경하여 기존 모델 및 훈련 코드를 분산 처리할 수 있습니다.\n", "\n", "`tf.distribute.Strategy`는 다음과 같은 주요 목표를 염두에 두고 설계되었습니다.\n", "\n", "- 쉬운 사용법과 연구원, 머신러닝 엔지니어 등 여러 사용자 세그먼트 지원\n", - "- 즉시 뛰어난 성능을 제공\n", + "- 그대로 적용하기만 하면 좋은 성능을 보일 것.\n", "- 전략 간 용이한 전환\n", "\n", "Keras `Model.fit`과 같은 고수준 API를 사용하는 `tf.distribute.Strategy`뿐만 아니라 [사용자 정의 훈련 루프](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch)를 사용해도 훈련을 분산할 수 있습니다(일반적으로 TensorFlow를 사용하는 모든 계산 포함).\n", @@ -96,7 +100,9 @@ "metadata": { "id": "EVOZFbNgXghB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf" ] @@ -107,9 +113,9 @@ "id": "eQ1QESxxEbCh" }, "source": [ - "## 전략 유형\n", + "## 전략의 종류\n", "\n", - "`tf.distribute.Strategy`는 다양한 축에 따른 여러 가지 사용 사례를 다룹니다. 현재는 일부 조합만 지원되며 다른 조합은 향후에 추가될 예정입니다. 일부 축에 대한 내용은 다음과 같습니다.\n", + "`tf.distribute.Strategy`는 서로 다른 다양한 사용 형태를 아우르려고 합니다. 몇 가지 조합은 현재 지원하지만, 추후에 추가될 전략들도 있습니다. 이들 중 몇 가지를 살펴보겠습니다.\n", "\n", "- *동기식 및 비동기식 훈련:* 데이터 병렬 처리로 훈련을 분산하는 두 가지 일반적인 방법입니다. 동기식 훈련에서 모든 작업자는 동기화된 입력 데이터의 다른 슬라이스에 대해 훈련하고 각 단계에서 그래디언트를 집계합니다. 비동기식 훈련에서 모든 작업자는 입력 데이터에 대해 독립적으로 훈련하고 변수를 비동기적으로 업데이트합니다. 
일반적으로 동기식 훈련은 올 리듀스(all-reduce)를 통해 지원되고, 비동기식 훈련은 매개변수 서버 아키텍처를 통해 지원됩니다.\n", "- *하드웨어 플랫폼:* 하나의 머신에서 여러 GPU를 사용하도록 혹은 네트워크에서 여러 머신(각각 0개 이상의 GPU 포함)을 사용하도록 혹은 Cloud TPU를 사용하도록 훈련을 확장할 수 있습니다.\n", @@ -119,7 +125,7 @@ "훈련 API | `MirroredStrategy` | `TPUStrategy` | `MultiWorkerMirroredStrategy` | `CentralStorageStrategy` | `ParameterServerStrategy`\n", ":-- | :-- | :-- | :-- | :-- | :--\n", "**Keras `Model.fit`** | 지원됨 | 지원됨 | 지원됨 | 실험 기능으로 지원 | 실험 기능으로 지원\n", - "**사용자 정의 훈련 루프** | 지원됨 | 지원됨 | 지원됨 | 실험 기능으로 지원 | 실험 기능으로 지원\n", + "**사용자 정의 훈련 루프** | 지원됨 | 지원됨 | 지원됨 | 실험 기능으로 지원 | 실험 기능으로 지원\n", "**Estimator API** | 제한적인 지원 | 지원되지 않음 | 제한적인 지원 | 제한적인 지원 | 제한적인 지원\n", "\n", "참고: [실험 기능으로 지원](https://www.tensorflow.org/guide/versions#what_is_not_covered)은 API가 호환성을 보장하지 않음을 의미합니다.\n", @@ -139,7 +145,7 @@ "\n", "여러 기기에 변수의 변경사항을 전달하기 위해 효율적인 올 리듀스(all-reduce) 알고리즘을 사용합니다. 올 리듀스 알고리즘은 모든 장치의 텐서를 모은 다음 그 합을 구하고 다시 각 장비에 제공합니다. 이 통합된 알고리즘은 매우 효율적이기에 동기식의 오버헤드를 줄일 수 있습니다. 장치 간에 사용 가능한 통신 방법에 따라 다양한 올 리듀스 알고리즘과 구현이 있습니다. 기본적으로 NVIDIA Collective Communication Library([NCCL](https://developer.nvidia.com/nccl))를 올 리듀스 구현으로 사용합니다. 몇 가지 다른 옵션 중에서 선택하거나 직접 작성할 수 있습니다.\n", - "`MirroredStrategy`를 생성하는 가장 간단한 방법은 다음과 같습니다." + "`MirroredStrategy`를 만드는 가장 쉬운 방법은 다음과 같습니다."
] }, { @@ -148,7 +154,9 @@ "metadata": { "id": "9Z4FMAY9ADxK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy()" ] }, { @@ -170,7 +178,9 @@ "metadata": { "id": "nbGleskCACv_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy(devices=[\"/gpu:0\", \"/gpu:1\"])" ] }, { @@ -190,7 +200,9 @@ "metadata": { "id": "6-xIOIpgBItn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy(\n", " cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())" @@ -220,7 +232,7 @@ "tpu_strategy = tf.distribute.TPUStrategy(cluster_resolver)\n", "```\n", "\n", - "`TPUClusterResolver` 인스턴스는 TPU를 찾을 수 있도록 돕습니다. Colab에서는 이에 해당하는 인수를 지정할 필요가 없습니다.\n", + "`TPUClusterResolver` 인스턴스는 TPU를 찾을 수 있도록 돕습니다. Colab에서는 이에 해당하는 인수를 지정할 필요가 없습니다.\n", "\n", "Cloud TPU에서 사용하려는 경우:\n", "\n", @@ -238,7 +250,7 @@ "\n", "`tf.distribute.MultiWorkerMirroredStrategy`는 `MirroredStrategy`와 매우 유사합니다. 이는 잠재적으로 여러 GPU가 있는 여러 작업자에 걸쳐 동기식 분산 훈련을 구현합니다. `tf.distribute.MirroredStrategy`와 유사하게 모든 작업자에 걸쳐 각 기기의 모델에 있는 모든 변수의 사본을 생성합니다.\n", "\n", - "`MultiWorkerMirroredStrategy`를 생성하는 가장 간단한 방법은 다음과 같습니다." + "여기에 추가 성능 최적화도 구현하고 있습니다. 예를 들어 작은 텐서들의 여러 올 리듀스 작업을 큰 텐서들의 더 적은 올 리듀스 작업으로 바꾸는 정적 최적화 기능이 있습니다. 뿐만 아니라 플러그인 구조를 갖도록 설계하였습니다. 따라서 추후에는 사용자가 자신의 하드웨어에 더 최적화된 알고리즘을 사용할 수도 있을 것입니다. 참고로 이 수집 연산은 올 리듀스 외에 브로드캐스트(broadcast)나 전체 수집(all-gather)도 구현하고 있습니다. `MultiWorkerMirroredStrategy`를 생성하는 가장 간단한 방법은 다음과 같습니다."
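멀티 워커 훈련에서 각 워커 프로세스는 보통 `'TF_CONFIG'` 환경 변수로 클러스터 구성을 읽습니다. 아래는 가정에 기반한 최소한의 스케치이며, 호스트 이름과 포트(`worker0.example.com:12345` 등)는 이 가이드에 없는 예시 값입니다.

```python
import json
import os

# 가정에 기반한 스케치: 두 워커로 구성된 최소 'TF_CONFIG'입니다.
# 아래 호스트 이름과 포트는 예시 값(placeholder)일 뿐입니다.
tf_config = {
    "cluster": {
        "worker": ["worker0.example.com:12345", "worker1.example.com:12345"]
    },
    # 이 프로세스가 담당하는 역할: 0번 워커
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# 각 워커에서 이 환경 변수를 설정한 뒤 MultiWorkerMirroredStrategy를 생성합니다.
print(json.loads(os.environ["TF_CONFIG"])["task"]["index"])
```

각 워커는 동일한 `cluster` 정보를 공유하되 자신의 `task.index`만 다르게 설정합니다.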
] }, { @@ -247,7 +259,9 @@ "metadata": { "id": "m3a_6ebbEjre" }, - "outputs": [], + "outputs": [ + + ], "source": [ "strategy = tf.distribute.MultiWorkerMirroredStrategy()" ] }, { @@ -267,7 +281,9 @@ "metadata": { "id": "QGX_QAEtFQSv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "communication_options = tf.distribute.experimental.CommunicationOptions(\n", " implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)\n", @@ -315,7 +331,7 @@ "\n", "`ParameterServerStrategy`에 대해 자세히 알아보려면 [Keras Model.fit 및 사용자 정의 훈련 루프를 사용하는 매개변수 서버 훈련](../tutorials/distribute/parameter_server_training.ipynb) 튜토리얼을 확인하세요.\n", "\n", - "참고: `TFConfigClusterResolver`를 사용하는 경우 `'TF_CONFIG'` 환경 변수를 구성해야 합니다. 이는 `MultiWorkerMirroredStrategy`의 `'TF_CONFIG'`와 유사하지만 추가 주의 사항이 있습니다.\n", + "참고: `TFConfigClusterResolver`를 사용하는 경우 `'TF_CONFIG'` 환경 변수를 구성해야 합니다. 이는 `MultiWorkerMirroredStrategy`의 `'TF_CONFIG'`와 유사하지만 추가 주의 사항이 있습니다.\n", "\n", "TensorFlow 1에서 `ParameterServerStrategy`는 `tf.compat.v1.distribute.experimental.ParameterServerStrategy` 기호를 통해 Estimator에서만 사용할 수 있습니다." ] }, { @@ -326,7 +342,7 @@ "id": "E20tG21LFfv1" }, "source": [ - "참고: 이 전략은 현재 개발 중이므로 [`experimental`](https://www.tensorflow.org/guide/versions#what_is_not_covered)입니다." + "참고: 이 전략은 아직 개선 중이고 더 많은 경우에 쓸 수 있도록 만들고 있기 때문에, [`실험 기능`](https://www.tensorflow.org/guide/versions#what_is_not_covered)으로 지원됩니다. 따라서 나중에 API가 바뀔 수 있음에 유념하십시오."
] }, { @@ -348,7 +364,9 @@ "metadata": { "id": "rtjZOyaoMWrP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()" ] @@ -401,7 +419,9 @@ "metadata": { "id": "ibHleFOOmPn9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "default_strategy = tf.distribute.get_strategy()" ] @@ -423,7 +443,9 @@ "metadata": { "id": "WECeRzUdT6bU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# In optimizer or other library code\n", "# Get currently active strategy\n", @@ -446,7 +468,9 @@ "metadata": { "id": "O4Vmae5jmSE6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "if tf.config.list_physical_devices('GPU'):\n", " strategy = tf.distribute.MirroredStrategy()\n", @@ -498,7 +522,7 @@ "\n", "`tf.distribute.Strategy`는 Keras API 사양의 TensorFlow 구현인 tf.keras에 통합되었습니다. `tf.keras`는 모델을 빌드하고 훈련하는 고수준 API입니다. `tf.keras` 백엔드에 통합하면 [Model.fit을 사용](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit)하여 Keras 훈련 프레임워크로 작성된 교육을 원활하게 분산할 수 있습니다.\n", "\n", - "코드에서 변경해야 하는 사항은 다음과 같습니다.\n", + "다음은 한 개의 밀집 층(dense layer)을 가진 매우 간단한 케라스 모델에 분산 전략을 사용하는 코드의 일부입니다.\n", "\n", "1. 적절한 `tf.distribute.Strategy`의 인스턴스를 생성합니다.\n", "2. Keras 모델, 옵티마이저 및 메트릭 생성을 `strategy.scope` 안으로 이동합니다. 
따라서 모델의 `call()`, `train_step()` 및 `test_step()` 메소드에 있는 코드는 모두 가속기에서 배포되고 실행됩니다.\n", @@ -514,14 +538,17 @@ "metadata": { "id": "gbbcpzRnPZ6V" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy()\n", "\n", "with mirrored_strategy.scope():\n", - " model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])\n", - "\n", - "model.compile(loss='mse', optimizer='sgd')" + " model = tf.keras.Sequential([\n", + " tf.keras.layers.Dense(1, input_shape=(1,),\n", + " kernel_regularizer=tf.keras.regularizers.L2(1e-4))])\n", + " model.compile(loss='mse', optimizer='sgd')" ] }, { @@ -539,7 +566,9 @@ "metadata": { "id": "ZMmxEFRTEjH5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)\n", "model.fit(dataset, epochs=2)\n", @@ -561,7 +590,9 @@ "metadata": { "id": "Lqgd9SdxW5OW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import numpy as np\n", "\n", @@ -584,7 +615,9 @@ "metadata": { "id": "8ZmJqErtS4A1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mirrored_strategy.num_replicas_in_sync" ] @@ -595,7 +628,9 @@ "metadata": { "id": "quNNTytWdGBf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Compute a global batch size using a number of replicas.\n", "BATCH_SIZE_PER_REPLICA = 5\n", @@ -664,10 +699,14 @@ "metadata": { "id": "W-3Bn-CaiPKD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with mirrored_strategy.scope():\n", - " model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])\n", + " model = tf.keras.Sequential([\n", + " tf.keras.layers.Dense(1, input_shape=(1,),\n", + " kernel_regularizer=tf.keras.regularizers.L2(1e-4))])\n", " optimizer = tf.keras.optimizers.SGD()" ] }, @@ -686,7 +725,9 @@ "metadata": { "id": "94BkvkLInkKd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(\n", " global_batch_size)\n", @@ 
-708,22 +749,25 @@ "metadata": { "id": "NJxL5YrVniDe" }, - "outputs": [], + "outputs": [ + + ], "source": [ + "# Sets `reduction=NONE` to leave it to tf.nn.compute_average_loss() below.\n", "loss_object = tf.keras.losses.BinaryCrossentropy(\n", " from_logits=True,\n", " reduction=tf.keras.losses.Reduction.NONE)\n", "\n", - "def compute_loss(labels, predictions):\n", - " per_example_loss = loss_object(labels, predictions)\n", - " return tf.nn.compute_average_loss(per_example_loss, global_batch_size=global_batch_size)\n", - "\n", "def train_step(inputs):\n", " features, labels = inputs\n", "\n", " with tf.GradientTape() as tape:\n", " predictions = model(features, training=True)\n", - " loss = compute_loss(labels, predictions)\n", + " per_example_loss = loss_object(labels, predictions)\n", + " loss = tf.nn.compute_average_loss(per_example_loss)\n", + " model_losses = model.losses\n", + " if model_losses:\n", + " loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))\n", "\n", " gradients = tape.gradient(loss, model.trainable_variables)\n", " optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n", @@ -742,11 +786,17 @@ "id": "yRL5u_NLoTvq" }, "source": [ - "위의 코드에서 주의해야 할 몇 가지 사항은 다음과 같습니다.\n", + "위 코드에서 몇 가지 더 짚어볼 점이 있습니다.\n", "\n", - "1. `tf.nn.compute_average_loss`를 사용하여 손실을 계산했습니다. `tf.nn.compute_average_loss`는 예제당 손실을 합산하고 합산 값을 `global_batch_size`로 나눕니다. 이 계산의 결과는 각 복제본으로 그래디언트를 계산한 후 이를 **합산**하는 방식으로 복제본들의 결과를 집계하기 때문에 중요합니다.\n", - "2. 또한 `tf.distribute.Strategy.reduce` API를 사용하여 `tf.distribute.Strategy.run`에서 반환한 결과를 집계했습니다. `tf.distribute.Strategy.run`은 전략의 각 로컬 복제본으로부터 입수한 결과를 반환하며 이 결과를 사용하는 방법에는 여러 가지가 있습니다. 사용자는 이를 `reduce`하여 집계된 값을 얻을 수 있습니다. `tf.distribute.Strategy.experimental_local_results`를 수행하여 결과에 포함된 값 목록을 로컬 복제본당 하나씩 가져올 수도 있습니다.\n", - "3. 분산 전략 범위 내에서 `apply_gradients`를 호출하면 동작이 수정됩니다. 특히, 동기식 훈련 동안 각 병렬 인스턴스에 그래디언트를 적용하기 전에 그래디언트의 전체 복제본 합계를 수행합니다.\n" + "1. 예제별 예측 손실을 스칼라로 줄이기 위해 `tf.nn.compute_average_loss`를 사용했습니다. 
`tf.nn.compute_average_loss`는 예제별 손실을 합산하고 합산 값을 전체 배치 크기로 나눕니다. 이 계산의 결과는 나중에 각 복제본으로 그래디언트를 계산한 후 이를 **합산**하는 방식으로 복제본들의 결과를 집계하기 때문에 중요합니다.\n", + "\n", + "기본적으로 전체 배치 크기는 `tf.get_strategy().num_replicas_in_sync * tf.shape(per_example_loss)[0]`로 설정됩니다. 전체 배치 크기는 키워드 인수인 `global_batch_size=`로 명시적으로 지정할 수도 있습니다. 짧은 배치가 없는 경우 기본값은 위에서 정의한 `global_batch_size`와 함께 `tf.nn.compute_average_loss(..., global_batch_size=global_batch_size)`에 해당합니다. (짧은 배치에 대한 내용이나 짧은 배치를 방지하거나 처리하는 방법에 대한 자세한 내용은 [사용자 정의 훈련 튜토리얼](../tutorials/distribute/custom_training.ipynb)을 참조하세요).\n", + "\n", + "1. `tf.nn.scale_regularization_loss`를 사용하여 `Model` 객체에 등록된 정규화 손실(있는 경우)을 `1/num_replicas_in_sync`에 따라 스케일링했습니다. 입력에 따라 달라지는 정규화 손실의 경우, 사용자 정의 훈련 루프가 아닌 모델링 코드에서 레플리카별(!) 배치 크기에 대한 평균화를 수행하므로 모델링 코드가 복제에 구애받지 않으면서 훈련 루프를 정규화 손실 계산 방식에 구애받지 않고 유지할 수 있습니다.\n", + "\n", + "2. 분산 전략 범위 내에서 `apply_gradients`를 호출하면 동작이 수정됩니다. 특히, 동기식 훈련 동안 각 병렬 인스턴스에 그래디언트를 적용하기 전에 그래디언트의 전체 복제본 합계를 수행합니다.\n", + "\n", + "3. 또한 `tf.distribute.Strategy.reduce` API를 사용하여 `tf.distribute.Strategy.run`에서 반환한 결과를 집계하여 보고에 사용했습니다. `tf.distribute.Strategy.run`은 전략의 각 로컬 복제본으로부터 입수한 결과를 반환하며, 이 결과를 사용하는 방법에는 여러 가지가 있습니다. 사용자는 이를 `reduce`하여 집계된 값을 얻을 수 있습니다. 또한 `tf.distribute.Strategy.experimental_local_results`를 수행하여 결과에 포함된 값 목록을 로컬 복제본당 하나씩 가져올 수도 있습니다.\n" ] }, { @@ -764,7 +814,9 @@ "metadata": { "id": "Egq9eufToRf6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for dist_inputs in dist_dataset:\n", " print(distributed_train_step(dist_inputs))" ] @@ -787,7 +839,9 @@ "metadata": { "id": "e5BEvR0-LJAc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "iterator = iter(dist_dataset)\n", "for _ in range(10):\n", @@ -821,9 +875,9 @@ "\n", "1. [튜토리얼](../tutorials/distribute/custom_training.ipynb): 사용자 정의 훈련 루프 및 `MirroredStrategy`를 사용하는 훈련입니다.\n", "2. [튜토리얼](../tutorials/distribute/multi_worker_with_ctl.ipynb): 사용자 정의 훈련 루프 및 `MultiWorkerMirroredStrategy`를 사용하는 훈련입니다.\n", - "3. 
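위에서 설명한 기본 동작은 분산 전략 없이도 간단히 확인해 볼 수 있습니다. 아래는 가정(단일 장치, 즉 `num_replicas_in_sync == 1`)에 기반한 스케치입니다.

```python
import tensorflow as tf

# 가정: 분산 전략 없이 실행하면 num_replicas_in_sync == 1 이므로
# tf.nn.compute_average_loss는 단순 평균(합 / 전체 배치 크기)이 됩니다.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])

avg_default = tf.nn.compute_average_loss(per_example_loss)
avg_explicit = tf.nn.compute_average_loss(per_example_loss, global_batch_size=4)

# 정규화 손실은 1/num_replicas_in_sync 로 스케일링됩니다(여기서는 1).
reg = tf.nn.scale_regularization_loss(tf.constant(0.4))

print(avg_default.numpy(), avg_explicit.numpy(), reg.numpy())
```

복제본이 여러 개인 전략 범위 안에서는 같은 호출이 전체(global) 배치 크기로 나누어 스케일링됩니다.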
[가이드](tpu.ipynb): `TPUStrategy`를 사용하는 사용자 정의 훈련 루프 예제를 포함합니다.\n", + "3. `MirroredStrategy`와 `TPUStrategy`를 사용하여 훈련하는 [BERT](https://github.com/tensorflow/models/blob/master/official/bert/run_classifier.py) 예제. 이 예제는 분산 훈련 도중 체크포인트로부터 불러오거나 주기적인 체크포인트를 만드는 방법을 이해하는 데 매우 유용합니다.\n", "4. [튜토리얼](../tutorials/distribute/parameter_server_training.ipynb): 사용자 정의 훈련 루프와 `ParameterServerStrategy`를 사용하는 매개변수 서버 훈련입니다.\n", - "5. 다양한 전략을 사용하여 구현한 최신 모델 컬렉션이 포함된 TensorFlow Model Garden [리포지토리](https://github.com/tensorflow/models/tree/master/official)입니다.\n" + "5. `MirroredStrategy`를 사용한 [Transformer](https://github.com/tensorflow/models/blob/master/official/nlp/transformer/transformer_main.py) 훈련.\n" ] }, { @@ -882,7 +936,7 @@ "id": "GXIbqSW-sFVg" }, "source": [ - "## 다음 단계\n", + "## 다음은?\n", "\n", "`tf.distribute.Strategy`는 현재 개발 중입니다. 사용해 보고 [GitHub 문제](https://github.com/tensorflow/tensorflow/issues/new)를 사용하여 피드백을 제공해주세요." ] diff --git a/site/ko/guide/dtensor_overview.ipynb b/site/ko/guide/dtensor_overview.ipynb index 2e19985bd5..cd0307062d 100644 --- a/site/ko/guide/dtensor_overview.ipynb +++ b/site/ko/guide/dtensor_overview.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -88,7 +90,9 @@ "metadata": { "id": "OKaPw8vwwZAC" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install --quiet --upgrade --pre tensorflow" ] @@ -110,7 +114,9 @@ "metadata": { "id": "Q92lo0zjwej8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "from tensorflow.experimental import dtensor\n", @@ -167,7 +173,7 @@ "1차원 `Mesh`에서는 모든 장치가 단일 메시 차원으로 목록을 형성합니다. 
다음 예제에서는 6개 장치를 사용하는 `'x'` 메시 차원에 따라 `dtensor.create_mesh`를 사용하여 메시를 생성합니다.\n", "\n", "\n", - "\"6개의 \n" + "\"6개의 \n" ] }, { @@ -176,7 +182,9 @@ "metadata": { "id": "QLH5fgdBmA58" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh_1d = dtensor.create_mesh([('x', 6)], devices=DEVICES)\n", "print(mesh_1d)" @@ -199,7 +207,9 @@ "metadata": { "id": "op6TmKUQE-sZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh_2d = dtensor.create_mesh([('x', 3), ('y', 2)], devices=DEVICES)\n", "print(mesh_2d)" @@ -248,7 +258,9 @@ "metadata": { "id": "-a3EnmZag6x1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh_1d)" ] @@ -262,7 +274,7 @@ "동일한 텐서를 사용하고 `Layout(['unsharded', 'x'])` 레이아웃을 메시하면 6개의 장치에서 텐서의 두 번째 축이 분할될 수 있습니다.\n", "\n", "\n", - "\"1순위 " + "\"1순위 " ] }, { @@ -271,7 +283,9 @@ "metadata": { "id": "7BgqL0jUvV5a" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh_1d)" ] @@ -291,7 +305,8 @@ "id": "Eyp_qOSyvieo" }, "source": [ - "\"메시 \n" + "\n", + "\"메시 \n" ] }, { @@ -300,7 +315,9 @@ "metadata": { "id": "p8OrehEuhPbS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layout = dtensor.Layout(['y', 'x'], mesh_2d)" ] @@ -314,7 +331,7 @@ "동일한 `mesh_2d`의 경우 레이아웃 `Layout([\"x\", dtensor.UNSHARDED], mesh_2d)`은 2순위 `Tensor`이며, 이는 `\"y\"`에 복제되고 첫 번째 축이 메시 차원 `x`에서 분할됩니다.\n", "\n", "\n", - "\"메시 \n" + "\"메시 \n" ] }, { @@ -323,7 +340,9 @@ "metadata": { "id": "IkWe6mVl7uRb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "layout = dtensor.Layout([\"x\", dtensor.UNSHARDED], mesh_2d)" ] @@ -367,7 +386,9 @@ "metadata": { "id": "s6aws-b8dN9L" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def dtensor_from_array(arr, layout, shape=None, dtype=None):\n", " \"\"\"Convert a DTensor from something that looks like an array or Tensor.\n", @@ -410,7 +431,9 @@ "metadata": { "id": "mQu_nScGUvYH" }, - "outputs": [], + "outputs": [ + + 
], "source": [ "mesh = dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED], mesh)\n", @@ -440,7 +463,9 @@ "metadata": { "id": "dCSFyaAjmzGu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(dtensor.fetch_layout(my_first_dtensor))\n", "assert layout == dtensor.fetch_layout(my_first_dtensor)" @@ -467,7 +492,9 @@ "metadata": { "id": "BGbjqVAOnXMk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for component_tensor in dtensor.unpack(my_first_dtensor):\n", " print(\"Device:\", component_tensor.device, \",\", component_tensor)" @@ -499,7 +526,9 @@ "metadata": { "id": "9lT-6qQwxOgf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "packed_dtensor = dtensor.pack(\n", " [[0, 1], [0, 1], [0, 1],\n", @@ -528,7 +557,9 @@ "metadata": { "id": "KWb9Ae0VJ-Rc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)" ] @@ -553,7 +584,9 @@ "metadata": { "id": "ax_ZHouJp1MX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fully_sharded_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -583,7 +616,9 @@ "metadata": { "id": "xmyC6H6Ec90P" }, - "outputs": [], + "outputs": [ + + ], "source": [ "fully_replicated_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -620,7 +655,9 @@ "metadata": { "id": "DygnbkQ1Lu42" }, - "outputs": [], + "outputs": [ + + ], "source": [ "hybrid_sharded_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -658,7 +695,9 @@ "metadata": { "id": "hNdwmnL0jAXS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(fully_replicated_dtensor.numpy())\n", "\n", @@ -734,7 +773,9 @@ "metadata": { "id": "TiZf2J9JNd2D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)\n", @@ -773,7 +814,9 @@ "metadata": { "id": 
"EyVAUvMePbms" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "a_layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh)\n", @@ -805,7 +848,9 @@ "metadata": { "id": "0PYqe0neiOpR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "\n", @@ -843,7 +888,9 @@ "metadata": { "id": "J0jo_8NPtJiO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "help(dtensor.call_with_layout)" ] @@ -876,7 +923,9 @@ "metadata": { "id": "G1CuKYSFtFeM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "help(tf.ones)" ] @@ -887,7 +936,9 @@ "metadata": { "id": "2m_EAwy-ozOh" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "ones = dtensor.call_with_layout(tf.ones, dtensor.Layout(['x', 'y'], mesh), shape=(6, 4))\n", @@ -911,7 +962,9 @@ "metadata": { "id": "H8BQSTRFtCih" }, - "outputs": [], + "outputs": [ + + ], "source": [ "help(tf.random.stateless_normal)" ] @@ -922,7 +975,9 @@ "metadata": { "id": "TvP81eYopSPm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ones = dtensor.call_with_layout(\n", " tf.function(tf.random.stateless_normal),\n", @@ -947,7 +1002,9 @@ "metadata": { "id": "LbAtKrSkpOaq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ones = dtensor.call_with_layout(\n", " tf.function(tf.ones),\n", @@ -975,7 +1032,9 @@ "metadata": { "id": "awRPuR26P0Sc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mesh = dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)\n", @@ -1007,7 +1066,9 @@ "metadata": { "id": "adxFw9wJpqQQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "a = dtensor.call_with_layout(tf.ones, layout=layout, shape=(64, 32))\n", "b = v + a # add DVariable and DTensor\n", @@ -1029,7 +1090,9 @@ "metadata": { "id": "oYwfiyw5P94U" }, - 
"outputs": [], + "outputs": [ + + ], "source": [ "v.assign(a) # assign a DTensor to a DVariable\n", "print(a)" @@ -1050,7 +1113,9 @@ "metadata": { "id": "3pckUugYP_r-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# variable's layout is immutable.\n", "another_mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", @@ -1077,7 +1142,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "dtensor_overview.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/function.ipynb b/site/ko/guide/function.ipynb index 22c2c56b92..7e61a9ecf5 100644 --- a/site/ko/guide/function.ipynb +++ b/site/ko/guide/function.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "3jTMb1dySr3V" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -40,10 +42,10 @@ "# tf.function으로 성능 향상하기\n", "\n", "\n", - " \n", - " \n", + " \n", + " \n", " \n", - " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기 TensorFlow.org 에서 보기 구글 코랩(Google Colab)에서 실행하기GitHub에서 소스 보기노트북 다운로드하기노트북 다운론드하기
" ] }, @@ -53,17 +55,17 @@ "id": "J122XQYG7W6w" }, "source": [ - "텐서플로 2에서는 즉시 실행(eager execution)이 기본적으로 활성화되어 있습니다. 직관적이고 유연한 사용자 인터페이스를 제공하지만 성능과 배포에 비용이 더 듭니다(하나의 연산을 실행할 때는 훨씬 간단하고 빠릅니다).\n", + "Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도 불구하고 [공식 영문 문서](eager.ipynb)의 내용과 일치하지 않을 수 있습니다. 이 번역에 개선할 부분이 있다면 tensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다. 문서 번역이나 리뷰에 참여하려면 docs-ko@tensorflow.org로 메일을 보내주시기 바랍니다.\n", "\n", - "성능을 높이고 이식성이 좋은 모델을 만들려면 `tf.function`을 사용해 그래프로 변환하세요. 하지만 조심해야 할 점이 있습니다. `tf.function`은 무조건 속도를 높여주는 마법의 은총알이 아닙니다!\n", + "`tf.function`을 사용하여 프로그램을 그래프로 변환할 수 있습니다. 이 코드는 Python 코드를 사용하여 Python 독립적인 데이터 흐름 그래프를 생성하는 변환 도구입니다. 이렇게 하면 성능이 좋고 이식 가능한 모델을 만들 수 있으며 이 경우 `SavedModel`을 사용해야 합니다.\n", "\n", "이 가이드는 `tf.function`의 이면에 있는 개념을 이해하고 효과적으로 사용할 수 있도록 돕습니다.\n", "\n", "여기서 배울 주요 내용과 권고 사항은 다음과 같습니다:\n", "\n", - "- 즉시 실행 모드에서 디버깅한 다음 `@tf.function`으로 데코레이팅하세요.\n", + "- Eager 모드에서 디버깅한 다음 `@tf.function`으로 데코레이팅하세요.\n", "- 객체 변경(object mutation)이나 리스트 요소 추가 같은 Python의 부수 효과에 의존하지 마세요.\n", - "- `tf.function`은 텐서플로 연산과 가장 잘 동작합니다: 넘파이와 파이썬 호출은 상수로 바뀝니다.\n" + "- `tf.function`은 TensorFlow ops와 가장 잘 동작합니다. NumPy와 Python 호출은 상수로 바뀝니다.\n" ] }, { @@ -72,7 +74,7 @@ "id": "SjvqpgepHJPd" }, "source": [ - "## 설정" + "## 설정하기" ] }, { @@ -81,10 +83,10 @@ "metadata": { "id": "otIdN1TS8N7S" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "# Update TensorFlow, as this notebook requires version 2.9 or later\n", - "!pip install -q -U tensorflow>=2.9.0\n", "import tensorflow as tf" ] }, @@ -94,7 +96,7 @@ "id": "I0xDjO4SHLUD" }, "source": [ - "에러 출력을 위한 헬퍼 함수를 정의합니다:" + "발생할 수 있는 오류의 종류를 보여주는 헬퍼 함수를 정의합니다." 
] }, { @@ -103,7 +105,9 @@ "metadata": { "id": "D25apou9IOXa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import traceback\n", "import contextlib\n", @@ -149,7 +153,9 @@ "metadata": { "id": "SbtT1-Wm70F2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function # The decorator converts `add` into a `Function`.\n", "def add(a, b):\n", @@ -164,7 +170,9 @@ "metadata": { "id": "uP-zUelB8DbX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "v = tf.Variable(1.0)\n", "with tf.GradientTape() as tape:\n", @@ -178,7 +186,7 @@ "id": "ocWZvqrmHnmX" }, "source": [ - "다른 함수 내부에 사용할 수 있습니다." + "다른 `Function` 내부에서 `Function`을 사용할 수 있습니다." ] }, { @@ -187,7 +195,9 @@ "metadata": { "id": "l5qRjdbBVdU6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def dense_layer(x, w, b):\n", @@ -211,7 +221,9 @@ "metadata": { "id": "zuXt4wRysI03" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import timeit\n", "conv_layer = tf.keras.layers.Conv2D(100, 3)\n", @@ -275,7 +287,9 @@ "metadata": { "id": "kojmJrgq8U9v" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def double(a):\n", @@ -305,7 +319,9 @@ "metadata": { "id": "hFccbWFRrsBp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# This doesn't print 'Tracing with ...'\n", "print(double(tf.constant(\"b\")))" @@ -317,7 +333,7 @@ "id": "fgIO_XEzcB9o" }, "source": [ - "`pretty_printed_concrete_signatures()`를 사용하여 사용 가능한 모든 추적을 볼 수 있습니다." + "`pretty_printed_concrete_signatures()` 를 사용하여 사용 가능한 모든 추적을 볼 수 있습니다." 
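새로운 입력 유형이 들어올 때만 추적이 일어난다는 점은 Python 부수 효과(리스트 추가)가 추적 중에만 실행된다는 사실로 확인해 볼 수 있습니다. 아래는 이를 보여주는 간단한 스케치입니다.

```python
import tensorflow as tf

traces = []  # Python 부수 효과는 추적(tracing) 중에만 실행됩니다.

@tf.function
def double(a):
  traces.append(a.dtype)  # 새로 추적될 때만 한 번 실행됩니다.
  return a + a

double(tf.constant(1))    # int32: 새 추적
double(tf.constant(2))    # 같은 TraceType: 기존 추적 재사용
double(tf.constant(1.0))  # float32: 새 추적
print(len(traces))
```

`dtype`이 다른 두 번째 유형이 들어올 때만 추적 횟수가 늘어나는 것을 볼 수 있습니다.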
] }, { @@ -326,7 +342,9 @@ "metadata": { "id": "IiQc4IKAb-NX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(double.pretty_printed_concrete_signatures())" ] @@ -342,7 +360,7 @@ "- `tf.Graph`는 언어에 구애받지 않고 TensorFlow 계산을 이식 가능하게 원시 형태로 표현한 것입니다.\n", "- `ConcreteFunction`은 `tf.Graph`를 래핑합니다.\n", "- `Function`은 `ConcreteFunction`의 캐시를 관리하고 입력에 적합한 캐시를 선택합니다.\n", - "- `tf.function`은 Python 함수를 래핑하여 `Function` 객체를 반환합니다.\n", + "- `tf.function`은 Python 함수를 래핑하여 `Function` 개체를 반환합니다.\n", "- **추적**(tracing)은 `tf.Graph`를 생성하고 **추적**(trace)이라고도 하는 `ConcreteFunction`에서 이를 래핑합니다.\n" ] }, @@ -361,12 +379,23 @@ "`TraceType`은 다음과 같이 입력 인수에서 결정됩니다.\n", "\n", "- `Tensor`의 경우 유형이 `Tensor`의 `dtype` 및 `shape`에 의해 매개변수화됩니다. 순위 형상은 순위가 지정되지 않은 형상의 하위 유형입니다. 고정 차원은 알 수 없는 차원의 하위 유형입니다.\n", + "\n", "- `Variable`의 경우 유형이 `Tensor`와 유사하지만 제어 종속성을 올바르게 연결하는 데 필요한 변수의 고유 리소스 ID도 포함합니다.\n", + "\n", "- Python 기본 값의 경우 유형은 **값** 자체에 해당합니다. 예를 들어 `3` 값의 `TraceType`은 `int`가 아니라 `LiteralTraceType<3>`입니다.\n", + "\n", "- `list` 및 `tuple` 등과 같은 순서가 유지되는 Python 컨테이너의 경우 유형이 요소 유형에 따라 매개변수화됩니다. 예를 들어 `[1, 2]`의 유형은 `ListTraceType, LiteralTraceType<2>>`이고 `[2, 1]`의 유형은 앞선 유형과는 달리 `ListTraceType, LiteralTraceType<1>>`입니다.\n", + "\n", "- `dict`와 같은 Python 매핑의 경우 유형은 동일한 키에서 실제 값 대신의 값 유형으로의 매핑이기도 합니다. 예를 들어 `{1: 2, 3: 4}`의 유형은 `MappingTraceType<>>, >>>`입니다. 순서가 정해져 있는 컨테이너와 달리 `{1: 2, 3: 4}` 및 `{3: 4, 1: 2}`는 동일한 유형을 갖습니다.\n", + "\n", "- `__tf_tracing_type__` 메서드를 구현하는 Python 객체의 경우 해당 메소드가 반환하는 모든 항목이 유형으로 지정됩니다.\n", - "- 다른 Python 개체의 경우 유형은 매칭을 위해 객체의 Python 동등성 및 해싱을 사용하는 제네릭 `TraceType`입니다(참고: 객체에 대한 [weakref](https://docs.python.org/3/library/weakref.html)에 의존하므로 객체가 범위 내에 있거나 삭제되지 않은 경우에만 작동합니다).\n" + "\n", + "- 다른 Python 객체의 경우, 유형은 일반 `TraceType`이며, 이에 해당하는 절차는 다음과 같습니다.\n", + "\n", + " - 먼저 해당 객체가 이전 추적에서 사용한 객체와 동일한 객체인지 확인합니다(파이썬 `id()` 또는 `is` 사용). 
객체가 변경된 경우에도 일치하는지 확인하므로, Python 객체를 `tf.function` 인수로 사용하는 경우 *immutable* 객체를 사용하는 것이 가장 좋습니다.\n", + " - 다음으로 객체가 이전 추적에서 사용한 객체와 동일한지 확인합니다(Python `==` 사용).\n", + "\n", + " 이 절차는 객체에 대한 [weakref](https://docs.python.org/3/library/weakref.html)만 유지하므로 객체가 범위 내에 있고 삭제되지 않은 경우에만 작동합니다).\n" ] }, { @@ -406,7 +435,9 @@ "metadata": { "id": "_BDMIRmu1RGB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))\n", "def next_collatz(x):\n", @@ -415,11 +446,11 @@ "\n", "print(next_collatz(tf.constant([1, 2])))\n", "# You specified a 1-D tensor in the input signature, so this should fail.\n", - "with assert_raises(ValueError):\n", + "with assert_raises(TypeError):\n", " next_collatz(tf.constant([[1, 2], [3, 4]]))\n", "\n", "# You specified an int32 dtype in the input signature, so this should fail.\n", - "with assert_raises(ValueError):\n", + "with assert_raises(TypeError):\n", " next_collatz(tf.constant([1.0, 2.0]))\n" ] }, @@ -440,7 +471,9 @@ "metadata": { "id": "4Viun7dh7PmF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))\n", "def g(x):\n", @@ -471,7 +504,9 @@ "metadata": { "id": "uydzR5JYUU8H" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def train_one_step():\n", " pass\n", @@ -508,7 +543,9 @@ "metadata": { "id": "uHp4ousu4DdN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def f():\n", " print('Tracing!')\n", @@ -537,7 +574,9 @@ "metadata": { "id": "gZkIh7UaIKc6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def get_mixed_flavor(fruit_a, fruit_b):\n", @@ -553,8 +592,8 @@ " flavor = tf.constant([3, 4])\n", "\n", "# As described in the above rules, a generic TraceType for `Apple` and `Mango`\n", - "# is generated (and a corresponding ConcreteFunction is traced) but it fails to \n", - "# match the second function call since the first pair of Apple() and Mango() \n", + 
"# is generated (and a corresponding ConcreteFunction is traced) but it fails to\n", + "# match the second function call since the first pair of Apple() and Mango()\n", "# have gone out out of scope by then and deleted.\n", "get_mixed_flavor(Apple(), Mango()) # Traces a new concrete function\n", "get_mixed_flavor(Apple(), Mango()) # Traces a new concrete function again\n", @@ -565,26 +604,33 @@ "# can have significant performance benefits.\n", "\n", "class FruitTraceType(tf.types.experimental.TraceType):\n", - " def __init__(self, fruit_type):\n", - " self.fruit_type = fruit_type\n", + " def __init__(self, fruit):\n", + " self.fruit_type = type(fruit)\n", + " self.fruit_value = fruit\n", "\n", " def is_subtype_of(self, other):\n", + " # True if self subtypes `other` and `other`'s type matches FruitTraceType.\n", " return (type(other) is FruitTraceType and\n", " self.fruit_type is other.fruit_type)\n", "\n", " def most_specific_common_supertype(self, others):\n", + " # `self` is the specific common supertype if all input types match it.\n", " return self if all(self == other for other in others) else None\n", "\n", + " def placeholder_value(self, placeholder_context=None):\n", + " # Use the fruit itself instead of the type for correct tracing.\n", + " return self.fruit_value\n", + "\n", " def __eq__(self, other):\n", " return type(other) is FruitTraceType and self.fruit_type == other.fruit_type\n", - " \n", + "\n", " def __hash__(self):\n", " return hash(self.fruit_type)\n", "\n", "class FruitWithTraceType:\n", "\n", " def __tf_tracing_type__(self, context):\n", - " return FruitTraceType(type(self))\n", + " return FruitTraceType(self)\n", "\n", "class AppleWithTraceType(FruitWithTraceType):\n", " flavor = tf.constant([1, 2])\n", @@ -605,7 +651,7 @@ "source": [ "### 구체적인 함수 얻기\n", "\n", - "`get_concrete_function` 메서드를 사용해 트레이싱된 특정 함수를 얻을 수 있습니다.\n" + "함수가 추적될 때마다 새로운 구체적인 함수가 생성됩니다. 
`get_concrete_function` 메서드를 사용하여 구체적인 함수를 직접 얻을 수 있습니다.\n" ] }, { @@ -614,7 +660,9 @@ "metadata": { "id": "mHg2CGtPQ3Hz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Obtaining concrete trace\")\n", "double_strings = double.get_concrete_function(tf.constant(\"a\"))\n", @@ -629,7 +677,9 @@ "metadata": { "id": "6IVZ-NVf9vsx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# You can also call get_concrete_function on an InputSpec\n", "double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], dtype=tf.string))\n", @@ -651,7 +701,9 @@ "metadata": { "id": "o3-JbkIk41r8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(double_strings)" ] @@ -671,7 +723,9 @@ "metadata": { "id": "nzbrqFABe0zG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(double_strings.structured_input_signature)\n", "print(double_strings.structured_outputs)" @@ -692,7 +746,9 @@ "metadata": { "id": "G5eeTK-T5KYj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with assert_raises(tf.errors.InvalidArgumentError):\n", " double_strings(tf.constant(1))" @@ -713,7 +769,9 @@ "metadata": { "id": "U_QyPSGoaC35" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def pow(a, b):\n", @@ -729,7 +787,9 @@ "metadata": { "id": "E76vIDhQbXIb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "assert square(tf.constant(10.0)) == 100\n", "\n", @@ -754,7 +814,9 @@ "metadata": { "id": "5UENeGHfaX8g" }, - "outputs": [], + "outputs": [ + + ], "source": [ "graph = double_strings.graph\n", "for node in graph.as_graph_def().node:\n", @@ -774,8 +836,8 @@ "다음은 `tf.function` 내에서만 나타나는 문제를 추적할 때 사용할 수 있는 몇 가지 팁입니다.\n", "\n", "- Python `print` 함수는 추적(tracing)하는 동안에만 호출되므로 함수가 (재)추적될 때 추적하는데 도움이 됩니다.\n", - "- `tf.print` 함수는 언제나 실행되므로 실행하는 동안 중간 값을 추적할 때 도움이 됩니다.\n", - "- `tf.debugging.enable_check_numerics`을 사용하면 쉽게 NaN과 Inf가 발생되는 곳을 추적할 수 있습니다.\n", + "- `tf.print`는 언제나 실행되므로 실행하는 동안 중간 값을 추적할 때 도움이 됩니다.\n", + "- 
`tf.debugging.enable_check_numerics`을 사용하면 쉽게 NaN과 Inf가 생성되는 곳을 추적할 수 있습니다.\n", "- `pdb`([Python 디버거](https://docs.python.org/3/library/pdb.html))는 추적 중에 어떤 일이 일어나는지 이해하는데 도움이 될 수 있습니다(주의: `pdb`는 사용자를 AutoGraph로 변환된 소스 코드로 이동시킵니다)." ] }, @@ -789,7 +851,7 @@ "\n", "AutoGraph는 `tf.function`안에 기본으로 활성화되어 있는 라이브러리이며 Python의 Eager 코드를 그래프 호환 TensorFlow ops로 변환합니다. 여기에는 `if`, `for`, `while`과 같은 제어 흐름이 포함됩니다.\n", "\n", - "`tf.cond`와 `tf.while_loop` 같은 텐서플로 연산을 여전히 사용할 수 있지만 파이썬으로 제어 흐름을 작성하는 것이 만들기도 이해하기도 쉽습니다." + "`tf.cond`와 `tf.while_loop` 같은 TensorFlow ops를 여전히 사용할 수 있지만 Python으로 제어 흐름을 작성하는 것이 만들기도 이해하기도 쉽습니다." ] }, { @@ -798,7 +860,9 @@ "metadata": { "id": "yCQTtTPTW3WF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# A simple loop\n", "\n", @@ -818,7 +882,7 @@ "id": "KxwJ8znPI0Cg" }, "source": [ - "관심있다면 오토그래프가 생성한 코드를 확인해 볼 수 있습니다." + "관심있다면 AutoGraph가 생성한 코드를 확인해 볼 수 있습니다." ] }, { @@ -827,7 +891,9 @@ "metadata": { "id": "jlQD1ffRXJhl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.autograph.to_code(f.python_function))" ] @@ -853,7 +919,9 @@ "metadata": { "id": "BOQl8PMq2Sf3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def fizzbuzz(n):\n", @@ -891,13 +959,13 @@ "id": "yho4J0a0ZkQS" }, "source": [ - "### 반복문\n", + "### 루프\n", "\n", "Autograph는 일부 `for`와 `while` 문장을 `tf.while_loop`와 같은 동등한 TensorFlow 루프 ops로 바꿉니다. 변환되지 않으면 Python 루프로 `for`와 `while` 루프가 실행됩니다.\n", "\n", "이런 대체는 다음과 같은 경우에 일어납니다:\n", "\n", - "- `for x in y`: `y`가 텐서이면 `tf.while_loop`로 변환됩니다. 특별히 `y`가 `tf.data.Dataset`인 경우에는 `tf.data.Dataset` 연산의 조합이 생성됩니다.\n", + "- `for x in y`: `y`가 텐서이면 `tf.while_loop`로 변환됩니다. 
특별히 `y`가 `tf.data.Dataset`인 경우에는 `tf.data.Dataset` ops의 조합이 생성됩니다.\n", "- `while `: ``이 텐서라면 `tf.while_loop`로 변환됩니다.\n", "\n", "추적하는 동안 Python 루프가 실행되므로 매 루프 반복 때마다 `tf.Graph`에 추가적인 ops가 추가됩니다.\n", @@ -926,7 +994,9 @@ "metadata": { "id": "WGZ19LspbZ27" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def measure_graph_size(f, *args):\n", " g = f.get_concrete_function(*args).graph\n", @@ -957,7 +1027,7 @@ "id": "JeD2U-yrbfVb" }, "source": [ - "데이터 세트에서 Python/NumPy 데이터를 래핑할 때 `tf.data.Dataset.from_generator`와 ` tf.data.Dataset.from_tensors`에 유의해야 합니다. 전자는 데이터를 Python에 유지하고 성능에 영향을 미칠 수 있는 `tf.py_function`을 통해 가져오는 반면 후자는 데이터 복사본을 그래프에서 하나의 큰 `tf.constant()` 노드로 묶으며 이는 메모리에 영향을 미칠 수 있습니다.\n", + "데이터 세트에서 Python/NumPy 데이터를 래핑할 때 `tf.data.Dataset.from_generator`와 ` tf.data.Dataset.from_tensor_slices`에 유의해야 합니다. 전자는 데이터를 Python에 보관하고 성능에 영향을 미칠 수 있는 `tf.py_function`을 통해 가져오는 반면 후자는 데이터 복사본을 그래프에서 하나의 큰 `tf.constant()` 노드로 묶으며 이는 메모리에 영향을 미칠 수 있습니다.\n", "\n", "`TFRecordDataset`, `CsvDataset` 등을 통해 파일에서 데이터를 읽는 것은 데이터를 소비하는 가장 효과적인 방법이며 이렇게 할 경우 Python을 사용하지 않아도 TensorFlow 자체적으로 데이터의 비동기 로드 및 프리페치를 관리할 수 있습니다. ​자세한 내용은 [`tf.data`: TensorFlow 입력 파이프라인 빌드](../../guide/data) 가이드를 참조하세요" ] @@ -968,7 +1038,7 @@ "id": "hyksHW9TCukR" }, "source": [ - "#### 반복하면서 값을 누적하기\n", + "#### 루핑하면서 값을 누적하기\n", "\n", "반복하면서 중간 값을 누적하는 패턴은 자주 있습니다. 보통 Python 목록이나 사전에 입력 항목을 추가하는 방식을 사용합니다. 하지만 Python 부수 효과 때문에 동적으로 펼쳐지는 반복에서는 기대대로 동작하지 않습니다. 대신 `tf.TensorArray`를 사용해 동적으로 펼쳐지는 반복에서 결과를 누적하세요." 
] @@ -979,7 +1049,9 @@ "metadata": { "id": "HJ3Vb3dXfefN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batch_size = 2\n", "seq_len = 3\n", @@ -1036,7 +1108,9 @@ "metadata": { "id": "w2sACuZ9TTRk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def f(x):\n", @@ -1074,7 +1148,9 @@ "metadata": { "id": "7aJD--9qTWmg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "external_list = []\n", "\n", @@ -1105,7 +1181,9 @@ "metadata": { "id": "5r6p7-9jk_3L" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Model(tf.Module):\n", " def __init__(self):\n", @@ -1141,7 +1219,9 @@ "metadata": { "id": "An4MrIbrcvi8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Model(tf.Module):\n", " def __init__(self):\n", @@ -1198,7 +1278,9 @@ "metadata": { "id": "FNPD4unZUedH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def buggy_consume_next(iterator):\n", @@ -1226,7 +1308,9 @@ "metadata": { "id": "8D_iKetXW6VE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def good_consume_next(iterator):\n", @@ -1259,7 +1343,9 @@ "metadata": { "id": "zrdp4rjxg6jo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = None\n", "\n", @@ -1293,7 +1379,9 @@ "metadata": { "id": "PrcpPB8C5s9T" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def leaky_function(a):\n", @@ -1335,7 +1423,9 @@ "metadata": { "id": "D7bLe8y652wU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MyClass:\n", "\n", @@ -1368,7 +1458,9 @@ "metadata": { "id": "QSN-T1m5EFcR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def recursive_fn(n):\n", @@ -1396,7 +1488,9 @@ "metadata": { "id": "7FlmTqfMUwmT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def recursive_fn(n):\n", @@ -1439,7 +1533,9 @@ "metadata": { "id": "oeJMdXd3M0cM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def buggy_add():\n", @@ -1460,7 
+1556,9 @@ "metadata": { "id": "L3q7sUJWZOSU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Updating the value of `foo` to 100!\")\n", "foo = 100\n", @@ -1483,7 +1581,9 @@ "metadata": { "id": "oeJMdXd3M0cc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def variable_add():\n", @@ -1499,7 +1599,9 @@ "metadata": { "id": "L3q7sUJWZOSd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Updating the value of `foo` to 100!\")\n", "foo.assign(100)\n", @@ -1534,7 +1636,9 @@ "metadata": { "id": "ux8KJESVWDxX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class SimpleModel(tf.Module):\n", " def __init__(self):\n", @@ -1557,7 +1661,9 @@ "metadata": { "id": "mUxRF4ghZZvX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Adding bias!\")\n", "simple_model.bias += 5.0\n", @@ -1583,7 +1689,9 @@ "metadata": { "id": "pFvWmWAAQjrv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def evaluate(model, x):\n", " return model.weight * x + model.bias\n", @@ -1591,7 +1699,7 @@ "new_model = SimpleModel()\n", "evaluate_no_bias = tf.function(evaluate).get_concrete_function(new_model, x)\n", "# Don't pass in `new_model`, `Function` already captured its state during tracing.\n", - "print(evaluate_no_bias(x)) " + "print(evaluate_no_bias(x))" ] }, { @@ -1600,7 +1708,9 @@ "metadata": { "id": "bdU2-jF4ZH0B" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Adding bias!\")\n", "new_model.bias += 5.0\n", @@ -1624,7 +1734,9 @@ "metadata": { "id": "daAP_lucwS6w" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class BetterModel:\n", "\n", @@ -1646,7 +1758,9 @@ "metadata": { "id": "ktqwMJBqwTFj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Adding bias!\")\n", "better_model.bias.assign_add(5.0) # Note: instead of better_model.bias += 5\n", @@ -1672,7 +1786,9 @@ "metadata": { "id": "Tx0Vvnb_9OB-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def f(x):\n", @@ 
-1698,7 +1814,9 @@ "metadata": { "id": "HQrG5_kOiKl_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Count(tf.Module):\n", " def __init__(self):\n", @@ -1732,11 +1850,13 @@ "metadata": { "id": "yWQ3-r99Jvze" }, - "outputs": [], + "outputs": [ + + ], "source": [ "opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)\n", "opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)\n", - " \n", + "\n", "@tf.function\n", "def train_step(w, x, y, optimizer):\n", " with tf.GradientTape() as tape:\n", @@ -1769,7 +1889,9 @@ "metadata": { "id": "YV5F2Gy9hSI3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)\n", "opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)\n", @@ -1786,13 +1908,13 @@ "y = tf.constant([2.])\n", "\n", "# Make a new Function and ConcreteFunction for each optimizer.\n", - "train_step_1 = tf.function(train_step).get_concrete_function(w, x, y, opt1)\n", - "train_step_2 = tf.function(train_step).get_concrete_function(w, x, y, opt2)\n", + "train_step_1 = tf.function(train_step)\n", + "train_step_2 = tf.function(train_step)\n", "for i in range(10):\n", " if i % 2 == 0:\n", - " train_step_1(w, x, y) # `opt1` is not used as a parameter. \n", + " train_step_1(w, x, y, opt1)\n", " else:\n", - " train_step_2(w, x, y) # `opt2` is not used as a parameter." 
+ " train_step_2(w, x, y, opt2)" ] }, { @@ -1822,7 +1944,6 @@ ], "metadata": { "colab": { - "collapsed_sections": [], "name": "function.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/gpu.ipynb b/site/ko/guide/gpu.ipynb index 8d50ae0cf5..59e841fd20 100644 --- a/site/ko/guide/gpu.ipynb +++ b/site/ko/guide/gpu.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -79,9 +81,12 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [], + "outputs": [ + + ], "source": [ - "import tensorflow as tf" + "import tensorflow as tf\n", + "print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\n" ] }, { @@ -127,7 +132,9 @@ "metadata": { "id": "2Dbw0tpEirCd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -165,7 +172,9 @@ "metadata": { "id": "8wqaQfEhjHit" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -204,7 +213,9 @@ "metadata": { "id": "hPI--n_jhZhv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -233,7 +244,9 @@ "metadata": { "id": "jr3Kf1boFnCO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -261,7 +274,9 @@ "metadata": { "id": "2qO2cS9QFn42" }, - "outputs": [], + "outputs": [ + + ], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -301,7 +316,9 @@ "metadata": { "id": "wep4iteljjG1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -332,7 +349,9 @@ "metadata": { "id": "sut_UHlkjvWd" }, - "outputs": [], + "outputs": [ + + ], "source": [ 
"tf.config.set_soft_device_placement(True)\n", "tf.debugging.set_log_device_placement(True)\n", @@ -362,7 +381,9 @@ "metadata": { "id": "8EMGuGKbNkc6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -405,7 +426,9 @@ "metadata": { "id": "1KgzY8V2AvRv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.debugging.set_log_device_placement(True)\n", "gpus = tf.config.list_logical_devices('GPU')\n", @@ -446,7 +469,9 @@ "metadata": { "id": "AqPo9ltUA_EY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -470,7 +495,9 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "gpu.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/jax2tf.ipynb b/site/ko/guide/jax2tf.ipynb new file mode 100644 index 0000000000..9d0225a380 --- /dev/null +++ b/site/ko/guide/jax2tf.ipynb @@ -0,0 +1,892 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "ckM5wJMsNTYL" + }, + "source": [ + "##### Copyright 2023 The TensorFlow Authors." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "NKvERjPVNWxu" + }, + "outputs": [ + + ], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "bqePLdDjNhNk" + }, + "source": [ + "# JAX2TF를 사용하여 JAX 모델 가져오기" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gw3w46yhNiK_" + }, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
TensorFlow.org에서 보기Google Colab에서 실행하기GitHub에서 소스 보기
노트북 다운로드하기
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "IyrsY3uTOmPY" + }, + "source": [ + "이 노트북은 [JAX](https://jax.readthedocs.io/en/latest/)를 사용하는 모델을 생성하고 이를 TensorFlow로 가져와 훈련을 계속하는 완전하고 실행 가능한 예제를 제공합니다. 이 작업은 JAX 생태계에서 TensorFlow 생태계로 이동하는 길을 제공하는 경량 API인 [JAX2TF](https://github.com/google/jax/tree/main/jax/experimental/jax2tf)를 통해 가능합니다.\n", + "\n", + "JAX는 고성능 배열 컴퓨팅 라이브러리입니다. 모델을 생성하기 위해 이 노트북은 JAX용 신경망 라이브러리인 [Flax](https://flax.readthedocs.io/en/latest/)를 사용합니다. 모델을 훈련하기 위해 JAX용 최적화 라이브러리인 [Optax](https://optax.readthedocs.io)를 사용합니다.\n", + "\n", + "JAX를 사용하는 연구자에게 JAX2TF는 TensorFlow의 검증된 도구를 사용하여 프로덕션으로 이동하는 길을 제공합니다.\n", + "\n", + "이 기능을 유용하게 사용할 수 있는 방법은 여러 가지가 있지만 그 중 몇 가지만 소개하겠습니다.\n", + "\n", + "- 추론: JAX용으로 작성된 모델을 가져와 TF Serving을 사용하는 서버에 배포하거나, TFLite를 사용하는 온디바이스(on-device)에 배포하거나, TensorFlow.js를 사용하는 웹에 배포할 수 있습니다.\n", + "\n", + "- 미세 조정: JAX를 사용하여 훈련한 모델의 구성 요소를 JAX2TF를 사용하는 TF로 가져온 다음, 기존 훈련 데이터와 설정을 사용하는 TensorFlow에서 계속 훈련할 수 있습니다.\n", + "\n", + "- 융합: 유연성을 극대화하기 위해 JAX를 사용하여 훈련한 모델의 일부와 TensorFlow를 사용하여 훈련한 모델의 일부를 결합합니다.\n", + "\n", + "JAX와 TensorFlow 사이의 이러한 상호 운용을 가능하게 하는 핵심은 `jax2tf.convert`이며 이 기능은 JAX에서 생성된 모델 구성 요소(손실 함수, 예측 함수 등)를 가져와서 이를 TensorFlow 함수와 동등한 모습을 갖도록 만든 다음 TensorFlow SavedModel로 내보낼 수 있습니다." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "G6rtu96yOepm" + }, + "source": [ + "## 설치하기\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9yqxfHzr0LPF" + }, + "outputs": [ + + ], + "source": [ + "import tensorflow as tf\n", + "import numpy as np\n", + "import jax\n", + "import jax.numpy as jnp\n", + "import flax\n", + "import optax\n", + "import os\n", + "from matplotlib import pyplot as plt\n", + "from jax.experimental import jax2tf\n", + "from threading import Lock # Only used in the visualization utility.\n", + "from functools import partial" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "SDnTaZO0r872" + }, + "outputs": [ + + ], + "source": [ + "# Needed for TensorFlow and JAX to coexist in GPU memory.\n", + "os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = \"false\"\n", + "gpus = tf.config.list_physical_devices('GPU')\n", + "if gpus:\n", + " try:\n", + " for gpu in gpus:\n", + " tf.config.experimental.set_memory_growth(gpu, True)\n", + " except RuntimeError as e:\n", + " # Memory growth must be set before GPUs have been initialized.\n", + " print(e)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "BXOjCNJxDLil" + }, + "outputs": [ + + ], + "source": [ + "#@title Visualization utilities\n", + "\n", + "plt.rcParams[\"figure.figsize\"] = (20,8)\n", + "\n", + "# The utility for displaying training and validation curves.\n", + "def display_train_curves(loss, avg_loss, eval_loss, eval_accuracy, epochs, steps_per_epochs, ignore_first_n=10):\n", + "\n", + " ignore_first_n_epochs = int(ignore_first_n/steps_per_epochs)\n", + "\n", + " # The losses.\n", + " ax = plt.subplot(121)\n", + " if loss is not None:\n", + " x = np.arange(len(loss)) / steps_per_epochs #* epochs\n", + " ax.plot(x, loss)\n", + " ax.plot(range(1, epochs+1), avg_loss, \"-o\", linewidth=3)\n", + " ax.plot(range(1, epochs+1), eval_loss, \"-o\", 
linewidth=3)\n", + " ax.set_title('Loss')\n", + " ax.set_ylabel('loss')\n", + " ax.set_xlabel('epoch')\n", + " if loss is not None:\n", + " ax.set_ylim(0, np.max(loss[ignore_first_n:]))\n", + " ax.legend(['train', 'avg train', 'eval'])\n", + " else:\n", + " ymin = np.min(avg_loss[ignore_first_n_epochs:])\n", + " ymax = np.max(avg_loss[ignore_first_n_epochs:])\n", + " ax.set_ylim(ymin-(ymax-ymin)/10, ymax+(ymax-ymin)/10)\n", + " ax.legend(['avg train', 'eval'])\n", + "\n", + " # The accuracy.\n", + " ax = plt.subplot(122)\n", + " ax.set_title('Eval Accuracy')\n", + " ax.set_ylabel('accuracy')\n", + " ax.set_xlabel('epoch')\n", + " ymin = np.min(eval_accuracy[ignore_first_n_epochs:])\n", + " ymax = np.max(eval_accuracy[ignore_first_n_epochs:])\n", + " ax.set_ylim(ymin-(ymax-ymin)/10, ymax+(ymax-ymin)/10)\n", + " ax.plot(range(1, epochs+1), eval_accuracy, \"-o\", linewidth=3)\n", + "\n", + "class Progress:\n", + " \"\"\"Text mode progress bar.\n", + " Usage:\n", + " p = Progress(30)\n", + " p.step()\n", + " p.step()\n", + " p.step(reset=True) # to restart from 0%\n", + " The progress bar displays a new header at each restart.\"\"\"\n", + " def __init__(self, maxi, size=100, msg=\"\"):\n", + " \"\"\"\n", + " :param maxi: the number of steps required to reach 100%\n", + " :param size: the number of characters taken on the screen by the progress bar\n", + " :param msg: the message displayed in the header of the progress bar\n", + " \"\"\"\n", + " self.maxi = maxi\n", + " self.p = self.__start_progress(maxi)() # `()`: to get the iterator from the generator.\n", + " self.header_printed = False\n", + " self.msg = msg\n", + " self.size = size\n", + " self.lock = Lock()\n", + "\n", + " def step(self, reset=False):\n", + " with self.lock:\n", + " if reset:\n", + " self.__init__(self.maxi, self.size, self.msg)\n", + " if not self.header_printed:\n", + " self.__print_header()\n", + " next(self.p)\n", + "\n", + " def __print_header(self):\n", + " print()\n", + " format_string = 
\"0%{: ^\" + str(self.size - 6) + \"}100%\"\n", + " print(format_string.format(self.msg))\n", + " self.header_printed = True\n", + "\n", + " def __start_progress(self, maxi):\n", + " def print_progress():\n", + " # Bresenham's algorithm. Yields the number of dots printed.\n", + " # This will always print 100 dots in max invocations.\n", + " dx = maxi\n", + " dy = self.size\n", + " d = dy - dx\n", + " for x in range(maxi):\n", + " k = 0\n", + " while d >= 0:\n", + " print('=', end=\"\", flush=True)\n", + " k += 1\n", + " d -= dx\n", + " d += dy\n", + " yield k\n", + " # Keep yielding the last result if there are too many steps.\n", + " while True:\n", + " yield k\n", + "\n", + " return print_progress" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6xgS_8nDDIu8" + }, + "source": [ + "## MNIST 데이터세트 다운로드 및 준비하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nbN7rmuF0VFB" + }, + "outputs": [ + + ], + "source": [ + "(x_train, train_labels), (x_test, test_labels) = tf.keras.datasets.mnist.load_data()\n", + "\n", + "train_data = tf.data.Dataset.from_tensor_slices((x_train, train_labels))\n", + "train_data = train_data.map(lambda x,y: (tf.expand_dims(tf.cast(x, tf.float32)/255.0, axis=-1),\n", + " tf.one_hot(y, depth=10)))\n", + "\n", + "BATCH_SIZE = 256\n", + "train_data = train_data.batch(BATCH_SIZE, drop_remainder=True)\n", + "train_data = train_data.cache()\n", + "train_data = train_data.shuffle(5000, reshuffle_each_iteration=True)\n", + "\n", + "test_data = tf.data.Dataset.from_tensor_slices((x_test, test_labels))\n", + "test_data = test_data.map(lambda x,y: (tf.expand_dims(tf.cast(x, tf.float32)/255.0, axis=-1),\n", + " tf.one_hot(y, depth=10)))\n", + "test_data = test_data.batch(10000)\n", + "test_data = test_data.cache()\n", + "\n", + "(one_batch, one_batch_labels) = next(iter(train_data)) # just one batch\n", + "(all_test_data, all_test_labels) = next(iter(test_data)) # all in one batch since batch size 
is 10000" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LuZTo7SM3W_n" + }, + "source": [ + "## 훈련 구성하기\n", + "\n", + "이 노트북에서는 데모 목적으로 간단한 모델을 만들고 훈련합니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3vbKB4yZ3aTL" + }, + "outputs": [ + + ], + "source": [ + "# Training hyperparameters.\n", + "JAX_EPOCHS = 3\n", + "TF_EPOCHS = 7\n", + "STEPS_PER_EPOCH = len(train_labels)//BATCH_SIZE\n", + "LEARNING_RATE = 0.01\n", + "LEARNING_RATE_EXP_DECAY = 0.6\n", + "\n", + "# The learning rate schedule for JAX (with Optax).\n", + "jlr_decay = optax.exponential_decay(LEARNING_RATE, transition_steps=STEPS_PER_EPOCH, decay_rate=LEARNING_RATE_EXP_DECAY, staircase=True)\n", + "\n", + "# The learning rate schedule for TensorFlow.\n", + "tflr_decay = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate=LEARNING_RATE, decay_steps=STEPS_PER_EPOCH, decay_rate=LEARNING_RATE_EXP_DECAY, staircase=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Od3sMwQxtC34" + }, + "source": [ + "## Flax를 사용하여 모델 만들기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-ybqQF2zd2QX" + }, + "outputs": [ + + ], + "source": [ + "class ConvModel(flax.linen.Module):\n", + "\n", + " @flax.linen.compact\n", + " def __call__(self, x, train):\n", + " x = flax.linen.Conv(features=12, kernel_size=(3,3), padding=\"SAME\", use_bias=False)(x)\n", + " x = flax.linen.BatchNorm(use_running_average=not train, use_scale=False, use_bias=True)(x)\n", + " x = x.reshape((x.shape[0], -1)) # flatten\n", + " x = flax.linen.Dense(features=200, use_bias=True)(x)\n", + " x = flax.linen.BatchNorm(use_running_average=not train, use_scale=False, use_bias=True)(x)\n", + " x = flax.linen.Dropout(rate=0.3, deterministic=not train)(x)\n", + " x = flax.linen.relu(x)\n", + " x = flax.linen.Dense(features=10)(x)\n", + " #x = flax.linen.log_softmax(x)\n", + " return x\n", + "\n", + " # JAX differentiation requires 
a function `f(params, other_state, data, labels)` -> `loss` (as a single number).\n", + " # `jax.grad` will differentiate it against the first argument.\n", + " # The user must split trainable and non-trainable variables into `params` and `other_state`.\n", + " # Must pass a different RNG key each time for the dropout mask to be different.\n", + " def loss(self, params, other_state, rng, data, labels, train):\n", + " logits, batch_stats = self.apply({'params': params, **other_state},\n", + " data,\n", + " mutable=['batch_stats'],\n", + " rngs={'dropout': rng},\n", + " train=train)\n", + " # The loss averaged across the batch dimension.\n", + " loss = optax.softmax_cross_entropy(logits, labels).mean()\n", + " return loss, batch_stats\n", + "\n", + " def predict(self, state, data):\n", + " logits = self.apply(state, data, train=False) # predict and accuracy disable dropout and use accumulated batch norm stats (train=False)\n", + " probabilities = flax.linen.log_softmax(logits)\n", + " return probabilities\n", + "\n", + " def accuracy(self, state, data, labels):\n", + " probabilities = self.predict(state, data)\n", + " predictions = jnp.argmax(probabilities, axis=-1)\n", + " dense_labels = jnp.argmax(labels, axis=-1)\n", + " accuracy = jnp.equal(predictions, dense_labels).mean()\n", + " return accuracy" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7Cr0FRNFtHN4" + }, + "source": [ + "## 훈련 단계 함수 작성하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "tmDwApcpgZzw" + }, + "outputs": [ + + ], + "source": [ + "# The training step.\n", + "@partial(jax.jit, static_argnums=[0]) # this forces jax.jit to recompile for every new model\n", + "def train_step(model, state, optimizer_state, rng, data, labels):\n", + "\n", + " other_state, params = state.pop('params') # differentiate only against 'params' which represents trainable variables\n", + " (loss, batch_stats), grads = jax.value_and_grad(model.loss, 
has_aux=True)(params, other_state, rng, data, labels, train=True)\n", + "\n", + " updates, optimizer_state = optimizer.update(grads, optimizer_state)\n", + " params = optax.apply_updates(params, updates)\n", + " new_state = state.copy(add_or_replace={**batch_stats, 'params': params})\n", + "\n", + " rng, _ = jax.random.split(rng)\n", + "\n", + " return new_state, optimizer_state, rng, loss" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Zr16g6NzV4O9" + }, + "source": [ + "## 훈련 루프 작성하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zbl5w-KUV7Qw" + }, + "outputs": [ + + ], + "source": [ + "def train(model, state, optimizer_state, train_data, epochs, losses, avg_losses, eval_losses, eval_accuracies):\n", + " p = Progress(STEPS_PER_EPOCH)\n", + " rng = jax.random.PRNGKey(0)\n", + " for epoch in range(epochs):\n", + "\n", + " # This is where the learning rate schedule state is stored in the optimizer state.\n", + " optimizer_step = optimizer_state[1].count\n", + "\n", + " # Run an epoch of training.\n", + " for step, (data, labels) in enumerate(train_data):\n", + " p.step(reset=(step==0))\n", + " state, optimizer_state, rng, loss = train_step(model, state, optimizer_state, rng, data.numpy(), labels.numpy())\n", + " losses.append(loss)\n", + " avg_loss = np.mean(losses[-step:])\n", + " avg_losses.append(avg_loss)\n", + "\n", + " # Run one epoch of evals (10,000 test images in a single batch).\n", + " other_state, params = state.pop('params')\n", + " # Gotcha: must discard modified batch_stats here\n", + " eval_loss, _ = model.loss(params, other_state, rng, all_test_data.numpy(), all_test_labels.numpy(), train=False)\n", + " eval_losses.append(eval_loss)\n", + " eval_accuracy = model.accuracy(state, all_test_data.numpy(), all_test_labels.numpy())\n", + " eval_accuracies.append(eval_accuracy)\n", + "\n", + " print(\"\\nEpoch\", epoch, \"train loss:\", avg_loss, \"eval loss:\", eval_loss, \"eval accuracy\", 
eval_accuracy, \"lr:\", jlr_decay(optimizer_step))\n", + "\n", + " return state, optimizer_state" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "DGB3W5g0Wt1H" + }, + "source": [ + "## 모델 및 옵티마이저 생성하기(Optax 사용)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mW5mkmCWtN8W" + }, + "outputs": [ + + ], + "source": [ + "# The model.\n", + "model = ConvModel()\n", + "state = model.init({'params':jax.random.PRNGKey(0), 'dropout':jax.random.PRNGKey(0)}, one_batch, train=True) # Flax allows a separate RNG for \"dropout\"\n", + "\n", + "# The optimizer.\n", + "optimizer = optax.adam(learning_rate=jlr_decay) # Gotcha: it does not seem to be possible to pass just a callable as LR, must be an Optax Schedule\n", + "optimizer_state = optimizer.init(state['params'])\n", + "\n", + "losses=[]\n", + "avg_losses=[]\n", + "eval_losses=[]\n", + "eval_accuracies=[]" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FJdsKghBNF" + }, + "source": [ + "## 모델 훈련하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nmcofTTBZSIb" + }, + "outputs": [ + + ], + "source": [ + "new_state, new_optimizer_state = train(model, state, optimizer_state, train_data, JAX_EPOCHS+TF_EPOCHS, losses, avg_losses, eval_losses, eval_accuracies)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "n_20vgvDXB5r" + }, + "outputs": [ + + ], + "source": [ + "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=1*STEPS_PER_EPOCH)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0lT3cdENCBzL" + }, + "source": [ + "## 모델 부분적으로 훈련하기\n", + "\n", + "계속해서 TensorFlow에서 모델 훈련을 이어 가겠습니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "KT-xqj5N7C6L" + }, + "outputs": [ + + ], + "source": [ + "model = ConvModel()\n", + "state = model.init({'params':jax.random.PRNGKey(0), 'dropout':jax.random.PRNGKey(0)}, one_batch, train=True) # Flax allows a separate RNG for \"dropout\"\n", + "\n", + "# The optimizer.\n", + "optimizer = optax.adam(learning_rate=jlr_decay) # LR must be an Optax LR Schedule\n", + "optimizer_state = optimizer.init(state['params'])\n", + "\n", + "losses, avg_losses, eval_losses, eval_accuracies = [], [], [], []" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "oa362HMDbzDE" + }, + "outputs": [ + + ], + "source": [ + "state, optimizer_state = train(model, state, optimizer_state, train_data, JAX_EPOCHS, losses, avg_losses, eval_losses, eval_accuracies)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0IyZtUPPCt0y" + }, + "outputs": [ + + ], + "source": [ + "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=1*STEPS_PER_EPOCH)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uNtlSaOCCumB" + }, + "source": [ + "## 추론에 필요한 만큼만 저장하기\n", + "\n", + "JAX 모델을 배포하는 것이 목표인 경우(`model.predict()`를 사용하여 추론을 실행할 수 있도록) 단순히 [SavedModel](https://www.tensorflow.org/guide/saved_model)로 이를 내보내는 것만으로도 충분합니다. 이 섹션에서는 이를 수행하는 방법을 설명합니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "O653B3-5H8FL" + }, + "outputs": [ + + ], + "source": [ + "# Test data with a different batch size to test polymorphic shapes.\n", + "x, y = next(iter(train_data.unbatch().batch(13)))\n", + "\n", + "m = tf.Module()\n", + "# Wrap the JAX state in `tf.Variable` (needed when calling the converted JAX function).\n", + "state_vars = tf.nest.map_structure(tf.Variable, state)\n", + "# Keep the wrapped state as flat list (needed in TensorFlow fine-tuning).\n", + "m.vars = tf.nest.flatten(state_vars)\n", + "# Convert the desired JAX function (`model.predict`).\n", + "predict_fn = jax2tf.convert(model.predict, polymorphic_shapes=[\"...\", \"(b, 28, 28, 1)\"])\n", + "# Wrap the converted function in `tf.function` with the correct `tf.TensorSpec` (necessary for dynamic shapes to work).\n", + "@tf.function(autograph=False, input_signature=[tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32)])\n", + "def predict(data):\n", + " return predict_fn(state_vars, data)\n", + "m.predict = predict\n", + "tf.saved_model.save(m, \"./\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8HFx67zStgvo" + }, + "outputs": [ + + ], + "source": [ + "# Test the converted function.\n", + "print(\"Converted function predictions:\", np.argmax(m.predict(x).numpy(), axis=-1))\n", + "# Reload the model.\n", + "reloaded_model = tf.saved_model.load(\"./\")\n", + "# Test the reloaded converted function (the result should be the same).\n", + "print(\"Reloaded function predictions:\", np.argmax(reloaded_model.predict(x).numpy(), axis=-1))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eEk8wv4HJu94" + }, + "source": [ + "## 모두 저장하기\n", + "\n", + "전체 내보내기가 목표인 경우(미세 조정, 융합 등을 위해 모델을 TensorFlow로 가져올 계획인 경우 유용함) 이 섹션에서는 모델을 저장하여 다음과 같은 메서드에 액세스할 수 있는 방법을 설명합니다.\n", + "\n", + "- model.predict\n", + "- model.accuracy\n", + "- model.loss(train=True/False 부울, 드롭아웃을 위한 
RNG 및 BatchNorm 상태 업데이트 포함)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9mty52pmvDDp" + }, + "outputs": [ + + ], + "source": [ + "from collections import abc\n", + "\n", + "def _fix_frozen(d):\n", + " \"\"\"Changes any mappings (e.g. frozendict) back to dict.\"\"\"\n", + " if isinstance(d, list):\n", + " return [_fix_frozen(v) for v in d]\n", + " elif isinstance(d, tuple):\n", + " return tuple(_fix_frozen(v) for v in d)\n", + " elif not isinstance(d, abc.Mapping):\n", + " return d\n", + " d = dict(d)\n", + " for k, v in d.items():\n", + " d[k] = _fix_frozen(v)\n", + " return d" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3HEsKNXbCwXw" + }, + "outputs": [ + + ], + "source": [ + "class TFModel(tf.Module):\n", + " def __init__(self, state, model):\n", + " super().__init__()\n", + "\n", + " # Special care needed for the train=True/False parameter in the loss\n", + " @jax.jit\n", + " def loss_with_train_bool(state, rng, data, labels, train):\n", + " other_state, params = state.pop('params')\n", + " loss, batch_stats = jax.lax.cond(train,\n", + " lambda state, data, labels: model.loss(params, other_state, rng, data, labels, train=True),\n", + " lambda state, data, labels: model.loss(params, other_state, rng, data, labels, train=False),\n", + " state, data, labels)\n", + " # must use JAX to split the RNG, therefore, must do it in a @jax.jit function\n", + " new_rng, _ = jax.random.split(rng)\n", + " return loss, batch_stats, new_rng\n", + "\n", + " self.state_vars = tf.nest.map_structure(tf.Variable, state)\n", + " self.vars = tf.nest.flatten(self.state_vars)\n", + " self.jax_rng = tf.Variable(jax.random.PRNGKey(0))\n", + "\n", + " self.loss_fn = jax2tf.convert(loss_with_train_bool, polymorphic_shapes=[\"...\", \"...\", \"(b, 28, 28, 1)\", \"(b, 10)\", \"...\"])\n", + " self.accuracy_fn = jax2tf.convert(model.accuracy, polymorphic_shapes=[\"...\", \"(b, 28, 28, 1)\", \"(b, 10)\"])\n", 
+ " self.predict_fn = jax2tf.convert(model.predict, polymorphic_shapes=[\"...\", \"(b, 28, 28, 1)\"])\n", + "\n", + " # Must specify TensorSpec manually for variable batch size to work\n", + " @tf.function(autograph=False, input_signature=[tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32)])\n", + " def predict(self, data):\n", + " # Make sure the TfModel.predict function implicitly use self.state_vars and not the JAX state directly\n", + " # otherwise, all model weights would be embedded in the TF graph as constants.\n", + " return self.predict_fn(self.state_vars, data)\n", + "\n", + " @tf.function(input_signature=[tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32),\n", + " tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],\n", + " autograph=False)\n", + " def train_loss(self, data, labels):\n", + " loss, batch_stats, new_rng = self.loss_fn(self.state_vars, self.jax_rng, data, labels, True)\n", + " # update batch norm stats\n", + " flat_vars = tf.nest.flatten(self.state_vars['batch_stats'])\n", + " flat_values = tf.nest.flatten(batch_stats['batch_stats'])\n", + " for var, val in zip(flat_vars, flat_values):\n", + " var.assign(val)\n", + " # update RNG\n", + " self.jax_rng.assign(new_rng)\n", + " return loss\n", + "\n", + " @tf.function(input_signature=[tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32),\n", + " tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],\n", + " autograph=False)\n", + " def eval_loss(self, data, labels):\n", + " loss, batch_stats, new_rng = self.loss_fn(self.state_vars, self.jax_rng, data, labels, False)\n", + " return loss\n", + "\n", + " @tf.function(input_signature=[tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32),\n", + " tf.TensorSpec(shape=(None, 10), dtype=tf.float32)],\n", + " autograph=False)\n", + " def accuracy(self, data, labels):\n", + " return self.accuracy_fn(self.state_vars, data, labels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "znJrAVpcxO9u" + }, + 
"outputs": [ + + ], + "source": [ + "# Instantiate the model.\n", + "tf_model = TFModel(state, model)\n", + "\n", + "# Save the model.\n", + "tf.saved_model.save(tf_model, \"./\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Y02DHEwTjNzV" + }, + "source": [ + "## 모델 다시 로드하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "i75yS3v2jPpM" + }, + "outputs": [ + + ], + "source": [ + "reloaded_model = tf.saved_model.load(\"./\")\n", + "\n", + "# Test if it works and that the batch size is indeed variable.\n", + "x,y = next(iter(train_data.unbatch().batch(13)))\n", + "print(np.argmax(reloaded_model.predict(x).numpy(), axis=-1))\n", + "x,y = next(iter(train_data.unbatch().batch(20)))\n", + "print(np.argmax(reloaded_model.predict(x).numpy(), axis=-1))\n", + "\n", + "print(reloaded_model.accuracy(one_batch, one_batch_labels))\n", + "print(reloaded_model.accuracy(all_test_data, all_test_labels))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "DiwEAwQmlx1x" + }, + "source": [ + "## 변환된 JAX 모델을 TensorFlow에서 계속 훈련하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "MubFcO_jl2vE" + }, + "outputs": [ + + ], + "source": [ + "optimizer = tf.keras.optimizers.Adam(learning_rate=tflr_decay)\n", + "\n", + "# Set the iteration step for the learning rate to resume from where it left off in JAX.\n", + "optimizer.iterations.assign(len(eval_losses)*STEPS_PER_EPOCH)\n", + "\n", + "p = Progress(STEPS_PER_EPOCH)\n", + "\n", + "for epoch in range(JAX_EPOCHS, JAX_EPOCHS+TF_EPOCHS):\n", + "\n", + " # This is where the learning rate schedule state is stored in the optimizer state.\n", + " optimizer_step = optimizer.iterations\n", + "\n", + " for step, (data, labels) in enumerate(train_data):\n", + " p.step(reset=(step==0))\n", + " with tf.GradientTape() as tape:\n", + " #loss = reloaded_model.loss(data, labels, True)\n", + " loss = reloaded_model.train_loss(data, labels)\n", + " 
grads = tape.gradient(loss, reloaded_model.vars)\n", + " optimizer.apply_gradients(zip(grads, reloaded_model.vars))\n", + " losses.append(loss)\n", + " avg_loss = np.mean(losses[-step:])\n", + " avg_losses.append(avg_loss)\n", + "\n", + " eval_loss = reloaded_model.eval_loss(all_test_data.numpy(), all_test_labels.numpy()).numpy()\n", + " eval_losses.append(eval_loss)\n", + " eval_accuracy = reloaded_model.accuracy(all_test_data.numpy(), all_test_labels.numpy()).numpy()\n", + " eval_accuracies.append(eval_accuracy)\n", + "\n", + " print(\"\\nEpoch\", epoch, \"train loss:\", avg_loss, \"eval loss:\", eval_loss, \"eval accuracy\", eval_accuracy, \"lr:\", tflr_decay(optimizer.iterations).numpy())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "50V1FSmI6UTk" + }, + "outputs": [ + + ], + "source": [ + "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=2*STEPS_PER_EPOCH)\n", + "\n", + "# The loss takes a hit when the training restarts, but does not go back to random levels.\n", + "# This is likely caused by the optimizer momentum being reinitialized." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "L7lSziW0K0ny" + }, + "source": [ + "## 다음 단계\n", + "\n", + "[JAX](https://jax.readthedocs.io/en/latest/index.html) 및 [Flax](https://flax.readthedocs.io/en/latest)에 대한 자세한 내용은 상세 가이드와 예제가 포함된 해당 문서 웹사이트에서 확인할 수 있습니다. JAX를 처음 접하는 경우 [JAX 101 튜토리얼](https://jax.readthedocs.io/en/latest/jax-101/index.html)과 [Flax 퀵스타트](https://flax.readthedocs.io/en/latest/getting_started.html)를 확인하세요. JAX 모델을 TensorFlow 형식으로 변환하는 방법에 대한 자세한 내용은 GitHub에서 [jax2tf](https://github.com/google/jax/tree/main/jax/experimental/jax2tf) 유틸리티를 확인하세요. 브라우저에서 실행할 수 있도록 JAX 모델을 변환하는 데 관심이 있는 경우 [JAX on the Web with TensorFlow.js](https://blog.tensorflow.org/2022/08/jax-on-web-with-tensorflowjs.html)를 방문하세요. 
TensorFlow Lite에서 실행하는 JAX 모델을 준비하려면 [TFLite용 JAX 모델 변환](https://www.tensorflow.org/lite/examples/jax_conversion/overview) 가이드를 참조하세요." + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "name": "jax2tf.ipynb", + "toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/site/ko/guide/migrate/saved_model.ipynb b/site/ko/guide/migrate/saved_model.ipynb index 44e6996a2c..63974fa6f6 100644 --- a/site/ko/guide/migrate/saved_model.ipynb +++ b/site/ko/guide/migrate/saved_model.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "WvqLCVQ6I58i" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -82,7 +84,9 @@ "metadata": { "id": "B94QZyy-kOGQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "import tensorflow.compat.v1 as tf1\n", @@ -124,7 +128,9 @@ "metadata": { "id": "dcZDQaI8jl3h" }, - "outputs": [], + "outputs": [ + + ], "source": [ "remove_dir(\"saved-model-builder\")\n", "\n", @@ -152,7 +158,9 @@ "metadata": { "id": "PwtC27VFlwCa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli run --dir saved-model-builder --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -173,7 +181,9 @@ "metadata": { "id": "jtMxe2rjHSq9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "remove_dir(\"simple-save\")\n", "\n", @@ -195,7 +205,9 @@ "metadata": { "id": "AdnqemvIHb2P" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli run --dir simple-save --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -226,7 +238,9 @@ "metadata": { "id": "3nQ5Stnxjhfs" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def model_fn(features, labels, mode):\n", " output = 
add_two(features['input'])\n", @@ -266,7 +280,9 @@ "metadata": { "id": "8_gD2gkE7CMu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli run --dir estimator-model --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=[10]" @@ -298,7 +314,9 @@ "metadata": { "id": "_j-PwgP_jrgw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class MyModel(tf.Module):\n", " @tf.function\n", @@ -324,7 +342,9 @@ "metadata": { "id": "slvU4vZN756F" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli run --dir tf2-save --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -349,7 +369,9 @@ "metadata": { "id": "mMcjhzyRjvp6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "inp = tf.keras.Input(3)\n", "out = add_two(inp)\n", @@ -369,7 +391,9 @@ "metadata": { "id": "4P93WP5R7-VT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli run --dir keras-model --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -409,7 +433,9 @@ "metadata": { "id": "IMO0laj-m0p9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def load_tf1(path, input):\n", " print('Loading from', path)\n", @@ -448,7 +474,9 @@ "metadata": { "id": "OA52ezWV_KgL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def load_tf2(path, input):\n", " print('Loading from', path)\n", @@ -479,7 +507,9 @@ "metadata": { "id": "IfMTp-TGAfOs" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loaded = tf.saved_model.load('tf2-save')\n", "print('restored __call__:', loaded.__call__)\n", @@ -507,7 +537,9 @@ "metadata": { "id": "ZFUAxK0YeIAe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loaded_model = tf.keras.models.load_model('keras-model')\n", "loaded_model.predict_on_batch(tf.constant([1, 3, 4]))" @@ -534,7 +566,9 @@ "metadata": { "id": "grKam9zGnNRZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Save a simple multiplication computation:\n", "with 
tf.Graph().as_default() as g:\n", @@ -564,7 +598,9 @@ "metadata": { "id": "3bbcGZ4CoVDL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with tf.Graph().as_default() as g:\n", "meta = tf1.train.import_meta_graph('multiply.pb')\n", @@ -590,7 +626,9 @@ "metadata": { "id": "yAyGmDLlpVBX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def import_multiply():\n", "# Any graph-building code is allowed here.\n", @@ -735,7 +773,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "saved_model.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/migrate/tflite.ipynb b/site/ko/guide/migrate/tflite.ipynb index ced3e923de..97703058cd 100644 --- a/site/ko/guide/migrate/tflite.ipynb +++ b/site/ko/guide/migrate/tflite.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "HMUDt0CiUJk9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -122,7 +124,9 @@ "metadata": { "id": "iE0vSfMXumKI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "import tensorflow.compat.v1 as tf1\n", @@ -155,7 +159,9 @@ "metadata": { "id": "Bwq8EFiwjzjx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Create a TF1 SavedModel\n", "SAVED_MODEL_DIR = \"tf_saved_model/\"\n", @@ -217,7 +223,9 @@ "metadata": { "id": "dzXHHBQRdt_F" }, - "outputs": [], + "outputs": [ + + ], "source": [ "converter = tf1.lite.TFLiteConverter.from_saved_model(\n", " saved_model_dir=SAVED_MODEL_DIR,\n", @@ -247,7 +255,9 @@ "metadata": { "id": "0OyBjZ6Kdt_F" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Convert TF1 SavedModel to a TFLite model.\n", "converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir=SAVED_MODEL_DIR)\n", @@ -281,7 +291,9 @@ "metadata": { "id": "9EXO0xYq__fR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "converter = 
tf1.lite.TFLiteConverter.from_keras_model_file(model_file=KERAS_MODEL_PATH)\n", "converter.optimizations = {tf.lite.Optimize.DEFAULT}\n", @@ -306,7 +318,9 @@ "metadata": { "id": "IGB5ZMGl__fR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Convert TF1 Keras model file to TF2 SavedModel.\n", "model = tf.keras.models.load_model(KERAS_MODEL_PATH)\n", @@ -343,7 +357,9 @@ "metadata": { "id": "r7RvcdRv6lll" }, - "outputs": [], + "outputs": [ + + ], "source": [ "converter = tf1.lite.TFLiteConverter.from_frozen_graph(\n", " graph_def_file=GRAPH_DEF_MODEL_PATH,\n", @@ -373,7 +389,9 @@ "metadata": { "id": "Oigap0TZxjWG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "## Convert TF1 frozen Graph to TF1 SavedModel.\n", "\n", @@ -420,7 +438,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "tflite.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/mixed_precision.ipynb b/site/ko/guide/mixed_precision.ipynb index 1866f83cee..5fcc4f0d93 100644 --- a/site/ko/guide/mixed_precision.ipynb +++ b/site/ko/guide/mixed_precision.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -47,10 +49,11 @@ }, "source": [ "\n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기GitHub에서 소스 보기 노트북 다운로드하기 TensorFlow.org에서 보기\n", + "Google Colab에서 실행GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -62,7 +65,7 @@ "source": [ "## 개요\n", "\n", - "혼합 정밀도는 훈련 중에 모델에서 16bit 및 32bit 부동 소수점 유형을 모두 사용하여 더 빠르게 실행하고 메모리를 더 적게 사용하는 데 목적을 두고 있습니다. 수치 안정성을 위해 모델의 특정 부분을 32bit 유형으로 유지함으로써 모델은 스텝 타임을 줄이고 정확성과 같은 메트릭에서도 동일하게 잘 훈련할 수 있습니다. 이 가이드에서는 Keras 혼합 정밀도 API를 사용하여 모델 속도를 높이는 방법을 설명합니다. 이 API를 사용하면 최신 GPU에서 3배 이상, TPU에서 60% 이상의 성능을 향상할 수 있습니다." + "혼합 정밀도는 훈련 중에 모델에서 16비트 및 32비트 부동 소수점 유형을 모두 사용하여 모델을 더 빠르게 실행하고 메모리를 더 적게 사용하는 데 목적을 두고 있습니다. 수치 안정성을 위해 모델의 특정 부분을 32비트 유형으로 유지하면 모델의 단계 시간이 짧아지고 정확도와 같은 평가 지표에서도 동일하게 잘 훈련할 수 있습니다. 이 가이드에서는 Keras 혼합 정밀도 API를 사용하여 모델 속도를 높이는 방법을 설명합니다. 이 API를 사용하면 최신 GPU에서는 3배 이상, TPU에서는 60% 이상, 최신 인텔 CPU에서는 2배 이상의 성능을 높일 수 있습니다." ] }, { @@ -73,7 +76,7 @@ "source": [ "오늘날 대부분의 모델은 32-bit 메모리를 사용하는 float32 dtype을 사용합니다. 그러나 정밀도가 낮은 dtype인 float16과 bfloat16도 있으며 각각은 16-bit의 메모리를 사용합니다. 최신 가속기는 16-bit 계산을 실행할 수 있는 특수한 하드웨어가 있어 16-bit dtype을 메모리에서 더 빨리 읽을 수 있으므로 16-bit dtype에서 연산을 더 빠르게 실행할 수 있습니다.\n", "\n", - "NVIDIA GPU는 float32보다 float16에서 연산을 더 빠르게 실행할 수 있고 TPU는 float32보다 bfloat16에서 연산을 더 빨리 실행할 수 있습니다. 따라서 이러한 정밀도가 낮은 dtype은 해당 장치에서 가능할 때마다 사용해야 합니다. 그러나 변수와 몇 가지 계산은 여전히 숫자로 인해 float32에 있어야 모델이 동일한 품질로 훈련합니다. Keras 혼합 정밀도 API를 사용하면 float16 또는 bfloat16을 float32와 함께 사용하여 float16/bfloat16의 성능 이점과 float32의 수치 안정성 이점을 얻을 수 있습니다.\n", + "NVIDIA GPU는 float32보다 float16에서 연산을 더 빠르게 수행할 수 있으며, TPU 및 지원되는 Intel CPU는 float32보다 bfloat16에서 연산을 더 빠르게 수행할 수 있습니다. 따라서 이러한 정밀도가 낮은 dtype은 해당 장치에서 가능할 때마다 사용해야 합니다. 그러나 변수와 몇 가지 계산은 여전히 수치상의 이유로 float32를 사용해야 모델이 동일한 품질로 훈련할 수 있습니다. Keras 혼합 정밀도 API를 사용하면 float16 또는 bfloat16과 float32를 함께 사용할 수 있으므로 float16/bfloat16의 성능 이점과 float32의 수치 안정성 이점을 얻을 수 있습니다.\n", "\n", "참고: 이 가이드에서 \"수치 안정성\"이라는 용어는 고정밀 dtype 대신 저정밀 dtype을 사용할 때 모델의 품질에 어떤 영향을 미치는지를 나타냅니다. float16 또는 bfloat16에서 \"수치적으로 불안정\"하다는 것은 이러한 dtype 중 하나에서 실행할 때 float32에서 연산을 실행할 때와 비교하여 모델의 평가 정확성이나 기타 메트릭이 떨어지는 경우를 말합니다." 
] @@ -93,7 +96,9 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "\n", @@ -110,9 +115,11 @@ "source": [ "## 지원하는 하드웨어\n", "\n", - "혼합 정밀도는 대부분의 하드웨어에서 실행되지만 최신 NVIDIA GPU 및 Cloud TPU에서는 모델의 속도만 향상됩니다. NVIDIA GPU는 float16과 float32의 혼합 사용을 지원하는 반면 TPU는 bfloat16과 float32의 혼합을 지원합니다.\n", + "혼합 정밀도는 대부분의 하드웨어에서 실행되지만, 최신 NVIDIA GPU, Cloud TPU 및 최신 인텔 CPU에서는 모델 속도만 높일 수 있습니다. NVIDIA GPU는 float16과 float32의 혼합 사용을 지원하는 반면 TPU와 인텔 CPU는 bfloat16과 float32의 혼합을 지원합니다.\n", "\n", - "NVIDIA GPU 중에서 계산 기능이 7.0 이상인 유닛은 float16 행렬 곱셈 및 컨볼루션을 가속화하기 위한 Tensor Cores라는 특수 하드웨어 장치를 가지고 있기 때문에 혼합 정밀도로부터 최대의 성능 이점을 얻을 수 있습니다. 구형 GPU의 경우 혼합 정밀도 사용에 따른 연산 성능 이점은 없지만 메모리 및 대역폭 절약으로 일부 속도 향상이 가능합니다. NVIDIA의 [CUDA GPU 웹 페이지](https://developer.nvidia.com/cuda-gpus)에서 해당 GPU의 계산 기능을 조회할 수 있습니다. 혼합 정밀도의 이점을 가장 크게 누릴 수 있는 GPU의 예로는 RTX GPU, Titan V 및 V100, A100이 있습니다." + "NVIDIA GPU 중에서 계산 기능이 7.0 이상인 유닛은 float16 행렬 곱셈 및 컨볼루션을 가속화하기 위한 Tensor Cores라는 특수 하드웨어 장치를 가지고 있기 때문에 혼합 정밀도로부터 최대의 성능 이점을 얻을 수 있습니다. 구형 GPU의 경우 혼합 정밀도 사용에 따른 연산 성능 이점은 없지만 메모리 및 대역폭 절약으로 일부 속도 향상이 가능합니다. NVIDIA의 [CUDA GPU 웹 페이지](https://developer.nvidia.com/cuda-gpus)에서 해당 GPU의 계산 기능을 조회할 수 있습니다. 혼합 정밀도의 이점을 가장 크게 누릴 수 있는 GPU의 예로는 RTX GPU, Titan V 및 V100, A100이 있습니다.\n", + "\n", + "인텔 CPU 중 4세대 인텔 제온 프로세서(코드명 Sapphire Rapids)부터는 AMX 명령어를 사용하여 bfloat16 계산을 가속화할 수 있으므로(Tensorflow 2.12 이상 필요) 혼합 정밀도의 성능 이점을 가장 크게 누릴 수 있습니다." ] }, { @@ -121,7 +128,7 @@ "id": "-q2hisD60F0_" }, "source": [ - "참고: Google Colab에서 이 가이드를 실행하면 일반적으로 GPU 런타임에 P100이 연결됩니다. P100은 계산 기능 6.0을 갖추고 있으며 눈에띄는 속도 향상을 기대하지는 않습니다.\n", + "참고: Google Colab에서 이 가이드를 실행하면 일반적으로 GPU 런타임에 P100이 연결됩니다. P100은 6.0의 계산 기능을 갖추고 있으며, 속도가 크게 향상되지는 않을 것으로 예상됩니다. CPU 런타임에서 실행하는 경우, 런타임에 AMX가 없는 CPU가 있을 가능성이 높으므로 속도가 느려질 수 있습니다.\n", "\n", "다음을 통해 GPU 타입을 확인할 수 있습니다. 이 명령은 NVIDIA 드라이버가 설치된 경우에만 존재하므로 다음과 같은 경우 오류가 발생합니다." 
] @@ -132,7 +139,9 @@ "metadata": { "id": "j-Yzg_lfkoa_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!nvidia-smi -L" ] @@ -145,7 +154,7 @@ "source": [ "모든 Cloud TPU는 bfloat16을 지원합니다.\n", "\n", - "속도 향상이 예상되지 않는 CPU 및 이전 GPU에서도 혼합 정밀도 API는 단위 테스트, 디버깅 또는 API 시험 용도로 계속 사용할 수 있습니다. 그러나 CPU에서는 혼합 정밀도가 매우 느리게 실행됩니다." + "속도 향상이 예상되지 않는 구형 인텔 CPU, AMX가 없는 기타 x86 CPU, 구형 GPU에서도 혼합 정밀도 API는 단위 테스트, 디버깅 또는 API를 시험하는 용도로 계속 사용할 수 있습니다. 그러나 AMX 명령어가 없는 CPU의 mixed_bfloat16과 모든 x86 CPU의 mixed_float16은 상당히 느리게 실행됩니다." ] }, { @@ -172,7 +181,9 @@ "metadata": { "id": "x3kElPVH-siO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "policy = mixed_precision.Policy('mixed_float16')\n", "mixed_precision.set_global_policy(policy)" @@ -193,7 +204,9 @@ "metadata": { "id": "6a8iNFoBUSqR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Equivalent to the two lines above\n", "mixed_precision.set_global_policy('mixed_float16')" @@ -214,7 +227,9 @@ "metadata": { "id": "GQRbYm4f8p-k" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print('Compute dtype: %s' % policy.compute_dtype)\n", "print('Variable dtype: %s' % policy.variable_dtype)" @@ -226,7 +241,7 @@ "id": "MOFEcna28o4T" }, "source": [ - "앞에서 언급했듯이 `mixed_float16` 정책은 7.0 이상의 계산 기능을 갖춘 NVIDIA GPU의 성능이 가장 크게 향상됩니다. 이 정책은 다른 GPU 및 CPU에서 실행되지만, 성능이 향상되지 않을 수 있습니다. TPU의 경우 `mixed_bfloat16` 정책을 대신 사용해야 합니다." + "앞에서 언급했듯이 `mixed_float16` 정책은 7.0 이상의 계산 기능을 갖춘 NVIDIA GPU에서 성능이 가장 크게 향상됩니다. 이 정책은 다른 GPU 및 CPU에서도 실행되지만 성능이 향상되지 않을 수 있습니다. TPU 및 CPU의 경우 `mixed_bfloat16` 정책을 대신 사용해야 합니다." 
] }, { @@ -253,7 +268,9 @@ "metadata": { "id": "0DQM24hL_14Q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "inputs = keras.Input(shape=(784,), name='digits')\n", "if tf.config.list_physical_devices('GPU'):\n", @@ -284,8 +301,11 @@ "metadata": { "id": "kC58MzP4PEcC" }, - "outputs": [], + "outputs": [ + + ], "source": [ + "print(dense1.dtype_policy)\n", "print('x.dtype: %s' % x.dtype.name)\n", "# 'kernel' is dense1's variable\n", "print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name)" @@ -306,7 +326,9 @@ "metadata": { "id": "ybBq1JDwNIbz" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# INCORRECT: softmax and model output will be float16, when it should be float32\n", "outputs = layers.Dense(10, activation='softmax', name='predictions')(x)\n", @@ -330,7 +352,9 @@ "metadata": { "id": "IGqCGn4BsODw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# CORRECT: softmax and model output are float32\n", "x = layers.Dense(10, name='dense_logits')(x)\n", @@ -359,7 +383,9 @@ "metadata": { "id": "dzVAoLI56jR8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# The linear activation is an identity function. So this simply casts 'outputs'\n", "# to float32. In this particular case, 'outputs' is already float32 so this is a\n", @@ -382,7 +408,9 @@ "metadata": { "id": "g4OT3Z6kqYAL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "model = keras.Model(inputs=inputs, outputs=outputs)\n", "model.compile(loss='sparse_categorical_crossentropy',\n", @@ -411,7 +439,9 @@ "metadata": { "id": "0UYs-u_DgiA5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "initial_weights = model.get_weights()" ] @@ -433,7 +463,9 @@ "metadata": { "id": "hxI7-0ewmC0A" }, - "outputs": [], + "outputs": [ + + ], "source": [ "history = model.fit(x_train, y_train,\n", " batch_size=8192,\n", @@ -469,7 +501,9 @@ "source": [ "## 손실 조정\n", "\n", - "손실 스케일링은 `tf.keras.Model.fit`이 `mixed_float16` 정책에서 숫자 언더플로우를 방지하기 위해 자동으로 실행하는 기술입니다. 
이 섹션에서는 손실 스케일링이 무엇인지 설명하고 다음 섹션에서는 사용자 정의 훈련 루프에서 이 기술을 사용하는 방법을 설명합니다." + "손실 스케일링은 `tf.keras.Model.fit`이 `mixed_float16` 정책에서 숫자 언더플로우를 방지하기 위해 자동으로 실행하는 기술입니다. 이 섹션에서는 손실 스케일링이 무엇인지 설명하고 다음 섹션에서는 사용자 정의 훈련 루프에서 이 기술을 사용하는 방법을 설명합니다.\n", + "\n", + "참고: `mixed_bfloat16` 정책을 사용하는 경우 손실 조정을 수행할 필요가 없습니다." ] }, { @@ -491,7 +525,9 @@ "metadata": { "id": "CHmXRb-yRWbE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(256, dtype='float16')\n", "(x ** 2).numpy() # Overflow" @@ -503,7 +539,9 @@ "metadata": { "id": "5unZLhN0RfQM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant(1e-5, dtype='float16')\n", "(x ** 2).numpy() # Underflow" @@ -589,10 +627,12 @@ "metadata": { "id": "ogZN3rIH0vpj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "optimizer = keras.optimizers.RMSprop()\n", - "optimizer = mixed_precision.LossScaleOptimizer(optimizer, loss_scale='dynamic')" + "optimizer = mixed_precision.LossScaleOptimizer(optimizer)" ] }, { @@ -619,7 +659,9 @@ "metadata": { "id": "9cE7Mm533hxe" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n", "train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))\n", @@ -638,7 +680,7 @@ "- `get_scaled_loss(loss)`: 손실에 손실 규모를 곱합니다\n", "- `get_unscaled_gradients(gradients)`: 조정된 그래디언트 목록을 입력으로 취하고 각각을 손실 규모로 나누어 조정을 해제합니다\n", "\n", - "그래디언트의 언더플로우를 방지하려면 이러한 기능을 사용해야 합니다. `LossScaleOptimizer.apply_gradients`는 Infs 또는 NaN이 없는 그래디언트를 적용합니다. 이는 또한 손실 규모를 업데이트하는데, 그래디언트에 Infs 또는 NaN이 있으면 손실 규모를 절반으로 줄이고 그렇지 않은 경우 잠재적으로 증가시킵니다." + "그래디언트의 언더플로우를 방지하려면 이러한 기능을 사용해야 합니다. `LossScaleOptimizer.apply_gradients`는 `Inf` 또는 `NaN`이 없는 그래디언트를 적용합니다. 이는 또한 손실 규모를 업데이트하는데, 그래디언트에 `Inf` 또는 `NaN`이 있으면 손실 규모를 절반으로 줄이고 그렇지 않은 경우 잠재적으로 증가시킵니다." 
  ]
 },
 {
  "cell_type": "code",
  "execution_count": null,
  "metadata": {
    "id": "V0vHlust4Rug"
  },
-  "outputs": [],
+  "outputs": [

+  ],
  "source": [
    "@tf.function\n",
    "def train_step(x, y):\n",
@@ -685,7 +729,9 @@
  "metadata": {
    "id": "nyk_xiZf42Tt"
  },
-  "outputs": [],
+  "outputs": [

+  ],
  "source": [
    "@tf.function\n",
    "def test_step(x):\n",
@@ -707,7 +753,9 @@
  "metadata": {
    "id": "jpzOe3WEhFUJ"
  },
-  "outputs": [],
+  "outputs": [

+  ],
  "source": [
    "model.set_weights(initial_weights)"
  ]
@@ -727,7 +775,9 @@
  "metadata": {
    "id": "N274tJ3e4_6t"
  },
-  "outputs": [],
+  "outputs": [

+  ],
  "source": [
    "for epoch in range(5):\n",
    "  epoch_loss_avg = tf.keras.metrics.Mean()\n",
@@ -795,17 +845,18 @@
  "source": [
    "## 요약\n",
    "\n",
-    "- 컴퓨팅 능력이 7.0 이상인 TPU 또는 NVIDIA GPU를 사용하는 경우 성능이 최대 3배 향상되므로 혼합 정밀도를 사용해야 합니다.\n",
+    "- TPU, 계산 기능이 7.0 이상인 NVIDIA GPU 또는 AMX 명령어를 지원하는 인텔 CPU를 사용하는 경우 성능이 최대 3배 향상되므로 혼합 정밀도를 사용해야 합니다.\n",
    "\n",
    "- 다음과 같이 혼합 정밀도를 사용할 수 있습니다.\n",
    "\n",
    " ```python\n",
-    " # On TPUs, use 'mixed_bfloat16' instead\n",
+    " # On TPUs and CPUs, use 'mixed_bfloat16' instead\n",
    " mixed_precision.set_global_policy('mixed_float16')\n",
    " ```\n",
    "\n",
    "- 모델이 softmax로 끝나는 경우 float32인지 확인합니다. 모델이 무엇으로 끝나는지에 관계없이 출력이 float32인지 확인합니다.\n",
    "- `mixed_float16`으로 사용자 정의 훈련 루프를 사용하는 경우 위 코드 외에도 `tf.keras.mixed_precision.LossScaleOptimizer`로 옵티마이저를 래핑해야 합니다. 
그런 다음 `optimizer.get_scaled_loss`를 호출하여 손실을 스케일링하고 `optimizer.get_unscaled_gradients`를 사용하여 그래디언트의 스케일링을 해제합니다.\n",
+    "- `mixed_bfloat16`과 함께 사용자 정의 훈련 루프를 사용하는 경우 위에서 언급한 global_policy를 설정하는 것으로도 충분합니다.\n",
    "- 평가 정확성이 떨어지지 않으면 훈련 배치 크기를 두 배로 늘립니다\n",
    "- GPU에서 성능을 최대화하려면 대부분의 텐서 차원이 $8$의 배수가 되도록 합니다.\n",
    "\n",
@@ -816,7 +867,9 @@
 "metadata": {
   "accelerator": "GPU",
   "colab": {
-    "collapsed_sections": [],
+    "collapsed_sections": [

+    ],
     "name": "mixed_precision.ipynb",
     "toc_visible": true
   },
diff --git a/site/ko/guide/profiler.md b/site/ko/guide/profiler.md
index fb057a4d75..368270cb12 100644
--- a/site/ko/guide/profiler.md
+++ b/site/ko/guide/profiler.md
@@ -73,7 +73,7 @@ Profiler에는 성능 분석에 도움이 되는 다양한 도구가 있습니
 
 개요 페이지에는 다음과 같은 데이터가 표시됩니다.
 
- ![image](./images/tf_profiler/overview_page.png?raw=true)
+![image](./images/tf_profiler/overview_page.png?raw=true)
 
 - **성능 요약**: 모델 성능에 대한 높은 수준의 요약을 표시합니다. 성능 요약은 두 부분으로 구성됩니다.
@@ -391,7 +391,7 @@ UI는 **성능 분석 요약**, **모든 입력 파이프라인 요약** 및 **
 
 #### 성능 분석 요약
 
- ![image](./images/tf_profiler/tf_data_graph.png?raw=true)
+![image](./images/tf_profiler/tf_data_graph.png?raw=true)
 
 이 섹션에서는 분석 요약을 제공합니다. 프로파일에서 느린 `tf.data` 입력 파이프라인이 감지되는지 여부가 보고됩니다. 이 섹션에는 또한 입력 바운드가 가장 큰 호스트와 지연 시간이 가장 큰 가장 느린 입력 파이프라인이 표시됩니다. 그리고 가장 중요한 부분으로, 입력 파이프라인의 어느 부분이 병목인지, 이 병목을 해결할 방법을 알려줍니다. 병목 현상 정보는 반복기 유형과 해당하는 긴 이름과 함께 제공됩니다.
 
@@ -416,7 +416,7 @@ dataset = tf.data.Dataset.range(10).map(lambda x: x).repeat(2).batch(5)
 
 #### 모든 입력 파이프라인 요약
 
- ![image](./images/tf_profiler/tf_data_graph_selector.png?raw=true)
+![image](./images/tf_profiler/tf_data_graph_selector.png?raw=true)
 
 이 섹션에서는 모든 호스트의 모든 입력 파이프라인에 대한 요약을 제공합니다. 일반적으로 하나의 입력 파이프라인이 있습니다. 배포 전략을 사용하는 경우, 프로그램의 `tf.data` 코드를 실행하는 하나의 호스트 입력 파이프라인과 호스트 입력 파이프라인에서 데이터를 검색하여 장치로 전송하는 여러 개의 기기 입력 파이프라인이 있습니다.
 
@@ -424,11 +424,11 @@ dataset = tf.data.Dataset.range(10).map(lambda x: x).repeat(2).batch(5)
 
 #### 입력 파이프라인 그래프
 
-![image](https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/images/tf_profiler/tf_data_graph_selector.png?raw=true)
+![image](./images/tf_profiler/tf_data_graph_selector.png?raw=true)
 
 이 섹션에서는 실행 시간 정보와 함께 입력 파이프라인 그래프가 표시됩니다. "호스트" 및 "입력 파이프라인"을 사용하여 보려는 호스트와 입력 파이프라인을 선택할 수 있습니다. 입력 파이프라인의 실행은 실행 시간을 기준으로 정렬되며, **Rank** 드롭다운을 사용하여 내림차순으로 정렬할 수 있습니다.
-![image](https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/images/tf_profiler/tf_data_graph.png?raw=true) +![image](./images/tf_profiler/tf_data_graph.png?raw=true) 중요 경로의 노드에는 굵은 윤곽선이 있습니다. 중요 경로에서 가장 긴 자체 시간을 가진 노드인 병목 노드는 빨간색 윤곽선으로 표시됩니다. 중요하지 않은 다른 노드에는 회색 점선 윤곽선이 있습니다. diff --git a/site/ko/guide/ragged_tensor.ipynb b/site/ko/guide/ragged_tensor.ipynb index c94967ba05..dd0892faef 100644 --- a/site/ko/guide/ragged_tensor.ipynb +++ b/site/ko/guide/ragged_tensor.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tXAbWHtqs1Y2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -71,7 +73,9 @@ "metadata": { "id": "KKvdSorS-pDD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!pip install --pre -U tensorflow\n", "import math\n", @@ -111,7 +115,9 @@ "metadata": { "id": "vGmJGSf_-PVB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])\n", "words = tf.ragged.constant([[\"So\", \"long\"], [\"thanks\", \"for\", \"all\", \"the\", \"fish\"]])\n", @@ -156,7 +162,9 @@ "metadata": { "id": "n8YMKXpI-PVH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(digits[0]) # First row" ] @@ -167,7 +175,9 @@ "metadata": { "id": "Awi8i9q5_DuX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(digits[:, :2]) # First two values in each row." ] @@ -178,7 +188,9 @@ "metadata": { "id": "sXgQtTcgHHMR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(digits[:, -2:]) # Last two values in each row." 
] @@ -198,7 +210,9 @@ "metadata": { "id": "2tdUEtb7-PVL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(digits + 3)" ] @@ -209,7 +223,9 @@ "metadata": { "id": "X-bxG0nc_Nmf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))" ] @@ -229,7 +245,9 @@ "metadata": { "id": "pvt5URbdEt-D" }, - "outputs": [], + "outputs": [ + + ], "source": [ "times_two_plus_one = lambda x: x * 2 + 1\n", "print(tf.ragged.map_flat_values(times_two_plus_one, digits))" @@ -250,7 +268,9 @@ "metadata": { "id": "A5NHb8ViA9dt" }, - "outputs": [], + "outputs": [ + + ], "source": [ "digits.to_list()" ] @@ -261,7 +281,9 @@ "metadata": { "id": "2o1wogVyA6Yp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "digits.numpy()" ] @@ -283,7 +305,9 @@ "metadata": { "id": "yhgKMozw-PVP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "sentences = tf.ragged.constant([\n", " [\"Let's\", \"build\", \"some\", \"ragged\", \"tensors\", \"!\"],\n", @@ -297,7 +321,9 @@ "metadata": { "id": "TW1g7eE2ee8M" }, - "outputs": [], + "outputs": [ + + ], "source": [ "paragraphs = tf.ragged.constant([\n", " [['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']],\n", @@ -327,7 +353,9 @@ "metadata": { "id": "SEvcPUcl-PVS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.RaggedTensor.from_value_rowids(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -353,7 +381,9 @@ "metadata": { "id": "LBY81WXl-PVW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.RaggedTensor.from_row_lengths(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -379,7 +409,9 @@ "metadata": { "id": "FwizuqZI-PVb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.RaggedTensor.from_row_splits(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -414,7 +446,9 @@ "metadata": { "id": "SqbPBd_w-PVi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]])) # ok: type=string, 
rank=2" ] @@ -425,7 +459,9 @@ "metadata": { "id": "83ZCSJnQAWAf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])) # ok: type=int32, rank=3" ] @@ -436,7 +472,9 @@ "metadata": { "id": "ewA3cISdDfmP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "try:\n", " tf.ragged.constant([[\"one\", \"two\"], [3, 4]]) # bad: multiple types\n", @@ -450,7 +488,9 @@ "metadata": { "id": "EOWIlVidDl-n" }, - "outputs": [], + "outputs": [ + + ], "source": [ "try:\n", " tf.ragged.constant([\"A\", [\"B\", \"C\"]]) # bad: multiple nesting depths\n", @@ -475,7 +515,9 @@ "metadata": { "id": "ZBs_V7e--PVr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'],\n", " ['Pause'],\n", @@ -546,7 +588,9 @@ "metadata": { "id": "M2Wzx4JEIvmb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]]).shape" ] @@ -566,7 +610,9 @@ "metadata": { "id": "5DHaqXHxlWi0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]]).bounding_shape())" ] @@ -595,7 +641,9 @@ "metadata": { "id": "ush7IGUWLXIn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ragged_x = tf.ragged.constant([[\"John\"], [\"a\", \"big\", \"dog\"], [\"my\", \"cat\"]])\n", "ragged_y = tf.ragged.constant([[\"fell\", \"asleep\"], [\"barked\"], [\"is\", \"fuzzy\"]])\n", @@ -619,7 +667,9 @@ "metadata": { "id": "eTIhGayQL0gI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "sparse_x = ragged_x.to_sparse()\n", "sparse_y = ragged_y.to_sparse()\n", @@ -662,7 +712,9 @@ "metadata": { "id": "pHls7hQVJlk5" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Task: predict whether each sentence is a question or not.\n", "sentences = tf.constant(\n", @@ -709,7 +761,9 @@ "metadata": { "id": "xsiglYM7TXGr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import google.protobuf.text_format as 
pbtext\n", "\n", @@ -750,7 +804,9 @@ "metadata": { "id": "xcdaIbYVT4mo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "feature_specification = {\n", " 'colors': tf.io.RaggedFeature(tf.string),\n", @@ -787,7 +843,9 @@ "metadata": { "id": "fBml1m2G2vO9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Helper function used to print datasets in the examples below.\n", "def print_dictionary_dataset(dataset):\n", @@ -814,7 +872,9 @@ "metadata": { "id": "BuelF_y2mEq9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "dataset = tf.data.Dataset.from_tensor_slices(feature_tensors)\n", "print_dictionary_dataset(dataset)" @@ -846,7 +906,9 @@ "metadata": { "id": "lk62aRz63IZn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "batched_dataset = dataset.batch(2)\n", "print_dictionary_dataset(batched_dataset)" @@ -867,7 +929,9 @@ "metadata": { "id": "CxLlaPw_5Je4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "unbatched_dataset = batched_dataset.unbatch()\n", "print_dictionary_dataset(unbatched_dataset)" @@ -890,7 +954,9 @@ "metadata": { "id": "PYnhERwh3_mf" }, - "outputs": [], + "outputs": [ + + ], "source": [ "non_ragged_dataset = tf.data.Dataset.from_tensor_slices([1, 5, 3, 2, 8])\n", "non_ragged_dataset = non_ragged_dataset.map(tf.range)\n", @@ -917,7 +983,9 @@ "metadata": { "id": "Ios1GuG-pf9U" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def transform_lengths(features):\n", " return {\n", @@ -944,7 +1012,9 @@ "metadata": { "id": "PfyxgVaj_8tl" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def make_palindrome(x, axis):\n", @@ -957,7 +1027,9 @@ "metadata": { "id": "vcZdzvEnDEt0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "make_palindrome(tf.constant([[1, 2], [3, 4], [5, 6]]), axis=1)" ] @@ -968,7 +1040,9 @@ "metadata": { "id": "4WfCMIgdDMxj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "make_palindrome(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]), axis=1)" ] @@ -988,7 +1062,9 @@ 
"metadata": { "id": "k6-hkhdDBk6G" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function(\n", " input_signature=[tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)])\n", @@ -1015,7 +1091,9 @@ "metadata": { "id": "yyJeXJ4wFWox" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def increment(x):\n", @@ -1052,7 +1130,9 @@ "metadata": { "id": "D-Dg9w7Je5pU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tempfile\n", "\n", @@ -1077,7 +1157,9 @@ "metadata": { "id": "Sfem1ESrdGzX" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class CustomModule(tf.Module):\n", " def __init__(self, variable_value):\n", @@ -1127,7 +1209,9 @@ "metadata": { "id": "skScd37P-PVu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n", "y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]])\n", @@ -1149,7 +1233,9 @@ "metadata": { "id": "IYybEEWc-PVx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n", "print(x + 3)" @@ -1192,7 +1278,9 @@ "metadata": { "id": "MbSRZRDz-PV1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "queries = tf.ragged.constant(\n", " [['Who', 'is', 'George', 'Washington'],\n", @@ -1206,7 +1294,9 @@ "metadata": { "id": "2HRs2xhh-vZE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(queries[1]) # A single query" ] @@ -1217,7 +1307,9 @@ "metadata": { "id": "EFfjZV7YA3UH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(queries[1, 2]) # A single word" ] @@ -1228,7 +1320,9 @@ "metadata": { "id": "VISRPQSdA3xn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(queries[1:]) # Everything but the first row" ] @@ -1239,7 +1333,9 @@ "metadata": { "id": "J1PpSyKQBMng" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(queries[:, :3]) # The first 3 words of each query" ] @@ -1250,7 +1346,9 @@ "metadata": { "id": "ixrhHmJBeidy" }, - "outputs": [], + "outputs": [ + + ], 
"source": [ "print(queries[:, -2:]) # The last 2 words of each query" ] @@ -1270,7 +1368,9 @@ "metadata": { "id": "8VbqbKcE-PV6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.ragged.constant([[[1, 2, 3], [4]],\n", " [[5], [], [6]],\n", @@ -1284,7 +1384,9 @@ "metadata": { "id": "f9WPVWf4grVp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rt[1]) # Second row (2D RaggedTensor)" ] @@ -1295,7 +1397,9 @@ "metadata": { "id": "ad8FGJoABjQH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rt[3, 0]) # First element of fourth row (1D Tensor)" ] @@ -1306,7 +1410,9 @@ "metadata": { "id": "MPPr-a-bBjFE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rt[:, 1:3]) # Items 1-3 of each row (3D RaggedTensor)" ] @@ -1317,7 +1423,9 @@ "metadata": { "id": "6SIDeoIUBi4z" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rt[:, -1:]) # Last item of each row (3D RaggedTensor)" ] @@ -1348,7 +1456,9 @@ "metadata": { "id": "INnfmZGcBoU_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ragged_sentences = tf.ragged.constant([\n", " ['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']])" @@ -1360,7 +1470,9 @@ "metadata": { "id": "__iJ4iXtkGOx" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# RaggedTensor -> Tensor\n", "print(ragged_sentences.to_tensor(default_value='', shape=[None, 10]))" @@ -1372,7 +1484,9 @@ "metadata": { "id": "-rfiyYqne8QN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Tensor -> RaggedTensor\n", "x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]]\n", @@ -1385,7 +1499,9 @@ "metadata": { "id": "41WAZLXNnbwH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#RaggedTensor -> SparseTensor\n", "print(ragged_sentences.to_sparse())" @@ -1397,7 +1513,9 @@ "metadata": { "id": "S8MkYo2hfVhj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# SparseTensor -> RaggedTensor\n", "st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]],\n", @@ -1428,7 +1546,9 @@ "metadata": { "id": 
"uMm1WMkc-PV_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])\n", "print(\"Python list:\", rt.to_list())\n", @@ -1470,7 +1590,9 @@ "metadata": { "id": "btGDjT4uNgQy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant([[1, 2], [3, 4], [5, 6]])\n", "x.shape # shape of a tf.tensor" @@ -1482,7 +1604,9 @@ "metadata": { "id": "__OgvmrGPEjq" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.ragged.constant([[1], [2, 3], [], [4]])\n", "rt.shape # shape of a tf.RaggedTensor" @@ -1514,7 +1638,9 @@ "metadata": { "id": "kWJ7Cn1EQTD_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant([['a', 'b'], ['c', 'd'], ['e', 'f']])\n", "tf.shape(x)" @@ -1535,7 +1661,9 @@ "metadata": { "id": "nZc2wqgQQUFU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.ragged.constant([[1], [2, 3, 4], [], [5, 6]])\n", "rt_shape = tf.shape(rt)\n", @@ -1559,7 +1687,9 @@ "metadata": { "id": "pclAODLXT6Gr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(f\"tf.reshape(x, rt_shape) = {tf.reshape(x, rt_shape)}\")\n", "print(f\"tf.zeros(rt_shape) = {tf.zeros(rt_shape)}\")\n", @@ -1584,7 +1714,9 @@ "metadata": { "id": "MzQvPhsxS6HN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt_shape[0]" ] @@ -1604,7 +1736,9 @@ "metadata": { "id": "HgGMk0LeTGik" }, - "outputs": [], + "outputs": [ + + ], "source": [ "try:\n", " rt_shape[1]\n", @@ -1627,7 +1761,9 @@ "metadata": { "id": "APT72EaBU70t" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt_shape[:1]" ] @@ -1665,7 +1801,9 @@ "metadata": { "id": "NSRgD667WwIZ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.experimental.DynamicRaggedShape(\n", " row_partitions=[tf.experimental.RowPartition.from_row_lengths([5, 3, 2])],\n", @@ -1687,7 +1825,9 @@ "metadata": { "id": "gMxCzADUYIjY" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.experimental.DynamicRaggedShape.from_lengths([4, (2, 1, 0, 8), 
12])" ] @@ -1733,7 +1873,9 @@ "metadata": { "id": "0n095XdR-PWU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (2D ragged): 2 x (num_rows)\n", "# y (scalar)\n", @@ -1749,7 +1891,9 @@ "metadata": { "id": "0SVYk5AP-PWW" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (2d ragged): 3 x (num_rows)\n", "# y (2d tensor): 3 x 1\n", @@ -1768,7 +1912,9 @@ "metadata": { "id": "MsfBMD80s8Ux" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (3d ragged): 2 x (r1) x 2\n", "# y (2d ragged): 1 x 1\n", @@ -1787,7 +1933,9 @@ "metadata": { "id": "rEj5QVfnva0t" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (3d ragged): 2 x (r1) x (r2) x 1\n", "# y (1d tensor): 3\n", @@ -1825,7 +1973,9 @@ "metadata": { "id": "UpI0FlfL4Eim" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (2d ragged): 3 x (r1)\n", "# y (2d tensor): 3 x 4 # trailing dimensions do not match\n", @@ -1843,7 +1993,9 @@ "metadata": { "id": "qGq1zOT4zMoc" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (2d ragged): 3 x (r1)\n", "# y (2d ragged): 3 x (r2) # ragged dimensions do not match.\n", @@ -1861,7 +2013,9 @@ "metadata": { "id": "CvLae5vMqeji" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# x (3d ragged): 3 x (r1) x 2\n", "# y (3d ragged): 3 x (r1) x 3 # trailing dimensions do not match\n", @@ -1908,7 +2062,9 @@ "metadata": { "id": "MrLgMu0gPuo-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -1950,7 +2106,9 @@ "metadata": { "id": "yy3IGT2a-PWb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=tf.RaggedTensor.from_row_splits(\n", @@ -1977,7 +2135,9 @@ "metadata": { "id": "AKYhtFcT-PWd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.RaggedTensor.from_nested_row_splits(\n", " flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],\n", @@ -2002,7 +2162,9 @@ "metadata": { "id": "BXp-Tt2bClem" }, 
- "outputs": [], + "outputs": [ + + ], "source": [ "# shape = [batch, (paragraph), (sentence), (word)]\n", "conversations = tf.ragged.constant(\n", @@ -2020,7 +2182,9 @@ "metadata": { "id": "DZUMrgxXFd5s" }, - "outputs": [], + "outputs": [ + + ], "source": [ "assert conversations.ragged_rank == len(conversations.nested_row_splits)\n", "conversations.ragged_rank # Number of partitioned dimensions." @@ -2032,7 +2196,9 @@ "metadata": { "id": "xXLSNpS0Fdvp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "conversations.flat_values.numpy()" ] @@ -2056,7 +2222,9 @@ "metadata": { "id": "z2sHwHdy-PWg" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]],\n", @@ -2087,7 +2255,9 @@ "metadata": { "id": "70q1aCKwySgS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rt = tf.RaggedTensor.from_uniform_row_length(\n", " values=tf.RaggedTensor.from_row_splits(\n", @@ -2102,7 +2272,9 @@ ], "metadata": { "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "ragged_tensor.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/saved_model.ipynb b/site/ko/guide/saved_model.ipynb index f06b36c5d2..6baab54344 100644 --- a/site/ko/guide/saved_model.ipynb +++ b/site/ko/guide/saved_model.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "FlUw7tSKbtg4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -47,10 +49,12 @@ }, "source": [ "\n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + "GitHub에서 소스 보기 노트북 다운로드하기
" ] }, @@ -79,8 +83,24 @@ "id": "9SuIC7FiI9g8" }, "source": [ - "## Keras에서 SavedModel 생성하기\n", - "\n", + "## Keras에서 SavedModel 생성하기" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "AtSmftAvhJvE" + }, + "source": [ + "사용이 중단됨: Keras 객체의 경우, [여기](https://www.tensorflow.org/guide/keras/save_and_serialize) 가이드에 설명된 대로 새로운 상위 수준 `.keras` 형식과 `tf.keras.Model.export`를 사용하는 것이 좋습니다. 기존 코드에는 하위 수준 SavedModel 형식이 계속 지원됩니다" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "eLSOptpYhJvE" + }, + "source": [ "빠른 도입을 위해 이 섹션에서는 사전에 훈련한 Keras 모델을 내보내고 이와 함께 이미지 분류 요청을 제공합니다. 가이드의 나머지 부분에서는 세부 정보를 채우고 SavedModels를 생성하는 다른 방법에 대해 논의합니다." ] }, @@ -90,7 +110,9 @@ "metadata": { "id": "Le5OB-fBHHW7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import os\n", "import tempfile\n", @@ -108,7 +130,9 @@ "metadata": { "id": "wlho4HEWoHUT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "physical_devices = tf.config.list_physical_devices('GPU')\n", "for device in physical_devices:\n", @@ -121,7 +145,9 @@ "metadata": { "id": "SofdPKo0G8Lb" }, - "outputs": [], + "outputs": [ + + ], "source": [ "file = tf.keras.utils.get_file(\n", " \"grace_hopper.jpg\",\n", @@ -149,7 +175,9 @@ "metadata": { "id": "JhVecdzJTsKE" }, - "outputs": [], + "outputs": [ + + ], "source": [ "labels_path = tf.keras.utils.get_file(\n", " 'ImageNetLabels.txt',\n", @@ -163,7 +191,9 @@ "metadata": { "id": "aEHSYjW6JZHV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "pretrained_model = tf.keras.applications.MobileNet()\n", "result_before_save = pretrained_model(x)\n", @@ -188,7 +218,9 @@ "metadata": { "id": "8nfznDmHCW6F" }, - "outputs": [], + "outputs": [ + + ], "source": [ "mobilenet_save_path = os.path.join(tmpdir, \"mobilenet/1/\")\n", "tf.saved_model.save(pretrained_model, mobilenet_save_path)" @@ -202,7 +234,7 @@ "source": [ "저장 경로의 마지막 경로 요소(여기서는 `1/`)는 모델의 버전 번호인 텐서플로 서빙(TensorFlow Serving) 컨벤션을 따릅니다 - 텐서플로 서빙과 같은 도구가 최신 모델을 구분할 수 있게 합니다.\n", "\n", - "SavedModel은 
시그니처(signatures)라 불리는 이름있는 함수를 가집니다. 케라스 모델은 `serving_default` 시그니처 키를 사용하여 정방향 패스(forward pass)를 내보냅니다. [SavedModel 커맨드 라인 인터페이스](#details_of_the_savedmodel_command_line_interface)는 디스크에 저장된 SavedModel을 검사할 때 유용합니다." + "파이썬에서 `tf.saved_model.load`로 SavedModel을 다시 불러오고 해군대장 호퍼(Admiral Hopper)의 이미지가 어떻게 분류되는지 볼 수 있습니다." ] }, { @@ -211,7 +243,9 @@ "metadata": { "id": "NP2UpVFRV7N_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loaded = tf.saved_model.load(mobilenet_save_path)\n", "print(list(loaded.signatures.keys())) # [\"serving_default\"]" @@ -232,7 +266,9 @@ "metadata": { "id": "ChFLpegYfQGR" }, - "outputs": [], + "outputs": [ + + ], "source": [ "infer = loaded.signatures[\"serving_default\"]\n", "print(infer.structured_outputs)" @@ -253,7 +289,9 @@ "metadata": { "id": "9WjGEaS3XfX7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]\n", "\n", @@ -292,7 +330,9 @@ "metadata": { "id": "6u3YZuYZXyTO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!ls {mobilenet_save_path}" ] @@ -314,7 +354,9 @@ "metadata": { "id": "Pus0dOYTYXbI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve" ] @@ -334,7 +376,9 @@ "metadata": { "id": "EDYqhDlNZAC2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "!ls {mobilenet_save_path}/variables" ] @@ -347,7 +391,9 @@ "source": [ "`assets` 디렉토리에는 텐서플로 그래프(TensorFlow graph)에서 사용되는 파일들, 예를 들어 상수 테이블을 초기화하는 데 사용되는 텍스트 파일들이 있습니다. 이번 예제에서는 사용되지 않습니다.\n", "\n", - "SavedModel은 텐서플로 그래프에서 사용되지 않는 파일을 위해 `assets.extra` 디렉토리를 가질 수 있는데, 예를 들면 사용자가 SavedModel과 함께 사용할 파일입니다. 텐서플로 자체는 이 디렉토리를 사용하지 않습니다." + "SavedModel은 텐서플로 그래프에서 사용되지 않는 파일을 위해 `assets.extra` 디렉토리를 가질 수 있는데, 예를 들면 사용자가 SavedModel과 함께 사용할 파일입니다. 
텐서플로 자체는 이 디렉토리를 사용하지 않습니다.\n", + "\n", + "`fingerprint.pb` 파일에는 SavedModel의 콘텐츠를 고유하게 식별하는 여러 64비트 해시로 구성된 SavedModel의 [fingerprint](https://en.wikipedia.org/wiki/Fingerprint_(computing))가 포함되어 있습니다. Fingerprint API는 현재 실험적이지만 `tf.saved_model.experimental.read_fingerprint`를 사용하여 SavedModel fingerprint를 `tf.saved_model.experimental.Fingerprint` 객체로 읽을 수 있습니다." ] }, { @@ -369,7 +415,9 @@ "metadata": { "id": "6EPvKiqXMm3d" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class CustomModule(tf.Module):\n", "\n", @@ -410,7 +458,9 @@ "metadata": { "id": "85PUO9iWH7xn" }, - "outputs": [], + "outputs": [ + + ], "source": [ "module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')\n", "module(tf.constant(0.))\n", @@ -442,7 +492,9 @@ "metadata": { "id": "EMASjADPxPso" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imported = tf.saved_model.load(module_no_signatures_path)\n", "assert imported(tf.constant(3.)).numpy() == 3\n", @@ -483,7 +535,9 @@ "metadata": { "id": "PEkQNarJ-7nT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "optimizer = tf.keras.optimizers.SGD(0.05)\n", "\n", @@ -502,7 +556,9 @@ "metadata": { "id": "p41NM6fF---3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "for _ in range(10):\n", " # \"v\" approaches 5, \"loss\" approaches 0\n", @@ -532,7 +588,9 @@ "metadata": { "id": "Y6EUFdY8_PRD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "loaded = tf.saved_model.load(mobilenet_save_path)\n", "print(\"MobileNet has {} trainable variables: {}, ...\".format(\n", @@ -546,7 +604,9 @@ "metadata": { "id": "B-mQJ8iP_R0h" }, - "outputs": [], + "outputs": [ + + ], "source": [ "trainable_variable_ids = {id(v) for v in loaded.trainable_variables}\n", "non_trainable_variables = [v for v in loaded.variables\n", @@ -577,7 +637,9 @@ "metadata": { "id": "h-IB5Xa0NxLa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "assert len(imported.signatures) == 0" ] @@ -597,7 +659,9 @@ "metadata": { "id": "_pAdgIORR2yH" }, - 
"outputs": [], + "outputs": [ + + ], "source": [ "module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')\n", "call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))\n", @@ -610,7 +674,9 @@ "metadata": { "id": "nAzRHR0UT4hv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imported_with_signatures = tf.saved_model.load(module_with_signature_path)\n", "list(imported_with_signatures.signatures.keys())\n" @@ -631,7 +697,9 @@ "metadata": { "id": "6VYAiQmLUiox" }, - "outputs": [], + "outputs": [ + + ], "source": [ "module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')\n", "signatures = {\"serving_default\": call,\n", @@ -646,7 +714,9 @@ "metadata": { "id": "8IPx_0RWEx07" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)\n", "list(imported_with_multiple_signatures.signatures.keys())" @@ -667,14 +737,16 @@ "metadata": { "id": "ACKPl1X8G1gw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class CustomModuleWithOutputName(tf.Module):\n", " def __init__(self):\n", " super(CustomModuleWithOutputName, self).__init__()\n", " self.v = tf.Variable(1.)\n", "\n", - " @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])\n", + " @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])\n", " def __call__(self, x):\n", " return {'custom_output_name': x * self.v}\n", "\n", @@ -691,12 +763,46 @@ "metadata": { "id": "1yGVy4MuH-V0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "imported_with_output_name = tf.saved_model.load(module_output_path)\n", "imported_with_output_name.signatures['serving_default'].structured_outputs" ] }, + { + "cell_type": "markdown", + "metadata": { + "id": "Q4bCK55x1IBW" + }, + "source": [ + "## 프로토 분할\n", + "\n", + "참고: 이 기능은 TensorFlow 2.15 릴리스에 포함될 예정입니다. 
이 기능은 현재 나이틀리 빌드에서 사용할 수 있으며, `pip install tf-nightly`로 설치할 수 있습니다.\n", + "\n", + "protobuf 구현의 한계로 인해 프로토 크기는 2GB를 초과할 수 없습니다. 이로 인해 매우 큰 모델을 저장하려고 하면 다음과 같은 오류가 발생할 수 있습니다:\n", + "\n", + "```\n", + "ValueError: Message tensorflow.SavedModel exceeds maximum protobuf size of 2GB: ...\n", + "```\n", + "\n", + "```\n", + "google.protobuf.message.DecodeError: Error parsing message as the message exceeded the protobuf limit with type 'tensorflow.GraphDef'\n", + "```\n", + "\n", + "2GB 제한을 초과하는 모델을 저장하려면 새로운 프로토 분할 옵션을 사용하여 저장해야 합니다:\n", + "\n", + "```python\n", + "tf.saved_model.save(\n", + " ...,\n", + " options=tf.saved_model.SaveOptions(experimental_image_format=True)\n", + ")\n", + "```\n", + "\n", + "자세한 내용은 [프로토콜 스플리터/병합 라이브러리 가이드](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/proto_splitter/in-depth-guide.md)에서 확인할 수 있습니다." + ] + }, { "cell_type": "markdown", "metadata": { @@ -784,8 +890,7 @@ "\n", "`MetaGraphDef`가 태그 세트에 *여러 개의* 태그를 가지고 있는 경우, 모든 태그를 지정해야 하며, 각 태그는 쉼표로 구분해야 합니다. 예를 들어:\n", "\n", - "
\n",
-        "$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu\n",
+        "
$ saved_model_cli show --dir /tmp/saved_model_dir --tag_set serve,gpu\n",
         "
\n", "\n", "특정 `SignatureDef`에 대한 모든 입력 및 출력 텐서 정보(TensorInfo)를 표시하려면 `SignatureDef` 키를 `signature_def` 옵션으로 전달하십시오. 이것은 나중에 계산 그래프를 실행하기 위해 입력 텐서의 텐서 키 값, 크기 및 데이터 타입을 알고자 할 때 매우 유용합니다. 예를 들어:\n", @@ -808,8 +913,7 @@ "\n", "SavedModel에 사용 가능한 모든 정보를 표시하려면 `--all` 옵션을 사용하십시오. 예를 들어:\n", "\n", - "
\n",
-        "$ saved_model_cli show --dir /tmp/saved_model_dir --all\n",
+        "
$ saved_model_cli show --dir /tmp/saved_model_dir --all\n",
         "MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:\n",
         "\n",
         "signature_def['classify_x2_to_y3']:\n",
@@ -920,7 +1024,6 @@
   ],
   "metadata": {
     "colab": {
-      "collapsed_sections": [],
       "name": "saved_model.ipynb",
       "toc_visible": true
     },
diff --git a/site/ko/guide/tensor.ipynb b/site/ko/guide/tensor.ipynb
index c84bc0de84..f05ef9a2a7 100644
--- a/site/ko/guide/tensor.ipynb
+++ b/site/ko/guide/tensor.ipynb
@@ -16,7 +16,9 @@
         "cellView": "form",
         "id": "tuOe1ymfHZPu"
       },
-      "outputs": [],
+      "outputs": [
+
+      ],
       "source": [
         "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
         "# you may not use this file except in compliance with the License.\n",
@@ -47,10 +49,12 @@
       },
       "source": [
         "\n",
-        "  \n",
-        "  \n",
-        "  \n",
-        "  \n",
+        "  \n",
+        "  \n",
+        "  \n",
+        "  \n",
         "
TensorFlow.org에서 보기 Google Colab에서 실행GitHub에서소스 보기노트북 다운로드 TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + "GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -60,7 +64,9 @@ "metadata": { "id": "AL2hzxorJiWy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "import numpy as np" @@ -105,7 +111,9 @@ "metadata": { "id": "d5JcgLFR6gHv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# This will be an int32 tensor by default; see \"dtypes\" below.\n", "rank_0_tensor = tf.constant(4)\n", @@ -127,7 +135,9 @@ "metadata": { "id": "oZos8o_R6oE7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Let's make this a float tensor.\n", "rank_1_tensor = tf.constant([2.0, 3.0, 4.0])\n", @@ -149,7 +159,9 @@ "metadata": { "id": "cnOIA_xb6u0M" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# If you want to be specific, you can set the dtype (see below) at creation time\n", "rank_2_tensor = tf.constant([[1, 2],\n", @@ -171,9 +183,13 @@ " 행렬, 형상: [3, 2] \n", "\n", "\n", - " \"A \n", - " \"The \n", - " \"A \n", + " \"스칼라,\n", + "\n", + "\n", + " \"3개의\n", + "\n", + " \"각\n", + "\n", "\n", "\n" ] @@ -193,7 +209,9 @@ "metadata": { "id": "sesW7gw6JkXy" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# There can be an arbitrary number of\n", "# axes (sometimes called \"dimensions\")\n", @@ -230,10 +248,15 @@ "\n", "\n", "\n", - " \n", - " \n", - " \n", + " \n", + "\n", + " \n", + "\n", + "\n", + " \n", + "\n", "\n", + "\n", "" ] }, @@ -252,7 +275,9 @@ "metadata": { "id": "J5u6_6ZYaS7B" }, - "outputs": [], + "outputs": [ + + ], "source": [ "np.array(rank_2_tensor)" ] @@ -263,7 +288,9 @@ "metadata": { "id": "c6Taz2gIaZeo" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rank_2_tensor.numpy()" ] @@ -300,12 +327,14 @@ "metadata": { "id": "-DTkjwDOIIDa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "a = tf.constant([[1, 2],\n", " [3, 4]])\n", "b = tf.constant([[1, 1],\n", - " [1, 1]]) # Could have also said `tf.ones([2,2])`\n", + " [1, 1]]) # Could have also said `tf.ones([2,2], dtype=tf.int32)`\n", "\n", "print(tf.add(a, b), \"\\n\")\n", "print(tf.multiply(a, b), 
\"\\n\")\n", @@ -318,7 +347,9 @@ "metadata": { "id": "2smoWeUz-N2q" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(a + b, \"\\n\") # element-wise addition\n", "print(a * b, \"\\n\") # element-wise multiplication\n", @@ -340,7 +371,9 @@ "metadata": { "id": "Gp4WUYzGIbnv" }, - "outputs": [], + "outputs": [ + + ], "source": [ "c = tf.constant([[4.0, 5.0], [10.0, 1.0]])\n", "\n", @@ -367,7 +400,9 @@ "metadata": { "id": "_wch0N8xNEt-" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.convert_to_tensor([1,2,3])" ] @@ -378,7 +413,9 @@ "metadata": { "id": "ngqIeWYeNJVI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.reduce_max([1,2,3])" ] @@ -389,7 +426,9 @@ "metadata": { "id": "ThVMxqbVNOq3" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.reduce_max(np.array([1,2,3]))" ] @@ -441,7 +480,9 @@ "metadata": { "id": "RyD3yewUKdnK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rank_4_tensor = tf.zeros([3, 2, 4, 5])" ] @@ -457,8 +498,10 @@ " 순위-4 텐서, 형상: [3, 2, 4, 5] \n", "\n", "\n", - " \"A \n", - " \"A \n", + " \"텐서\n", + "\n", + " \"4축\n", + "\n", " \n", "\n" ] @@ -469,7 +512,9 @@ "metadata": { "id": "MHm9vSqogsBk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Type of every element:\", rank_4_tensor.dtype)\n", "print(\"Number of axes:\", rank_4_tensor.ndim)\n", @@ -494,7 +539,9 @@ "metadata": { "id": "Ptq0-y6APCpD" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.rank(rank_4_tensor)" ] @@ -505,7 +552,9 @@ "metadata": { "id": "HslrDOEBPICN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.shape(rank_4_tensor)" ] @@ -523,7 +572,8 @@ "일반적인 축 순서\n", "\n", "\n", - " \"Keep \n", + " \"각\n", + "\n", "\n", "" ] @@ -558,7 +608,9 @@ "metadata": { "id": "SQ-CrJxLXTIM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])\n", "print(rank_1_tensor.numpy())" @@ -579,7 +631,9 @@ "metadata": { "id": "n6tqHciOWMt5" }, - "outputs": [], + 
"outputs": [ + + ], "source": [ "print(\"First:\", rank_1_tensor[0].numpy())\n", "print(\"Second:\", rank_1_tensor[1].numpy())\n", @@ -601,7 +655,9 @@ "metadata": { "id": "giVPPcfQX-cu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Everything:\", rank_1_tensor[:].numpy())\n", "print(\"Before 4:\", rank_1_tensor[:4].numpy())\n", @@ -637,7 +693,9 @@ "metadata": { "id": "Tc5X_WlsZXmd" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rank_2_tensor.numpy())" ] @@ -657,7 +715,9 @@ "metadata": { "id": "PvILXc1PjqTM" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Pull out a single value from a 2-rank tensor\n", "print(rank_2_tensor[1, 1].numpy())" @@ -678,7 +738,9 @@ "metadata": { "id": "YTqNqsfJkJP_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Get row and column tensors\n", "print(\"Second row:\", rank_2_tensor[1, :].numpy())\n", @@ -704,7 +766,9 @@ "metadata": { "id": "GuLoMoCVSLxK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rank_3_tensor[:, :, 4])" ] @@ -720,8 +784,9 @@ "배치에서 각 예의 모든 위치에서 마지막 특성 선택하기\n", "\n", "\n", - " \"A \n", - " \"The \n", + " \"마지막\n", + " \"선택한\n", + "\n", "\n", "" ] @@ -752,7 +817,9 @@ "metadata": { "id": "EMeTtga5Wq8j" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Shape returns a `TensorShape` object that shows the size along each axis\n", "x = tf.constant([[1], [2], [3]])\n", @@ -765,7 +832,9 @@ "metadata": { "id": "38jc2RXziT3W" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# You can convert this object into a Python list, too\n", "print(x.shape.as_list())" @@ -786,7 +855,9 @@ "metadata": { "id": "pa9JCgMLWy87" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# You can reshape a tensor to a new shape.\n", "# Note that you're passing in a list\n", @@ -799,7 +870,9 @@ "metadata": { "id": "Mcq7iXOkW3LK" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(x.shape)\n", "print(reshaped.shape)" @@ -820,7 +893,9 @@ "metadata": { "id": 
"7kMfM0RpUgI8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(rank_3_tensor)" ] @@ -840,7 +915,9 @@ "metadata": { "id": "COnHEPuaWDQp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# A `-1` passed in the `shape` argument says \"Whatever fits\".\n", "print(tf.reshape(rank_3_tensor, [-1]))" @@ -863,7 +940,9 @@ "metadata": { "id": "zP2Iqc7zWu_J" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.reshape(rank_3_tensor, [3*2, 5]), \"\\n\")\n", "print(tf.reshape(rank_3_tensor, [3, -1]))" @@ -878,9 +957,12 @@ "\n", "\n", "\n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", "\n", "
몇 가지 좋은 재구성
\"A \"The \"The \"3x2x5\n", + " \"동일한\n", + " \"동일한\n", + "
\n" ] @@ -902,7 +984,9 @@ "metadata": { "id": "I9qDL_8u7cBH" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Bad examples: don't do this\n", "\n", @@ -926,11 +1010,14 @@ }, "source": [ "\n", - "\n", + "\n", "\n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", "\n", "
몇 가지 잘못된 재구성몇 가지 안 좋은 재구성
\"You \"Anything \"The \"축은\n", + " \"데이터\n", + " \"새\n", + "
" ] @@ -945,8 +1032,8 @@ "\n", "[tf.RaggedTensor](#ragged_tensors)를 제외하고 이러한 형상은 TensorFlow의 상징적인 그래프 빌딩 API 컨텍스트에서만 발생합니다.\n", "\n", - "- tf.function\n", - "- keras 함수형 API\n" + "- [tf.function](function.ipynb)\n", + "- [케라스 함수형 API](https://www.tensorflow.org/guide/keras/functional)\n" ] }, { @@ -961,7 +1048,7 @@ "\n", "Python 객체에서 `tf.Tensor`를 만들 때 선택적으로 데이터 유형을 지정할 수 있습니다.\n", "\n", - "그렇지 않으면, TensorFlow는 데이터를 나타낼 수 있는 데이터 유형을 선택합니다. TensorFlow는 Python 정수를 `tf.int32`로, 파이썬 부동 소수점 숫자를 `tf.float32`로 변환합니다. 그렇지 않으면, TensorFlow는 NumPy가 배열로 변환할 때 사용하는 것과 같은 규칙을 사용합니다.\n", + "그렇지 않으면, TensorFlow는 데이터를 나타낼 수 있는 데이터 유형을 선택합니다. TensorFlow는 Python 정수를 `tf.int32`로, Python 부동 소수점 숫자를 `tf.float32`로 변환합니다. 그렇지 않으면, TensorFlow는 NumPy가 배열로 변환할 때 사용하는 것과 같은 규칙을 사용합니다.\n", "\n", "유형별로 캐스팅할 수 있습니다." ] @@ -972,7 +1059,9 @@ "metadata": { "id": "5mSTDWbelUvu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "the_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64)\n", "the_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16)\n", @@ -1000,7 +1089,9 @@ "metadata": { "id": "P8sypqmagHQN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant([1, 2, 3])\n", "\n", @@ -1029,7 +1120,9 @@ "metadata": { "id": "6sGmkPg3XANr" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# These are the same computations\n", "x = tf.reshape(x,[3,1])\n", @@ -1050,7 +1143,8 @@ " 추가 시 브로드캐스팅: [1, 4][3, 1]의 곱하기는 [3,4]입니다.\n", "\n", "\n", - " \"Adding \n", + " \"4x1\n", + "\n", "\n", "\n" ] @@ -1070,7 +1164,9 @@ "metadata": { "id": "elrF6v63igY8" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x_stretch = tf.constant([[1, 1, 1, 1],\n", " [2, 2, 2, 2],\n", @@ -1089,7 +1185,7 @@ "id": "14KobqYu85gi" }, "source": [ - "대부분의 경우 브로드캐스팅은 브로드캐스트 연산으로 메모리에서 확장된 텐서를 구체화하지 않으므로 시간과 공간 효율적입니다.\n", + "대부분의 경우 브로드캐스팅은 브로드캐스트 연산으로 메모리에 확장된 텐서를 구체화하지 않으므로 시간과 공간 효율적입니다.\n", "\n", "`tf.broadcast_to`를 사용하여 브로드캐스팅이 어떤 모습인지 알 수 있습니다." 
] @@ -1100,7 +1196,9 @@ "metadata": { "id": "GW2Q59_r8hZ6" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))" ] @@ -1155,7 +1253,8 @@ " `tf.RaggedTensor`, 형상: [4, None] \n", "\n", "\n", - " \"A \n", + " \"2축\n", + "\n", "\n", "" ] @@ -1166,7 +1265,9 @@ "metadata": { "id": "VsbTjoFfNVBF" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ragged_list = [\n", " [0, 1, 2, 3],\n", @@ -1181,7 +1282,9 @@ "metadata": { "id": "p4xKTo57tutG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "try:\n", " tensor = tf.constant(ragged_list)\n", @@ -1204,7 +1307,9 @@ "metadata": { "id": "XhF3QV3jiqTj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "ragged_tensor = tf.ragged.constant(ragged_list)\n", "print(ragged_tensor)" @@ -1225,7 +1330,9 @@ "metadata": { "id": "Eo_3wJUWNgqB" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(ragged_tensor.shape)" ] @@ -1258,7 +1365,9 @@ "metadata": { "id": "sBosmM8MkIh4" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Tensors can be strings, too here is a scalar string.\n", "scalar_string_tensor = tf.constant(\"Gray wolf\")\n", @@ -1285,7 +1394,8 @@ " 문자열의 벡터, 형상: [3,] \n", "\n", "\n", - " \"The \n", + " \"문자열\n", + "\n", "\n", "" ] @@ -1296,7 +1406,9 @@ "metadata": { "id": "41Dv2kL9QrtO" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# If you have three string tensors of different lengths, this is OK.\n", "tensor_of_strings = tf.constant([\"Gray wolf\",\n", @@ -1330,7 +1442,9 @@ "metadata": { "id": "GTgL53jxSMd9" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tf.constant(\"🥳👍\")" ] @@ -1350,7 +1464,9 @@ "metadata": { "id": "8k2K0VTFyj8e" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# You can use split to split a string into a set of tensors\n", "print(tf.strings.split(scalar_string_tensor, sep=\" \"))" @@ -1362,7 +1478,9 @@ "metadata": { "id": "zgGAn1dfR-04" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# ...but it 
turns into a `RaggedTensor` if you split up a tensor of strings,\n", "# as each string might be split into a different number of parts.\n", @@ -1380,7 +1498,8 @@ " 세 개의 분할된 문자열, 형상: [3, None] \n", "\n", "\n", - " \"Splitting \n", + " \"여러\n", + "\n", "\n", "" ] @@ -1391,7 +1510,7 @@ "id": "st9OxrUxWSKY" }, "source": [ - "`tf.string.to_number`:" + "`tf.strings.to_number`:" ] }, { @@ -1400,7 +1519,9 @@ "metadata": { "id": "3nRtx3X9WRfN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "text = tf.constant(\"1 10 100\")\n", "print(tf.strings.to_number(tf.strings.split(text, \" \")))" @@ -1421,7 +1542,9 @@ "metadata": { "id": "fo8BjmH7gyTj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "byte_strings = tf.strings.bytes_split(tf.constant(\"Duck\"))\n", "byte_ints = tf.io.decode_raw(tf.constant(\"Duck\"), tf.uint8)\n", @@ -1435,7 +1558,9 @@ "metadata": { "id": "uSQnZ7d1jCSQ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Or split it up as unicode and then decode it\n", "unicode_bytes = tf.constant(\"アヒル 🦆\")\n", @@ -1478,7 +1603,8 @@ " `tf.SparseTensor`, 형상: [3, 4] \n", "\n", "\n", - " \"An \n", + " \"셀\n", + "\n", "\n", "" ] @@ -1489,7 +1615,9 @@ "metadata": { "id": "B9nbO1E2kSUN" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Sparse tensors store values by index in a memory-efficient manner\n", "sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],\n", diff --git a/site/ko/guide/tf_numpy.ipynb b/site/ko/guide/tf_numpy.ipynb index 35ce6b5f3c..8732037558 100644 --- a/site/ko/guide/tf_numpy.ipynb +++ b/site/ko/guide/tf_numpy.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "sY3Ffd83hK3b" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -48,9 +50,10 @@ "source": [ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 Google Colab에서 실행하기\n", + "GitHub에서 소스 보기 노트북 다운로드하기
" ] }, @@ -62,7 +65,7 @@ "source": [ "## 개요\n", "\n", - "TensorFlow는 `tf.experimental.numpy`로 사용할 수 있는 [NumPy API](https://numpy.org/doc/1.16)의 하위 집합을 구현합니다. 이를 통해 TensorFlow에서 NumPy 코드를 빠르게 실행할 수 있으며 TensorFlow의 모든 API에 액세스할 수 있습니다." + "TensorFlow는 `tf.experimental.numpy`로 사용할 수 있는 [NumPy API](https://numpy.org/doc/stable/index.html)의 하위 집합을 구현합니다. 이를 통해 TensorFlow에서 NumPy 코드를 빠르게 실행할 수 있으며 TensorFlow의 모든 API에 액세스할 수 있습니다." ] }, { @@ -80,7 +83,9 @@ "metadata": { "id": "AJR558zjAZQu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", @@ -108,7 +113,9 @@ "metadata": { "id": "TfCyofpFDQxm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tnp.experimental_enable_numpy_behavior()" ] @@ -134,7 +141,7 @@ "\n", "**ND 배열**이라는 `tf.experimental.numpy.ndarray`의 인스턴스는 특정 기기에 배치된 주어진 `dtype`의 다차원 고밀도 배열을 나타냅니다. 이것은 `tf.Tensor`에 대한 별칭입니다. `ndarray.T`, `ndarray.reshape`, `ndarray.ravel` 등과 같은 유용한 메서드를 위한 ND 배열 클래스를 확인해 보세요.\n", "\n", - "먼저 ND 배열 객체를 만든 다음, 다양한 메서드를 호출합니다. " + "먼저 ND 배열 객체를 만든 다음, 다양한 메서드를 호출합니다." 
] }, { @@ -143,7 +150,9 @@ "metadata": { "id": "-BHJjxigJ2H1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Create an ND array and check out different attributes.\n", "ones = tnp.ones([5, 3], dtype=tnp.float32)\n", @@ -162,11 +171,28 @@ { "cell_type": "markdown", "metadata": { - "id": "Mub8-dvJMUr4" + "id": "-BOY8CGRKEhE" }, "source": [ "### 형식 승격\n", "\n", + "TensorFlow의 형식 승격에는 4가지 옵션이 있습니다.\n", + "\n", + "- 기본적으로 TensorFlow는 혼합 유형 연산에서 형식을 승격하는 대신 오류를 발생시킵니다.\n", + "- `tf.numpy.experimental_enable_numpy_behavior()`를 실행하면 `NumPy` 형식 승격 규칙(아래 설명)을 사용하도록 TensorFlow를 전환합니다.\n", + "- TensorFlow 2.15 이후에 두 가지 새로운 옵션이 추가되었습니다(자세한 내용은 [TF NumPy 형식 승격](tf_numpy_type_promotion.ipynb) 참조).\n", + " - `tf.numpy.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")`는 Jax 형식 승격 규칙을 사용합니다.\n", + " - `tf.numpy.experimental_enable_numpy_behavior(dtype_conversion_mode=\"safe\")`는 Jax 형식 승격 규칙을 사용하지만 안전하지 않은 특정 승격은 허용하지 않습니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SXskSHrX5J45" + }, + "source": [ + "#### NumPy 형식 승격\n", + "\n", "TensorFlow NumPy API에는 리터럴을 ND 배열로 변환하고 ND 배열 입력에 대해 형식 승격을 수행하기 위한 잘 정의된 의미 체계가 있습니다. 자세한 내용은 [`np.result_type`](https://numpy.org/doc/1.16/reference/generated/numpy.result_type.html)을 참조하세요." 
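여기서 설명한 NumPy 승격 규칙은 TensorFlow 없이 `np.result_type`만으로도 확인해 볼 수 있습니다. 아래는 순수 NumPy 기반의 간단한 예제 스케치입니다.

```python
import numpy as np

# np.result_type implements the NumPy promotion rules that TF-NumPy
# follows in this mode when given dtype arguments.
print(np.result_type(np.int32, np.float32))  # float64: int/float mixes widen
print(np.result_type(np.int8, np.uint8))     # int16: smallest signed type holding both
print(np.result_type(np.int32, np.int64))    # int64
```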
] }, @@ -185,14 +211,16 @@ "metadata": { "id": "uHmBi4KZI2t1" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Type promotion for operations\")\n", "values = [tnp.asarray(1, dtype=d) for d in\n", " (tnp.int32, tnp.int64, tnp.float32, tnp.float64)]\n", "for i, v1 in enumerate(values):\n", " for v2 in values[i + 1:]:\n", - " print(\"%s + %s => %s\" % \n", + " print(\"%s + %s => %s\" %\n", " (v1.dtype.name, v2.dtype.name, (v1 + v2).dtype.name))" ] }, @@ -211,7 +239,9 @@ "metadata": { "id": "1m1cp8_VooNk" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"Type inference during array creation\")\n", "print(\"tnp.asarray(1).dtype == tnp.%s\" % tnp.asarray(1).dtype.name)\n", @@ -233,7 +263,9 @@ "metadata": { "id": "4gKasnH0j84C" }, - "outputs": [], + "outputs": [ + + ], "source": [ "tnp.experimental_enable_numpy_behavior(prefer_float32=True)\n", "print(\"When prefer_float32 is True:\")\n", @@ -263,7 +295,9 @@ "metadata": { "id": "qlyOShxIO0s2" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tnp.ones([2, 3])\n", "y = tnp.ones([3])\n", @@ -289,7 +323,9 @@ "metadata": { "id": "lRsrtnd3YyMj" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tnp.arange(24).reshape(2, 3, 4)\n", "\n", @@ -309,7 +345,9 @@ "metadata": { "id": "yRAaiGhlaNw7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Mutation is currently not supported\n", "try:\n", @@ -335,7 +373,9 @@ "metadata": { "id": "kR_KCh4kYEhm" }, - "outputs": [], + "outputs": [ + + ], "source": [ "class Model(object):\n", " \"\"\"Model with a dense and a linear layer.\"\"\"\n", @@ -395,7 +435,9 @@ "metadata": { "id": "cMOCgzQmeXRU" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# ND array passed into NumPy function.\n", "np_sum = np.sum(tnp.ones([2, 3]))\n", @@ -412,7 +454,9 @@ "metadata": { "id": "ZaLPjzxft780" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# It is easy to plot ND arrays, given the __array__ interface.\n", "labels = 15 + 2 * tnp.random.randn(1, 
1000)\n", @@ -453,7 +497,9 @@ "metadata": { "id": "Cbw8a3G_WUO7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tnp.ones([2]) + np.ones([2])\n", "print(\"x = %s\\nclass = %s\" % (x, x.__class__))" @@ -487,7 +533,9 @@ "metadata": { "id": "BkHVauKwnky_" }, - "outputs": [], + "outputs": [ + + ], "source": [ "x = tf.constant([1, 2])\n", "print(x)\n", @@ -520,7 +568,9 @@ "metadata": { "id": "-QvxNhrFoz09" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# ND array passed into TensorFlow function.\n", "tf_sum = tf.reduce_sum(tnp.ones([2, 3], tnp.float32))\n", @@ -550,7 +600,9 @@ "metadata": { "id": "T47C9KS8pbsP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def create_batch(batch_size=32):\n", " \"\"\"Creates a batch of input and labels.\"\"\"\n", @@ -587,7 +639,9 @@ "metadata": { "id": "TujVPDFwrdqp" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Computes a batch of jacobians. Each row is the jacobian of an element in the\n", "# batch of outputs w.r.t. the corresponding input batch element.\n", @@ -623,7 +677,9 @@ "metadata": { "id": "05SrUulm1OlL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "inputs, labels = create_batch(512)\n", "print(\"Eager performance\")\n", @@ -657,7 +713,9 @@ "metadata": { "id": "PemSIrs5L-VJ" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def vectorized_per_example_gradients(inputs, labels):\n", @@ -688,7 +746,9 @@ "metadata": { "id": "_QZ5BjJmRAlG" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Benchmark the vectorized computation above and compare with\n", "# unvectorized sequential computation using `tf.map_fn`.\n", @@ -743,7 +803,9 @@ "metadata": { "id": "NDEAd9m9aemS" }, - "outputs": [], + "outputs": [ + + ], "source": [ "print(\"All logical devices:\", tf.config.list_logical_devices())\n", "print(\"All physical devices:\", tf.config.list_physical_devices())\n", @@ -772,7 +834,9 @@ "metadata": { "id": "c7ELvLmnazfV" }, - "outputs": [], + "outputs": [ + + ], 
"source": [ "print(\"Using device: %s\" % str(device))\n", "# Run operations in the `tf.device` scope.\n", @@ -801,7 +865,9 @@ "metadata": { "id": "CCesyidaa-UT" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with tf.device(\"/device:CPU:0\"):\n", " prediction_cpu = tnp.copy(prediction)\n", @@ -831,7 +897,9 @@ "cellView": "code", "id": "RExwjI9_pJG0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def benchmark(f, inputs, number=30, force_gpu_sync=False):\n", " \"\"\"Utility to benchmark `f` on each value in `inputs`.\"\"\"\n", @@ -870,7 +938,9 @@ "metadata": { "id": "p-fs_H1lkLfV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Define a simple implementation of `sigmoid`, and benchmark it using\n", "# NumPy and TensorFlow NumPy for different input sizes.\n", @@ -912,16 +982,15 @@ "source": [ "## 추가 자료\n", "\n", - "- [TensorFlow NumPy: Distributed Image Classification Tutorial](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_Numpy_Distributed_Image_Classification.ipynb)\n", - "- [TensorFlow NumPy: Keras and Distribution Strategy](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_NumPy_Keras_and_Distribution_Strategy.ipynb)\n", - "- [Sentiment Analysis with Trax and TensorFlow NumPy](https://github.com/google/trax/blob/master/trax/tf_numpy_and_keras.ipynb)" + "- [TensorFlow NumPy: 분산 이미지 분류 튜토리얼](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_Numpy_Distributed_Image_Classification.ipynb)\n", + "- [TensorFlow NumPy: 케라스 및 분포 전략](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_NumPy_Keras_and_Distribution_Strategy.ipynb)\n", + "- [Trax 및 TensorFlow NumPy를 사용한 감정 분석](https://github.com/google/trax/blob/master/trax/tf_numpy_and_keras.ipynb)" ] } ], "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [], "name": 
"tf_numpy.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/tf_numpy_type_promotion.ipynb b/site/ko/guide/tf_numpy_type_promotion.ipynb new file mode 100644 index 0000000000..948dec6d40 --- /dev/null +++ b/site/ko/guide/tf_numpy_type_promotion.ipynb @@ -0,0 +1,1193 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "ZjN_IJ8mhJ-4" + }, + "source": [ + "##### Copyright 2023 The TensorFlow Authors." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "sY3Ffd83hK3b" + }, + "outputs": [ + + ], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "03Pw58e6mTHI" + }, + "source": [ + "# TF-NumPy 형식 승격" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "l9nPKvxK-_pM" + }, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + "GitHub에서 소스 보기노트북 다운로드하기
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uma-W5v__DYh" + }, + "source": [ + "## 개요\n", + "\n", + "TensorFlow의 형식 승격에는 4가지 옵션이 있습니다.\n", + "\n", + "- 기본적으로 TensorFlow는 혼합 형식 연산에서 형식을 승격하는 대신 오류를 발생시킵니다.\n", + "- `tf.numpy.experimental_enable_numpy_behavior()`를 실행하면 [NumPy 형식 승격 규칙](https://www.tensorflow.org/guide/tf_numpy#type_promotion)을 사용하도록 TensorFlow를 전환합니다.\n", + "- **이 문서**는 TensorFlow 2.15(또는 현재 `tf-nightly`)에서 사용할 수 있는 두 가지 새로운 옵션을 설명합니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "vMvEKDFOsau7" + }, + "outputs": [ + + ], + "source": [ + "!pip install -q tf_nightly" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "a6hOFBfPsd3y" + }, + "source": [ + "**참고**: `experimental_enable_numpy_behavior`는 모든 TensorFlow의 작동을 변경합니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ob1HNwUmYR5b" + }, + "source": [ + "## 설치하기" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "AJR558zjAZQu" + }, + "outputs": [ + + ], + "source": [ + "import numpy as np\n", + "import tensorflow as tf\n", + "import tensorflow.experimental.numpy as tnp\n", + "\n", + "print(\"Using TensorFlow version %s\" % tf.__version__)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "M6tacoy0DU6e" + }, + "source": [ + "### 새로운 형식 승격 활성화하기\n", + "\n", + "TF-Numpy에서 [JAX와 유사한 형식의 승격](https://jax.readthedocs.io/en/latest/type_promotion.html)을 사용하려면 TensorFlow에 대한 NumPy 작동을 활성화할 때 `'all'` 또는 `'safe'`를 dtype 변환 모드로 지정합니다.\n", + "\n", + "이 새로운 시스템(`dtype_conversion_mode=\"all\"` 사용)은 연관적이고 교환 가능하며, 최종적으로 어떤 너비의 부동 소수를 사용할지 쉽게 제어할 수 있습니다(더 넓은 부동 소수로 자동 변환되지는 않음). 오버플로우와 정밀도 손실의 위험이 있지만 `dtype_conversion_mode=\"safe\"`을 사용하면 이러한 경우를 명시적으로 처리할 수 있습니다. 이 두 가지 모드는 [다음 섹션](#two_modes)에서 더 자세히 설명됩니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "TfCyofpFDQxm" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "sEMXK8-ZWMun" + }, + "source": [ + "\n", + "\n", + "\n", + "## 두 가지 모드: ALL 모드와 SAFE 모드\n", + "\n", + "새로운 형식 승격 시스템에서는 `ALL` 모드와 `SAFE` 모드의 두 가지 모드를 도입합니다. `SAFE` 모드는 정밀도 손실 또는 비트 확대를 초래할 수 있는 '위험한' 승격에 대한 우려를 완화하는 데 사용됩니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-ULvTWj_KnHU" + }, + "source": [ + "### Dtypes\n", + "\n", + "간결하게 표현하기 위해 다음 약어를 사용하겠습니다.\n", + "\n", + "- `b`는 `tf.bool`을 의미합니다.\n", + "- `u8`은 `tf.uint8`을 의미합니다.\n", + "- `i16`은 `tf.int16`을 의미합니다.\n", + "- `i32`는 `tf.int32`를 의미합니다.\n", + "- `bf16`은 `tf.bfloat16`을 의미합니다.\n", + "- `f32`는 `tf.float32`를 의미합니다.\n", + "- `f64`는 `tf.float64`를 의미합니다.\n", + "- `i32*`는 Python `int` 또는 약한 형식의 `i32`를 의미합니다.\n", + "- `f32*`는 Python `float` 또는 약한 형식의 `f32`를 의미합니다.\n", + "- `c128*`은 Python `complex` 또는 약한 형식의 `c128`을 의미합니다.\n", + "\n", + "별표(*)는 해당 유형이 \"약한\" 형식임을 나타냅니다. 이러한 형식은 시스템에 의해 일시적으로 추론되며 다른 형식으로 대체될 수 있습니다. 이 개념은 [여기](#weak_tensor)에서 더 자세히 설명됩니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hXZxLCkuzzq3" + }, + "source": [ + "### 정밀도 손실 연산의 예제\n", + "\n", + "다음 예제에서 `i32` + `f32`는 `ALL` 모드에서는 허용되지만 `SAFE` 모드에서는 정밀도 손실의 위험 때문에 허용되지 않습니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Y-yeIvstWStL" + }, + "outputs": [ + + ], + "source": [ + "# i32 + f32 returns a f32 result in ALL mode.\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "a = tf.constant(10, dtype = tf.int32)\n", + "b = tf.constant(5.0, dtype = tf.float32)\n", + "a + b # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JNNmZow2WY3G" + }, + "outputs": [ + + ], + "source": [ + "# This promotion is not allowed in SAFE mode.\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"safe\")\n", + "a = tf.constant(10, dtype = tf.int32)\n", + "b = tf.constant(5.0, dtype = tf.float32)\n", + "try:\n", + " a + b\n", + "except TypeError as e:\n", + " print(f'{type(e)}: {e}') # TypeError: explicitly specify the dtype or switch to ALL mode." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "f0x4Qhff0AKS" + }, + "source": [ + "### 비트 확대 연산의 예제\n", + "\n", + "다음 예제에서 `i8` + `u32`는 `ALL` 모드에서 허용되지만 `SAFE` 모드에서는 입력의 비트 수보다 더 많은 비트를 사용하는 비트 확대 때문에 허용되지 않습니다. 새로운 타입 승격 의미 체계는 필요한 비트 확대만 허용합니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Etbv-WoWzUXf" + }, + "outputs": [ + + ], + "source": [ + "# i8 + u32 returns an i64 result in ALL mode.\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "a = tf.constant(10, dtype = tf.int8)\n", + "b = tf.constant(5, dtype = tf.uint32)\n", + "a + b" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "yKRdvtvw0Lvt" + }, + "outputs": [ + + ], + "source": [ + "# This promotion is not allowed in SAFE mode.\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"safe\")\n", + "a = tf.constant(10, dtype = tf.int8)\n", + "b = tf.constant(5, dtype = tf.uint32)\n", + "try:\n", + " a + b\n", + "except TypeError as e:\n", + " print(f'{type(e)}: {e}') # TypeError: explicitly specify the dtype or switch to ALL mode." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yh2BwqUzH3C3" + }, + "source": [ + "## 격자 기반 시스템" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HHUnfTPiYVN5" + }, + "source": [ + "### 형식 승격 격자\n", + "\n", + "새 형식 승격 작동은 다음 형식 승격 격자를 통해 결정됩니다.\n", + "\n", + "![Type Promotion Lattice](https://tensorflow.org/guide/images/new_type_promotion/type_promotion_lattice.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "QykluwRyDDle" + }, + "source": [ + "보다 구체적으로 말하면, 두 형식 사이에서 이루어지는 승격은 두 노드(노드 자체 포함)의 첫 번째 공통 하위 항목을 찾아야 결정됩니다.\n", + "\n", + "예를 들어 위 다이어그램에서 `i8`과 `i32`의 첫 번째 공통 하위 항목은 `i32`입니다. 그 이유는 화살표 방향을 따라가면 두 노드가 `i32`에서 처음으로 교차하기 때문입니다.\n", + "\n", + "다른 예제와 마찬가지로 `u64`와 `f16` 사이의 결과 승격 형식은 `f16`입니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nthziRHaDAUY" + }, + "source": [ + "\n", + "\n", + "\n", + "### 형식 승격 테이블\n", + "\n", + "격자를 따라가면 아래와 같은 바이너리 승격 테이블이 생성됩니다.\n", + "\n", + "**참고**: `SAFE` 모드에서는 강조 표시된 셀이 허용되지 않습니다. 
`ALL` 모드는 모든 경우를 허용합니다.\n", + "\n", + "![Type Promotion Table](https://tensorflow.org/guide/images/new_type_promotion/type_promotion_table.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TPDt5QTkucSC" + }, + "source": [ + "## 새 형식 승격의 장점\n", + "\n", + "우리는 새로운 형식 승격을 위해 JAX와 유사한 격자 기반 시스템을 채택했으며 이 시스템의 장점은 다음과 같습니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "NUS_b13nue1p" + }, + "source": [ + "\n", + "\n", + "\n", + "#### 격자 기반 시스템의 장점\n", + "\n", + "먼저, 격자 기반 시스템을 사용하면 세 가지 매우 중요한 속성이 보장됩니다.\n", + "\n", + "- 존재(Existence): 모든 형식의 조합에 대해 고유한 결과 승격 형식이 있습니다.\n", + "- 교환 가능성(Commutativity): `a + b = b + a`\n", + "- 연관성(Associativity): `a + (b + c) = (a + b) + c`\n", + "\n", + "이 세 가지 속성은 일관되고 예측 가능한 형식 승격 의미 체계를 구성하는 데 매우 중요합니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Sz88hRR6uhls" + }, + "source": [ + "#### JAX와 유사한 격자 시스템의 장점\n", + "\n", + "JAX와 유사한 격자 시스템의 또 다른 중요한 장점은 부호 없는 정수(unsigned int)를 제외하고는 필요 이상의 넓은 승격을 피할 수 있다는 것입니다. 즉, 64비트 입력 없이는 64비트 결과를 얻을 수 없습니다. 기존 형식 승격에서 빈번하게 발생했던 불필요한 64비트 값을 피할 수 있기에 이러한 장점은 가속기 작업에 매우 유용합니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rlylb7ieOVbJ" + }, + "source": [ + "그러나 이 시스템에도 단점은 있습니다. 부동 소수/정수 혼합 승격으로 인해 정밀도 손실이 발생할 가능성이 매우 높습니다. 예를 들어, 아래 예제에서 `i64` + `f16`을 사용하면 `i64`가 `f16`으로 승격됩니다."
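위에서 설명한 '첫 번째 공통 하위 항목' 규칙은 순수 Python으로 스케치해 볼 수 있습니다. 아래 코드는 격자 다이어그램의 일부만 단순화해 옮긴 가상의 예제이며(간선 구성과 `promote` 같은 이름은 설명을 위한 가정), 실제 TensorFlow 구현이 아닙니다.

```python
# A hypothetical, simplified slice of the promotion lattice (not the
# actual TF implementation). Edges point along the diagram's arrows;
# "i64 -> f16" abbreviates the weak-float path for brevity.
edges = {
    "b":   ["i8", "u8"],
    "i8":  ["i16"],
    "u8":  ["i16", "u16"],
    "i16": ["i32"],
    "u16": ["i32", "u32"],
    "i32": ["i64"],
    "u32": ["i64", "u64"],
    "i64": ["f16"],
    "u64": ["f16"],
    "f16": ["f32"],
    "f32": ["f64"],
    "f64": [],
}

# A linear extension of the lattice, used to pick the *first* common node.
order = ["b", "i8", "u8", "i16", "u16", "i32", "u32",
         "i64", "u64", "f16", "f32", "f64"]

def descendants(t):
    """All types reachable from t along the arrows, including t itself."""
    seen, stack = set(), [t]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(edges[n])
    return seen

def promote(a, b):
    """First common descendant of a and b in the (simplified) lattice."""
    common = descendants(a) & descendants(b)
    return min(common, key=order.index)

print(promote("i8", "i32"))   # matches the guide's example: i32
print(promote("u64", "f16"))  # matches the guide's example: f16
```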
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "abqIkV02OXEF" + }, + "outputs": [ + + ], + "source": [ + "# The first input is promoted to f16 in ALL mode.\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "tf.constant(1, tf.int64) + tf.constant(3.2, tf.float16) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mYnh1gZdObfI" + }, + "source": [ + "이러한 우려를 해소하기 위해 우리는 이렇게 '위험한' 승격을 허용하지 않는 `SAFE` 모드를 도입했습니다.\n", + "\n", + "**참고**: 격자 시스템을 구성할 때 고려해야 할 디자인 고려 사항에 대한 자세한 내용은 [JAX용 형식 승격 의미 체계 디자인](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html)을 참조하세요." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gAc7LFV0S2dP" + }, + "source": [ + "\n", + "\n", + "\n", + "## WeakTensor" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "olQ2gsFlS9BH" + }, + "source": [ + "### 개요\n", + "\n", + "*약한 텐서*는 [JAX](https://jax.readthedocs.io/en/latest/type_promotion.html#weakly-typed-values-in-jax)의 개념과 유사한 \"약한 형식\"의 텐서입니다.\n", + "\n", + "`WeakTensor`의 dtype은 시스템에 의해 일시적으로 추론되며, 다른 dtype을 따를 수 있습니다. 이 개념은 Python 스칼라 리터럴과 같이 명시적으로 사용자가 지정한 형식이 없는 TF 값과 값 사이의 이진 연산 내에서 원치 않는 형식 승격을 방지하기 위해 새로운 형식 승격에 도입되었습니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MYmoFIqZTFtw" + }, + "source": [ + "예를 들어, 아래 예제에서 `tf.constant(1.2)`에는 특정 dtype이 없으므로 \"약한\" 형식으로 간주됩니다. 따라서 `tf.constant(1.2)`은 `tf.constant(3.1, tf.float16)`의 형식을 따라가며 그 결과로 `f16`을 출력합니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "eSBv_mzyTE97" + }, + "outputs": [ + + ], + "source": [ + "tf.constant(1.2) + tf.constant(3.1, tf.float16) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KxuqBIFuTm5Z" + }, + "source": [ + "### WeakTensor 구성\n", + "\n", + "WeakTensor는 dtype을 지정하지 않고 텐서를 생성할 때 생성됩니다. 텐서의 문자열 표현 끝에서 약한 속성을 확인하면 텐서가 \"약한\" 텐서인지 확인할 수 있습니다." 
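'약한' 형식이 승격에서 하는 역할은 다음과 같이 개념적으로 스케치해 볼 수 있습니다. `promote_weak`라는 이름과 단순화된 규칙은 설명을 위한 가정일 뿐 실제 구현이 아닙니다.

```python
# Conceptual sketch only: a weakly typed operand adopts the dtype of the
# explicitly typed operand instead of forcing a real promotion step.
def promote_weak(a, b):
    """a and b are (dtype_name, is_weak) pairs; returns the result pair."""
    (da, wa), (db, wb) = a, b
    if wa and not wb:
        return (db, False)  # weak value follows the strong operand
    if wb and not wa:
        return (da, False)
    if wa and wb:
        return (da, True)   # both weak: result stays weak (simplified)
    raise NotImplementedError("two strong dtypes go through the lattice")

# tf.constant(1.2) behaves like a weak f32*, while tf.constant(3.1, tf.float16)
# is a strong f16 -- the result follows f16, as in the guide's example:
print(promote_weak(("f32", True), ("f16", False)))  # ('f16', False)
```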
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7UmunnJ8Tru3" + }, + "source": [ + "**첫 번째 사례**: 사용자가 특정한 dtype이 없는 입력으로 `tf.constant`를 호출한 경우." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "fLEtMluNTsI5" + }, + "outputs": [ + + ], + "source": [ + "tf.constant(5) # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ZQX6MBWHTt__" + }, + "outputs": [ + + ], + "source": [ + "tf.constant([5.0, 10.0, 3]) # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ftsKSC5BTweP" + }, + "outputs": [ + + ], + "source": [ + "# A normal Tensor is created when dtype arg is specified.\n", + "tf.constant(5, tf.int32) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RqhoRy5iTyag" + }, + "source": [ + "**두 번째 사례**: 사용자가 특정한 dtype이 없는 입력이 [WeakTensor 지원 API](#weak_tensor_apis)로 전달되는 경우." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "DuwpgoQJTzE-" + }, + "outputs": [ + + ], + "source": [ + "tf.math.abs([100.0, 4.0]) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UTcoR1xvR39k" + }, + "source": [ + "## 새 형식 승격 사용 효과\n", + "\n", + "다음은 새 형식 승격을 사용한 결과로 발생하는 변경 사항의 대략적인 목록입니다.\n", + "\n", + "- 보다 일관되고 예측 가능한 승격 결과.\n", + "- 비트 확대 위험의 감소.\n", + "- `tf.Tensor` 수학적 더블 언더 메서드는 새로운 형식 승격을 사용.\n", + "- `tf.constant`는 `WeakTensor` 반환 가능.\n", + "- `tf.constant`는 `dtype` 인수와 다른 dtype을 가진 텐서 입력이 전달될 때 암시적 변환을 허용.\n", + "- `tf.Variable` 인플레이스 연산(`assign`, `assign-add`, `assign-sub`)의 경우 암시적 변환 허용.\n", + "- `tnp.array(1)` 및 `tnp.array(1.0)`은 32비트 WeakTensor를 반환.\n", + "- `WeakTensor`가 생성되어 [WeakTensor 지원 단항 및 바이너리 API](#weak_tensor_apis)에 사용됨.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KyvonwYcsFX2" + }, + "source": [ + "### 보다 일관되고 예측 가능한 승격 결과\n", + "\n", + "[격자 기반 시스템](#lattice_system_design)을 사용하면 새로운 형식 승격으로 일관성 있고 예측 가능한 형식 승격 결과를 생성할 수 있습니다."
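이 일관성은 승격 함수의 교환 가능성·연관성에서 나오며, 작은 dtype 표본에 대해 직접 점검해 볼 수 있습니다. 아래 스케치는 `np.result_type`을 예시 승격 함수로 사용합니다(새 TF 모드의 격자와 규칙은 다르지만, 요구되는 속성 자체는 같습니다).

```python
import itertools
import numpy as np

# Spot-check commutativity and associativity of a promotion function
# on a small sample of dtypes.
dtypes = [np.int8, np.int32, np.float16, np.float32]

for a, b in itertools.product(dtypes, repeat=2):
    assert np.result_type(a, b) == np.result_type(b, a)  # commutative

for a, b, c in itertools.product(dtypes, repeat=3):
    assert (np.result_type(np.result_type(a, b), c)
            == np.result_type(a, np.result_type(b, c)))  # associative

print("commutativity and associativity hold on this sample")
```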
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "q0Z1njfb7lRa" + }, + "source": [ + "#### 기존 형식 승격\n", + "\n", + "연산 순서를 변경하면 이전 형식 승격을 사용하는 일관되지 않은 결과가 나타납니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "M1Ca9v4m7z8e" + }, + "outputs": [ + + ], + "source": [ + "# Setup\n", + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"legacy\")\n", + "a = np.array(1, dtype=np.int8)\n", + "b = tf.constant(1)\n", + "c = np.array(1, dtype=np.float16)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "WwhTzJ-a4rTc" + }, + "outputs": [ + + ], + "source": [ + "# (a + b) + c throws an InvalidArgumentError.\n", + "try:\n", + " tf.add(tf.add(a, b), c)\n", + "except tf.errors.InvalidArgumentError as e:\n", + " print(f'{type(e)}: {e}') # InvalidArgumentError" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "d3qDgVYn7ezT" + }, + "outputs": [ + + ], + "source": [ + "# (b + a) + c returns an i32 result.\n", + "tf.add(tf.add(b, a), c) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YMH1skEs7oI5" + }, + "source": [ + "#### 새 형식 승격\n", + "\n", + "새 형식 승격은 순서에 관계 없이 일관된 결과를 생성합니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "BOHyJJ8z8uCN" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "a = np.array(1, dtype=np.int8)\n", + "b = tf.constant(1)\n", + "c = np.array(1, dtype=np.float16)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ZUKU70jf7E1l" + }, + "outputs": [ + + ], + "source": [ + "# (a + b) + c returns a f16 result.\n", + "tf.add(tf.add(a, b), c) # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "YOEycjFx7qDn" + }, + "outputs": [ + + ], + "source": [ + "# (b + a) + c also returns a f16 result.\n", + "tf.add(tf.add(b, a), c) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FpGMkm6aJsn6" + }, + "source": [ + "### 비트 확대 위험의 감소" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JxV2AL-U9Grg" + }, + "source": [ + "#### 기존 형식 승격\n", + "\n", + "기존 형식 승격은 종종 64비트 결과를 생성했습니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "7L1pxyvn9MlP" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"legacy\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zMJVFdWf4XHp" + }, + "outputs": [ + + ], + "source": [ + "np.array(3.2, np.float16) + tf.constant(1, tf.int8) + tf.constant(50) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fBhUH_wD9Is7" + }, + "source": [ + "#### 새 형식 승격\n", + "\n", + "새 형식 승격은 필요로 하는 최소 비트 수의 결과를 반환합니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "aJsj2ZyI9T9Y" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "jj0N_Plp4X9l" + }, + "outputs": [ + + ], + "source": [ + "np.array(3.2, np.float16) + tf.constant(1, tf.int8) + tf.constant(50) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yKUx7xe-KZ5O" + }, + "source": [ + "### tf.Tensor 수학적 더블 언더 메서드\n", + "\n", + "모든 `tf.Tensor` 수학적 더블 언더 메서드는 새로운 형식 승격을 사용합니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2c3icBUX4wNl" + }, + "outputs": [ + + ], + "source": [ + "-tf.constant(5) # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ydJHQjid45s7" + }, + "outputs": [ + + ], + "source": [ + "tf.constant(5, tf.int16) - tf.constant(1, tf.float32) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pLbIjIvbKqcU" + }, + "source": [ + "### tf.Variable 인플레이스 연산\n", + "\n", + "`tf.Variable` 인플레이스 연산에서는 암시적 변환이 허용됩니다.\n", + "\n", + "**참고**: 변수의 원래 dtype과 다른 dtype을 초래하는 모든 승격은 허용되지 않습니다. 이는 `tf.Variable`이 dtype을 변경할 수 없기 때문입니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QsXhyK1h-i5S" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "a = tf.Variable(10, tf.int32)\n", + "a.assign_add(tf.constant(5, tf.int16)) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "PiA4H-otLDit" + }, + "source": [ + "### tf.constant 암시적 변환\n", + "\n", + "이전 형식 승격에서 `tf.constant`의 입력 텐서는 dtype 인수와 동일한 dtype을 가져야 했습니다. 그러나 새로운 형식 승격에서는 암시적으로 텐서를 지정된 dtype으로 변환합니다." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ArrQ9Dj0_OR8" + }, + "outputs": [ + + ], + "source": [ + "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", + "a = tf.constant(10, tf.int16)\n", + "tf.constant(a, tf.float32) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WAcK_-XnLWaP" + }, + "source": [ + "### TF-NumPy 배열\n", + "\n", + "새 형식 승격을 사용하는 Python 입력의 경우 `tnp.array`의 기본값은 `i32*` 및 `f32*`입니다." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "K1pZnYNh_ahm" + }, + "outputs": [ + + ], + "source": [ + "tnp.array(1) # " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QoQl2PYP_fMT" + }, + "outputs": [ + + ], + "source": [ + "tnp.array(1.0) # " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wK5DpQ3Pz3k5" + }, + "source": [ + "## 입력 형식 추론\n", + "\n", + "새 형식 승격에서 다양한 입력 형식이 추론되는 방식입니다.\n", + "\n", + "- `tf.Tensor`: `tf.Tensor`에는 dtype 속성이 있으므로 더 이상 추론을 수행하지 않습니다.\n", + "- NumPy 형식: 여기에는 `np.array(1)`, `np.int16(1)`, `np.float`와 같은 형식이 포함됩니다. NumPy 입력에도 dtype 속성이 있으므로 결과 추론 형식으로 dtype 속성을 사용합니다. NumPy의 기본값은 `i64` 및 `f64`입니다.\n", + "- Python 스칼라/중첩 형식: 여기에는 `1`, `[1, 2, 3]`, `(1.0, 2.0)`와 같은 형식이 포함됩니다.\n", + " - Python `int`는 `i32*`로 추론됩니다.\n", + " - Python `float`은 `f32*`로 추론됩니다.\n", + " - Python `complex`는 `c128*`로 추론됩니다.\n", + "- 입력이 위 범주에 속하지 않지만 dtype 속성이 있는 경우 dtype 속성을 결과 추론 형식으로 사용합니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "g_SPfalfSPgg" + }, + "source": [ + "# 추가 자료\n", + "\n", + "새 형식 승격은 JAX-NumPy의 형식 승격과 매우 유사합니다. 
새 형식 승격과 디자인 선택에 대한 자세한 내용을 알고 싶다면 아래 리소스를 확인해 주세요.\n", + "\n", + "- [JAX 형식 승격 의미 체계](https://jax.readthedocs.io/en/latest/type_promotion.html)\n", + "- [JAX용 형식 승격 의미 체계 디자인](https://jax.readthedocs.io/en/latest/jep/9407-type-promotion.html)\n", + "- [기존 TF-NumPy 승격 의미 체계](https://www.tensorflow.org/guide/tf_numpy#type_promotion)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Qg5xBbImT31S" + }, + "source": [ + "# 참고 자료" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gjB0CVhVXBfW" + }, + "source": [ + "\n", + "\n", + "\n", + "## WeakTensor 지원 API" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_GVbqlN9aBS2" + }, + "source": [ + "아래는 `WeakTensor`를 지원하는 API 목록입니다.\n", + "\n", + "단항 연산의 경우, 사용자가 형식을 지정하지 않은 입력이 전달되면 `WeakTensor`를 반환합니다.\n", + "\n", + "바이너리 연산은 [이곳](#promotion_table)의 승격 테이블을 따릅니다. 두 입력의 승격 결과에 따라 `WeakTensor`를 반환할 수도 있고 반환하지 않을 수도 있습니다.\n", + "\n", + "**참고**: 모든 계산 연산(`+`, `-`, `*`, ...)이 지원됩니다." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Gi-G68Z8WN2P" + }, + "source": [ + "- `tf.bitwise.invert`\n", + "- `tf.clip_by_value`\n", + "- `tf.debugging.check_numerics`\n", + "- `tf.expand_dims`\n", + "- `tf.identity`\n", + "- `tf.image.adjust_brightness`\n", + "- `tf.image.adjust_gamma`\n", + "- `tf.image.extract_patches`\n", + "- `tf.image.random_brightness`\n", + "- `tf.image.stateless_random_brightness`\n", + "- `tf.linalg.diag`\n", + "- `tf.linalg.diag_part`\n", + "- `tf.linalg.matmul`\n", + "- `tf.linalg.matrix_transpose`\n", + "- `tf.linalg.tensor_diag_part`\n", + "- `tf.linalg.trace`\n", + "- `tf.math.abs`\n", + "- `tf.math.acos`\n", + "- `tf.math.acosh`\n", + "- `tf.math.add`\n", + "- `tf.math.angle`\n", + "- `tf.math.asin`\n", + "- `tf.math.asinh`\n", + "- `tf.math.atan`\n", + "- `tf.math.atanh`\n", + "- `tf.math.ceil`\n", + "- `tf.math.conj`\n", + "- `tf.math.cos`\n", + "- `tf.math.cosh`\n", + "- `tf.math.digamma`\n", + "- `tf.math.divide_no_nan`\n", + "- 
`tf.math.divide`\n", + "- `tf.math.erf`\n", + "- `tf.math.erfc`\n", + "- `tf.math.erfcinv`\n", + "- `tf.math.erfinv`\n", + "- `tf.math.exp`\n", + "- `tf.math.expm1`\n", + "- `tf.math.floor`\n", + "- `tf.math.floordiv`\n", + "- `tf.math.floormod`\n", + "- `tf.math.imag`\n", + "- `tf.math.lgamma`\n", + "- `tf.math.log1p`\n", + "- `tf.math.log_sigmoid`\n", + "- `tf.math.log`\n", + "- `tf.math.multiply_no_nan`\n", + "- `tf.math.multiply`\n", + "- `tf.math.ndtri`\n", + "- `tf.math.negative`\n", + "- `tf.math.pow`\n", + "- `tf.math.real`\n", + "- `tf.math.real`\n", + "- `tf.math.reciprocal_no_nan`\n", + "- `tf.math.reciprocal`\n", + "- `tf.math.reduce_euclidean_norm`\n", + "- `tf.math.reduce_logsumexp`\n", + "- `tf.math.reduce_max`\n", + "- `tf.math.reduce_mean`\n", + "- `tf.math.reduce_min`\n", + "- `tf.math.reduce_prod`\n", + "- `tf.math.reduce_std`\n", + "- `tf.math.reduce_sum`\n", + "- `tf.math.reduce_variance`\n", + "- `tf.math.rint`\n", + "- `tf.math.round`\n", + "- `tf.math.rsqrt`\n", + "- `tf.math.scalar_mul`\n", + "- `tf.math.sigmoid`\n", + "- `tf.math.sign`\n", + "- `tf.math.sin`\n", + "- `tf.math.sinh`\n", + "- `tf.math.softplus`\n", + "- `tf.math.special.bessel_i0`\n", + "- `tf.math.special.bessel_i0e`\n", + "- `tf.math.special.bessel_i1`\n", + "- `tf.math.special.bessel_i1e`\n", + "- `tf.math.special.bessel_j0`\n", + "- `tf.math.special.bessel_j1`\n", + "- `tf.math.special.bessel_k0`\n", + "- `tf.math.special.bessel_k0e`\n", + "- `tf.math.special.bessel_k1`\n", + "- `tf.math.special.bessel_k1e`\n", + "- `tf.math.special.bessel_y0`\n", + "- `tf.math.special.bessel_y1`\n", + "- `tf.math.special.dawsn`\n", + "- `tf.math.special.expint`\n", + "- `tf.math.special.fresnel_cos`\n", + "- `tf.math.special.fresnel_sin`\n", + "- `tf.math.special.spence`\n", + "- `tf.math.sqrt`\n", + "- `tf.math.square`\n", + "- `tf.math.subtract`\n", + "- `tf.math.tan`\n", + "- `tf.math.tanh`\n", + "- `tf.nn.depth_to_space`\n", + "- `tf.nn.elu`\n", + "- `tf.nn.gelu`\n", + "- 
`tf.nn.leaky_relu`\n", + "- `tf.nn.log_softmax`\n", + "- `tf.nn.relu6`\n", + "- `tf.nn.relu`\n", + "- `tf.nn.selu`\n", + "- `tf.nn.softsign`\n", + "- `tf.nn.space_to_depth`\n", + "- `tf.nn.swish`\n", + "- `tf.ones_like`\n", + "- `tf.realdiv`\n", + "- `tf.reshape`\n", + "- `tf.squeeze`\n", + "- `tf.stop_gradient`\n", + "- `tf.transpose`\n", + "- `tf.truncatediv`\n", + "- `tf.truncatemod`\n", + "- `tf.zeros_like`\n", + "- `tf.experimental.numpy.abs`\n", + "- `tf.experimental.numpy.absolute`\n", + "- `tf.experimental.numpy.amax`\n", + "- `tf.experimental.numpy.amin`\n", + "- `tf.experimental.numpy.angle`\n", + "- `tf.experimental.numpy.arange`\n", + "- `tf.experimental.numpy.arccos`\n", + "- `tf.experimental.numpy.arccosh`\n", + "- `tf.experimental.numpy.arcsin`\n", + "- `tf.experimental.numpy.arcsinh`\n", + "- `tf.experimental.numpy.arctan`\n", + "- `tf.experimental.numpy.arctanh`\n", + "- `tf.experimental.numpy.around`\n", + "- `tf.experimental.numpy.array`\n", + "- `tf.experimental.numpy.asanyarray`\n", + "- `tf.experimental.numpy.asarray`\n", + "- `tf.experimental.numpy.ascontiguousarray`\n", + "- `tf.experimental.numpy.average`\n", + "- `tf.experimental.numpy.bitwise_not`\n", + "- `tf.experimental.numpy.cbrt`\n", + "- `tf.experimental.numpy.ceil`\n", + "- `tf.experimental.numpy.conj`\n", + "- `tf.experimental.numpy.conjugate`\n", + "- `tf.experimental.numpy.copy`\n", + "- `tf.experimental.numpy.cos`\n", + "- `tf.experimental.numpy.cosh`\n", + "- `tf.experimental.numpy.cumprod`\n", + "- `tf.experimental.numpy.cumsum`\n", + "- `tf.experimental.numpy.deg2rad`\n", + "- `tf.experimental.numpy.diag`\n", + "- `tf.experimental.numpy.diagflat`\n", + "- `tf.experimental.numpy.diagonal`\n", + "- `tf.experimental.numpy.diff`\n", + "- `tf.experimental.numpy.empty_like`\n", + "- `tf.experimental.numpy.exp2`\n", + "- `tf.experimental.numpy.exp`\n", + "- `tf.experimental.numpy.expand_dims`\n", + "- `tf.experimental.numpy.expm1`\n", + "- `tf.experimental.numpy.fabs`\n", + "- 
`tf.experimental.numpy.fix`\n", + "- `tf.experimental.numpy.flatten`\n", + "- `tf.experimental.numpy.flip`\n", + "- `tf.experimental.numpy.fliplr`\n", + "- `tf.experimental.numpy.flipud`\n", + "- `tf.experimental.numpy.floor`\n", + "- `tf.experimental.numpy.full_like`\n", + "- `tf.experimental.numpy.imag`\n", + "- `tf.experimental.numpy.log10`\n", + "- `tf.experimental.numpy.log1p`\n", + "- `tf.experimental.numpy.log2`\n", + "- `tf.experimental.numpy.log`\n", + "- `tf.experimental.numpy.max`\n", + "- `tf.experimental.numpy.mean`\n", + "- `tf.experimental.numpy.min`\n", + "- `tf.experimental.numpy.moveaxis`\n", + "- `tf.experimental.numpy.nanmean`\n", + "- `tf.experimental.numpy.negative`\n", + "- `tf.experimental.numpy.ones_like`\n", + "- `tf.experimental.numpy.positive`\n", + "- `tf.experimental.numpy.prod`\n", + "- `tf.experimental.numpy.rad2deg`\n", + "- `tf.experimental.numpy.ravel`\n", + "- `tf.experimental.numpy.real`\n", + "- `tf.experimental.numpy.reciprocal`\n", + "- `tf.experimental.numpy.repeat`\n", + "- `tf.experimental.numpy.reshape`\n", + "- `tf.experimental.numpy.rot90`\n", + "- `tf.experimental.numpy.round`\n", + "- `tf.experimental.numpy.signbit`\n", + "- `tf.experimental.numpy.sin`\n", + "- `tf.experimental.numpy.sinc`\n", + "- `tf.experimental.numpy.sinh`\n", + "- `tf.experimental.numpy.sort`\n", + "- `tf.experimental.numpy.sqrt`\n", + "- `tf.experimental.numpy.square`\n", + "- `tf.experimental.numpy.squeeze`\n", + "- `tf.experimental.numpy.std`\n", + "- `tf.experimental.numpy.sum`\n", + "- `tf.experimental.numpy.swapaxes`\n", + "- `tf.experimental.numpy.tan`\n", + "- `tf.experimental.numpy.tanh`\n", + "- `tf.experimental.numpy.trace`\n", + "- `tf.experimental.numpy.transpose`\n", + "- `tf.experimental.numpy.triu`\n", + "- `tf.experimental.numpy.vander`\n", + "- `tf.experimental.numpy.var`\n", + "- `tf.experimental.numpy.zeros_like`" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "name": "tf_numpy_type_promotion.ipynb", + 
"toc_visible": true + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/site/ko/guide/tpu.ipynb b/site/ko/guide/tpu.ipynb index e345e72139..a00d31fd94 100644 --- a/site/ko/guide/tpu.ipynb +++ b/site/ko/guide/tpu.ipynb @@ -16,7 +16,9 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [], + "outputs": [ + + ], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -40,10 +42,12 @@ "# TPU 사용하기\n", "\n", "\n", - " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기Google Colab에서 실행 GitHub에서 소스 보기노트북 다운로드 TensorFlow.org에서 보기\n", + " Google Colab에서 실행하기\n", + "GitHub에서 소스 보기 노트북 다운로드하기
" ] }, @@ -84,7 +88,9 @@ "metadata": { "id": "Cw0WRaChRxTL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "import tensorflow as tf\n", "\n", @@ -118,7 +124,9 @@ "metadata": { "id": "dKPqF8d1wJCV" }, - "outputs": [], + "outputs": [ + + ], "source": [ "resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n", "tf.config.experimental_connect_to_cluster(resolver)\n", @@ -144,7 +152,9 @@ "metadata": { "id": "XRZ4kMoxBNND" }, - "outputs": [], + "outputs": [ + + ], "source": [ "a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n", "b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n", @@ -184,7 +194,9 @@ "metadata": { "id": "7SO23K8oRpjI" }, - "outputs": [], + "outputs": [ + + ], "source": [ "strategy = tf.distribute.TPUStrategy(resolver)" ] @@ -204,7 +216,9 @@ "metadata": { "id": "-90CL5uFPTOa" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def matmul_fn(x, y):\n", @@ -243,16 +257,37 @@ "metadata": { "id": "DiBiN-Z_R7P7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def create_model():\n", + " regularizer = tf.keras.regularizers.L2(1e-5)\n", " return tf.keras.Sequential(\n", - " [tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(28, 28, 1)),\n", - " tf.keras.layers.Conv2D(256, 3, activation='relu'),\n", + " [tf.keras.layers.Conv2D(256, 3, input_shape=(28, 28, 1),\n", + " activation='relu',\n", + " kernel_regularizer=regularizer),\n", + " tf.keras.layers.Conv2D(256, 3,\n", + " activation='relu',\n", + " kernel_regularizer=regularizer),\n", " tf.keras.layers.Flatten(),\n", - " tf.keras.layers.Dense(256, activation='relu'),\n", - " tf.keras.layers.Dense(128, activation='relu'),\n", - " tf.keras.layers.Dense(10)])" + " tf.keras.layers.Dense(256,\n", + " activation='relu',\n", + " kernel_regularizer=regularizer),\n", + " tf.keras.layers.Dense(128,\n", + " activation='relu',\n", + " kernel_regularizer=regularizer),\n", + " tf.keras.layers.Dense(10,\n", + " kernel_regularizer=regularizer)])" + ] + }, + 
{ + "cell_type": "markdown", + "metadata": { + "id": "h-2qaXgfyONQ" + }, + "source": [ + "이 모델은 각 레이어의 가중치에 L2 정규화 항을 배치하기에 아래의 사용자 정의 훈련 루프가 `Model.losses`에서 이러한 L2 정규화 항을 선택하는 방법을 보여줄 수 있습니다." ] }, { @@ -282,7 +317,9 @@ "metadata": { "id": "noAd416KSCo7" }, - "outputs": [], + "outputs": [ + + ], "source": [ "def get_dataset(batch_size, is_training=True):\n", " split = 'train' if is_training else 'test'\n", @@ -327,7 +364,9 @@ "metadata": { "id": "ubmDchPqSIx0" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with strategy.scope():\n", " model = create_model()\n", @@ -364,7 +403,9 @@ "metadata": { "id": "M6e3aVVLUorL" }, - "outputs": [], + "outputs": [ + + ], "source": [ "with strategy.scope():\n", " model = create_model()\n", @@ -407,7 +448,9 @@ "metadata": { "id": "9aHhqwao2Fxi" }, - "outputs": [], + "outputs": [ + + ], "source": [ "# Create the model, optimizer and metrics inside the `tf.distribute.Strategy`\n", "# scope, so that the variables can be mirrored on each device.\n", @@ -434,9 +477,13 @@ " images, labels = inputs\n", " with tf.GradientTape() as tape:\n", " logits = model(images, training=True)\n", - " loss = tf.keras.losses.sparse_categorical_crossentropy(\n", + " per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n", " labels, logits, from_logits=True)\n", - " loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)\n", + " loss = tf.nn.compute_average_loss(per_example_loss)\n", + " model_losses = model.losses\n", + " if model_losses:\n", + " loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))\n", + "\n", " grads = tape.gradient(loss, model.trainable_variables)\n", " optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))\n", " training_loss.update_state(loss * strategy.num_replicas_in_sync)\n", @@ -460,7 +507,9 @@ "metadata": { "id": "1du5cXWt6Vtw" }, - "outputs": [], + "outputs": [ + + ], "source": [ "steps_per_eval = 10000 // batch_size\n", "\n", @@ -470,7 +519,7 @@ "\n", " for step 
in range(steps_per_epoch):\n", " train_step(train_iterator)\n", - " print('Current step: {}, training loss: {}, accuracy: {}%'.format(\n", + " print('Current step: {}, training loss: {}, training accuracy: {}%'.format(\n", " optimizer.iterations.numpy(),\n", " round(float(training_loss.result()), 4),\n", " round(float(training_accuracy.result()) * 100, 2)))\n", @@ -497,7 +546,9 @@ "metadata": { "id": "2grYvXLzJYkP" }, - "outputs": [], + "outputs": [ + + ], "source": [ "@tf.function\n", "def train_multiple_steps(iterator, steps):\n", @@ -508,9 +559,12 @@ " images, labels = inputs\n", " with tf.GradientTape() as tape:\n", " logits = model(images, training=True)\n", - " loss = tf.keras.losses.sparse_categorical_crossentropy(\n", + " per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(\n", " labels, logits, from_logits=True)\n", - " loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)\n", + " loss = tf.nn.compute_average_loss(per_example_loss)\n", + " model_losses = model.losses\n", + " if model_losses:\n", + " loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))\n", " grads = tape.gradient(loss, model.trainable_variables)\n", " optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))\n", " training_loss.update_state(loss * strategy.num_replicas_in_sync)\n", @@ -523,7 +577,7 @@ "# retraced if the value changes.\n", "train_multiple_steps(train_iterator, tf.convert_to_tensor(steps_per_epoch))\n", "\n", - "print('Current step: {}, training loss: {}, accuracy: {}%'.format(\n", + "print('Current step: {}, training loss: {}, training accuracy: {}%'.format(\n", " optimizer.iterations.numpy(),\n", " round(float(training_loss.result()), 4),\n", " round(float(training_accuracy.result()) * 100, 2)))" @@ -554,7 +608,9 @@ "metadata": { "accelerator": "TPU", "colab": { - "collapsed_sections": [], + "collapsed_sections": [ + + ], "name": "tpu.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/versions.md 
b/site/ko/guide/versions.md index d8a34b7055..8d7f62c338 100644
--- a/site/ko/guide/versions.md
+++ b/site/ko/guide/versions.md
@@ -18,7 +18,7 @@ TensorFlow는 공개 API에 유의적 버저닝 2.0([semver](http://semver.org))

 TensorFlow의 공개 API만이 부 버전 및 패치 버전에서 하위 호환성을 가집니다. 공개 API는 다음을 포함합니다.

-- 모든 문서화된 [Python](https://gitlocalize.com/repo/4592/ko/site/en-snapshot/api_docs/python) `tensorflow` 모듈과 서브 모듈에 있는 함수와 클래스, 다음은 제외
+- `tensorflow` 모듈과 그 하위 모듈에 있는 모든 문서화된 [Python](https://www.tensorflow.org/api_docs/python) 함수와 클래스. 단, 다음은 제외합니다.

     - 비공개 심볼: `_`로 시작하는 함수나 클래스 등

     - 실험적인 및 `tf.contrib` 심볼, 자세한 내용은 [아래](#not_covered) 내용 참조

@@ -29,7 +29,14 @@ TensorFlow의 공개 API만이 부 버전 및 패치 버전에서 하위 호환

 - 호환성 API(Python의 `tf.compat` 모듈). 주 버전에서 사용자들이 새로운 주 버전으로 옮겨가는 것을 도와주는 유틸리티와 추가적인 엔드포인트가 공개될 수도 있습니다. 이러한 API 심볼들은 없어지고 지원되지 않지만(즉, 기능을 추가하지 않고 취약성 이외의 버그를 수정하지 않음) 호환성은 보장됩니다.

-- [C API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h)
+- TensorFlow C API:
+
+    - [tensorflow/c/c_api.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h)
+
+- TensorFlow Lite C API:
+
+    - [tensorflow/lite/c/c_api.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api.h)
+    - [tensorflow/lite/c/c_api_types.h](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/c_api_types.h).

 - 다음의 프로토콜 버퍼 파일:

@@ -58,9 +65,9 @@ TensorFlow의 일부분은 어떤 면에서도 하위 호환성이 없도록 변

 - **다른 언어:** Python과 C 이외의 다음과 같은 TensorFlow API 언어:

-    - [C++](https://www.tensorflow.org/api_guides/cc/guide.md)([`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)의 헤더 파일을 통해 공개되어 있음). 
- - [Java](https://www.tensorflow.org/api_docs/java/reference/org/tensorflow/package-summary) - - [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go) + - [C++](../install/lang_c.ipynb)([`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)의 헤더 파일을 통해 공개되어 있음) + - [Java](../install/lang_java_legacy.md) + - [Go](https://github.com/tensorflow/build/blob/master/golang_install_guide/README.md) - [JavaScript](https://js.tensorflow.org) - **합성 연산 세부사항:** Python의 많은 공개 함수가 일부 그래프의 원시 연산에 확장됩니다, 그리고 이러한 세부사항은 `GraphDef`로 디스크에 저장되는 그래프의 한 부분입니다. 이러한 세부사항은 부(minor) 버전에서 변경될 수 있습니다. 특히, 그래프간 정확한 매칭이 되는지 확인하는 회귀 테스트는 그래프의 행동이 변경되지 않고 기존의 체크포인트가 아직 동작할지라도 서로 다른 부 버전에서는 호환되지 않을 가능성이 높습니다. @@ -85,7 +92,7 @@ TensorFlow의 일부분은 어떤 면에서도 하위 호환성이 없도록 변 많은 TensorFlow 사용자들이 저장된 모델을 만들고 나중에 릴리즈된 TensorFlow에서 로드하여 실행합니다. [semver](https://semver.org)에 따라 한 버전의 TensorFlow에서 작성된 모델이 같은 주 버전에 속한 나중 버전의 TensorFlow에서 로드되고 평가될 수 있습니다. -*지원하는* 저장된 모델에서는 추가적인 보장이 있습니다. TensorFlow 주 버전 `N`에서 **사라지지 않고 실험적이지도 않으며 호환되지 않는 API**를 사용하여 만든 저장된 모델은 버전 `N`에서 지원됩니다. TensorFlow 주 버전 `N`에서 지원하는 모든 저장된 모델은 TensorFlow 주 버전 `N+1`에서도 로드되고 실행될 수 있습니다. 그러나, 그 모델을 만들고 수정하기 위해 필요한 기능들을 더 이상 사용할 수 없는 경우, 이 보장은 수정하지 않은 저장된 모델에만 적용됩니다. +*지원하는* 저장된 모델에서는 추가적인 보장이 있습니다. TensorFlow 주 버전 `N`에서 **사라지지 않고 실험적이지도 않으며 호환되지 않는 API**를 사용하여 만든 저장된 모델은 버전 `N`에서 지원됩니다. TensorFlow 주 버전 `N`에서 지원하는 모든 저장된 모델은 TensorFlow 주 버전 `N+1`에서도 로드되고 실행될 수 있습니다. 그러나, 그 모델을 만들고 수정하기 위해 필요한 기능들을 더 이상 사용할 수 없는 경우, 이 보장은 수정하지 않은 저장된 모델에만 적용됩니다. 가능하면 하위 호환성을 유지하기 위해 노력할 것이므로 직렬화된 파일들은 오랫동안 사용 가능합니다. 
diff --git a/site/ko/hub/common_saved_model_apis/images.md b/site/ko/hub/common_saved_model_apis/images.md
index 1f390db4de..f4c32a0046 100644
--- a/site/ko/hub/common_saved_model_apis/images.md
+++ b/site/ko/hub/common_saved_model_apis/images.md
@@ -1,5 +1,3 @@
-
-
 # 이미지 작업을 위한 일반적인 SavedModel API

 이 페이지에서는 이미지 관련 작업용 [TF2 SavedModel](../tf2_saved_model.md)에서 [Reusable SavedModel API](../reusable_saved_models.md)를 구현하는 방법을 설명합니다. (이는 현재 지원 중단된 [TF1 Hub 형식](../common_signatures/images.md)의 [이미지에 대한 일반적인 서명](../tf1_hub_module)을 대체합니다.)

@@ -46,8 +44,7 @@ Keras에서는 `hub.KerasLayer`에서 처리합니다. `trainable=True`로 초

 이미지 특성 벡터용 Reusable SavedModel은 다음에서 사용됩니다.

-- Colab 튜토리얼 [이미지 분류자 다시 훈련하기](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb)
-- 명령 줄 도구 [make_image_classifier](https://github.com/tensorflow/hub/tree/master/tensorflow_hub/tools/make_image_classifier)
+- Colab 튜토리얼 [이미지 분류자 다시 훈련하기](https://colab.research.google.com/github/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_image_retraining.ipynb)

@@ -55,7 +52,7 @@ Keras에서는 `hub.KerasLayer`에서 처리합니다. `trainable=True`로 초

 ### 사용법 요약

-**이미지 분류**는 *모듈 게시자가 선택한 * 분류 체계의 등급(class)에서 이미지의 픽셀을 멤버십에 대한 선형 점수(logit)에 매핑합니다. 이를 통해 모델 소비자는 게시자 모듈에서 학습한 특정 분류에서 결론을 도출할 수 있습니다. (새로운 등급의 세트를 사용한 이미지 분류의 경우, 대신 새 분류자로 [이미지 특성 벡터](#feature-vector) 모델을 재사용하는 것이 일반적입니다.)
+**이미지 분류**는 *모듈 게시자가 선택한* 분류 체계의 등급(class)에서 이미지의 픽셀을 멤버십에 대한 선형 점수(로짓)에 매핑합니다. 이를 통해 모델 소비자는 게시자 모듈에서 학습한 특정 분류에서 결론을 도출할 수 있습니다. (새로운 등급의 세트를 사용한 이미지 분류의 경우, 대신 새 분류자로 [이미지 특성 벡터](#feature-vector) 모델을 재사용하는 것이 일반적입니다.)

 이미지 분류를 위한 Reusable SavedModel에는 이미지 배치를 로짓 배치에 매핑하는 루트 객체에 대한 `__call__` 메서드가 있습니다. 다음과 같이 사용할 수 있습니다. 
diff --git a/site/ko/hub/common_saved_model_apis/text.md b/site/ko/hub/common_saved_model_apis/text.md index b5ee555617..32400a6ba1 100644 --- a/site/ko/hub/common_saved_model_apis/text.md +++ b/site/ko/hub/common_saved_model_apis/text.md @@ -1,5 +1,3 @@ - - # 텍스트 작업을 위한 일반적인 SavedModel API 이 페이지에서는 텍스트 관련 작업용 [TF2 SavedModel](../tf2_saved_model.md)에서 [Reusable SavedModel API](../reusable_saved_models.md)를 구현하는 방법을 설명합니다. (이는 현재 지원 중단된 [TF1 Hub 형식](../common_signatures/text.md)의 [텍스트에 대한 일반적인 서명](../tf1_hub_module)을 대체합니다.) @@ -64,7 +62,7 @@ embeddings = hub.KerasLayer("path/to/model", trainable=...)(text_input) ### 예 -- Colab 튜토리얼 [영화 리뷰로 텍스트 분류](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb) +- Colab 튜토리얼 [영화 리뷰로 텍스트 분류](https://colab.research.google.com/github/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_text_classification.ipynb) diff --git a/site/ko/hub/installation.md b/site/ko/hub/installation.md index 68b734891a..9f2b3d42dc 100644 --- a/site/ko/hub/installation.md +++ b/site/ko/hub/installation.md @@ -1,5 +1,3 @@ - - # 설치 ## tensorflow_hub 설치하기 @@ -39,7 +37,7 @@ $ pip install --upgrade tf-hub-nightly - [라이브러리 개요](lib_overview.md) - 튜토리얼: - - [텍스트 분류](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb) - - [이미지 분류](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb) + - [텍스트 분류](https://github.com/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_text_classification.ipynb) + - [이미지 분류](https://github.com/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_image_retraining.ipynb) - [GitHub의](https://github.com/tensorflow/hub/blob/master/examples/README.md) 추가 예제 -- [tfhub.dev](https://tfhub.dev)에서 모델 찾기 +- [tfhub.dev](https://tfhub.dev)에서 모델을 찾아보세요. 
diff --git a/site/ko/hub/migration_tf2.md b/site/ko/hub/migration_tf2.md
index d8a52cea20..6a1918fba7 100644
--- a/site/ko/hub/migration_tf2.md
+++ b/site/ko/hub/migration_tf2.md
@@ -1,10 +1,8 @@
-
-
 # TensorFlow Hub를 사용하여 TF1에서 TF2로 마이그레이션하기

 이 페이지에서는 TensorFlow 코드를 TensorFlow 1에서 TensorFlow 2로 마이그레이션하는 동안 TensorFlow Hub를 계속 사용하는 방법을 설명합니다. TensorFlow의 일반 [마이그레이션 가이드](https://www.tensorflow.org/guide/migrate)를 보완합니다.

-TF2의 경우, TF Hub는 `tf.contrib.v1.layers`처럼 `tf.compat.v1.Graph`를 빌드하기 위해 레거시 `hub.Module` API에서 전환했습니다. 대신 `tf.keras.Model`(일반적으로 TF2의 새로운 [즉시 실행 환경](https://www.tensorflow.org/guide/eager_)) 및 하위 수준 TensorFlow 코드에 대한 기본 `hub.load()` 메서드를 빌드하기 위해 다른 Keras 레이어와 함께 사용할 수 있는 `hub.KerasLayer`가 있습니다.
+TF2의 경우, TF Hub는 `tf.contrib.v1.layers`처럼 `tf.compat.v1.Graph`를 빌드하기 위해 레거시 `hub.Module` API에서 전환했습니다. 대신 이제 다른 Keras 레이어와 함께 `tf.keras.Model`을 빌드하는 데 사용할 수 있는 `hub.KerasLayer`(일반적으로 TF2의 새로운 [즉시 실행 환경](https://www.tensorflow.org/api_docs/python/tf/executing_eagerly)에서 사용)와 하위 수준 TensorFlow 코드를 위한 기본 `hub.load()` 메서드가 있습니다.

 `hub.Module` API는 TF1 및 TF2의 TF1 호환성 모드에서 사용할 수 있도록 `tensorflow_hub` 라이브러리에서 계속 사용할 수 있습니다. [TF1 Hub 형식](tf1_hub_module.md)의 모델만 로드할 수 있습니다.

@@ -31,8 +29,8 @@ model = tf.keras.Sequential([

 많은 튜토리얼에서 이러한 API가 실제로 동작하는 것을 보여줍니다. 특히, 다음을 참조하세요. 
-- [텍스트 분류 예제 노트북](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb) -- [이미지 분류 예제 노트북](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb) +- [텍스트 분류 예제 노트북](https://github.com/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_text_classification.ipynb) +- [이미지 분류 예제 노트북](https://github.com/tensorflow/docs/blob/master/g3doc/en/hub/tutorials/tf2_image_retraining.ipynb) ### Estimator 훈련에서 새 API 사용하기 diff --git a/site/ko/hub/model_compatibility.md b/site/ko/hub/model_compatibility.md index 6b208f5886..20845361f6 100644 --- a/site/ko/hub/model_compatibility.md +++ b/site/ko/hub/model_compatibility.md @@ -63,7 +63,7 @@ outputs = m(inputs)
미세 조정 - Model.fit()으로 훈련되거나 custom model_fn 가이드에 따라 model_fn이 모델을 래핑하는 Estimator에서 훈련된 경우, tf.keras.Model에서 사용되는 hub.KerasLayer를 지원합니다.
참고: hub.KerasLayer는 이전 tf.compat.v1.layers 또는 hub.Module API처럼 그래프 모음을 채우지 않습니다.
+ Model.fit()으로 훈련되거나 custom model_fn 가이드에 따라 model_fn이 Model을 래핑하는 Estimator에서 훈련된 경우, tf.keras.Model에서 사용되는 hub.KerasLayer를 지원합니다.
참고: hub.KerasLayer는 이전 tf.compat.v1.layers 또는 hub.Module API처럼 그래프 모음을 채우지 않습니다.
완전 지원(전체 TF2 저장된 모델 미세 조정 가이드). 다음 중 하나를 사용하세요. hub.load:
 m = hub.load (handle) 출력 = m (inputs, training = is_training) 
또는 hub.KerasLayer :
 m = hub.KerasLayer (handle, trainable = True) outputs = m (inputs) 
From 958e8fb21ca0716e8b52b3d7b2e8cbb044314738 Mon Sep 17 00:00:00 2001 From: ilyaspiridonov Date: Wed, 17 Apr 2024 00:36:56 +0300 Subject: [PATCH 2/2] lint --- ...10_checkpointer_policysaver_tutorial.ipynb | 124 ++--- site/ko/agents/tutorials/1_dqn_tutorial.ipynb | 148 ++---- .../tutorials/2_environments_tutorial.ipynb | 60 +-- .../tutorials/3_policies_tutorial.ipynb | 56 +- .../agents/tutorials/4_drivers_tutorial.ipynb | 28 +- .../tutorials/5_replay_buffers_tutorial.ipynb | 40 +- .../tutorials/6_reinforce_tutorial.ipynb | 80 +-- .../tutorials/7_SAC_minitaur_tutorial.ipynb | 118 ++--- .../tutorials/8_networks_tutorial.ipynb | 32 +- site/ko/agents/tutorials/9_c51_tutorial.ipynb | 56 +- .../agents/tutorials/bandits_tutorial.ipynb | 84 +-- site/ko/agents/tutorials/intro_bandit.ipynb | 4 +- .../tutorials/per_arm_bandits_tutorial.ipynb | 56 +- .../agents/tutorials/ranking_tutorial.ipynb | 64 +-- site/ko/datasets/overview.ipynb | 92 +--- .../composing_learning_algorithms.ipynb | 64 +-- ...erated_algorithm_with_tff_optimizers.ipynb | 36 +- .../custom_federated_algorithms_1.ipynb | 132 ++--- ...onstruction_for_matrix_factorization.ipynb | 77 +-- .../tutorials/random_noise_generation.ipynb | 27 +- site/ko/federated/tutorials/simulations.ipynb | 16 +- site/ko/guide/advanced_autodiff.ipynb | 160 ++---- site/ko/guide/autodiff.ipynb | 128 ++--- site/ko/guide/basic_training_loops.ipynb | 60 +-- .../guide/core/logistic_regression_core.ipynb | 144 ++--- site/ko/guide/core/matrix_core.ipynb | 100 +--- site/ko/guide/core/mlp_core.ipynb | 130 ++--- site/ko/guide/data.ipynb | 500 +++++------------- site/ko/guide/distributed_training.ipynb | 90 +--- site/ko/guide/dtensor_overview.ipynb | 133 ++--- site/ko/guide/function.ipynb | 212 ++------ site/ko/guide/gpu.ipynb | 54 +- site/ko/guide/jax2tf.ipynb | 98 +--- site/ko/guide/migrate/saved_model.ipynb | 80 +-- site/ko/guide/migrate/tflite.ipynb | 40 +- site/ko/guide/mixed_precision.ipynb | 96 +--- site/ko/guide/ragged_tensor.ipynb | 344 
+++--------- site/ko/guide/saved_model.ipynb | 112 +--- site/ko/guide/tensor.ipynb | 210 ++------ site/ko/guide/tf_numpy.ipynb | 110 +--- site/ko/guide/tf_numpy_type_promotion.ipynb | 126 ++--- site/ko/guide/tpu.ipynb | 62 +-- 42 files changed, 1119 insertions(+), 3264 deletions(-) diff --git a/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb b/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb index a712037e9a..4c7265b2d9 100644 --- a/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb +++ b/site/ko/agents/tutorials/10_checkpointer_policysaver_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -45,8 +43,8 @@ " TensorFlow.org에서 보기 \n", " Google Colab에서 실행하기\n", "\n", - " GitHub에서 소스 보기\n", - " 노트북 다운로드하기\n", + " GitHub에서 소스 보기\n", + " 노트북 다운로드하기\n", "" ] }, @@ -89,9 +87,7 @@ "metadata": { "id": "Jv668dKvZmka" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!sudo apt-get update\n", @@ -108,9 +104,7 @@ "metadata": { "id": "bQMULMo1dCEn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -155,9 +149,7 @@ "metadata": { "id": "AwIqiLdDCX9Q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "# Set up a virtual display for rendering OpenAI gym environments.\n", @@ -192,9 +184,7 @@ "cellView": "both", "id": "yxFs6QU0dGI_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_name = \"CartPole-v1\"\n", "\n", @@ -226,9 +216,7 @@ "metadata": { "id": "fZwK4d-bdI7Z" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -252,9 +240,7 @@ "metadata": { 
"id": "cUrFl83ieOvV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title\n", "q_net = q_network.QNetwork(\n", @@ -291,9 +277,7 @@ "metadata": { "id": "XiT1p78HdtSe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title\n", "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", @@ -334,9 +318,7 @@ "metadata": { "id": "-rDC3leXdvm_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title\n", "# (Optional) Optimize by wrapping some of the code in a graph using TF function.\n", @@ -370,9 +352,7 @@ "metadata": { "id": "ZY6w-fcieFDW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title\n", "def embed_gif(gif_buffer):\n", @@ -412,9 +392,7 @@ "metadata": { "id": "FpmPLXWbdG70" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print ('global_step:')\n", "print (global_step)\n", @@ -447,9 +425,7 @@ "metadata": { "id": "2DzCJZ-6YYbX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "checkpoint_dir = os.path.join(tempdir, 'checkpoint')\n", "train_checkpointer = common.Checkpointer(\n", @@ -477,9 +453,7 @@ "metadata": { "id": "8mDZ_YMUWEY9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "policy_dir = os.path.join(tempdir, 'policy')\n", "tf_policy_saver = policy_saver.PolicySaver(agent.policy)" @@ -500,9 +474,7 @@ "metadata": { "id": "ql_D1iq8dl0X" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "print('Training one iteration....')\n", @@ -524,9 +496,7 @@ "metadata": { "id": "usDm_Wpsl0bu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_checkpointer.save(global_step)" ] @@ -548,9 +518,7 @@ "metadata": { "id": "l6l3EB-Yhwmz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_checkpointer.initialize_or_restore()\n", "global_step = tf.compat.v1.train.get_global_step()" @@ -571,9 +539,7 @@ "metadata": { "id": "3xHz09WCWjwA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf_policy_saver.save(policy_dir)" ] @@ -595,9 +561,7 @@ "metadata": { "id": 
"J6T5KLTMh9ZB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "saved_policy = tf.saved_model.load(policy_dir)\n", "run_episodes_and_create_video(saved_policy, eval_env, eval_py_env)" @@ -622,9 +586,7 @@ "metadata": { "id": "fd5Cj7DVjfH4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Create zip file and upload zip file (double-click to see the code)\n", "def create_zip_file(dirname, base_filename):\n", @@ -658,9 +620,7 @@ "metadata": { "id": "nhR8NeWzF4fe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_checkpointer.save(global_step)\n", "checkpoint_zip_filename = create_zip_file(checkpoint_dir, os.path.join(tempdir, 'exported_cp'))" @@ -681,9 +641,7 @@ "metadata": { "id": "upFxb5k8b4MC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "if files is not None:\n", @@ -705,9 +663,7 @@ "metadata": { "id": "kg-bKgMsF-H_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "upload_and_unzip_file_to(checkpoint_dir)\n", @@ -739,9 +695,7 @@ "metadata": { "id": "s7qMn6D8eiIA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf_policy_saver.save(policy_dir)\n", "policy_zip_filename = create_zip_file(policy_dir, os.path.join(tempdir, 'exported_policy'))" @@ -753,9 +707,7 @@ "metadata": { "id": "rrGvCEXwerJj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "if files is not None:\n", @@ -777,9 +729,7 @@ "metadata": { "id": "bgWLimRlXy5z" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "upload_and_unzip_file_to(policy_dir)\n", @@ -806,9 +756,7 @@ "metadata": { "id": "iUC5XuLf1jF7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "eager_py_policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(\n", " policy_dir, eval_py_env.time_step_spec(), eval_py_env.action_spec())\n", @@ -834,9 +782,7 @@ "metadata": { "id": "z9zonVBJ0z46" }, - "outputs": [ - - ], + "outputs": [], "source": [ "converter = 
tf.lite.TFLiteConverter.from_saved_model(policy_dir, signature_keys=[\"action\"])\n", "tflite_policy = converter.convert()\n", @@ -861,9 +807,7 @@ "metadata": { "id": "4GeUSWyZxMlN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import numpy as np\n", "interpreter = tf.lite.Interpreter(os.path.join(tempdir, 'policy.tflite'))\n", @@ -878,9 +822,7 @@ "metadata": { "id": "eVVrdTbRxnOC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "policy_runner(**{\n", " '0/discount':tf.constant(0.0),\n", @@ -892,9 +834,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "10_checkpointer_policysaver_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/1_dqn_tutorial.ipynb b/site/ko/agents/tutorials/1_dqn_tutorial.ipynb index 65d991e9f3..5ab1556f91 100644 --- a/site/ko/agents/tutorials/1_dqn_tutorial.ipynb +++ b/site/ko/agents/tutorials/1_dqn_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -97,9 +95,7 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -115,9 +111,7 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import, division, print_function\n", "\n", @@ -155,9 +149,7 @@ "metadata": { "id": "J6HsdS5GbSjd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Set up a virtual display for rendering OpenAI gym environments.\n", "display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()" @@ -169,9 +161,7 @@ "metadata": { "id": "NspmzG4nP3b9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.version.VERSION" ] @@ -191,9 +181,7 @@ "metadata": { 
"id": "HC1kNrOsLSIZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_iterations = 20000 # @param {type:\"integer\"}\n", "\n", @@ -228,9 +216,7 @@ "metadata": { "id": "pYEz-S9gEv2-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_name = 'CartPole-v0'\n", "env = suite_gym.load(env_name)" @@ -251,9 +237,7 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "env.reset()\n", @@ -277,9 +261,7 @@ "metadata": { "id": "exDv57iHfwQV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)" @@ -291,9 +273,7 @@ "metadata": { "id": "UxiSyCbBUQPi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Reward Spec:')\n", "print(env.time_step_spec().reward)" @@ -314,9 +294,7 @@ "metadata": { "id": "bttJ4uxZUQBr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Action Spec:')\n", "print(env.action_spec())" @@ -345,9 +323,7 @@ "metadata": { "id": "V2UGR5t_iZX-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "time_step = env.reset()\n", "print('Time step:')\n", @@ -375,9 +351,7 @@ "metadata": { "id": "N7brXNIGWXjC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)" @@ -400,9 +374,7 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_env = tf_py_environment.TFPyEnvironment(train_py_env)\n", "eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)" @@ -438,9 +410,7 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fc_layer_params = (100, 50)\n", "action_tensor_spec = tensor_spec.from_spec(env.action_spec())\n", @@ -483,9 +453,7 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n", "\n", @@ -529,9 +497,7 @@ 
"metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "eval_policy = agent.policy\n", "collect_policy = agent.collect_policy" @@ -552,9 +518,7 @@ "metadata": { "id": "HE37-UCIrE69" }, - "outputs": [ - - ], + "outputs": [], "source": [ "random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),\n", " train_env.action_spec())" @@ -579,9 +543,7 @@ "metadata": { "id": "5gCcpXswVAxk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "example_environment = tf_py_environment.TFPyEnvironment(\n", " suite_gym.load('CartPole-v0'))" @@ -593,9 +555,7 @@ "metadata": { "id": "D4DHZtq3Ndis" }, - "outputs": [ - - ], + "outputs": [], "source": [ "time_step = example_environment.reset()" ] @@ -606,9 +566,7 @@ "metadata": { "id": "PRFqAUzpNaAW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "random_policy.action(time_step)" ] @@ -632,9 +590,7 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -674,9 +630,7 @@ "metadata": { "id": "9bgU6Q6BZ8Bp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "compute_avg_return(eval_env, random_policy, num_eval_episodes)" ] @@ -700,9 +654,7 @@ "metadata": { "id": "vX2zGUWJGWAl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "table_name = 'uniform_table'\n", "replay_buffer_signature = tensor_spec.from_spec(\n", @@ -747,9 +699,7 @@ "metadata": { "id": "_IZ-3HcqgE1z" }, - "outputs": [ - - ], + "outputs": [], "source": [ "agent.collect_data_spec" ] @@ -760,9 +710,7 @@ "metadata": { "id": "sy6g1tGcfRlw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "agent.collect_data_spec._fields" ] @@ -786,9 +734,7 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "py_driver.PyDriver(\n", @@ -814,9 +760,7 @@ "metadata": { "id": "4wZnLu2ViO4E" }, - "outputs": [ - - ], + "outputs": 
[], "source": [ "# For the curious:\n", "# Uncomment to peel one of these off and inspect it.\n", @@ -842,9 +786,7 @@ "metadata": { "id": "ba7bilizt_qW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Dataset generates trajectories with shape [Bx2x...]\n", "dataset = replay_buffer.as_dataset(\n", @@ -861,9 +803,7 @@ "metadata": { "id": "K13AST-2ppOq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "iterator = iter(dataset)\n", "print(iterator)" @@ -875,9 +815,7 @@ "metadata": { "id": "Th5w5Sff0b16" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# For the curious:\n", "# Uncomment to see what the dataset iterator is feeding to the agent.\n", @@ -911,9 +849,7 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -990,9 +926,7 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -1029,9 +963,7 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -1061,9 +993,7 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):\n", " filename = filename + \".mp4\"\n", @@ -1095,9 +1025,7 @@ "metadata": { "id": "pJZIdC37yNH4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "create_policy_eval_video(random_policy, \"random-agent\")" ] @@ -1105,9 +1033,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "1_dqn_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/2_environments_tutorial.ipynb b/site/ko/agents/tutorials/2_environments_tutorial.ipynb index 8a4d919ee7..eacfab74e6 100644 --- a/site/ko/agents/tutorials/2_environments_tutorial.ipynb +++ 
b/site/ko/agents/tutorials/2_environments_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -95,9 +93,7 @@ "metadata": { "id": "KKU2iY_7at8Y" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents[reverb]\n" ] @@ -108,9 +104,7 @@ "metadata": { "id": "1ZAoFNwnRbKK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -163,9 +157,7 @@ "metadata": { "id": "GlD2Dd2vUTtg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class PyEnvironment(object):\n", "\n", @@ -236,9 +228,7 @@ "metadata": { "id": "1kBPE5T-nb2-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "environment = suite_gym.load('CartPole-v0')\n", "print('action_spec:', environment.action_spec())\n", @@ -263,9 +253,7 @@ "metadata": { "id": "AzIbOJ0-0y12" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action = np.array(1, dtype=np.int32)\n", "time_step = environment.reset()\n", @@ -304,9 +292,7 @@ "metadata": { "id": "9HD0cDykPL6I" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class CardGameEnv(py_environment.PyEnvironment):\n", "\n", @@ -370,9 +356,7 @@ "metadata": { "id": "6Hhm-5R7spVx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "environment = CardGameEnv()\n", "utils.validate_py_environment(environment, episodes=5)" @@ -393,9 +377,7 @@ "metadata": { "id": "FILylafAkMEx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "get_new_card_action = np.array(0, dtype=np.int32)\n", "end_round_action = np.array(1, dtype=np.int32)\n", @@ -457,9 +439,7 @@ "metadata": { "id": "AJxEoZ4HoyjR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('Pendulum-v1')\n", "print('Action Spec:', env.action_spec())\n", @@ -506,9 
+486,7 @@ "metadata": { "id": "WKBDDZqKTxsL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class TFEnvironment(object):\n", "\n", @@ -596,9 +574,7 @@ "metadata": { "id": "UYerqyNfnVRL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -641,9 +617,7 @@ "metadata": { "id": "gdvFqUqbdB7u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "\n", @@ -681,9 +655,7 @@ "metadata": { "id": "t561kUXMk-KM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -716,9 +688,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "2_environments_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/3_policies_tutorial.ipynb b/site/ko/agents/tutorials/3_policies_tutorial.ipynb index 9eb59e7291..91168b1a5f 100644 --- a/site/ko/agents/tutorials/3_policies_tutorial.ipynb +++ b/site/ko/agents/tutorials/3_policies_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -97,9 +95,7 @@ "metadata": { "id": "xsLTHlVdiZP3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents" ] @@ -110,9 +106,7 @@ "metadata": { "id": "sdvop99JlYSM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -164,9 +158,7 @@ "metadata": { "id": "4PqNEVls1uqc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Base(object):\n", "\n", @@ -250,9 +242,7 @@ "metadata": { "id": "QX8M4Nl-_0uu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action_spec = 
array_spec.BoundedArraySpec((2,), np.int32, -10, 10)\n", "my_random_py_policy = random_py_policy.RandomPyPolicy(time_step_spec=None,\n", @@ -288,9 +278,7 @@ "metadata": { "id": "_mZ244m4BUYv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)\n", "action_script = [(1, np.array([5, 2], dtype=np.int32)), \n", @@ -352,9 +340,7 @@ "metadata": { "id": "nZ3pe5G4rjrW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action_spec = tensor_spec.BoundedTensorSpec(\n", " (2,), tf.float32, minimum=-1, maximum=3)\n", @@ -406,9 +392,7 @@ "metadata": { "id": "S2wFgzJFteQX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ActionNet(network.Network):\n", "\n", @@ -452,9 +436,7 @@ "metadata": { "id": "0UGmFTe7a5VQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)\n", "time_step_spec = ts.time_step_spec(input_tensor_spec)\n", @@ -486,9 +468,7 @@ "metadata": { "id": "fvsIsR0VfOA4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batch_size = 2\n", "observations = tf.ones([2] + time_step_spec.observation.shape.as_list())\n", @@ -528,9 +508,7 @@ "metadata": { "id": "sSYzC9LobVsK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ActionDistributionNet(ActionNet):\n", "\n", @@ -599,9 +577,7 @@ "metadata": { "id": "Haakr2VvjqKC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)\n", "time_step_spec = ts.time_step_spec(input_tensor_spec)\n", @@ -685,9 +661,7 @@ "metadata": { "id": "xsRPBeLZtXvu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "my_greedy_policy = greedy_policy.GreedyPolicy(my_q_policy)\n", "\n", @@ -703,9 +677,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "3_policies_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/4_drivers_tutorial.ipynb 
b/site/ko/agents/tutorials/4_drivers_tutorial.ipynb index 1e22bbd468..68a0d89637 100644 --- a/site/ko/agents/tutorials/4_drivers_tutorial.ipynb +++ b/site/ko/agents/tutorials/4_drivers_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -88,9 +86,7 @@ "metadata": { "id": "TnE2CgilrngG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents\n" ] @@ -101,9 +97,7 @@ "metadata": { "id": "whYNP894FSkA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -188,9 +182,7 @@ "metadata": { "id": "Dj4_-77_5ExP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(), \n", @@ -228,9 +220,7 @@ "metadata": { "id": "WC4ba3ObSceA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -259,9 +249,7 @@ "metadata": { "id": "Sz5jhHnU0fX1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Continue running from previous state\n", "final_time_step, _ = driver.run(final_time_step, policy_state)\n", @@ -274,9 +262,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "4_drivers_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb b/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb index 97f28d4de3..d5901436b6 100644 --- a/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb +++ b/site/ko/agents/tutorials/5_replay_buffers_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -86,9 +84,7 @@ "metadata": { "id": "TnE2CgilrngG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents\n" ] @@ -99,9 +95,7 @@ "metadata": { "id": "whYNP894FSkA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -217,9 +211,7 @@ "metadata": { "id": "Dj4_-77_5ExP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "data_spec = (\n", " tf.TensorSpec([3], tf.float32, 'action'),\n", @@ -257,9 +249,7 @@ "metadata": { "id": "nOvkp4vJhBOT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action = tf.constant(1 * np.ones(\n", " data_spec[0].shape.as_list(), dtype=np.float32))\n", @@ -298,9 +288,7 @@ "metadata": { "id": "IlQ1eGhohM3M" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# add more items to the buffer before reading\n", "for _ in range(5):\n", @@ -350,9 +338,7 @@ "metadata": { "id": "F4neLPpL25wI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "replay_buffer_capacity = 1000*32 # same capacity as the TFUniformReplayBuffer\n", "\n", @@ -388,9 +374,7 @@ "metadata": { "id": "pCbTDO3Z5UCS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", @@ -443,9 +427,7 @@ "metadata": { "id": "gg8SUyXXnSMr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Read the replay buffer as a Dataset,\n", "# read batches of 4 elements, each with 2 timesteps:\n", @@ -465,9 +447,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "5_replay_buffers_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb b/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb index 0d2f4dc478..630486112d 
100644 --- a/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb +++ b/site/ko/agents/tutorials/6_reinforce_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -95,9 +93,7 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -113,9 +109,7 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -163,9 +157,7 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_name = \"CartPole-v0\" # @param {type:\"string\"}\n", "num_iterations = 250 # @param {type:\"integer\"}\n", @@ -199,9 +191,7 @@ "metadata": { "id": "pYEz-S9gEv2-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_gym.load(env_name)" ] @@ -221,9 +211,7 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "env.reset()\n", @@ -245,9 +233,7 @@ "metadata": { "id": "exDv57iHfwQV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)\n", @@ -270,9 +256,7 @@ "metadata": { "id": "V2UGR5t_iZX-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "time_step = env.reset()\n", "print('Time step:')\n", @@ -300,9 +284,7 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -332,9 +314,7 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "actor_net = 
actor_distribution_network.ActorDistributionNetwork(\n", " train_env.observation_spec(),\n", @@ -357,9 +337,7 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n", "\n", @@ -394,9 +372,7 @@ "metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "eval_policy = tf_agent.policy\n", "collect_policy = tf_agent.collect_policy" @@ -419,9 +395,7 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -465,9 +439,7 @@ "metadata": { "id": "vX2zGUWJGWAl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "table_name = 'uniform_table'\n", "replay_buffer_signature = tensor_spec.from_spec(\n", @@ -523,9 +495,7 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -560,9 +530,7 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -630,9 +598,7 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -667,9 +633,7 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -699,9 +663,7 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_episodes = 3\n", "video_filename = 'imageio.mp4'\n", @@ -721,9 +683,7 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "6_reinforce_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb b/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb index 
c78144bf47..520fdb565d 100644 --- a/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb +++ b/site/ko/agents/tutorials/7_SAC_minitaur_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -44,7 +42,7 @@ "\n", " \n", " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운로드
\n" ] @@ -90,9 +88,7 @@ "metadata": { "id": "fskoLlB-AZ9j" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg\n", @@ -126,9 +122,7 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import base64\n", "import imageio\n", @@ -177,9 +171,7 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_name = \"MinitaurBulletEnv-v0\" # @param {type:\"string\"}\n", "\n", @@ -231,9 +223,7 @@ "metadata": { "id": "RlO7WIQHu_7D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env = suite_pybullet.load(env_name)\n", "env.reset()\n", @@ -257,9 +247,7 @@ "metadata": { "id": "exDv57iHfwQV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Observation Spec:')\n", "print(env.time_step_spec().observation)\n", @@ -284,9 +272,7 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "collect_env = suite_pybullet.load(env_name)\n", "eval_env = suite_pybullet.load(env_name)" @@ -345,9 +331,7 @@ "metadata": { "id": "ff5ZZRZI15ds" }, - "outputs": [ - - ], + "outputs": [], "source": [ "use_gpu = True #@param {type:\"boolean\"}\n", "\n", @@ -382,9 +366,7 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "observation_spec, action_spec, time_step_spec = (\n", " spec_utils.get_tensor_specs(collect_env))\n", @@ -416,9 +398,7 @@ "metadata": { "id": "TB5Y3Oub4u7f" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with strategy.scope():\n", " actor_net = actor_distribution_network.ActorDistributionNetwork(\n", @@ -444,9 +424,7 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with strategy.scope():\n", " train_step = train_utils.create_train_step()\n", @@ -495,9 +473,7 @@ "metadata": { "id": "vX2zGUWJGWAl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "table_name = 'uniform_table'\n", "table = 
reverb.Table(\n", @@ -527,9 +503,7 @@ "metadata": { "id": "xVLUxyUo7HQR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "reverb_replay = reverb_replay_buffer.ReverbReplayBuffer(\n", " tf_agent.collect_data_spec,\n", @@ -553,9 +527,7 @@ "metadata": { "id": "ba7bilizt_qW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = reverb_replay.as_dataset(\n", " sample_batch_size=batch_size, num_steps=2).prefetch(50)\n", @@ -584,9 +556,7 @@ "metadata": { "id": "yq7JE8IwFe0E" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf_eval_policy = tf_agent.policy\n", "eval_policy = py_tf_eager_policy.PyTFEagerPolicy(\n", @@ -599,9 +569,7 @@ "metadata": { "id": "f_A4rZveEQzW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf_collect_policy = tf_agent.collect_policy\n", "collect_policy = py_tf_eager_policy.PyTFEagerPolicy(\n", @@ -623,9 +591,7 @@ "metadata": { "id": "BwY7StuMkuV4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "random_policy = random_py_policy.RandomPyPolicy(\n", " collect_env.time_step_spec(), collect_env.action_spec())" @@ -664,9 +630,7 @@ "metadata": { "id": "HbyGmdiNfNDc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rb_observer = reverb_utils.ReverbAddTrajectoryObserver(\n", " reverb_replay.py_client,\n", @@ -690,9 +654,7 @@ "metadata": { "id": "ZGq3SY0kKwsa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "initial_collect_actor = actor.Actor(\n", " collect_env,\n", @@ -718,9 +680,7 @@ "metadata": { "id": "A6ooXyk0FZ5j" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_step_metric = py_metrics.EnvironmentSteps()\n", "collect_actor = actor.Actor(\n", @@ -748,9 +708,7 @@ "metadata": { "id": "vHY2BT5lFhgL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "eval_actor = actor.Actor(\n", " eval_env,\n", @@ -779,9 +737,7 @@ "metadata": { "id": "gi37YicSFTfF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "saved_model_dir = os.path.join(tempdir, learner.POLICY_SAVED_MODEL_DIR)\n", "\n", @@ 
-826,9 +782,7 @@ "metadata": { "id": "83iMSHUC71RG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def get_eval_metrics():\n", " eval_actor.run()\n", @@ -846,9 +800,7 @@ "metadata": { "id": "jnOMvX_eZvOW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def log_eval_metrics(step, metrics):\n", " eval_results = (', ').join(\n", @@ -884,9 +836,7 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -947,9 +897,7 @@ "metadata": { "id": "rXKzyGt72HS8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -984,9 +932,7 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -1016,9 +962,7 @@ "metadata": { "id": "PSgaQN1nXT-h" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_episodes = 3\n", "video_filename = 'sac_minitaur.mp4'\n", @@ -1037,9 +981,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "7_SAC_minitaur_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/8_networks_tutorial.ipynb b/site/ko/agents/tutorials/8_networks_tutorial.ipynb index b8ec8d7539..311af49fce 100644 --- a/site/ko/agents/tutorials/8_networks_tutorial.ipynb +++ b/site/ko/agents/tutorials/8_networks_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -101,9 +99,7 @@ "metadata": { "id": "xsLTHlVdiZP3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents" ] @@ -114,9 +110,7 @@ "metadata": { "id": "sdvop99JlYSM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from 
__future__ import division\n", @@ -210,9 +204,7 @@ "metadata": { "id": "Zp0TjAJhYo4s" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ActorNetwork(network.Network):\n", "\n", @@ -291,9 +283,7 @@ "metadata": { "id": "E2XoNuuD66s5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)\n", "observation_spec = {\n", @@ -323,9 +313,7 @@ "metadata": { "id": "r9U6JVevTAJw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "preprocessing_layers = {\n", " 'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),\n", @@ -354,9 +342,7 @@ "metadata": { "id": "JOkkeu7vXoei" }, - "outputs": [ - - ], + "outputs": [], "source": [ "time_step = tf_env.reset()\n", "actor(time_step.observation, time_step.step_type)" @@ -374,9 +360,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "8_networks_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/9_c51_tutorial.ipynb b/site/ko/agents/tutorials/9_c51_tutorial.ipynb index 37eef5b3da..bc320f040a 100644 --- a/site/ko/agents/tutorials/9_c51_tutorial.ipynb +++ b/site/ko/agents/tutorials/9_c51_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -95,9 +93,7 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!sudo apt-get update\n", "!sudo apt-get install -y xvfb ffmpeg freeglut3-dev\n", @@ -113,9 +109,7 @@ "metadata": { "id": "sMitx5qSgJk1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", @@ -162,9 +156,7 @@ "metadata": { "id": "HC1kNrOsLSIZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "env_name = 
\"CartPole-v1\" # @param {type:\"string\"}\n", "num_iterations = 15000 # @param {type:\"integer\"}\n", @@ -206,9 +198,7 @@ "metadata": { "id": "Xp-Y4mD6eDhF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_py_env = suite_gym.load(env_name)\n", "eval_py_env = suite_gym.load(env_name)\n", @@ -244,9 +234,7 @@ "metadata": { "id": "TgkdEPg_muzV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "categorical_q_net = categorical_q_network.CategoricalQNetwork(\n", " train_env.observation_spec(),\n", @@ -272,9 +260,7 @@ "metadata": { "id": "jbY4yrjTEyc9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)\n", "\n", @@ -332,9 +318,7 @@ "metadata": { "id": "bitzHo5_UbXy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "def compute_avg_return(environment, policy, num_episodes=10):\n", @@ -381,9 +365,7 @@ "metadata": { "id": "wr1KSAEGG4h9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", @@ -434,9 +416,7 @@ "metadata": { "id": "0pTbJ3PeyF-u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "try:\n", @@ -501,9 +481,7 @@ "metadata": { "id": "NxtL1mbOYCVO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "\n", @@ -538,9 +516,7 @@ "metadata": { "id": "ULaGr8pvOKbl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def embed_mp4(filename):\n", " \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n", @@ -570,9 +546,7 @@ "metadata": { "id": "owOVWB158NlF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_episodes = 3\n", "video_filename = 'imageio.mp4'\n", @@ -602,9 +576,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "9_c51_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/bandits_tutorial.ipynb 
b/site/ko/agents/tutorials/bandits_tutorial.ipynb index d1ac22a578..094af5f10f 100644 --- a/site/ko/agents/tutorials/bandits_tutorial.ipynb +++ b/site/ko/agents/tutorials/bandits_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -82,9 +80,7 @@ "metadata": { "id": "KEHR2Ui-lo8O" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents" ] @@ -104,9 +100,7 @@ "metadata": { "id": "3oCS94Z83Jo2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import abc\n", "import numpy as np\n", @@ -193,9 +187,7 @@ "metadata": { "id": "TTaG2ZapQvHX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class BanditPyEnvironment(py_environment.PyEnvironment):\n", "\n", @@ -269,9 +261,7 @@ "metadata": { "id": "YV6DhsSi227-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class SimplePyEnvironment(BanditPyEnvironment):\n", "\n", @@ -305,9 +295,7 @@ "metadata": { "id": "Eo_uwSz2gAKX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "environment = SimplePyEnvironment()\n", "observation = environment.reset().observation\n", @@ -344,9 +332,7 @@ "metadata": { "id": "IPPpwSi3EtWz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf_environment = tf_py_environment.TFPyEnvironment(environment)" ] @@ -381,9 +367,7 @@ "metadata": { "id": "VpMZlplNK5ND" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class SignPolicy(tf_policy.TFPolicy):\n", " def __init__(self):\n", @@ -423,9 +407,7 @@ "metadata": { "id": "Z0_5vMDCVZWT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "sign_policy = SignPolicy()\n", "\n", @@ -455,9 +437,7 @@ "metadata": { "id": "CiB935of-wVv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "step = tf_environment.reset()\n", "action = 1\n", @@ -514,9 +494,7 @@ "metadata": { "id": 
"fte7-Mr8O0QR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class TwoWayPyEnvironment(BanditPyEnvironment):\n", "\n", @@ -573,9 +551,7 @@ "metadata": { "id": "Srm2jsGHVM8N" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class TwoWaySignPolicy(tf_policy.TFPolicy):\n", " def __init__(self, situation):\n", @@ -636,9 +612,7 @@ "metadata": { "id": "7f-0W0cMbS_z" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class SignAgent(tf_agent.TFAgent):\n", " def __init__(self):\n", @@ -718,9 +692,7 @@ "metadata": { "id": "gdSG1nv-HUJq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# We need to add another dimension here because the agent expects the\n", "# trajectory of shape [batch_size, time, ...], but in this tutorial we assume\n", @@ -760,9 +732,7 @@ "metadata": { "id": "LPx43dZgoyKg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "step = two_way_tf_environment.reset()\n", "for _ in range(10):\n", @@ -807,9 +777,7 @@ "metadata": { "id": "oEnXUwd-nZKl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Imports for example.\n", "from tf_agents.bandits.agents import lin_ucb_agent\n", @@ -845,9 +813,7 @@ "metadata": { "id": "gVa0hmQrpe6w" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batch_size = 2 # @param\n", "arm0_param = [-3, 0, 1, -2] # @param\n", @@ -903,9 +869,7 @@ "metadata": { "id": "p4XmGgIusj-K" }, - "outputs": [ - - ], + "outputs": [], "source": [ "observation_spec = tensor_spec.TensorSpec([4], tf.float32)\n", "time_step_spec = ts.time_step_spec(observation_spec)\n", @@ -940,9 +904,7 @@ "metadata": { "id": "cX7MiFhNu3_L" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def compute_optimal_reward(observation):\n", " expected_reward_for_arms = [\n", @@ -981,9 +943,7 @@ "metadata": { "id": "4Ggn45g62DWx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_iterations = 90 # @param\n", "steps_per_loop = 1 # @param\n", @@ -1046,9 +1006,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + 
"collapsed_sections": [], "name": "bandits_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/agents/tutorials/intro_bandit.ipynb b/site/ko/agents/tutorials/intro_bandit.ipynb index 460e872890..1cb899e2ca 100644 --- a/site/ko/agents/tutorials/intro_bandit.ipynb +++ b/site/ko/agents/tutorials/intro_bandit.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "nQnmcm0oI1Q-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", diff --git a/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb b/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb index 1361835391..4797dd8e4f 100644 --- a/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb +++ b/site/ko/agents/tutorials/per_arm_bandits_tutorial.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "6AZJOyCA4NpL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -140,9 +138,7 @@ "metadata": { "id": "QxiNIm5XSIIp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents" ] @@ -162,9 +158,7 @@ "metadata": { "id": "BbQXsoeKR2ui" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import functools\n", "import matplotlib.pyplot as plt\n", @@ -198,9 +192,7 @@ "metadata": { "id": "bfna8xm0MSCn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The dimension of the global features.\n", "GLOBAL_DIM = 40 #@param {type:\"integer\"}\n", @@ -246,9 +238,7 @@ "metadata": { "id": "kfTa5Y4ZYjhO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def global_context_sampling_fn():\n", " \"\"\"This function generates a single global observation vector.\"\"\"\n", @@ -281,9 +271,7 @@ "metadata": { "id": "Ny-yikfQQi9l" }, - "outputs": [ - - ], + "outputs": [], "source": [ "per_arm_py_env = 
p_a_env.StationaryStochasticPerArmPyEnvironment(\n", " global_context_sampling_fn,\n", @@ -310,9 +298,7 @@ "metadata": { "id": "J8ZVqMU5AOzs" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('observation spec: ', per_arm_tf_env.observation_spec())\n", "print('\\nAn observation: ', per_arm_tf_env.reset().observation)\n", @@ -360,9 +346,7 @@ "metadata": { "id": "MqzA9Zi0Q2No" }, - "outputs": [ - - ], + "outputs": [], "source": [ "observation_spec = per_arm_tf_env.observation_spec()\n", "time_step_spec = ts.time_step_spec(observation_spec)\n", @@ -400,9 +384,7 @@ "metadata": { "id": "YQRZd43o5M0j" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('training data spec: ', agent.training_data_spec)" ] @@ -422,9 +404,7 @@ "metadata": { "id": "lTBR1vgG-2AM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('observation spec in training: ', agent.training_data_spec.observation)" ] @@ -446,9 +426,7 @@ "metadata": { "id": "C0fHvLm0Cpq9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('chosen arm features: ', agent.training_data_spec.policy_info.chosen_arm_features)" ] @@ -486,9 +464,7 @@ "metadata": { "id": "N1_kJAOS6VPo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def _all_rewards(observation, hidden_param):\n", " \"\"\"Outputs rewards for all actions, given an observation.\"\"\"\n", @@ -524,9 +500,7 @@ "metadata": { "id": "j2Iimtmkzs0-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "num_iterations = 20 # @param\n", "steps_per_loop = 1 # @param\n", @@ -568,9 +542,7 @@ "metadata": { "id": "p4DOYwhMSUVh" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(regret_values)\n", "plt.title('Regret of LinUCB on the Linear per-arm environment')\n", diff --git a/site/ko/agents/tutorials/ranking_tutorial.ipynb b/site/ko/agents/tutorials/ranking_tutorial.ipynb index c3bdca9e7b..63db448697 100644 --- a/site/ko/agents/tutorials/ranking_tutorial.ipynb +++ b/site/ko/agents/tutorials/ranking_tutorial.ipynb @@ -16,9 
+16,7 @@ "cellView": "form", "id": "E2347LPWgmcO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -73,9 +71,7 @@ "metadata": { "id": "tFs2W62pqUxk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install tf-agents[reverb]" ] @@ -86,9 +82,7 @@ "metadata": { "id": "1dbfZarwmB96" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Imports\n", "import matplotlib.pyplot as plt\n", @@ -271,9 +265,7 @@ "metadata": { "id": "sR8Id9Y7mhBK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "feedback_model = ranking_environment.FeedbackModel.CASCADING #@param[\"ranking_environment.FeedbackModel.SCORE_VECTOR\", \"ranking_environment.FeedbackModel.CASCADING\"] {type:\"raw\"}\n" ] @@ -297,9 +289,7 @@ "cellView": "form", "id": "3b1hYbTxi6Kg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "click_type = \"ghost_actions\" #@param[\"distance_based\", \"ghost_actions\"]\n", "click_model = (ranking_environment.ClickModel.DISTANCE_BASED\n", @@ -322,9 +312,7 @@ "metadata": { "id": "ueVe8Gf77Lad" }, - "outputs": [ - - ], + "outputs": [], "source": [ "global_dim = 9 #@param{ type: \"integer\"}\n", "item_dim = 11 #@param{ type: \"integer\"}\n", @@ -372,9 +360,7 @@ "metadata": { "id": "MBJgkyFC64rR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Defining the Network and Training Params\n", "scoring_network = (\n", @@ -393,9 +379,7 @@ "metadata": { "id": "ufuiBZsg5YrR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Stack Ranking Deterministically by Scores\n", "\n", @@ -416,9 +400,7 @@ "metadata": { "id": "8ko9D3qO7gUs" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Sampling Sequentially Based on Scores\n", "\n", @@ -443,9 +425,7 @@ "cellView": "form", "id": "Gnv5c4w094A4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Sampling Sequentally 
and Taking Diversity into Account\n", "#@markdown The balance between ranking based on scores and taking diversity into account is governed by the following \"penalty mixture\" parameter. A low positive value results in rankings that hardly mix in diversity, a higher value will enforce more diversity.\n", @@ -471,9 +451,7 @@ "metadata": { "id": "ZMImW7rrWn5w" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Choosing the desired agent.\n", "agent_type = \"cosine_distance_agent\" #@param[\"cosine_distance_agent\", \"no_penalty_agent\", \"descending_scores_agent\"]\n", @@ -502,9 +480,7 @@ "metadata": { "id": "vpUfQXgD7y0t" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def order_items_from_action_fn(orig_trajectory):\n", " \"\"\"Puts the features of the selected items in the recommendation order.\n", @@ -561,9 +537,7 @@ "metadata": { "id": "VQDWXgDUsCZ1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Defininfing Parameters to Run the Agent on the Defined Environment\n", "num_iterations = 400 #@param{ type: \"number\" }\n", @@ -585,9 +559,7 @@ "metadata": { "id": "qt6ifI5AYWfu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "replay_buffer = bandit_replay_buffer.BanditReplayBuffer(\n", " data_spec=order_items_from_action_fn(agent.policy.trajectory_spec),\n", @@ -642,9 +614,7 @@ "metadata": { "id": "eZPOXzfyy5Sh" }, - "outputs": [ - - ], + "outputs": [], "source": [ "if feedback_model == ranking_environment.FeedbackModel.SCORE_VECTOR:\n", " reward = reward_values\n", @@ -671,9 +641,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "ranking_tutorial.ipynb", "toc_visible": true }, diff --git a/site/ko/datasets/overview.ipynb b/site/ko/datasets/overview.ipynb index 390507f18c..192566c06b 100644 --- a/site/ko/datasets/overview.ipynb +++ b/site/ko/datasets/overview.ipynb @@ -61,9 +61,7 @@ "cellView": "both", "id": "boeZp0sYbO41" }, - "outputs": [ - - ], + "outputs": [], "source": 
[ "!pip install -q tfds-nightly tensorflow matplotlib" ] @@ -74,9 +72,7 @@ "metadata": { "id": "TTBSvHcSLBzc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", @@ -102,9 +98,7 @@ "metadata": { "id": "FAvbSVzjLCIb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tfds.list_builders()" ] @@ -131,9 +125,7 @@ "metadata": { "id": "dCou80mnLLPV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tfds.load('mnist', split='train', shuffle_files=True)\n", "assert isinstance(ds, tf.data.Dataset)\n", @@ -172,9 +164,7 @@ "metadata": { "id": "2zN_jQ2ER40W" }, - "outputs": [ - - ], + "outputs": [], "source": [ "builder = tfds.builder('mnist')\n", "# 1. Create the tfrecord files (no-op if already exists)\n", @@ -220,9 +210,7 @@ "metadata": { "id": "JAGjXdk_bIYQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tfds.load('mnist', split='train')\n", "ds = ds.take(1) # Only take a single example\n", @@ -260,9 +248,7 @@ "metadata": { "id": "nJ4O0xy3djfV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tfds.load('mnist', split='train', as_supervised=True)\n", "ds = ds.take(1)\n", @@ -291,9 +277,7 @@ "metadata": { "id": "tzQTCUkAfe9R" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tfds.load('mnist', split='train', as_supervised=True)\n", "ds = ds.take(1)\n", @@ -321,9 +305,7 @@ "metadata": { "id": "Gg8BNsv-UzFl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "image, label = tfds.as_numpy(tfds.load(\n", " 'mnist',\n", @@ -361,9 +343,7 @@ "metadata": { "id": "ZyQzZ98bX3dM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tfds.load('mnist', split='train')\n", "ds = ds.batch(32).prefetch(1)\n", @@ -420,9 +400,7 @@ "metadata": { "id": "FKouwN_yVSGQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds, info = tfds.load('mnist', split='train', with_info=True)\n", "\n", @@ -446,9 +424,7 @@ "metadata": { "id": "DpE2FD56cSQR" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "ds, info = tfds.load('mnist', split='train', with_info=True)\n", "\n", @@ -476,9 +452,7 @@ "metadata": { "id": "UgLgtcd1ljzt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds, info = tfds.load('mnist', with_info=True)" ] @@ -498,9 +472,7 @@ "metadata": { "id": "nmq97QkilxeL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "builder = tfds.builder('mnist')\n", "info = builder.info" @@ -521,9 +493,7 @@ "metadata": { "id": "O-wLIKD-mZQT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info)" ] @@ -545,9 +515,7 @@ "metadata": { "id": "RcyZXncqoFab" }, - "outputs": [ - - ], + "outputs": [], "source": [ "info.features" ] @@ -567,9 +535,7 @@ "metadata": { "id": "HhfzBH6qowpz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info.features[\"label\"].num_classes)\n", "print(info.features[\"label\"].names)\n", @@ -592,9 +558,7 @@ "metadata": { "id": "SergV_wQowLY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info.features.shape)\n", "print(info.features.dtype)\n", @@ -619,9 +583,7 @@ "metadata": { "id": "FBbfwA8Sp4ax" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info.splits)" ] @@ -641,9 +603,7 @@ "metadata": { "id": "fRBieOOquDzX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(list(info.splits.keys()))" ] @@ -663,9 +623,7 @@ "metadata": { "id": "-h_OSpRsqKpP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info.splits['train'].num_examples)\n", "print(info.splits['train'].filenames)\n", @@ -687,9 +645,7 @@ "metadata": { "id": "HO5irBZ3uIzQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(info.splits['train[15%:75%]'].num_examples)\n", "print(info.splits['train[15%:75%]'].file_instructions)" @@ -753,9 +709,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "overview.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/composing_learning_algorithms.ipynb 
b/site/ko/federated/tutorials/composing_learning_algorithms.ipynb index 6c069d0b29..8b3486127a 100644 --- a/site/ko/federated/tutorials/composing_learning_algorithms.ipynb +++ b/site/ko/federated/tutorials/composing_learning_algorithms.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "0asMuNro71hA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -50,9 +48,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기GitHub에서 소스 보기노트북 다운로드
" ] @@ -74,9 +72,7 @@ "metadata": { "id": "ZrGitA_KnRO0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated" @@ -88,9 +84,7 @@ "metadata": { "id": "HGTM6tWOLo8M" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from collections.abc import Callable\n", "\n", @@ -182,9 +176,7 @@ "metadata": { "id": "pIw7QQCqltdV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def client_update(model: tff.learning.models.VariableModel,\n", @@ -269,9 +261,7 @@ "metadata": { "id": "X-I-jPsZmmMy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def build_gradient_clipping_client_work(\n", " model_fn: Callable[[], tff.learning.models.VariableModel],\n", @@ -341,9 +331,7 @@ "metadata": { "id": "-WdnFluLLo8P" }, - "outputs": [ - - ], + "outputs": [], "source": [ "emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()" ] @@ -365,9 +353,7 @@ "metadata": { "id": "Blrh8zJgLo8R" }, - "outputs": [ - - ], + "outputs": [], "source": [ "NUM_CLIENTS = 10\n", "BATCH_SIZE = 20\n", @@ -411,9 +397,7 @@ "metadata": { "id": "Yfld4oFNLo8Y" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def create_keras_model():\n", " initializer = tf.keras.initializers.GlorotNormal(seed=0)\n", @@ -456,9 +440,7 @@ "metadata": { "id": "kOO1ObqJ0cmX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "client_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=0.01)\n", "server_optimizer_fn = lambda: tf.keras.optimizers.SGD(learning_rate=1.0)" @@ -481,9 +463,7 @@ "metadata": { "id": "iwXOTPeIx2nx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tff.tf_computation()\n", "def initial_model_weights_fn():\n", @@ -521,9 +501,7 @@ "metadata": { "id": "z_86iNeM0IBm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fed_avg_with_clipping = tff.learning.templates.compose_learning_process(\n", " initial_model_weights_fn,\n", @@ -559,8 +537,7 @@ ] }, "execution_count": 
26, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -593,8 +570,7 @@ ] }, "execution_count": 27, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -617,9 +593,7 @@ "metadata": { "id": "tKhPuBgW1-3c" }, - "outputs": [ - - ], + "outputs": [], "source": [ "learning_process_output = fed_avg_with_clipping.next(state, federated_train_data)" ] @@ -647,8 +621,7 @@ ] }, "execution_count": 29, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -683,8 +656,7 @@ ] }, "execution_count": 30, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], diff --git a/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb b/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb index 881d2b2656..7f09418d85 100644 --- a/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb +++ b/site/ko/federated/tutorials/custom_federated_algorithm_with_tff_optimizers.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "0asMuNro71hA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -41,9 +39,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운로드
" ] @@ -76,9 +74,7 @@ "metadata": { "id": "ZrGitA_KnRO0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated" @@ -90,9 +86,7 @@ "metadata": { "id": "HGTM6tWOLo8M" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from typing import Any\n", "import functools\n", @@ -119,9 +113,7 @@ "metadata": { "id": "Blrh8zJgLo8R" }, - "outputs": [ - - ], + "outputs": [], "source": [ "only_digits=True\n", "\n", @@ -225,9 +217,7 @@ "metadata": { "id": "c5rHPKreLo8g" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def client_update(model, dataset, server_weights, client_optimizer):\n", @@ -263,9 +253,7 @@ "metadata": { "id": "rYxErLvHLo8i" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@attrs.define(eq=False, frozen=True)\n", "class ServerState(object):\n", @@ -519,9 +507,7 @@ "metadata": { "id": "EdNgYoIwLo9P" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def evaluate(server_state):\n", " keras_model = create_keras_model()\n", @@ -571,9 +557,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "custom_federated_algorithm_with_tff_optimizers.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb b/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb index b10b25ff0c..020a9c0213 100644 --- a/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb +++ b/site/ko/federated/tutorials/custom_federated_algorithms_1.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "uDcWxmG9kh1Q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,9 +49,9 @@ "\n", " \n", - " \n", - " \n", " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", " GitHub에서 소스 보기\n", + " GitHub에서 소스 보기\n", "노트북 다운로드
" @@ -110,9 +108,7 @@ "metadata": { "id": "Ary-OZz5jMJI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated" @@ -124,9 +120,7 @@ "metadata": { "id": "-skNC6aovM46" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import collections\n", "\n", @@ -150,9 +144,7 @@ }, "execution_count": 4, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -188,9 +180,7 @@ "metadata": { "id": "COe0tLPPtTbe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "federated_float_on_clients = tff.type_at_clients(tf.float32)" ] @@ -219,9 +209,7 @@ }, "execution_count": 6, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -245,9 +233,7 @@ }, "execution_count": 7, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -280,9 +266,7 @@ }, "execution_count": 8, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -319,9 +303,7 @@ }, "execution_count": 9, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -354,9 +336,7 @@ }, "execution_count": 10, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -393,9 +373,7 @@ }, "execution_count": 11, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -433,9 +411,7 @@ }, "execution_count": 12, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -520,9 +496,7 @@ "metadata": { "id": "g38EkHwGGEUo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\n", "def get_average_temperature(sensor_readings):\n", @@ -557,9 +531,7 @@ }, "execution_count": 14, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -606,9 +578,7 @@ }, "execution_count": 15, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": 
"execute_result" } @@ -709,9 +679,7 @@ "metadata": { "id": "dpdAqMcygnmr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tff.tf_computation(tf.float32)\n", "def add_half(x):\n", @@ -750,9 +718,7 @@ }, "execution_count": 18, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -778,9 +744,7 @@ "metadata": { "id": "z08K5UKBlSJP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tff.federated_computation(tff.type_at_clients(tf.float32))\n", "def add_half_on_clients(x):\n", @@ -802,9 +766,7 @@ }, "execution_count": 20, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -841,9 +803,7 @@ }, "execution_count": 21, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -924,9 +884,7 @@ }, "execution_count": 23, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -972,9 +930,7 @@ }, "execution_count": 24, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1000,9 +956,7 @@ "metadata": { "id": "cw0nen-D0Ks8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tff.tf_computation(tff.SequenceType(tf.float32))\n", "def get_local_temperature_average(local_temperatures):\n", @@ -1026,9 +980,7 @@ }, "execution_count": 26, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1063,9 +1015,7 @@ }, "execution_count": 27, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1104,9 +1054,7 @@ }, "execution_count": 28, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1161,9 +1109,7 @@ }, "execution_count": 30, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1187,9 +1133,7 @@ }, "execution_count": 31, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1217,9 +1161,7 @@ "metadata": { "id": "hZIE1kl340at" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "@tff.federated_computation(\n", " tff.type_at_clients(tff.SequenceType(tf.float32)))\n", @@ -1254,9 +1196,7 @@ }, "execution_count": 33, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1289,9 +1229,7 @@ }, "execution_count": 34, "metadata": { - "tags": [ - - ] + "tags": [] }, "output_type": "execute_result" } @@ -1312,9 +1250,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "custom_federated_algorithms_1.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb b/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb index 128a8f934e..e9b7e6ae5e 100644 --- a/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb +++ b/site/ko/federated/tutorials/federated_reconstruction_for_matrix_factorization.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "p8SrVqkmnDQv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -50,9 +48,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운로드하기
" ] @@ -74,9 +72,7 @@ "metadata": { "id": "I8pu6-dckG_u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated\n", @@ -92,9 +88,7 @@ "metadata": { "id": "2txfde-th95B" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import collections\n", "import functools\n", @@ -151,9 +145,7 @@ "metadata": { "id": "DwxoBLaWneOE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def download_movielens_data(dataset_path):\n", " \"\"\"Downloads and copies MovieLens data to local /tmp directory.\"\"\"\n", @@ -178,9 +170,7 @@ "metadata": { "id": "Y6_bskRUniqB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def load_movielens_data(\n", " data_directory: str = \"/tmp\",\n", @@ -237,9 +227,7 @@ "metadata": { "id": "OkAh5nt_n4ll" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ratings_df, movies_df = load_movielens_data()" ] @@ -479,8 +467,7 @@ ] }, "execution_count": 5, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -717,8 +704,7 @@ ] }, "execution_count": 6, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -770,8 +756,7 @@ "
" ] }, - "metadata": { - }, + "metadata": {}, "output_type": "display_data" }, { @@ -819,8 +804,7 @@ "
" ] }, - "metadata": { - }, + "metadata": {}, "output_type": "display_data" } ], @@ -927,9 +911,7 @@ "metadata": { "id": "DHwb2AsvtIwO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def create_tf_datasets(ratings_df: pd.DataFrame,\n", " batch_size: int = 1,\n", @@ -993,9 +975,7 @@ "metadata": { "id": "T6pJVpHfns9q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# We limit the number of clients to speed up dataset creation. Feel free to pass\n", "# max_clients=None to load all clients' data.\n", @@ -1073,8 +1053,7 @@ "
" ] }, - "metadata": { - }, + "metadata": {}, "output_type": "display_data" } ], @@ -1153,9 +1132,7 @@ "metadata": { "id": "nSLMxPDP3D72" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class UserEmbedding(tf.keras.layers.Layer):\n", " \"\"\"Keras layer representing an embedding for a single user, used below.\"\"\"\n", @@ -1240,9 +1217,7 @@ "metadata": { "id": "vNBRQW9EwneZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This will be used to produce our training process.\n", "# User and item embeddings will be 50-dimensional.\n", @@ -1269,9 +1244,7 @@ "metadata": { "id": "FDJUfeSNwxIL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class RatingAccuracy(tf.keras.metrics.Mean):\n", " \"\"\"Keras metric computing accuracy of reconstructed ratings.\"\"\"\n", @@ -1317,9 +1290,7 @@ "metadata": { "id": "YQsX0FgtwsoE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# We'll use this by doing:\n", "# state = training_process.initialize()\n", @@ -1348,9 +1319,7 @@ "metadata": { "id": "KHi7J330PtxO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "evaluation_process = tff.learning.algorithms.build_fed_recon_eval(\n", " model_fn,\n", @@ -1568,8 +1537,7 @@ "
" ] }, - "metadata": { - }, + "metadata": {}, "output_type": "display_data" }, { @@ -1579,8 +1547,7 @@ "
" ] }, - "metadata": { - }, + "metadata": {}, "output_type": "display_data" } ], diff --git a/site/ko/federated/tutorials/random_noise_generation.ipynb b/site/ko/federated/tutorials/random_noise_generation.ipynb index 49528af914..ead87ac45a 100644 --- a/site/ko/federated/tutorials/random_noise_generation.ipynb +++ b/site/ko/federated/tutorials/random_noise_generation.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "koW3R4ntOgLS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -52,9 +50,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운로드
" ] @@ -76,9 +74,7 @@ "metadata": { "id": "ke7EyuvG0Zyn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@test {\"skip\": true}\n", "!pip install --quiet --upgrade tensorflow-federated" @@ -90,9 +86,7 @@ "metadata": { "id": "rtgStTrNIId-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import numpy as np\n", "import tensorflow as tf\n", @@ -122,8 +116,7 @@ ] }, "execution_count": 43, - "metadata": { - }, + "metadata": {}, "output_type": "execute_result" } ], @@ -539,9 +532,7 @@ "metadata": { "id": "NF1gaMgrKdwU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def timestamp_seed():\n", " # tf.timestamp returns microseconds as decimal places, thus scaling by 1e6.\n", @@ -631,9 +622,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "random_noise_generation.ipynb", "toc_visible": true }, diff --git a/site/ko/federated/tutorials/simulations.ipynb b/site/ko/federated/tutorials/simulations.ipynb index 612cab4a82..335c738397 100644 --- a/site/ko/federated/tutorials/simulations.ipynb +++ b/site/ko/federated/tutorials/simulations.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "PTdfUcwp8Eru" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -59,9 +57,9 @@ "source": [ "\n", " \n", - " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기GitHub에서 소그 보기노트북 다운론드하기
" ] @@ -112,9 +110,7 @@ "metadata": { "id": "2dVPgxN0MdG2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import collections\n", "import time\n", @@ -222,9 +218,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "simulations.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/advanced_autodiff.ipynb b/site/ko/guide/advanced_autodiff.ipynb index 483c3f69b2..18536ff985 100644 --- a/site/ko/guide/advanced_autodiff.ipynb +++ b/site/ko/guide/advanced_autodiff.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -80,9 +78,7 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "\n", @@ -124,9 +120,7 @@ "metadata": { "id": "mhFSYf7uQWxR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -160,9 +154,7 @@ "metadata": { "id": "lsMHsmrh4pqM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -198,9 +190,7 @@ "metadata": { "id": "30qnZMe48BkB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "y = tf.Variable(3.0)\n", @@ -250,9 +240,7 @@ "metadata": { "id": "Mjj01w4NYtwd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Establish an identity operation, but clip during the gradient pass.\n", "@tf.custom_gradient\n", @@ -297,9 +285,7 @@ "metadata": { "id": "Q5JBgIBYjN1I" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MyModule(tf.Module):\n", "\n", @@ -316,9 +302,7 @@ "metadata": { "id": "xZTrgy2q-9pq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.saved_model.save(\n", " model,\n", @@ -361,9 +345,7 @@ "metadata": { "id": "BJ0HdMvte0VZ" }, - "outputs": [ - - 
], + "outputs": [], "source": [ "x0 = tf.constant(0.0)\n", "x1 = tf.constant(0.0)\n", @@ -386,9 +368,7 @@ "metadata": { "id": "6ApAoMNFfNz6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tape0.gradient(ys, x0).numpy() # cos(x) => 1.0" ] @@ -399,9 +379,7 @@ "metadata": { "id": "rF1jrAJsfYW_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tape1.gradient(ys, x1).numpy() # sigmoid(x1)*(1-sigmoid(x1)) => 0.25" ] @@ -425,9 +403,7 @@ "metadata": { "id": "cPQgthZ7ugRJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0\n", "\n", @@ -480,9 +456,7 @@ "metadata": { "id": "tH3ZFuUfDLrR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -495,9 +469,7 @@ "metadata": { "id": "E6yOFsjEDR9u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with tf.GradientTape() as t2:\n", " # The inner tape only takes the gradient with respect to the input,\n", @@ -521,9 +493,7 @@ "metadata": { "id": "123QMq6PqK_d" }, - "outputs": [ - - ], + "outputs": [], "source": [ "[var.shape for var in dg1_mag]" ] @@ -586,9 +556,7 @@ "metadata": { "id": "bAFeIE8EuVIq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.linspace(-10.0, 10.0, 200+1)\n", "delta = tf.Variable(0.0)\n", @@ -614,9 +582,7 @@ "metadata": { "id": "iZ6awnDzr_BA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(y.shape)\n", "print(dy_dx.shape)" @@ -628,9 +594,7 @@ "metadata": { "id": "siNZaklc0_-e" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(x.numpy(), y, label='y')\n", "plt.plot(x.numpy(), dy_dx, label='dy/dx')\n", @@ -664,9 +628,7 @@ "metadata": { "id": "39YXItgLxMBk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.random.normal([7, 5])\n", "layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)\n", @@ -692,9 +654,7 @@ "metadata": { "id": "CigTWyfPvPuv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layer.kernel.shape" ] @@ 
-714,9 +674,7 @@ "metadata": { "id": "pRLzTTbvEimH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "j = tape.jacobian(y, layer.kernel)\n", "j.shape" @@ -737,9 +695,7 @@ "metadata": { "id": "FJjZpYRnDjVa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "g = tape.gradient(y, layer.kernel)\n", "print('g.shape:', g.shape)\n", @@ -778,9 +734,7 @@ "metadata": { "id": "ELGTaell_j81" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.random.normal([7, 5])\n", "layer1 = tf.keras.layers.Dense(8, activation=tf.nn.relu)\n", @@ -803,9 +757,7 @@ "metadata": { "id": "FVqQuZj4XGjm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(f'layer.kernel.shape: {layer1.kernel.shape}')\n", "print(f'h.shape: {h.shape}')" @@ -826,9 +778,7 @@ "metadata": { "id": "6te7N6wVXwXX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "n_params = tf.reduce_prod(layer1.kernel.shape)\n", "\n", @@ -851,9 +801,7 @@ "metadata": { "id": "8TCHc7Vrf52S" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def imshow_zero_center(image, **kwargs):\n", " lim = tf.reduce_max(abs(image))\n", @@ -867,9 +815,7 @@ "metadata": { "id": "DExOxd7Ok2H0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "imshow_zero_center(h_mat)" ] @@ -889,9 +835,7 @@ "metadata": { "id": "3DdnbynBdSor" }, - "outputs": [ - - ], + "outputs": [], "source": [ "eps = 1e-3\n", "eye_eps = tf.eye(h_mat.shape[0])*eps" @@ -912,9 +856,7 @@ "metadata": { "id": "k1LYftgmswOO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# X(k+1) = X(k) - (∇²f(X(k)))^-1 @ ∇f(X(k))\n", "# h_mat = ∇²f(X(k))\n", @@ -960,9 +902,7 @@ "metadata": { "id": "tQMndhIUHMes" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -992,9 +932,7 @@ "metadata": { "id": "1zSl2A5-HhMH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "j = tape.jacobian(y, x)\n", "j.shape" @@ -1015,9 +953,7 @@ "metadata": { "id": "ZFl9uj3ueVSH" }, - "outputs": [ - - ], + "outputs": [], "source": [ 
"imshow_zero_center(j[:, 0, :, 0])\n", "_ = plt.title('A (batch, batch) slice')" @@ -1029,9 +965,7 @@ "metadata": { "id": "g4ZoRJcJNmy5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def plot_as_patches(j):\n", " # Reorder axes so the diagonals will each form a contiguous patch.\n", @@ -1064,9 +998,7 @@ "metadata": { "id": "v65OAjEgLQwl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "j_sum = tf.reduce_sum(j, axis=2)\n", "print(j_sum.shape)\n", @@ -1089,9 +1021,7 @@ "metadata": { "id": "YJLIl9WpHqYq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "jb = tape.batch_jacobian(y, x)\n", "jb.shape" @@ -1103,9 +1033,7 @@ "metadata": { "id": "-5t_q5SfHw7T" }, - "outputs": [ - - ], + "outputs": [], "source": [ "error = tf.reduce_max(abs(jb - j_sum))\n", "assert error < 1e-3\n", @@ -1127,9 +1055,7 @@ "metadata": { "id": "tnDugVc-L4fj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.random.normal([7, 5])\n", "\n", @@ -1153,9 +1079,7 @@ "metadata": { "id": "SNyZ1WhJMVLm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plot_as_patches(j)\n", "\n", @@ -1178,9 +1102,7 @@ "metadata": { "id": "k8_mICHoasCi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "jb = tape.batch_jacobian(y, x)\n", "print(f'jb.shape: {jb.shape}')" @@ -1189,9 +1111,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "advanced_autodiff.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/autodiff.ipynb b/site/ko/guide/autodiff.ipynb index 45d003c10c..d98cd6a0c1 100644 --- a/site/ko/guide/autodiff.ipynb +++ b/site/ko/guide/autodiff.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -84,9 +82,7 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [ - - ], + "outputs": [], "source": [ 
"import numpy as np\n", "import matplotlib.pyplot as plt\n", @@ -124,9 +120,7 @@ "metadata": { "id": "Xq9GgTCP7a4A" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(3.0)\n", "\n", @@ -149,9 +143,7 @@ "metadata": { "id": "LsvrwF6bHroC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# dy = 2x * dx\n", "dy_dx = tape.gradient(y, x)\n", @@ -173,9 +165,7 @@ "metadata": { "id": "vacZ3-Ws5VdV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "w = tf.Variable(tf.random.normal((3, 2)), name='w')\n", "b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')\n", @@ -201,9 +191,7 @@ "metadata": { "id": "luOtK1Da_BR0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "[dl_dw, dl_db] = tape.gradient(loss, [w, b])" ] @@ -223,9 +211,7 @@ "metadata": { "id": "aYbWRFPZqk4U" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(w.shape)\n", "print(dl_dw.shape)" @@ -246,9 +232,7 @@ "metadata": { "id": "d73cY6NOuaMd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "my_vars = {\n", " 'w': w,\n", @@ -278,9 +262,7 @@ "metadata": { "id": "JvesHtbQESc-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layer = tf.keras.layers.Dense(2, activation='relu')\n", "x = tf.constant([[1., 2., 3.]])\n", @@ -300,9 +282,7 @@ "metadata": { "id": "PR_ezr6UFrpI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for var, g in zip(layer.trainable_variables, grad):\n", " print(f'{var.name}, shape: {g.shape}')" @@ -340,9 +320,7 @@ "metadata": { "id": "Kj9gPckdB37a" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# A trainable variable\n", "x0 = tf.Variable(3.0, name='x0')\n", @@ -377,9 +355,7 @@ "metadata": { "id": "hwNwjW1eAkib" }, - "outputs": [ - - ], + "outputs": [], "source": [ "[var.name for var in tape.watched_variables()]" ] @@ -401,9 +377,7 @@ "metadata": { "id": "tVN1QqFRDHBK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(3.0)\n", "with tf.GradientTape() as tape:\n", @@ -430,9 +404,7 @@ "metadata": { "id": 
"7QPzwWvSEwIp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x0 = tf.Variable(0.0)\n", "x1 = tf.Variable(10.0)\n", @@ -460,9 +432,7 @@ "metadata": { "id": "e6GM-3evH1Sz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)\n", "grad = tape.gradient(ys, {'x0': x0, 'x1': x1})\n", @@ -488,9 +458,7 @@ "metadata": { "id": "7XaPRAwUyYms" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(3.0)\n", "\n", @@ -520,9 +488,7 @@ "metadata": { "id": "zZaCm3-9zVCi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant([1, 3.0])\n", "with tf.GradientTape(persistent=True) as tape:\n", @@ -540,9 +506,7 @@ "metadata": { "id": "j8bv_jQFg6CN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "del tape # Drop the reference to the tape" ] @@ -586,9 +550,7 @@ "metadata": { "id": "qI0sDV_WeXBb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "with tf.GradientTape(persistent=True) as tape:\n", @@ -617,9 +579,7 @@ "metadata": { "id": "o4a6_YOcfWKS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "with tf.GradientTape() as tape:\n", @@ -644,9 +604,7 @@ "metadata": { "id": "DArPWqsSh5un" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.)\n", "\n", @@ -682,9 +640,7 @@ "metadata": { "id": "JQvk_jnMmTDS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.linspace(-10.0, 10.0, 200+1)\n", "\n", @@ -701,9 +657,7 @@ "metadata": { "id": "e_f2QgDPmcPE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(x, y, label='y')\n", "plt.plot(x, dy_dx, label='dy/dx')\n", @@ -730,9 +684,7 @@ "metadata": { "id": "ciFLizhrrjy7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(1.0)\n", "\n", @@ -769,9 +721,7 @@ "metadata": { "id": "8k05WmuAwPm7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dx = tape.gradient(result, x)\n", "\n", @@ -795,9 +745,7 @@ "metadata": { "id": 
"CU185WDM81Ut" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.)\n", "y = tf.Variable(3.)\n", @@ -835,9 +783,7 @@ "metadata": { "id": "QPKY4Tn9zX7_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable(2.0)\n", "\n", @@ -866,9 +812,7 @@ "metadata": { "id": "jmoLCDJb_yw1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable([[1.0, 2.0],\n", " [3.0, 4.0]], dtype=tf.float32)\n", @@ -905,9 +849,7 @@ "metadata": { "id": "9jlHXHqfASU3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(10)\n", "\n", @@ -948,9 +890,7 @@ "metadata": { "id": "C1tLeeRFE479" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x0 = tf.Variable(3.0)\n", "x1 = tf.Variable(0.0)\n", @@ -1004,9 +944,7 @@ "metadata": { "id": "HSb20FXc_V0U" }, - "outputs": [ - - ], + "outputs": [], "source": [ "image = tf.Variable([[[0.5, 0.0, 0.0]]])\n", "delta = tf.Variable(0.1)\n", @@ -1054,9 +992,7 @@ "metadata": { "id": "U6zxk1sf9Ixx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.Variable([2., 2.])\n", "y = tf.Variable(3.)\n", diff --git a/site/ko/guide/basic_training_loops.ipynb b/site/ko/guide/basic_training_loops.ipynb index 4e6bcf0d20..8dd1b7380c 100644 --- a/site/ko/guide/basic_training_loops.ipynb +++ b/site/ko/guide/basic_training_loops.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "m8y3rGtQsYP2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -82,9 +80,7 @@ "metadata": { "id": "NiolgWMPgpwI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "\n", @@ -136,9 +132,7 @@ "metadata": { "id": "NzivK2ATByOz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The actual line\n", "TRUE_W = 3.0\n", @@ -166,9 +160,7 @@ "metadata": { "id": "IlFd_HVBFGIF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Plot all 
the data\n", "plt.plot(x, y, '.')\n", @@ -205,9 +197,7 @@ "metadata": { "id": "_WRu7Pze7wk8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MyModel(tf.Module):\n", " def __init__(self, **kwargs):\n", @@ -255,9 +245,7 @@ "metadata": { "id": "Y0ysUFGY924U" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This computes a single loss value for an entire batch\n", "def loss(target_y, predicted_y):\n", @@ -279,9 +267,7 @@ "metadata": { "id": "_eb83LtrB4nt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(x, y, '.', label=\"Data\")\n", "plt.plot(x, f(x), label=\"Ground truth\")\n", @@ -318,9 +304,7 @@ "metadata": { "id": "MBIACgdnA55X" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Given a callable model, inputs, outputs, and a learning rate...\n", "def train(model, x, y, learning_rate):\n", @@ -352,9 +336,7 @@ "metadata": { "id": "XdfkR223D9dW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model = MyModel()\n", "\n", @@ -398,9 +380,7 @@ "metadata": { "id": "iRuNUghs1lHY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "current_loss = loss(y, model(x))\n", "\n", @@ -425,9 +405,7 @@ "metadata": { "id": "ND1fQw8sbTNr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(epochs, weights, label='Weights', color=colors[0])\n", "plt.plot(epochs, [TRUE_W] * len(epochs), '--',\n", @@ -456,9 +434,7 @@ "metadata": { "id": "tpTEjWWex568" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(x, y, '.', label=\"Data\")\n", "plt.plot(x, f(x), label=\"Ground truth\")\n", @@ -488,9 +464,7 @@ "metadata": { "id": "Z86hCI0x1YX3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MyModelKeras(tf.keras.Model):\n", " def __init__(self, **kwargs):\n", @@ -529,9 +503,7 @@ "metadata": { "id": "-nbLLfPE2pEl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "keras_model = MyModelKeras()\n", "\n", @@ -568,9 +540,7 @@ "metadata": { "id": "zfAYqtu136PO" }, - "outputs": [ - - ], + "outputs": [], "source": 
[ "print(x.shape[0])\n", "keras_model.fit(x, y, epochs=10, batch_size=1000)" diff --git a/site/ko/guide/core/logistic_regression_core.ipynb b/site/ko/guide/core/logistic_regression_core.ipynb index d637e4c73d..b0a832b7c1 100644 --- a/site/ko/guide/core/logistic_regression_core.ipynb +++ b/site/ko/guide/core/logistic_regression_core.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -84,9 +82,7 @@ "metadata": { "id": "5lZoUK6AVTos" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install -q seaborn" ] @@ -97,9 +93,7 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import pandas as pd\n", @@ -135,9 +129,7 @@ "metadata": { "id": "CiX2FI4gZtTt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'\n", "\n", @@ -165,9 +157,7 @@ "metadata": { "id": "uvR2Bzb691lJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = pd.read_csv(url, names=column_names)" ] @@ -178,9 +168,7 @@ "metadata": { "id": "YB9eq6Zq-IZ4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset.info()" ] @@ -200,9 +188,7 @@ "metadata": { "id": "hWxktwbv-KPp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset.head()" ] @@ -222,9 +208,7 @@ "metadata": { "id": "m2O60B-IVG9Q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_dataset = dataset.sample(frac=0.75, random_state=1)" ] @@ -235,9 +219,7 @@ "metadata": { "id": "i06vHFv_QB24" }, - "outputs": [ - - ], + "outputs": [], "source": [ "len(train_dataset)" ] @@ -248,9 +230,7 @@ "metadata": { "id": "19JaochhaQ3m" }, - "outputs": [ - - ], + "outputs": [], "source": [ "test_dataset = dataset.drop(train_dataset.index)" 
] @@ -261,9 +241,7 @@ "metadata": { "id": "LmHRcbAfaSag" }, - "outputs": [ - - ], + "outputs": [], "source": [ "len(test_dataset)" ] @@ -274,9 +252,7 @@ "metadata": { "id": "w6JxBhBc_wwO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The `id` column can be dropped since each row is unique\n", "x_train, y_train = train_dataset.iloc[:, 2:], train_dataset.iloc[:, 1]\n", @@ -304,9 +280,7 @@ "metadata": { "id": "JEJHhN65a2VV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "y_train, y_test = y_train.map({'B': 0, 'M': 1}), y_test.map({'B': 0, 'M': 1})\n", "x_train, y_train = tf.convert_to_tensor(x_train, dtype=tf.float32), tf.convert_to_tensor(y_train, dtype=tf.float32)\n", @@ -328,9 +302,7 @@ "metadata": { "id": "oRKO_x8gWKv-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "sns.pairplot(train_dataset.iloc[:, 1:6], hue = 'diagnosis', diag_kind='kde');" ] @@ -352,9 +324,7 @@ "metadata": { "id": "yi2FzC3T21jR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_dataset.describe().transpose()[:10]" ] @@ -374,9 +344,7 @@ "metadata": { "id": "FrzKNFNjLQDl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Normalize(tf.Module):\n", " def __init__(self, x):\n", @@ -449,9 +417,7 @@ "metadata": { "id": "ThHaV_RmucZl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.linspace(-10, 10, 500)\n", "x = tf.cast(x, tf.float32)\n", @@ -489,9 +455,7 @@ "metadata": { "id": "JVBInnSqS36W" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def log_loss(y_pred, y):\n", " # Compute the log loss function\n", @@ -545,9 +509,7 @@ "metadata": { "id": "c0sXM7qLlKfZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class LogisticRegression(tf.Module):\n", "\n", @@ -586,9 +548,7 @@ "metadata": { "id": "5bIovC0Z4QHJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "log_reg = LogisticRegression()" ] @@ -599,9 +559,7 @@ "metadata": { "id": "QJ2ievISyf0p" }, - "outputs": [ - - ], + "outputs": [], "source": [ "y_pred = 
log_reg(x_train_norm[:5], train=False)\n", "y_pred.numpy()" @@ -622,9 +580,7 @@ "metadata": { "id": "ssnVcKg7oMe6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def predict_class(y_pred, thresh=0.5):\n", " # Return a tensor with `1` if `y_pred` > `0.5`, and `0` otherwise\n", @@ -656,9 +612,7 @@ "metadata": { "id": "vJD7-4U0etqa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batch_size = 64\n", "train_dataset = tf.data.Dataset.from_tensor_slices((x_train_norm, y_train))\n", @@ -682,9 +636,7 @@ "metadata": { "id": "jNC3D1DGsGgK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Set training parameters\n", "epochs = 200\n", @@ -748,9 +700,7 @@ "metadata": { "id": "mv3oCQPvWhr0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(range(epochs), train_losses, label = \"Training loss\")\n", "plt.plot(range(epochs), test_losses, label = \"Testing loss\")\n", @@ -766,9 +716,7 @@ "metadata": { "id": "D2HDVGLPODIE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.plot(range(epochs), train_accs, label = \"Training accuracy\")\n", "plt.plot(range(epochs), test_accs, label = \"Testing accuracy\")\n", @@ -784,9 +732,7 @@ "metadata": { "id": "jonKhUzuPyfa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(f\"Final training log loss: {train_losses[-1]:.3f}\")\n", "print(f\"Final testing log Loss: {test_losses[-1]:.3f}\")" @@ -798,9 +744,7 @@ "metadata": { "id": "d3DF4qyrPyke" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(f\"Final training accuracy: {train_accs[-1]:.3f}\")\n", "print(f\"Final testing accuracy: {test_accs[-1]:.3f}\")" @@ -825,9 +769,7 @@ "metadata": { "id": "OJO7YkA8ZDMU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def show_confusion_matrix(y, y_classes, typ):\n", " # Compute the confusion matrix and normalize it\n", @@ -852,9 +794,7 @@ "metadata": { "id": "OQ5DFcleiDFm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "show_confusion_matrix(y_train, train_classes, 'Training')" 
] @@ -865,9 +805,7 @@ "metadata": { "id": "gtfcsAp_iCNR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "show_confusion_matrix(y_test, test_classes, 'Testing')" ] @@ -904,9 +842,7 @@ "metadata": { "id": "6KPRHCzg4ZxH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ExportModule(tf.Module):\n", " def __init__(self, model, norm_x, class_pred):\n", @@ -930,9 +866,7 @@ "metadata": { "id": "2YzRclo5-yjO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "log_reg_export = ExportModule(model=log_reg,\n", " norm_x=norm_x,\n", @@ -954,9 +888,7 @@ "metadata": { "id": "a4Qum1Ts_pmF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "models = tempfile.mkdtemp()\n", "save_path = os.path.join(models, 'log_reg_export')\n", @@ -969,9 +901,7 @@ "metadata": { "id": "3KPILr1i_M_c" }, - "outputs": [ - - ], + "outputs": [], "source": [ "log_reg_loaded = tf.saved_model.load(save_path)\n", "test_preds = log_reg_loaded(x_test)\n", @@ -998,9 +928,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "logistic_regression_core.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/core/matrix_core.ipynb b/site/ko/guide/core/matrix_core.ipynb index 8529286654..0f05c669a4 100644 --- a/site/ko/guide/core/matrix_core.ipynb +++ b/site/ko/guide/core/matrix_core.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -93,9 +91,7 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import matplotlib\n", "from matplotlib.image import imread\n", @@ -111,9 +107,7 @@ "metadata": { "id": "9xQKvCJ85kCQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "print(tf.__version__)" @@ -167,9 +161,7 @@ "metadata": { "id": "C3QAcgyoeIpv" }, - 
"outputs": [ - - ], + "outputs": [], "source": [ "A = tf.random.uniform(shape=[40,30])\n", "# Compute the SVD factorization\n", @@ -201,9 +193,7 @@ "metadata": { "id": "TPE6QeMtADUn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "A_svd = tf.einsum('s,us,vs -> uv',s,U,V)\n", "print('\\nReconstructed Matrix, A_svd', A_svd)" @@ -256,9 +246,7 @@ "metadata": { "id": "2oY3pMPagJrO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def rank_r_approx(s, U, V, r, verbose=False):\n", " # Compute the matrices necessary for a rank-r approximation\n", @@ -288,9 +276,7 @@ "metadata": { "id": "O3ZRkYCkX2FQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(f\"Original Size of A: {tf.size(A)}\")\n", "s, U, V = tf.linalg.svd(A)" @@ -302,9 +288,7 @@ "metadata": { "id": "S1DR83VMX4cM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Rank-15 approximation\n", "A_15, A_15_size = rank_r_approx(s, U, V, 15, verbose = True)\n", @@ -317,9 +301,7 @@ "metadata": { "id": "KgFT70XFX57E" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Rank-3 approximation\n", "A_3, A_3_size = rank_r_approx(s, U, V, 3, verbose = True)\n", @@ -354,9 +336,7 @@ "metadata": { "id": "OVsZOQUAZ2C7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "img_link = \"https://imagen.research.google/main_gallery_images/a-photo-of-a-corgi-dog-riding-a-bike-in-times-square.jpg\"\n", "img_path = requests.get(img_link, stream=True).raw\n", @@ -370,9 +350,7 @@ "metadata": { "id": "Qvs7uftcZ54x" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def show_img(I):\n", " # Display the image in matplotlib\n", @@ -387,9 +365,7 @@ "metadata": { "id": "ZbesXO3HZ6Qs" }, - "outputs": [ - - ], + "outputs": [], "source": [ "show_img(I)" ] @@ -411,9 +387,7 @@ "metadata": { "id": "i7DDp0h7oSIk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def compress_image(I, r, verbose=False):\n", " # Compress an image with the SVD given a rank \n", @@ -456,9 +430,7 @@ "metadata": { "id": 
"7GlKkVLGDjre" }, - "outputs": [ - - ], + "outputs": [], "source": [ "I_100, I_100_prop = compress_image(I, 100, verbose=True)" ] @@ -469,9 +441,7 @@ "metadata": { "id": "XdvUkF5_E75D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "I_50, I_50_prop = compress_image(I, 50, verbose=True)" ] @@ -482,9 +452,7 @@ "metadata": { "id": "MsCNZ8416Sbk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "I_10, I_10_prop = compress_image(I, 10, verbose=True)" ] @@ -517,9 +485,7 @@ "metadata": { "id": "O1ariNQe6Wbl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plt.figure(figsize=(11,6))\n", "plt.plot([100, 50, 10], [I_100_prop, I_50_prop, I_10_prop])\n", @@ -561,9 +527,7 @@ "metadata": { "id": "viVO-I60QynI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def compress_image_with_factor(I, compression_factor, verbose=False):\n", " # Returns a compressed image based on a desired compression factor\n", @@ -588,9 +552,7 @@ "metadata": { "id": "HVeeloIwQ1b6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "compression_factor = 0.15\n", "I_r_img = compress_image_with_factor(I, compression_factor, verbose=True)" @@ -613,9 +575,7 @@ "metadata": { "id": "CteJ6VbKlndu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def viz_energy(I):\n", " # Visualize the energy captured based on rank\n", @@ -639,9 +599,7 @@ "metadata": { "id": "Vl9PKow-GgCp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "viz_energy(I)" ] @@ -661,9 +619,7 @@ "metadata": { "id": "fum5Cvm7R5vH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def compress_image_with_energy(I, energy_factor, verbose=False):\n", " # Returns a compressed image based on a desired energy factor\n", @@ -697,9 +653,7 @@ "metadata": { "id": "xDXBaZQ4c5jF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "energy_factor = 0.75\n", "I_r_img = compress_image_with_energy(I, energy_factor, verbose=True)" @@ -726,9 +680,7 @@ "metadata": { "id": "hctOvN8BckiS" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "s, U, V = tf.linalg.svd(A)\n", "A_10, A_10_size = rank_r_approx(s, U, V, 10)\n", @@ -758,9 +710,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "matrix_core.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/core/mlp_core.ipynb b/site/ko/guide/core/mlp_core.ipynb index 54081db1f4..f2bf66a03e 100644 --- a/site/ko/guide/core/mlp_core.ipynb +++ b/site/ko/guide/core/mlp_core.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "AwOEIRJC6Une" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,11 +49,11 @@ "\n", " \n", - " \n", - " \n", - " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", " GitHub에서 소스 보기\n", + " GitHub에서 소스 보기\n", " 노트북 다운로드하기 노트북 다운로드하기
" ] }, @@ -120,9 +118,7 @@ "metadata": { "id": "mSfgqmwBagw_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Use seaborn for countplot.\n", "!pip install -q seaborn" @@ -134,9 +130,7 @@ "metadata": { "id": "1rRo8oNqZ-Rj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import pandas as pd\n", "import matplotlib\n", @@ -154,9 +148,7 @@ "metadata": { "id": "9xQKvCJ85kCQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import tensorflow_datasets as tfds\n", @@ -184,9 +176,7 @@ "metadata": { "id": "Uiuh0B098_3p" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_data, val_data, test_data = tfds.load(\"mnist\", \n", " split=['train[10000:]', 'train[0:10000]', 'test'],\n", @@ -208,9 +198,7 @@ "metadata": { "id": "6V8hSqJ7AMjQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x_viz, y_viz = tfds.load(\"mnist\", split=['train[:1500]'], batch_size=-1, as_supervised=True)[0]\n", "x_viz = tf.squeeze(x_viz, axis=3)\n", @@ -238,9 +226,7 @@ "metadata": { "id": "Rj3K4XgQE7qR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "sns.countplot(x=y_viz.numpy());\n", "plt.xlabel('Digits')\n", @@ -264,9 +250,7 @@ "metadata": { "id": "JSyCm2V2_AvI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def preprocess(x, y):\n", " # Reshaping the data\n", @@ -297,9 +281,7 @@ "metadata": { "id": "hYunzt3UyT9G" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.linspace(-2, 2, 201)\n", "x = tf.cast(x, tf.float32)\n", @@ -326,9 +308,7 @@ "metadata": { "id": "fVM8pvhWwuwI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.linspace(-4, 4, 201)\n", "x = tf.cast(x, tf.float32)\n", @@ -359,9 +339,7 @@ "metadata": { "id": "re1SSFyBdMrS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def xavier_init(shape):\n", " # Computes the xavier initialization values for a weight matrix\n", @@ -387,9 +365,7 @@ "metadata": { "id": "IM0yJos25FG5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class 
DenseLayer(tf.Module):\n", "\n", @@ -428,9 +404,7 @@ "metadata": { "id": "6XisRWiCyHAb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MLP(tf.Module):\n", "\n", @@ -464,9 +438,7 @@ "metadata": { "id": "VmlACuki3oPi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "hidden_layer_1_size = 700\n", "hidden_layer_2_size = 500\n", @@ -504,9 +476,7 @@ "metadata": { "id": "rskOYA7FVCwg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def cross_entropy_loss(y_pred, y):\n", " # Compute cross entropy loss with a sparse operation\n", @@ -529,9 +499,7 @@ "metadata": { "id": "jPJMWx2UgiBm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def accuracy(y_pred, y):\n", " # Compute accuracy after extracting class predictions\n", @@ -557,9 +525,7 @@ "metadata": { "id": "iGIBDk3cAv6a" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Adam:\n", "\n", @@ -608,9 +574,7 @@ "metadata": { "id": "CJLeY2ao1aw6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def train_step(x_batch, y_batch, loss, acc, model, optimizer):\n", " # Update the model state given a batch of data\n", @@ -636,9 +600,7 @@ "metadata": { "id": "oC85kuZgmh3q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def train_model(mlp, train_data, val_data, loss, acc, optimizer, epochs):\n", " # Initialize data structures\n", @@ -692,9 +654,7 @@ "metadata": { "id": "zPlT8QfxptYl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train_losses, train_accs, val_losses, val_accs = train_model(mlp_model, train_data, val_data, \n", " loss=cross_entropy_loss, acc=accuracy,\n", @@ -718,9 +678,7 @@ "metadata": { "id": "VXTCYVtNDjAM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def plot_metrics(train_metric, val_metric, metric_type):\n", " # Visualize metrics vs training Epochs\n", @@ -739,9 +697,7 @@ "metadata": { "id": "DC-qIvZbHo0G" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plot_metrics(train_losses, val_losses, \"cross entropy loss\")" ] @@ -752,9 
+708,7 @@ "metadata": { "id": "P-w2xk2PIDve" }, - "outputs": [ - - ], + "outputs": [], "source": [ "plot_metrics(train_accs, val_accs, \"accuracy\")" ] @@ -780,9 +734,7 @@ "metadata": { "id": "1sszfWuJJZoo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ExportModule(tf.Module):\n", " def __init__(self, model, preprocess, class_pred):\n", @@ -806,9 +758,7 @@ "metadata": { "id": "p8x6gjTDVi5d" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def preprocess_test(x):\n", " # The export module takes in unprocessed and unlabeled data\n", @@ -836,9 +786,7 @@ "metadata": { "id": "fN9pPBQTKTe3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mlp_model_export = ExportModule(model=mlp_model,\n", " preprocess=preprocess_test,\n", @@ -851,9 +799,7 @@ "metadata": { "id": "idS7rQKbKwRS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "models = tempfile.mkdtemp()\n", "save_path = os.path.join(models, 'mlp_model_export')\n", @@ -875,9 +821,7 @@ "metadata": { "id": "W5cwBTUqxldW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mlp_loaded = tf.saved_model.load(save_path)" ] @@ -888,9 +832,7 @@ "metadata": { "id": "bmv0u6j_b5OC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def accuracy_score(y_pred, y):\n", " # Generic accuracy function\n", @@ -918,9 +860,7 @@ "metadata": { "id": "UD8YiC1Vfeyp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Accuracy breakdown by digit:\")\n", "print(\"---------------------------\")\n", @@ -951,9 +891,7 @@ "metadata": { "id": "JqCaqPwwh1tN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import sklearn.metrics as sk_metrics\n", "\n", diff --git a/site/ko/guide/data.ipynb b/site/ko/guide/data.ipynb index 242da3339a..536b9ba38e 100644 --- a/site/ko/guide/data.ipynb +++ b/site/ko/guide/data.ipynb @@ -18,9 +18,7 @@ "cellView": "form", "id": "llMNufAK7nfK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { 
display-mode: \"form\" }\n", "# you may not use this file except in compliance with the License.\n", @@ -81,9 +79,7 @@ "metadata": { "id": "UJIEjEIBdf-h" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf" ] @@ -94,9 +90,7 @@ "metadata": { "id": "7Y0JtWBNR9E5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import pathlib\n", "import os\n", @@ -130,9 +124,7 @@ "metadata": { "id": "0F-FDnjB6t6J" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])\n", "dataset" @@ -144,9 +136,7 @@ "metadata": { "id": "pwJsRJ-FbDcJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for elem in dataset:\n", " print(elem.numpy())" @@ -167,9 +157,7 @@ "metadata": { "id": "03w9oxFfbDcM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "it = iter(dataset)\n", "\n", @@ -191,9 +179,7 @@ "metadata": { "id": "C2bHAeNxbDcO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(dataset.reduce(0, lambda state, value: state + value).numpy())" ] @@ -223,9 +209,7 @@ "metadata": { "id": "Mg0m1beIhXGn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))\n", "\n", @@ -238,9 +222,7 @@ "metadata": { "id": "cwyemaghhXaG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset2 = tf.data.Dataset.from_tensor_slices(\n", " (tf.random.uniform([4]),\n", @@ -255,9 +237,7 @@ "metadata": { "id": "1CL7aB0ahXn_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n", "\n", @@ -270,9 +250,7 @@ "metadata": { "id": "m5bz7R1xhX1f" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Dataset containing a sparse tensor.\n", "dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))\n", @@ -286,9 +264,7 @@ "metadata": { "id": "lVOPHur_hYQv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Use 
value_type to see the type of value represented by the element spec\n", "dataset4.element_spec.value_type" @@ -309,9 +285,7 @@ "metadata": { "id": "2myAr3Pxd-zF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset1 = tf.data.Dataset.from_tensor_slices(\n", " tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))\n", @@ -325,9 +299,7 @@ "metadata": { "id": "woPXMP14gUTg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for z in dataset1:\n", " print(z.numpy())" @@ -339,9 +311,7 @@ "metadata": { "id": "53PA4x6XgLar" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset2 = tf.data.Dataset.from_tensor_slices(\n", " (tf.random.uniform([4]),\n", @@ -356,9 +326,7 @@ "metadata": { "id": "2ju4sNSebDcR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset3 = tf.data.Dataset.zip((dataset1, dataset2))\n", "\n", @@ -371,9 +339,7 @@ "metadata": { "id": "BgxsfAS2g6gk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for a, (b,c) in dataset3:\n", " print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))" @@ -407,9 +373,7 @@ "metadata": { "id": "NmaE6PjjhQ47" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train, test = tf.keras.datasets.fashion_mnist.load_data()" ] @@ -420,9 +384,7 @@ "metadata": { "id": "J6cNiuDBbDcU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "images, labels = train\n", "images = images/255\n", @@ -459,9 +421,7 @@ "metadata": { "id": "9njpME-jmDza" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def count(stop):\n", " i = 0\n", @@ -476,9 +436,7 @@ "metadata": { "id": "xwqLrjnTpD8Y" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for n in count(5):\n", " print(n)" @@ -503,9 +461,7 @@ "metadata": { "id": "GFga_OTwm0Je" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )" ] @@ -516,9 +472,7 @@ "metadata": { "id": "fel1SUuBnDUE" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "for count_batch in ds_counter.repeat().batch(10).take(10):\n", " print(count_batch.numpy())" @@ -543,9 +497,7 @@ "metadata": { "id": "allFX1g8rGKe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def gen_series():\n", " i = 0\n", @@ -561,9 +513,7 @@ "metadata": { "id": "6Ku26Yb9rcJX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for i, series in gen_series():\n", " print(i, \":\", str(series))\n", @@ -588,9 +538,7 @@ "metadata": { "id": "zDTfhEzhsliM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds_series = tf.data.Dataset.from_generator(\n", " gen_series, \n", @@ -615,9 +563,7 @@ "metadata": { "id": "A7jEpj3As1lO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds_series_batch = ds_series.shuffle(20).padded_batch(10)\n", "\n", @@ -644,9 +590,7 @@ "metadata": { "id": "g-_JCFRQ1CXM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "flowers = tf.keras.utils.get_file(\n", " 'flower_photos',\n", @@ -669,9 +613,7 @@ "metadata": { "id": "vPCZeBQE5DfH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)" ] @@ -682,9 +624,7 @@ "metadata": { "id": "my4PxqfH26p6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "images, labels = next(img_gen.flow_from_directory(flowers))" ] @@ -695,9 +635,7 @@ "metadata": { "id": "Hd96nH1w3eKH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(images.dtype, images.shape)\n", "print(labels.dtype, labels.shape)" @@ -709,9 +647,7 @@ "metadata": { "id": "KvRwvt5E2rTH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = tf.data.Dataset.from_generator(\n", " lambda: img_gen.flow_from_directory(flowers), \n", @@ -728,9 +664,7 @@ "metadata": { "id": "LcaULBCXj_2_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for images, labels in ds.take(1):\n", " print('images.shape: ', images.shape)\n", @@ -765,9 +699,7 @@ "metadata": { "id": "jZo_4fzdbDcW" }, - 
"outputs": [ - - ], + "outputs": [], "source": [ "# Creates a dataset that reads all of the examples from two files.\n", "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")" @@ -788,9 +720,7 @@ "metadata": { "id": "e2WV5d7DRUA-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n", "dataset" @@ -811,9 +741,7 @@ "metadata": { "id": "3tk29nlMl5P3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "raw_example = next(iter(dataset))\n", "parsed = tf.train.Example.FromString(raw_example.numpy())\n", @@ -840,9 +768,7 @@ "metadata": { "id": "hQMoFu2TbDcc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'\n", "file_names = ['cowper.txt', 'derby.txt', 'butler.txt']\n", @@ -859,9 +785,7 @@ "metadata": { "id": "il4cOjiVwj95" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.TextLineDataset(file_paths)" ] @@ -881,9 +805,7 @@ "metadata": { "id": "vpEHKyvHxu8A" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for line in dataset.take(5):\n", " print(line.numpy())" @@ -904,9 +826,7 @@ "metadata": { "id": "1UCveWOt7fDE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "files_ds = tf.data.Dataset.from_tensor_slices(file_paths)\n", "lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)\n", @@ -932,9 +852,7 @@ "metadata": { "id": "X6b20Gua2jPO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n", "titanic_lines = tf.data.TextLineDataset(titanic_file)" @@ -946,9 +864,7 @@ "metadata": { "id": "5M1pauNT68B2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for line in titanic_lines.take(10):\n", " print(line.numpy())" @@ -960,9 
+876,7 @@ "metadata": { "id": "dEIP95cibDcf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def survived(line):\n", " return tf.not_equal(tf.strings.substr(line, 0, 1), \"0\")\n", @@ -976,9 +890,7 @@ "metadata": { "id": "odQ4618h1XqD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for line in survivors.take(10):\n", " print(line.numpy())" @@ -1012,9 +924,7 @@ "metadata": { "id": "kj28j5u49Bjm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")" ] @@ -1025,9 +935,7 @@ "metadata": { "id": "ghvtmW40LM0B" }, - "outputs": [ - - ], + "outputs": [], "source": [ "df = pd.read_csv(titanic_file)\n", "df.head()" @@ -1048,9 +956,7 @@ "metadata": { "id": "JmAMCiPJA0qO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))\n", "\n", @@ -1078,9 +984,7 @@ "metadata": { "id": "zHUDrM_s_brq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_batches = tf.data.experimental.make_csv_dataset(\n", " titanic_file, batch_size=4,\n", @@ -1093,9 +997,7 @@ "metadata": { "id": "TsZfhz79_Wlg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for feature_batch, label_batch in titanic_batches.take(1):\n", " print(\"'survived': {}\".format(label_batch))\n", @@ -1119,9 +1021,7 @@ "metadata": { "id": "H9KNHyDwF2Sc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_batches = tf.data.experimental.make_csv_dataset(\n", " titanic_file, batch_size=4,\n", @@ -1134,9 +1034,7 @@ "metadata": { "id": "7C2uosFnGIT8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for feature_batch, label_batch in titanic_batches.take(1):\n", " print(\"'survived': {}\".format(label_batch))\n", @@ -1159,9 +1057,7 @@ "metadata": { "id": "wP1Y_NXA8bYl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, 
tf.string, tf.string, tf.string, tf.string]\n", "dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)\n", @@ -1185,9 +1081,7 @@ "metadata": { "id": "Qry-g90FMo2I" }, - "outputs": [ - - ], + "outputs": [], "source": [ "%%writefile missing.csv\n", "1,2,3,4\n", @@ -1204,9 +1098,7 @@ "metadata": { "id": "d5_hbiE9bDck" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Creates a dataset that reads all of the records from two CSV files, each with\n", "# four float columns which may have missing values.\n", @@ -1223,9 +1115,7 @@ "metadata": { "id": "__jc7iD9M9FC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for line in dataset:\n", " print(line.numpy())" @@ -1246,9 +1136,7 @@ "metadata": { "id": "p2IF_K0obDcm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Creates a dataset that reads all of the records from two CSV files with\n", "# headers, extracting float data from columns 2 and 4.\n", @@ -1264,9 +1152,7 @@ "metadata": { "id": "-5aLprDeRNb0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for line in dataset:\n", " print(line.numpy())" @@ -1296,9 +1182,7 @@ "metadata": { "id": "1dZwN3CS-jV2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "flowers_root = tf.keras.utils.get_file(\n", " 'flower_photos',\n", @@ -1331,9 +1215,7 @@ "metadata": { "id": "_2iCXsHu6jJH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for item in flowers_root.glob(\"*\"):\n", " print(item.name)" @@ -1354,9 +1236,7 @@ "metadata": { "id": "lAkQp5uxoINu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))\n", "\n", @@ -1379,9 +1259,7 @@ "metadata": { "id": "-xhBRgvNqRRe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def process_path(file_path):\n", " label = tf.strings.split(file_path, os.sep)[-2]\n", @@ -1396,9 +1274,7 @@ "metadata": { "id": "kxrl0lGdnpRz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for image_raw, label_text in 
labeled_ds.take(1):\n", " print(repr(image_raw.numpy()[:100]))\n", @@ -1438,9 +1314,7 @@ "metadata": { "id": "xB7KeceLoSH0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "inc_dataset = tf.data.Dataset.range(100)\n", "dec_dataset = tf.data.Dataset.range(0, -100, -1)\n", @@ -1466,9 +1340,7 @@ "metadata": { "id": "yN7hn7OBoSHx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batched_dataset" ] @@ -1488,9 +1360,7 @@ "metadata": { "id": "BycWC7WCoSHt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batched_dataset = dataset.batch(7, drop_remainder=True)\n", "batched_dataset" @@ -1513,9 +1383,7 @@ "metadata": { "id": "kycwO0JooSHn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.range(100)\n", "dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))\n", @@ -1569,9 +1437,7 @@ "metadata": { "id": "0tODHZzRoSHg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n", "titanic_lines = tf.data.TextLineDataset(titanic_file)" @@ -1583,9 +1449,7 @@ "metadata": { "id": "LMO6mlXxoSHc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def plot_batch_sizes(ds):\n", " batch_sizes = [batch.shape[0] for batch in ds]\n", @@ -1611,9 +1475,7 @@ "metadata": { "id": "nZ0G1cztoSHX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_batches = titanic_lines.repeat(3).batch(128)\n", "plot_batch_sizes(titanic_batches)" @@ -1634,9 +1496,7 @@ "metadata": { "id": "wmbmdK1qoSHS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "titanic_batches = titanic_lines.batch(128).repeat(3)\n", "\n", @@ -1658,9 +1518,7 @@ "metadata": { "id": "YyekyeY7oSHO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "epochs = 3\n", "dataset = titanic_lines.batch(128)\n", @@ -1699,9 +1557,7 @@ "metadata": { "id": "Io4iJH1toSHI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "lines = 
tf.data.TextLineDataset(titanic_file)\n", "counter = tf.data.experimental.Counter()\n", @@ -1727,9 +1583,7 @@ "metadata": { "id": "ayM3FFFAoSHC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "n,line_batch = next(iter(dataset))\n", "print(n.numpy())" @@ -1752,9 +1606,7 @@ "metadata": { "id": "YX3pe7zZoSG6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.zip((counter, lines))\n", "shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)\n", @@ -1770,9 +1622,7 @@ "metadata": { "id": "H9hlE-lGoSGz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]\n", "plt.plot(shuffle_repeat, label=\"shuffle().repeat()\")\n", @@ -1795,9 +1645,7 @@ "metadata": { "id": "Bhxb5YGZoSGm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.zip((counter, lines))\n", "shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)\n", @@ -1813,9 +1661,7 @@ "metadata": { "id": "VAM4cbpZoSGL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]\n", "\n", @@ -1859,9 +1705,7 @@ "metadata": { "id": "rMGlj8V-u-NH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))" ] @@ -1881,9 +1725,7 @@ "metadata": { "id": "fZObC0debDcr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Reads an image from a file, decodes it into a dense tensor, and resizes it\n", "# to a fixed shape.\n", @@ -1913,9 +1755,7 @@ "metadata": { "id": "y8xuN_HBzGup" }, - "outputs": [ - - ], + "outputs": [], "source": [ "file_path = next(iter(list_ds))\n", "image, label = parse_image(file_path)\n", @@ -1944,9 +1784,7 @@ "metadata": { "id": "SzO8LI_H5Sk_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "images_ds = list_ds.map(parse_image)\n", "\n", @@ -1984,9 +1822,7 @@ "metadata": { "id": "tBUmbERt7Czz" }, - "outputs": [ - - ], + "outputs": [], 
"source": [ "import scipy.ndimage as ndimage\n", "\n", @@ -2001,9 +1837,7 @@ "metadata": { "id": "_wEyL7bS9S6t" }, - "outputs": [ - - ], + "outputs": [], "source": [ "image, label = next(iter(images_ds))\n", "image = random_rotate_image(image)\n", @@ -2025,9 +1859,7 @@ "metadata": { "id": "Cn2nIu92BMp0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def tf_random_rotate_image(image, label):\n", " im_shape = image.shape\n", @@ -2042,9 +1874,7 @@ "metadata": { "id": "bWPqKbTnbDct" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rot_ds = images_ds.map(tf_random_rotate_image)\n", "\n", @@ -2069,9 +1899,7 @@ "metadata": { "id": "6wnE134b32KY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fsns_test_file = tf.keras.utils.get_file(\"fsns.tfrec\", \"https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001\")\n", "dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])\n", @@ -2093,9 +1921,7 @@ "metadata": { "id": "4znsVNqnF73C" }, - "outputs": [ - - ], + "outputs": [], "source": [ "raw_example = next(iter(dataset))\n", "parsed = tf.train.Example.FromString(raw_example.numpy())\n", @@ -2114,9 +1940,7 @@ "metadata": { "id": "cwzqp8IGC_vQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "raw_example = next(iter(dataset))" ] @@ -2127,9 +1951,7 @@ "metadata": { "id": "y2X1dQNfC8Lu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def tf_parse(eg):\n", " example = tf.io.parse_example(\n", @@ -2146,9 +1968,7 @@ "metadata": { "id": "lGJhKDp_61A_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "img, txt = tf_parse(raw_example)\n", "print(txt.numpy())\n", @@ -2161,9 +1981,7 @@ "metadata": { "id": "8vFIUFzD5qIC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "decoded = dataset.map(tf_parse)\n", "decoded" @@ -2175,9 +1993,7 @@ "metadata": { "id": "vRYNYkEej7Ix" }, - "outputs": [ - - ], + "outputs": [], "source": [ "image_batch, text_batch = next(iter(decoded.batch(10)))\n", 
"image_batch.shape" @@ -2220,9 +2036,7 @@ "metadata": { "id": "kTQgo49skjuY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "range_ds = tf.data.Dataset.range(100000)" ] @@ -2253,9 +2067,7 @@ "metadata": { "id": "pSs9XqwQpvIN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batches = range_ds.batch(10, drop_remainder=True)\n", "\n", @@ -2278,9 +2090,7 @@ "metadata": { "id": "47XfwPhetkIN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def dense_1_step(batch):\n", " # Shift features and labels one step relative to each other.\n", @@ -2307,9 +2117,7 @@ "metadata": { "id": "FMmkQB1Gqo6x" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batches = range_ds.batch(15, drop_remainder=True)\n", "\n", @@ -2338,9 +2146,7 @@ "metadata": { "id": "11dF3wyFjk2J" }, - "outputs": [ - - ], + "outputs": [], "source": [ "feature_length = 10\n", "label_length = 3\n", @@ -2378,9 +2184,7 @@ "metadata": { "id": "ZEI2W_EBw2OX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "window_size = 5\n", "\n", @@ -2404,9 +2208,7 @@ "metadata": { "id": "SB8AI03mnF8u" }, - "outputs": [ - - ], + "outputs": [], "source": [ " for x in windows.flat_map(lambda x: x).take(30):\n", " print(x.numpy(), end=' ')" @@ -2427,9 +2229,7 @@ "metadata": { "id": "5j_y84rmyVQa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def sub_to_batch(sub):\n", " return sub.batch(window_size, drop_remainder=True)\n", @@ -2455,9 +2255,7 @@ "metadata": { "id": "LdFRv_0D4FqW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def make_window_dataset(ds, window_size=5, shift=1, stride=1):\n", " windows = ds.window(window_size, shift=shift, stride=stride)\n", @@ -2475,9 +2273,7 @@ "metadata": { "id": "-iVxcVfEdf5b" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)\n", "\n", @@ -2500,9 +2296,7 @@ "metadata": { "id": "F0fPfZkZw6j_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dense_labels_ds = 
ds.map(dense_1_step)\n", "\n", @@ -2529,9 +2323,7 @@ "metadata": { "id": "U2e8dxVUlFHO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "zip_path = tf.keras.utils.get_file(\n", " origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',\n", @@ -2547,9 +2339,7 @@ "metadata": { "id": "EhkkM4Wx75S_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "creditcard_ds = tf.data.experimental.make_csv_dataset(\n", " csv_path, batch_size=1024, label_name=\"Class\",\n", @@ -2572,9 +2362,7 @@ "metadata": { "id": "a8-Ss69XlzXD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def count(counts, batch):\n", " features, labels = batch\n", @@ -2596,9 +2384,7 @@ "metadata": { "id": "O1a3t_B4l_f6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "counts = creditcard_ds.take(10).reduce(\n", " initial_state={'class_0': 0, 'class_1': 0},\n", @@ -2646,9 +2432,7 @@ "metadata": { "id": "6YKfCPa-nioA" }, - "outputs": [ - - ], + "outputs": [], "source": [ "negative_ds = (\n", " creditcard_ds\n", @@ -2668,9 +2452,7 @@ "metadata": { "id": "8FNd3sQjzl9-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for features, label in positive_ds.batch(10).take(1):\n", " print(label.numpy())" @@ -2691,9 +2473,7 @@ "metadata": { "id": "vjdPVIFCngOb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "balanced_ds = tf.data.Dataset.sample_from_datasets(\n", " [negative_ds, positive_ds], [0.5, 0.5]).batch(10)" @@ -2714,9 +2494,7 @@ "metadata": { "id": "Myvkw21Rz-fH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for features, labels in balanced_ds.take(10):\n", " print(labels.numpy())" @@ -2752,9 +2530,7 @@ "metadata": { "id": "zC_Cuzw8lhI5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def class_func(features, label):\n", " return label" @@ -2777,9 +2553,7 @@ "metadata": { "id": "fY6VIhr3oGHG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "resample_ds = (\n", " creditcard_ds\n", @@ -2804,9 +2578,7 @@ "metadata": { "id": 
"KpfCGU6BiaZq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)" ] @@ -2826,9 +2598,7 @@ "metadata": { "id": "XGLYChBQwkDV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for features, labels in balanced_ds.take(10):\n", " print(labels.numpy())" @@ -2860,9 +2630,7 @@ "metadata": { "id": "3Fsm9wvKUsNC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "range_ds = tf.data.Dataset.range(20)\n", "\n", @@ -2914,9 +2682,7 @@ "metadata": { "id": "-bfjqm0hOfES" }, - "outputs": [ - - ], + "outputs": [], "source": [ "train, test = tf.keras.datasets.fashion_mnist.load_data()\n", "\n", @@ -2931,9 +2697,7 @@ "metadata": { "id": "wDhF3rGnbDdD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))\n", "fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)\n", @@ -2963,9 +2727,7 @@ "metadata": { "id": "9cu4kPzOHnlt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model.fit(fmnist_train_ds, epochs=2)" ] @@ -2985,9 +2747,7 @@ "metadata": { "id": "Bp1BpzlyJinb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)" ] @@ -3007,9 +2767,7 @@ "metadata": { "id": "TnlRHlaL-XUI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loss, accuracy = model.evaluate(fmnist_train_ds)\n", "print(\"Loss :\", loss)\n", @@ -3031,9 +2789,7 @@ "metadata": { "id": "uVgamf9HKDon" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)\n", "print(\"Loss :\", loss)\n", @@ -3055,9 +2811,7 @@ "metadata": { "id": "343lXJ-pIqWD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)\n", "result = model.predict(predict_ds, steps = 10)\n", @@ -3079,9 +2833,7 @@ "metadata": { "id": "mgQJTPrT-2WF" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "result = model.predict(fmnist_train_ds, steps = 10)\n", "print(result.shape)" @@ -3090,9 +2842,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "data.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/distributed_training.ipynb b/site/ko/guide/distributed_training.ipynb index 6ee4584ae1..1bac372186 100644 --- a/site/ko/guide/distributed_training.ipynb +++ b/site/ko/guide/distributed_training.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,10 +49,10 @@ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기노트북 다운로드하기GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -100,9 +98,7 @@ "metadata": { "id": "EVOZFbNgXghB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf" ] @@ -154,9 +150,7 @@ "metadata": { "id": "9Z4FMAY9ADxK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy()" ] @@ -178,9 +172,7 @@ "metadata": { "id": "nbGleskCACv_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy(devices=[\"/gpu:0\", \"/gpu:1\"])" ] @@ -200,9 +192,7 @@ "metadata": { "id": "6-xIOIpgBItn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy(\n", " cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())" @@ -259,9 +249,7 @@ "metadata": { "id": "m3a_6ebbEjre" }, - "outputs": [ - - ], + "outputs": [], "source": [ "strategy = tf.distribute.MultiWorkerMirroredStrategy()" ] @@ -281,9 +269,7 @@ "metadata": { "id": "QGX_QAEtFQSv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "communication_options = tf.distribute.experimental.CommunicationOptions(\n", " implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)\n", @@ -364,9 +350,7 @@ "metadata": { "id": "rtjZOyaoMWrP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()" ] @@ -419,9 +403,7 @@ "metadata": { "id": "ibHleFOOmPn9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "default_strategy = tf.distribute.get_strategy()" ] @@ -443,9 +425,7 @@ "metadata": { "id": "WECeRzUdT6bU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# In optimizer or other library code\n", "# Get currently active strategy\n", @@ -468,9 +448,7 @@ "metadata": { "id": "O4Vmae5jmSE6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "if tf.config.list_physical_devices('GPU'):\n", " strategy = tf.distribute.MirroredStrategy()\n", @@ -538,9 +516,7 @@ "metadata": { "id": "gbbcpzRnPZ6V" }, - "outputs": [ - - 
], + "outputs": [], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy()\n", "\n", @@ -566,9 +542,7 @@ "metadata": { "id": "ZMmxEFRTEjH5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)\n", "model.fit(dataset, epochs=2)\n", @@ -590,9 +564,7 @@ "metadata": { "id": "Lqgd9SdxW5OW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import numpy as np\n", "\n", @@ -615,9 +587,7 @@ "metadata": { "id": "8ZmJqErtS4A1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mirrored_strategy.num_replicas_in_sync" ] @@ -628,9 +598,7 @@ "metadata": { "id": "quNNTytWdGBf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Compute a global batch size using a number of replicas.\n", "BATCH_SIZE_PER_REPLICA = 5\n", @@ -699,9 +667,7 @@ "metadata": { "id": "W-3Bn-CaiPKD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with mirrored_strategy.scope():\n", " model = tf.keras.Sequential([\n", @@ -725,9 +691,7 @@ "metadata": { "id": "94BkvkLInkKd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(\n", " global_batch_size)\n", @@ -749,9 +713,7 @@ "metadata": { "id": "NJxL5YrVniDe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Sets `reduction=NONE` to leave it to tf.nn.compute_average_loss() below.\n", "loss_object = tf.keras.losses.BinaryCrossentropy(\n", @@ -814,9 +776,7 @@ "metadata": { "id": "Egq9eufToRf6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for dist_inputs in dist_dataset:\n", " print(distributed_train_step(dist_inputs))" @@ -839,9 +799,7 @@ "metadata": { "id": "e5BEvR0-LJAc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "iterator = iter(dist_dataset)\n", "for _ in range(10):\n", diff --git a/site/ko/guide/dtensor_overview.ipynb b/site/ko/guide/dtensor_overview.ipynb index cd0307062d..80a99280df 100644 --- a/site/ko/guide/dtensor_overview.ipynb +++ 
b/site/ko/guide/dtensor_overview.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -90,9 +88,7 @@ "metadata": { "id": "OKaPw8vwwZAC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install --quiet --upgrade --pre tensorflow" ] @@ -114,9 +110,7 @@ "metadata": { "id": "Q92lo0zjwej8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "from tensorflow.experimental import dtensor\n", @@ -182,9 +176,7 @@ "metadata": { "id": "QLH5fgdBmA58" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh_1d = dtensor.create_mesh([('x', 6)], devices=DEVICES)\n", "print(mesh_1d)" @@ -207,9 +199,7 @@ "metadata": { "id": "op6TmKUQE-sZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh_2d = dtensor.create_mesh([('x', 3), ('y', 2)], devices=DEVICES)\n", "print(mesh_2d)" @@ -258,9 +248,7 @@ "metadata": { "id": "-a3EnmZag6x1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh_1d)" ] @@ -283,9 +271,7 @@ "metadata": { "id": "7BgqL0jUvV5a" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh_1d)" ] @@ -305,7 +291,6 @@ "id": "Eyp_qOSyvieo" }, "source": [ - "\n", "\"메시 \n" ] }, @@ -315,9 +300,7 @@ "metadata": { "id": "p8OrehEuhPbS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layout = dtensor.Layout(['y', 'x'], mesh_2d)" ] @@ -340,9 +323,7 @@ "metadata": { "id": "IkWe6mVl7uRb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "layout = dtensor.Layout([\"x\", dtensor.UNSHARDED], mesh_2d)" ] @@ -386,9 +367,7 @@ "metadata": { "id": "s6aws-b8dN9L" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def dtensor_from_array(arr, layout, shape=None, dtype=None):\n", " \"\"\"Convert 
a DTensor from something that looks like an array or Tensor.\n", @@ -431,9 +410,7 @@ "metadata": { "id": "mQu_nScGUvYH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED], mesh)\n", @@ -463,9 +440,7 @@ "metadata": { "id": "dCSFyaAjmzGu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(dtensor.fetch_layout(my_first_dtensor))\n", "assert layout == dtensor.fetch_layout(my_first_dtensor)" @@ -492,9 +467,7 @@ "metadata": { "id": "BGbjqVAOnXMk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for component_tensor in dtensor.unpack(my_first_dtensor):\n", " print(\"Device:\", component_tensor.device, \",\", component_tensor)" @@ -526,9 +499,7 @@ "metadata": { "id": "9lT-6qQwxOgf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "packed_dtensor = dtensor.pack(\n", " [[0, 1], [0, 1], [0, 1],\n", @@ -557,9 +528,7 @@ "metadata": { "id": "KWb9Ae0VJ-Rc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)" ] @@ -584,9 +553,7 @@ "metadata": { "id": "ax_ZHouJp1MX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fully_sharded_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -616,9 +583,7 @@ "metadata": { "id": "xmyC6H6Ec90P" }, - "outputs": [ - - ], + "outputs": [], "source": [ "fully_replicated_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -655,9 +620,7 @@ "metadata": { "id": "DygnbkQ1Lu42" }, - "outputs": [ - - ], + "outputs": [], "source": [ "hybrid_sharded_dtensor = dtensor_from_array(\n", " tf.reshape(tf.range(6), (3, 2)),\n", @@ -695,9 +658,7 @@ "metadata": { "id": "hNdwmnL0jAXS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(fully_replicated_dtensor.numpy())\n", "\n", @@ -773,9 +734,7 @@ "metadata": { "id": "TiZf2J9JNd2D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = 
dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)\n", @@ -814,9 +773,7 @@ "metadata": { "id": "EyVAUvMePbms" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "a_layout = dtensor.Layout([dtensor.UNSHARDED, 'x'], mesh)\n", @@ -848,9 +805,7 @@ "metadata": { "id": "0PYqe0neiOpR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "\n", @@ -888,9 +843,7 @@ "metadata": { "id": "J0jo_8NPtJiO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "help(dtensor.call_with_layout)" ] @@ -923,9 +876,7 @@ "metadata": { "id": "G1CuKYSFtFeM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "help(tf.ones)" ] @@ -936,9 +887,7 @@ "metadata": { "id": "2m_EAwy-ozOh" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", "ones = dtensor.call_with_layout(tf.ones, dtensor.Layout(['x', 'y'], mesh), shape=(6, 4))\n", @@ -962,9 +911,7 @@ "metadata": { "id": "H8BQSTRFtCih" }, - "outputs": [ - - ], + "outputs": [], "source": [ "help(tf.random.stateless_normal)" ] @@ -975,9 +922,7 @@ "metadata": { "id": "TvP81eYopSPm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ones = dtensor.call_with_layout(\n", " tf.function(tf.random.stateless_normal),\n", @@ -1002,9 +947,7 @@ "metadata": { "id": "LbAtKrSkpOaq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ones = dtensor.call_with_layout(\n", " tf.function(tf.ones),\n", @@ -1032,9 +975,7 @@ "metadata": { "id": "awRPuR26P0Sc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mesh = dtensor.create_mesh([(\"x\", 6)], devices=DEVICES)\n", "layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)\n", @@ -1066,9 +1007,7 @@ "metadata": { "id": "adxFw9wJpqQQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "a = 
dtensor.call_with_layout(tf.ones, layout=layout, shape=(64, 32))\n", "b = v + a # add DVariable and DTensor\n", @@ -1090,9 +1029,7 @@ "metadata": { "id": "oYwfiyw5P94U" }, - "outputs": [ - - ], + "outputs": [], "source": [ "v.assign(a) # assign a DTensor to a DVariable\n", "print(a)" @@ -1113,9 +1050,7 @@ "metadata": { "id": "3pckUugYP_r-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# variable's layout is immutable.\n", "another_mesh = dtensor.create_mesh([(\"x\", 3), (\"y\", 2)], devices=DEVICES)\n", @@ -1142,9 +1077,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "dtensor_overview.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/function.ipynb b/site/ko/guide/function.ipynb index 7e61a9ecf5..b964238e32 100644 --- a/site/ko/guide/function.ipynb +++ b/site/ko/guide/function.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "3jTMb1dySr3V" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -83,9 +81,7 @@ "metadata": { "id": "otIdN1TS8N7S" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf" ] @@ -105,9 +101,7 @@ "metadata": { "id": "D25apou9IOXa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import traceback\n", "import contextlib\n", @@ -153,9 +147,7 @@ "metadata": { "id": "SbtT1-Wm70F2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function # The decorator converts `add` into a `Function`.\n", "def add(a, b):\n", @@ -170,9 +162,7 @@ "metadata": { "id": "uP-zUelB8DbX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "v = tf.Variable(1.0)\n", "with tf.GradientTape() as tape:\n", @@ -195,9 +185,7 @@ "metadata": { "id": "l5qRjdbBVdU6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def dense_layer(x, w, b):\n", @@ -221,9 +209,7 @@ "metadata": { "id": "zuXt4wRysI03" 
}, - "outputs": [ - - ], + "outputs": [], "source": [ "import timeit\n", "conv_layer = tf.keras.layers.Conv2D(100, 3)\n", @@ -287,9 +273,7 @@ "metadata": { "id": "kojmJrgq8U9v" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def double(a):\n", @@ -319,9 +303,7 @@ "metadata": { "id": "hFccbWFRrsBp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This doesn't print 'Tracing with ...'\n", "print(double(tf.constant(\"b\")))" @@ -342,9 +324,7 @@ "metadata": { "id": "IiQc4IKAb-NX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(double.pretty_printed_concrete_signatures())" ] @@ -435,9 +415,7 @@ "metadata": { "id": "_BDMIRmu1RGB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))\n", "def next_collatz(x):\n", @@ -471,9 +449,7 @@ "metadata": { "id": "4Viun7dh7PmF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function(input_signature=(tf.TensorSpec(shape=[None], dtype=tf.int32),))\n", "def g(x):\n", @@ -504,9 +480,7 @@ "metadata": { "id": "uydzR5JYUU8H" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def train_one_step():\n", " pass\n", @@ -543,9 +517,7 @@ "metadata": { "id": "uHp4ousu4DdN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def f():\n", " print('Tracing!')\n", @@ -574,9 +546,7 @@ "metadata": { "id": "gZkIh7UaIKc6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def get_mixed_flavor(fruit_a, fruit_b):\n", @@ -660,9 +630,7 @@ "metadata": { "id": "mHg2CGtPQ3Hz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Obtaining concrete trace\")\n", "double_strings = double.get_concrete_function(tf.constant(\"a\"))\n", @@ -677,9 +645,7 @@ "metadata": { "id": "6IVZ-NVf9vsx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# You can also call get_concrete_function on an InputSpec\n", "double_strings_from_inputspec = double.get_concrete_function(tf.TensorSpec(shape=[], 
dtype=tf.string))\n", @@ -701,9 +667,7 @@ "metadata": { "id": "o3-JbkIk41r8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(double_strings)" ] @@ -723,9 +687,7 @@ "metadata": { "id": "nzbrqFABe0zG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(double_strings.structured_input_signature)\n", "print(double_strings.structured_outputs)" @@ -746,9 +708,7 @@ "metadata": { "id": "G5eeTK-T5KYj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with assert_raises(tf.errors.InvalidArgumentError):\n", " double_strings(tf.constant(1))" @@ -769,9 +729,7 @@ "metadata": { "id": "U_QyPSGoaC35" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def pow(a, b):\n", @@ -787,9 +745,7 @@ "metadata": { "id": "E76vIDhQbXIb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "assert square(tf.constant(10.0)) == 100\n", "\n", @@ -814,9 +770,7 @@ "metadata": { "id": "5UENeGHfaX8g" }, - "outputs": [ - - ], + "outputs": [], "source": [ "graph = double_strings.graph\n", "for node in graph.as_graph_def().node:\n", @@ -860,9 +814,7 @@ "metadata": { "id": "yCQTtTPTW3WF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# A simple loop\n", "\n", @@ -891,9 +843,7 @@ "metadata": { "id": "jlQD1ffRXJhl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.autograph.to_code(f.python_function))" ] @@ -919,9 +869,7 @@ "metadata": { "id": "BOQl8PMq2Sf3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def fizzbuzz(n):\n", @@ -994,9 +942,7 @@ "metadata": { "id": "WGZ19LspbZ27" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def measure_graph_size(f, *args):\n", " g = f.get_concrete_function(*args).graph\n", @@ -1049,9 +995,7 @@ "metadata": { "id": "HJ3Vb3dXfefN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batch_size = 2\n", "seq_len = 3\n", @@ -1108,9 +1052,7 @@ "metadata": { "id": "w2sACuZ9TTRk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def f(x):\n", @@ -1148,9 
+1090,7 @@ "metadata": { "id": "7aJD--9qTWmg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "external_list = []\n", "\n", @@ -1181,9 +1121,7 @@ "metadata": { "id": "5r6p7-9jk_3L" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Model(tf.Module):\n", " def __init__(self):\n", @@ -1219,9 +1157,7 @@ "metadata": { "id": "An4MrIbrcvi8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Model(tf.Module):\n", " def __init__(self):\n", @@ -1278,9 +1214,7 @@ "metadata": { "id": "FNPD4unZUedH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def buggy_consume_next(iterator):\n", @@ -1308,9 +1242,7 @@ "metadata": { "id": "8D_iKetXW6VE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def good_consume_next(iterator):\n", @@ -1343,9 +1275,7 @@ "metadata": { "id": "zrdp4rjxg6jo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = None\n", "\n", @@ -1379,9 +1309,7 @@ "metadata": { "id": "PrcpPB8C5s9T" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def leaky_function(a):\n", @@ -1423,9 +1351,7 @@ "metadata": { "id": "D7bLe8y652wU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MyClass:\n", "\n", @@ -1458,9 +1384,7 @@ "metadata": { "id": "QSN-T1m5EFcR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def recursive_fn(n):\n", @@ -1488,9 +1412,7 @@ "metadata": { "id": "7FlmTqfMUwmT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def recursive_fn(n):\n", @@ -1533,9 +1455,7 @@ "metadata": { "id": "oeJMdXd3M0cM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def buggy_add():\n", @@ -1556,9 +1476,7 @@ "metadata": { "id": "L3q7sUJWZOSU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Updating the value of `foo` to 100!\")\n", "foo = 100\n", @@ -1581,9 +1499,7 @@ "metadata": { "id": "oeJMdXd3M0cc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def 
variable_add():\n", @@ -1599,9 +1515,7 @@ "metadata": { "id": "L3q7sUJWZOSd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Updating the value of `foo` to 100!\")\n", "foo.assign(100)\n", @@ -1636,9 +1550,7 @@ "metadata": { "id": "ux8KJESVWDxX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class SimpleModel(tf.Module):\n", " def __init__(self):\n", @@ -1661,9 +1573,7 @@ "metadata": { "id": "mUxRF4ghZZvX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Adding bias!\")\n", "simple_model.bias += 5.0\n", @@ -1689,9 +1599,7 @@ "metadata": { "id": "pFvWmWAAQjrv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def evaluate(model, x):\n", " return model.weight * x + model.bias\n", @@ -1708,9 +1616,7 @@ "metadata": { "id": "bdU2-jF4ZH0B" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Adding bias!\")\n", "new_model.bias += 5.0\n", @@ -1734,9 +1640,7 @@ "metadata": { "id": "daAP_lucwS6w" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class BetterModel:\n", "\n", @@ -1758,9 +1662,7 @@ "metadata": { "id": "ktqwMJBqwTFj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Adding bias!\")\n", "better_model.bias.assign_add(5.0) # Note: instead of better_model.bias += 5\n", @@ -1786,9 +1688,7 @@ "metadata": { "id": "Tx0Vvnb_9OB-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def f(x):\n", @@ -1814,9 +1714,7 @@ "metadata": { "id": "HQrG5_kOiKl_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Count(tf.Module):\n", " def __init__(self):\n", @@ -1850,9 +1748,7 @@ "metadata": { "id": "yWQ3-r99Jvze" }, - "outputs": [ - - ], + "outputs": [], "source": [ "opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)\n", "opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)\n", @@ -1889,9 +1785,7 @@ "metadata": { "id": "YV5F2Gy9hSI3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)\n", "opt2 = 
tf.keras.optimizers.Adam(learning_rate = 1e-3)\n", diff --git a/site/ko/guide/gpu.ipynb b/site/ko/guide/gpu.ipynb index 59e841fd20..7d7a5610db 100644 --- a/site/ko/guide/gpu.ipynb +++ b/site/ko/guide/gpu.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -44,7 +42,7 @@ "\n", " \n", " \n", - " \n", + " \n", " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행GitHub에서 소스 보기GitHub에서 소스 보기노트북 다운로드
" ] @@ -81,9 +79,7 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\n" @@ -132,9 +128,7 @@ "metadata": { "id": "2Dbw0tpEirCd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -172,9 +166,7 @@ "metadata": { "id": "8wqaQfEhjHit" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -213,9 +205,7 @@ "metadata": { "id": "hPI--n_jhZhv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -244,9 +234,7 @@ "metadata": { "id": "jr3Kf1boFnCO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -274,9 +262,7 @@ "metadata": { "id": "2qO2cS9QFn42" }, - "outputs": [ - - ], + "outputs": [], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -316,9 +302,7 @@ "metadata": { "id": "wep4iteljjG1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -349,9 +333,7 @@ "metadata": { "id": "sut_UHlkjvWd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.config.set_soft_device_placement(True)\n", "tf.debugging.set_log_device_placement(True)\n", @@ -381,9 +363,7 @@ "metadata": { "id": "8EMGuGKbNkc6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", @@ -426,9 +406,7 @@ "metadata": { "id": "1KgzY8V2AvRv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.debugging.set_log_device_placement(True)\n", "gpus = tf.config.list_logical_devices('GPU')\n", @@ -469,9 +447,7 @@ "metadata": { "id": "AqPo9ltUA_EY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.debugging.set_log_device_placement(True)\n", "\n", @@ -495,9 
+471,7 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "gpu.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/jax2tf.ipynb b/site/ko/guide/jax2tf.ipynb index 9d0225a380..99bde4773f 100644 --- a/site/ko/guide/jax2tf.ipynb +++ b/site/ko/guide/jax2tf.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "NKvERjPVNWxu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -50,9 +48,9 @@ "source": [ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", + " \n", "
TensorFlow.org에서 보기Google Colab에서 실행하기GitHub에서 소스 보기
노트북 다운로드하기
Google Colab에서 실행하기GitHub에서 소스 보기
노트북 다운로드하기
" ] }, @@ -94,9 +92,7 @@ "metadata": { "id": "9yqxfHzr0LPF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import numpy as np\n", @@ -117,9 +113,7 @@ "metadata": { "id": "SDnTaZO0r872" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Needed for TensorFlow and JAX to coexist in GPU memory.\n", "os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = \"false\"\n", @@ -140,9 +134,7 @@ "cellView": "form", "id": "BXOjCNJxDLil" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Visualization utilities\n", "\n", @@ -254,9 +246,7 @@ "metadata": { "id": "nbN7rmuF0VFB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "(x_train, train_labels), (x_test, test_labels) = tf.keras.datasets.mnist.load_data()\n", "\n", @@ -296,9 +286,7 @@ "metadata": { "id": "3vbKB4yZ3aTL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Training hyperparameters.\n", "JAX_EPOCHS = 3\n", @@ -329,9 +317,7 @@ "metadata": { "id": "-ybqQF2zd2QX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class ConvModel(flax.linen.Module):\n", "\n", @@ -390,9 +376,7 @@ "metadata": { "id": "tmDwApcpgZzw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The training step.\n", "@partial(jax.jit, static_argnums=[0]) # this forces jax.jit to recompile for every new model\n", @@ -425,9 +409,7 @@ "metadata": { "id": "zbl5w-KUV7Qw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def train(model, state, optimizer_state, train_data, epochs, losses, avg_losses, eval_losses, eval_accuracies):\n", " p = Progress(STEPS_PER_EPOCH)\n", @@ -473,9 +455,7 @@ "metadata": { "id": "mW5mkmCWtN8W" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The model.\n", "model = ConvModel()\n", @@ -506,9 +486,7 @@ "metadata": { "id": "nmcofTTBZSIb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "new_state, new_optimizer_state = train(model, state, optimizer_state, train_data, JAX_EPOCHS+TF_EPOCHS, losses, avg_losses, eval_losses, eval_accuracies)" ] @@ 
-519,9 +497,7 @@ "metadata": { "id": "n_20vgvDXB5r" }, - "outputs": [ - - ], + "outputs": [], "source": [ "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=1*STEPS_PER_EPOCH)" ] @@ -543,9 +519,7 @@ "metadata": { "id": "KT-xqj5N7C6L" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model = ConvModel()\n", "state = model.init({'params':jax.random.PRNGKey(0), 'dropout':jax.random.PRNGKey(0)}, one_batch, train=True) # Flax allows a separate RNG for \"dropout\"\n", @@ -563,9 +537,7 @@ "metadata": { "id": "oa362HMDbzDE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "state, optimizer_state = train(model, state, optimizer_state, train_data, JAX_EPOCHS, losses, avg_losses, eval_losses, eval_accuracies)" ] @@ -576,9 +548,7 @@ "metadata": { "id": "0IyZtUPPCt0y" }, - "outputs": [ - - ], + "outputs": [], "source": [ "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=1*STEPS_PER_EPOCH)" ] @@ -600,9 +570,7 @@ "metadata": { "id": "O653B3-5H8FL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Test data with a different batch size to test polymorphic shapes.\n", "x, y = next(iter(train_data.unbatch().batch(13)))\n", @@ -628,9 +596,7 @@ "metadata": { "id": "8HFx67zStgvo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Test the converted function.\n", "print(\"Converted function predictions:\", np.argmax(m.predict(x).numpy(), axis=-1))\n", @@ -661,9 +627,7 @@ "metadata": { "id": "9mty52pmvDDp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "from collections import abc\n", "\n", @@ -687,9 +651,7 @@ "metadata": { "id": "3HEsKNXbCwXw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class TFModel(tf.Module):\n", " def __init__(self, state, model):\n", @@ -756,9 +718,7 @@ "metadata": { "id": "znJrAVpcxO9u" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Instantiate the model.\n", "tf_model = 
TFModel(state, model)\n", @@ -782,9 +742,7 @@ "metadata": { "id": "i75yS3v2jPpM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "reloaded_model = tf.saved_model.load(\"./\")\n", "\n", @@ -813,9 +771,7 @@ "metadata": { "id": "MubFcO_jl2vE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=tflr_decay)\n", "\n", @@ -854,9 +810,7 @@ "metadata": { "id": "50V1FSmI6UTk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "display_train_curves(losses, avg_losses, eval_losses, eval_accuracies, len(eval_losses), STEPS_PER_EPOCH, ignore_first_n=2*STEPS_PER_EPOCH)\n", "\n", diff --git a/site/ko/guide/migrate/saved_model.ipynb b/site/ko/guide/migrate/saved_model.ipynb index 63974fa6f6..44e6996a2c 100644 --- a/site/ko/guide/migrate/saved_model.ipynb +++ b/site/ko/guide/migrate/saved_model.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "WvqLCVQ6I58i" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -84,9 +82,7 @@ "metadata": { "id": "B94QZyy-kOGQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import tensorflow.compat.v1 as tf1\n", @@ -128,9 +124,7 @@ "metadata": { "id": "dcZDQaI8jl3h" }, - "outputs": [ - - ], + "outputs": [], "source": [ "remove_dir(\"saved-model-builder\")\n", "\n", @@ -158,9 +152,7 @@ "metadata": { "id": "PwtC27VFlwCa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!saved_model_cli run --dir saved-model-builder --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -181,9 +173,7 @@ "metadata": { "id": "jtMxe2rjHSq9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "remove_dir(\"simple-save\")\n", "\n", @@ -205,9 +195,7 @@ "metadata": { "id": "AdnqemvIHb2P" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!saved_model_cli run --dir simple-save --tag_set 
serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -238,9 +226,7 @@ "metadata": { "id": "3nQ5Stnxjhfs" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def model_fn(features, labels, mode):\n", " output = add_two(features['input'])\n", @@ -280,9 +266,7 @@ "metadata": { "id": "8_gD2gkE7CMu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!saved_model_cli run --dir estimator-model --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=[10]" @@ -314,9 +298,7 @@ "metadata": { "id": "_j-PwgP_jrgw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class MyModel(tf.Module):\n", " @tf.function\n", @@ -342,9 +324,7 @@ "metadata": { "id": "slvU4vZN756F" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!saved_model_cli run --dir tf2-save --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -369,9 +349,7 @@ "metadata": { "id": "mMcjhzyRjvp6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "inp = tf.keras.Input(3)\n", "out = add_two(inp)\n", @@ -391,9 +369,7 @@ "metadata": { "id": "4P93WP5R7-VT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!saved_model_cli run --dir keras-model --tag_set serve \\\n", " --signature_def serving_default --input_exprs input=10" @@ -433,9 +409,7 @@ "metadata": { "id": "IMO0laj-m0p9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def load_tf1(path, input):\n", " print('Loading from', path)\n", @@ -474,9 +448,7 @@ "metadata": { "id": "OA52ezWV_KgL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def load_tf2(path, input):\n", " print('Loading from', path)\n", @@ -507,9 +479,7 @@ "metadata": { "id": "IfMTp-TGAfOs" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loaded = tf.saved_model.load('tf2-save')\n", "print('restored __call__:', loaded.__call__)\n", @@ -537,9 +507,7 @@ "metadata": { "id": "ZFUAxK0YeIAe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loaded_model = 
tf.keras.models.load_model('keras-model')\n", "loaded_model.predict_on_batch(tf.constant([1, 3, 4]))" @@ -566,9 +534,7 @@ "metadata": { "id": "grKam9zGnNRZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Save a simple multiplication computation:\n", "with tf.Graph().as_default() as g:\n", @@ -598,9 +564,7 @@ "metadata": { "id": "3bbcGZ4CoVDL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with tf.Graph().as_default() as g:\n", "meta = tf1.train.import_meta_graph('multiply.pb')\n", @@ -626,9 +590,7 @@ "metadata": { "id": "yAyGmDLlpVBX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def import_multiply():\n", "# Any graph-building code is allowed here.\n", @@ -773,9 +735,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "saved_model.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/migrate/tflite.ipynb b/site/ko/guide/migrate/tflite.ipynb index 97703058cd..ced3e923de 100644 --- a/site/ko/guide/migrate/tflite.ipynb +++ b/site/ko/guide/migrate/tflite.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "HMUDt0CiUJk9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -124,9 +122,7 @@ "metadata": { "id": "iE0vSfMXumKI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import tensorflow.compat.v1 as tf1\n", @@ -159,9 +155,7 @@ "metadata": { "id": "Bwq8EFiwjzjx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Create a TF1 SavedModel\n", "SAVED_MODEL_DIR = \"tf_saved_model/\"\n", @@ -223,9 +217,7 @@ "metadata": { "id": "dzXHHBQRdt_F" }, - "outputs": [ - - ], + "outputs": [], "source": [ "converter = tf1.lite.TFLiteConverter.from_saved_model(\n", " saved_model_dir=SAVED_MODEL_DIR,\n", @@ -255,9 +247,7 @@ "metadata": { "id": "0OyBjZ6Kdt_F" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# 
Convert TF1 SavedModel to a TFLite model.\n", "converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir=SAVED_MODEL_DIR)\n", @@ -291,9 +281,7 @@ "metadata": { "id": "9EXO0xYq__fR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "converter = tf1.lite.TFLiteConverter.from_keras_model_file(model_file=KERAS_MODEL_PATH)\n", "converter.optimizations = {tf.lite.Optimize.DEFAULT}\n", @@ -318,9 +306,7 @@ "metadata": { "id": "IGB5ZMGl__fR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Convert TF1 Keras model file to TF2 SavedModel.\n", "model = tf.keras.models.load_model(KERAS_MODEL_PATH)\n", @@ -357,9 +343,7 @@ "metadata": { "id": "r7RvcdRv6lll" }, - "outputs": [ - - ], + "outputs": [], "source": [ "converter = tf1.lite.TFLiteConverter.from_frozen_graph(\n", " graph_def_file=GRAPH_DEF_MODEL_PATH,\n", @@ -389,9 +373,7 @@ "metadata": { "id": "Oigap0TZxjWG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "## Convert TF1 frozen Graph to TF1 SavedModel.\n", "\n", @@ -438,9 +420,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "tflite.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/mixed_precision.ipynb b/site/ko/guide/mixed_precision.ipynb index 5fcc4f0d93..9f57ebaf80 100644 --- a/site/ko/guide/mixed_precision.ipynb +++ b/site/ko/guide/mixed_precision.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -52,8 +50,8 @@ " TensorFlow.org에서 보기\n", "\n", " Google Colab에서 실행\n", - " GitHub에서 소스 보기\n", - " 노트북 다운로드하기\n", + " GitHub에서 소스 보기\n", + " 노트북 다운로드하기\n", "" ] }, @@ -96,9 +94,7 @@ "metadata": { "id": "IqR2PQG4ZaZ0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "\n", @@ -139,9 +135,7 @@ "metadata": { "id": "j-Yzg_lfkoa_" }, - 
"outputs": [ - - ], + "outputs": [], "source": [ "!nvidia-smi -L" ] @@ -181,9 +175,7 @@ "metadata": { "id": "x3kElPVH-siO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "policy = mixed_precision.Policy('mixed_float16')\n", "mixed_precision.set_global_policy(policy)" @@ -204,9 +196,7 @@ "metadata": { "id": "6a8iNFoBUSqR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Equivalent to the two lines above\n", "mixed_precision.set_global_policy('mixed_float16')" @@ -227,9 +217,7 @@ "metadata": { "id": "GQRbYm4f8p-k" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print('Compute dtype: %s' % policy.compute_dtype)\n", "print('Variable dtype: %s' % policy.variable_dtype)" @@ -268,9 +256,7 @@ "metadata": { "id": "0DQM24hL_14Q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "inputs = keras.Input(shape=(784,), name='digits')\n", "if tf.config.list_physical_devices('GPU'):\n", @@ -301,9 +287,7 @@ "metadata": { "id": "kC58MzP4PEcC" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(dense1.dtype_policy)\n", "print('x.dtype: %s' % x.dtype.name)\n", @@ -326,9 +310,7 @@ "metadata": { "id": "ybBq1JDwNIbz" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# INCORRECT: softmax and model output will be float16, when it should be float32\n", "outputs = layers.Dense(10, activation='softmax', name='predictions')(x)\n", @@ -352,9 +334,7 @@ "metadata": { "id": "IGqCGn4BsODw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# CORRECT: softmax and model output are float32\n", "x = layers.Dense(10, name='dense_logits')(x)\n", @@ -383,9 +363,7 @@ "metadata": { "id": "dzVAoLI56jR8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The linear activation is an identity function. So this simply casts 'outputs'\n", "# to float32. 
In this particular case, 'outputs' is already float32 so this is a\n", @@ -408,9 +386,7 @@ "metadata": { "id": "g4OT3Z6kqYAL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model = keras.Model(inputs=inputs, outputs=outputs)\n", "model.compile(loss='sparse_categorical_crossentropy',\n", @@ -439,9 +415,7 @@ "metadata": { "id": "0UYs-u_DgiA5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "initial_weights = model.get_weights()" ] @@ -463,9 +437,7 @@ "metadata": { "id": "hxI7-0ewmC0A" }, - "outputs": [ - - ], + "outputs": [], "source": [ "history = model.fit(x_train, y_train,\n", " batch_size=8192,\n", @@ -525,9 +497,7 @@ "metadata": { "id": "CHmXRb-yRWbE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(256, dtype='float16')\n", "(x ** 2).numpy() # Overflow" @@ -539,9 +509,7 @@ "metadata": { "id": "5unZLhN0RfQM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant(1e-5, dtype='float16')\n", "(x ** 2).numpy() # Underflow" @@ -627,9 +595,7 @@ "metadata": { "id": "ogZN3rIH0vpj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = keras.optimizers.RMSprop()\n", "optimizer = mixed_precision.LossScaleOptimizer(optimizer)" @@ -659,9 +625,7 @@ "metadata": { "id": "9cE7Mm533hxe" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loss_object = tf.keras.losses.SparseCategoricalCrossentropy()\n", "train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))\n", @@ -689,9 +653,7 @@ "metadata": { "id": "V0vHlust4Rug" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def train_step(x, y):\n", @@ -729,9 +691,7 @@ "metadata": { "id": "nyk_xiZf42Tt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def test_step(x):\n", @@ -753,9 +713,7 @@ "metadata": { "id": "jpzOe3WEhFUJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "model.set_weights(initial_weights)" ] @@ -775,9 +733,7 @@ "metadata": { "id": "N274tJ3e4_6t" }, - "outputs": [ - - ], + "outputs": 
[], "source": [ "for epoch in range(5):\n", " epoch_loss_avg = tf.keras.metrics.Mean()\n", @@ -867,9 +823,7 @@ "metadata": { "accelerator": "GPU", "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "mixed_precision.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/ragged_tensor.ipynb b/site/ko/guide/ragged_tensor.ipynb index dd0892faef..c94967ba05 100644 --- a/site/ko/guide/ragged_tensor.ipynb +++ b/site/ko/guide/ragged_tensor.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tXAbWHtqs1Y2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -73,9 +71,7 @@ "metadata": { "id": "KKvdSorS-pDD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install --pre -U tensorflow\n", "import math\n", @@ -115,9 +111,7 @@ "metadata": { "id": "vGmJGSf_-PVB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])\n", "words = tf.ragged.constant([[\"So\", \"long\"], [\"thanks\", \"for\", \"all\", \"the\", \"fish\"]])\n", @@ -162,9 +156,7 @@ "metadata": { "id": "n8YMKXpI-PVH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(digits[0]) # First row" ] @@ -175,9 +167,7 @@ "metadata": { "id": "Awi8i9q5_DuX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(digits[:, :2]) # First two values in each row." ] @@ -188,9 +178,7 @@ "metadata": { "id": "sXgQtTcgHHMR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(digits[:, -2:]) # Last two values in each row." 
] @@ -210,9 +198,7 @@ "metadata": { "id": "2tdUEtb7-PVL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(digits + 3)" ] @@ -223,9 +209,7 @@ "metadata": { "id": "X-bxG0nc_Nmf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))" ] @@ -245,9 +229,7 @@ "metadata": { "id": "pvt5URbdEt-D" }, - "outputs": [ - - ], + "outputs": [], "source": [ "times_two_plus_one = lambda x: x * 2 + 1\n", "print(tf.ragged.map_flat_values(times_two_plus_one, digits))" @@ -268,9 +250,7 @@ "metadata": { "id": "A5NHb8ViA9dt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "digits.to_list()" ] @@ -281,9 +261,7 @@ "metadata": { "id": "2o1wogVyA6Yp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "digits.numpy()" ] @@ -305,9 +283,7 @@ "metadata": { "id": "yhgKMozw-PVP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "sentences = tf.ragged.constant([\n", " [\"Let's\", \"build\", \"some\", \"ragged\", \"tensors\", \"!\"],\n", @@ -321,9 +297,7 @@ "metadata": { "id": "TW1g7eE2ee8M" }, - "outputs": [ - - ], + "outputs": [], "source": [ "paragraphs = tf.ragged.constant([\n", " [['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']],\n", @@ -353,9 +327,7 @@ "metadata": { "id": "SEvcPUcl-PVS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.RaggedTensor.from_value_rowids(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -381,9 +353,7 @@ "metadata": { "id": "LBY81WXl-PVW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.RaggedTensor.from_row_lengths(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -409,9 +379,7 @@ "metadata": { "id": "FwizuqZI-PVb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.RaggedTensor.from_row_splits(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -446,9 +414,7 @@ "metadata": { "id": "SqbPBd_w-PVi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]])) # ok: type=string, 
rank=2" ] @@ -459,9 +425,7 @@ "metadata": { "id": "83ZCSJnQAWAf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])) # ok: type=int32, rank=3" ] @@ -472,9 +436,7 @@ "metadata": { "id": "ewA3cISdDfmP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "try:\n", " tf.ragged.constant([[\"one\", \"two\"], [3, 4]]) # bad: multiple types\n", @@ -488,9 +450,7 @@ "metadata": { "id": "EOWIlVidDl-n" }, - "outputs": [ - - ], + "outputs": [], "source": [ "try:\n", " tf.ragged.constant([\"A\", [\"B\", \"C\"]]) # bad: multiple nesting depths\n", @@ -515,9 +475,7 @@ "metadata": { "id": "ZBs_V7e--PVr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'],\n", " ['Pause'],\n", @@ -588,9 +546,7 @@ "metadata": { "id": "M2Wzx4JEIvmb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]]).shape" ] @@ -610,9 +566,7 @@ "metadata": { "id": "5DHaqXHxlWi0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.ragged.constant([[\"Hi\"], [\"How\", \"are\", \"you\"]]).bounding_shape())" ] @@ -641,9 +595,7 @@ "metadata": { "id": "ush7IGUWLXIn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ragged_x = tf.ragged.constant([[\"John\"], [\"a\", \"big\", \"dog\"], [\"my\", \"cat\"]])\n", "ragged_y = tf.ragged.constant([[\"fell\", \"asleep\"], [\"barked\"], [\"is\", \"fuzzy\"]])\n", @@ -667,9 +619,7 @@ "metadata": { "id": "eTIhGayQL0gI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "sparse_x = ragged_x.to_sparse()\n", "sparse_y = ragged_y.to_sparse()\n", @@ -712,9 +662,7 @@ "metadata": { "id": "pHls7hQVJlk5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Task: predict whether each sentence is a question or not.\n", "sentences = tf.constant(\n", @@ -761,9 +709,7 @@ "metadata": { "id": "xsiglYM7TXGr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import google.protobuf.text_format as 
pbtext\n", "\n", @@ -804,9 +750,7 @@ "metadata": { "id": "xcdaIbYVT4mo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "feature_specification = {\n", " 'colors': tf.io.RaggedFeature(tf.string),\n", @@ -843,9 +787,7 @@ "metadata": { "id": "fBml1m2G2vO9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Helper function used to print datasets in the examples below.\n", "def print_dictionary_dataset(dataset):\n", @@ -872,9 +814,7 @@ "metadata": { "id": "BuelF_y2mEq9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "dataset = tf.data.Dataset.from_tensor_slices(feature_tensors)\n", "print_dictionary_dataset(dataset)" @@ -906,9 +846,7 @@ "metadata": { "id": "lk62aRz63IZn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "batched_dataset = dataset.batch(2)\n", "print_dictionary_dataset(batched_dataset)" @@ -929,9 +867,7 @@ "metadata": { "id": "CxLlaPw_5Je4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "unbatched_dataset = batched_dataset.unbatch()\n", "print_dictionary_dataset(unbatched_dataset)" @@ -954,9 +890,7 @@ "metadata": { "id": "PYnhERwh3_mf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "non_ragged_dataset = tf.data.Dataset.from_tensor_slices([1, 5, 3, 2, 8])\n", "non_ragged_dataset = non_ragged_dataset.map(tf.range)\n", @@ -983,9 +917,7 @@ "metadata": { "id": "Ios1GuG-pf9U" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def transform_lengths(features):\n", " return {\n", @@ -1012,9 +944,7 @@ "metadata": { "id": "PfyxgVaj_8tl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def make_palindrome(x, axis):\n", @@ -1027,9 +957,7 @@ "metadata": { "id": "vcZdzvEnDEt0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "make_palindrome(tf.constant([[1, 2], [3, 4], [5, 6]]), axis=1)" ] @@ -1040,9 +968,7 @@ "metadata": { "id": "4WfCMIgdDMxj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "make_palindrome(tf.ragged.constant([[1, 2], [3], [4, 5, 6]]), axis=1)" ] @@ -1062,9 +988,7 @@ 
"metadata": { "id": "k6-hkhdDBk6G" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function(\n", " input_signature=[tf.RaggedTensorSpec(shape=[None, None], dtype=tf.int32)])\n", @@ -1091,9 +1015,7 @@ "metadata": { "id": "yyJeXJ4wFWox" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def increment(x):\n", @@ -1130,9 +1052,7 @@ "metadata": { "id": "D-Dg9w7Je5pU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tempfile\n", "\n", @@ -1157,9 +1077,7 @@ "metadata": { "id": "Sfem1ESrdGzX" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class CustomModule(tf.Module):\n", " def __init__(self, variable_value):\n", @@ -1209,9 +1127,7 @@ "metadata": { "id": "skScd37P-PVu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n", "y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]])\n", @@ -1233,9 +1149,7 @@ "metadata": { "id": "IYybEEWc-PVx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])\n", "print(x + 3)" @@ -1278,9 +1192,7 @@ "metadata": { "id": "MbSRZRDz-PV1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "queries = tf.ragged.constant(\n", " [['Who', 'is', 'George', 'Washington'],\n", @@ -1294,9 +1206,7 @@ "metadata": { "id": "2HRs2xhh-vZE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(queries[1]) # A single query" ] @@ -1307,9 +1217,7 @@ "metadata": { "id": "EFfjZV7YA3UH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(queries[1, 2]) # A single word" ] @@ -1320,9 +1228,7 @@ "metadata": { "id": "VISRPQSdA3xn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(queries[1:]) # Everything but the first row" ] @@ -1333,9 +1239,7 @@ "metadata": { "id": "J1PpSyKQBMng" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(queries[:, :3]) # The first 3 words of each query" ] @@ -1346,9 +1250,7 @@ "metadata": { "id": "ixrhHmJBeidy" }, - "outputs": [ - - ], + "outputs": [], 
"source": [ "print(queries[:, -2:]) # The last 2 words of each query" ] @@ -1368,9 +1270,7 @@ "metadata": { "id": "8VbqbKcE-PV6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.ragged.constant([[[1, 2, 3], [4]],\n", " [[5], [], [6]],\n", @@ -1384,9 +1284,7 @@ "metadata": { "id": "f9WPVWf4grVp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rt[1]) # Second row (2D RaggedTensor)" ] @@ -1397,9 +1295,7 @@ "metadata": { "id": "ad8FGJoABjQH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rt[3, 0]) # First element of fourth row (1D Tensor)" ] @@ -1410,9 +1306,7 @@ "metadata": { "id": "MPPr-a-bBjFE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rt[:, 1:3]) # Items 1-3 of each row (3D RaggedTensor)" ] @@ -1423,9 +1317,7 @@ "metadata": { "id": "6SIDeoIUBi4z" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rt[:, -1:]) # Last item of each row (3D RaggedTensor)" ] @@ -1456,9 +1348,7 @@ "metadata": { "id": "INnfmZGcBoU_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ragged_sentences = tf.ragged.constant([\n", " ['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']])" @@ -1470,9 +1360,7 @@ "metadata": { "id": "__iJ4iXtkGOx" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# RaggedTensor -> Tensor\n", "print(ragged_sentences.to_tensor(default_value='', shape=[None, 10]))" @@ -1484,9 +1372,7 @@ "metadata": { "id": "-rfiyYqne8QN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Tensor -> RaggedTensor\n", "x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]]\n", @@ -1499,9 +1385,7 @@ "metadata": { "id": "41WAZLXNnbwH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#RaggedTensor -> SparseTensor\n", "print(ragged_sentences.to_sparse())" @@ -1513,9 +1397,7 @@ "metadata": { "id": "S8MkYo2hfVhj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# SparseTensor -> RaggedTensor\n", "st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]],\n", @@ -1546,9 +1428,7 @@ "metadata": { "id": 
"uMm1WMkc-PV_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])\n", "print(\"Python list:\", rt.to_list())\n", @@ -1590,9 +1470,7 @@ "metadata": { "id": "btGDjT4uNgQy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant([[1, 2], [3, 4], [5, 6]])\n", "x.shape # shape of a tf.tensor" @@ -1604,9 +1482,7 @@ "metadata": { "id": "__OgvmrGPEjq" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.ragged.constant([[1], [2, 3], [], [4]])\n", "rt.shape # shape of a tf.RaggedTensor" @@ -1638,9 +1514,7 @@ "metadata": { "id": "kWJ7Cn1EQTD_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant([['a', 'b'], ['c', 'd'], ['e', 'f']])\n", "tf.shape(x)" @@ -1661,9 +1535,7 @@ "metadata": { "id": "nZc2wqgQQUFU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.ragged.constant([[1], [2, 3, 4], [], [5, 6]])\n", "rt_shape = tf.shape(rt)\n", @@ -1687,9 +1559,7 @@ "metadata": { "id": "pclAODLXT6Gr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(f\"tf.reshape(x, rt_shape) = {tf.reshape(x, rt_shape)}\")\n", "print(f\"tf.zeros(rt_shape) = {tf.zeros(rt_shape)}\")\n", @@ -1714,9 +1584,7 @@ "metadata": { "id": "MzQvPhsxS6HN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt_shape[0]" ] @@ -1736,9 +1604,7 @@ "metadata": { "id": "HgGMk0LeTGik" }, - "outputs": [ - - ], + "outputs": [], "source": [ "try:\n", " rt_shape[1]\n", @@ -1761,9 +1627,7 @@ "metadata": { "id": "APT72EaBU70t" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt_shape[:1]" ] @@ -1801,9 +1665,7 @@ "metadata": { "id": "NSRgD667WwIZ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.experimental.DynamicRaggedShape(\n", " row_partitions=[tf.experimental.RowPartition.from_row_lengths([5, 3, 2])],\n", @@ -1825,9 +1687,7 @@ "metadata": { "id": "gMxCzADUYIjY" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.experimental.DynamicRaggedShape.from_lengths([4, (2, 1, 0, 8), 
12])" ] @@ -1873,9 +1733,7 @@ "metadata": { "id": "0n095XdR-PWU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (2D ragged): 2 x (num_rows)\n", "# y (scalar)\n", @@ -1891,9 +1749,7 @@ "metadata": { "id": "0SVYk5AP-PWW" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (2d ragged): 3 x (num_rows)\n", "# y (2d tensor): 3 x 1\n", @@ -1912,9 +1768,7 @@ "metadata": { "id": "MsfBMD80s8Ux" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (3d ragged): 2 x (r1) x 2\n", "# y (2d ragged): 1 x 1\n", @@ -1933,9 +1787,7 @@ "metadata": { "id": "rEj5QVfnva0t" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (3d ragged): 2 x (r1) x (r2) x 1\n", "# y (1d tensor): 3\n", @@ -1973,9 +1825,7 @@ "metadata": { "id": "UpI0FlfL4Eim" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (2d ragged): 3 x (r1)\n", "# y (2d tensor): 3 x 4 # trailing dimensions do not match\n", @@ -1993,9 +1843,7 @@ "metadata": { "id": "qGq1zOT4zMoc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (2d ragged): 3 x (r1)\n", "# y (2d ragged): 3 x (r2) # ragged dimensions do not match.\n", @@ -2013,9 +1861,7 @@ "metadata": { "id": "CvLae5vMqeji" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# x (3d ragged): 3 x (r1) x 2\n", "# y (3d ragged): 3 x (r1) x 3 # trailing dimensions do not match\n", @@ -2062,9 +1908,7 @@ "metadata": { "id": "MrLgMu0gPuo-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=[3, 1, 4, 1, 5, 9, 2],\n", @@ -2106,9 +1950,7 @@ "metadata": { "id": "yy3IGT2a-PWb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=tf.RaggedTensor.from_row_splits(\n", @@ -2135,9 +1977,7 @@ "metadata": { "id": "AKYhtFcT-PWd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.RaggedTensor.from_nested_row_splits(\n", " flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],\n", @@ -2162,9 +2002,7 @@ "metadata": { "id": "BXp-Tt2bClem" }, 
- "outputs": [ - - ], + "outputs": [], "source": [ "# shape = [batch, (paragraph), (sentence), (word)]\n", "conversations = tf.ragged.constant(\n", @@ -2182,9 +2020,7 @@ "metadata": { "id": "DZUMrgxXFd5s" }, - "outputs": [ - - ], + "outputs": [], "source": [ "assert conversations.ragged_rank == len(conversations.nested_row_splits)\n", "conversations.ragged_rank # Number of partitioned dimensions." @@ -2196,9 +2032,7 @@ "metadata": { "id": "xXLSNpS0Fdvp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "conversations.flat_values.numpy()" ] @@ -2222,9 +2056,7 @@ "metadata": { "id": "z2sHwHdy-PWg" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.RaggedTensor.from_row_splits(\n", " values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]],\n", @@ -2255,9 +2087,7 @@ "metadata": { "id": "70q1aCKwySgS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rt = tf.RaggedTensor.from_uniform_row_length(\n", " values=tf.RaggedTensor.from_row_splits(\n", @@ -2272,9 +2102,7 @@ ], "metadata": { "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "ragged_tensor.ipynb", "toc_visible": true }, diff --git a/site/ko/guide/saved_model.ipynb b/site/ko/guide/saved_model.ipynb index 6baab54344..5b7e3e4701 100644 --- a/site/ko/guide/saved_model.ipynb +++ b/site/ko/guide/saved_model.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "FlUw7tSKbtg4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,10 +49,10 @@ "\n", " \n", - " \n", " \n", - " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기 노트북 다운로드하기 노트북 다운로드하기
" ] }, @@ -110,9 +108,7 @@ "metadata": { "id": "Le5OB-fBHHW7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import os\n", "import tempfile\n", @@ -130,9 +126,7 @@ "metadata": { "id": "wlho4HEWoHUT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "physical_devices = tf.config.list_physical_devices('GPU')\n", "for device in physical_devices:\n", @@ -145,9 +139,7 @@ "metadata": { "id": "SofdPKo0G8Lb" }, - "outputs": [ - - ], + "outputs": [], "source": [ "file = tf.keras.utils.get_file(\n", " \"grace_hopper.jpg\",\n", @@ -175,9 +167,7 @@ "metadata": { "id": "JhVecdzJTsKE" }, - "outputs": [ - - ], + "outputs": [], "source": [ "labels_path = tf.keras.utils.get_file(\n", " 'ImageNetLabels.txt',\n", @@ -191,9 +181,7 @@ "metadata": { "id": "aEHSYjW6JZHV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "pretrained_model = tf.keras.applications.MobileNet()\n", "result_before_save = pretrained_model(x)\n", @@ -218,9 +206,7 @@ "metadata": { "id": "8nfznDmHCW6F" }, - "outputs": [ - - ], + "outputs": [], "source": [ "mobilenet_save_path = os.path.join(tmpdir, \"mobilenet/1/\")\n", "tf.saved_model.save(pretrained_model, mobilenet_save_path)" @@ -243,9 +229,7 @@ "metadata": { "id": "NP2UpVFRV7N_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loaded = tf.saved_model.load(mobilenet_save_path)\n", "print(list(loaded.signatures.keys())) # [\"serving_default\"]" @@ -266,9 +250,7 @@ "metadata": { "id": "ChFLpegYfQGR" }, - "outputs": [ - - ], + "outputs": [], "source": [ "infer = loaded.signatures[\"serving_default\"]\n", "print(infer.structured_outputs)" @@ -289,9 +271,7 @@ "metadata": { "id": "9WjGEaS3XfX7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]\n", "\n", @@ -330,9 +310,7 @@ "metadata": { "id": "6u3YZuYZXyTO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!ls {mobilenet_save_path}" ] @@ -354,9 +332,7 @@ "metadata": { "id": "Pus0dOYTYXbI" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve" ] @@ -376,9 +352,7 @@ "metadata": { "id": "EDYqhDlNZAC2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!ls {mobilenet_save_path}/variables" ] @@ -415,9 +389,7 @@ "metadata": { "id": "6EPvKiqXMm3d" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class CustomModule(tf.Module):\n", "\n", @@ -458,9 +430,7 @@ "metadata": { "id": "85PUO9iWH7xn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')\n", "module(tf.constant(0.))\n", @@ -492,9 +462,7 @@ "metadata": { "id": "EMASjADPxPso" }, - "outputs": [ - - ], + "outputs": [], "source": [ "imported = tf.saved_model.load(module_no_signatures_path)\n", "assert imported(tf.constant(3.)).numpy() == 3\n", @@ -535,9 +503,7 @@ "metadata": { "id": "PEkQNarJ-7nT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(0.05)\n", "\n", @@ -556,9 +522,7 @@ "metadata": { "id": "p41NM6fF---3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "for _ in range(10):\n", " # \"v\" approaches 5, \"loss\" approaches 0\n", @@ -588,9 +552,7 @@ "metadata": { "id": "Y6EUFdY8_PRD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "loaded = tf.saved_model.load(mobilenet_save_path)\n", "print(\"MobileNet has {} trainable variables: {}, ...\".format(\n", @@ -604,9 +566,7 @@ "metadata": { "id": "B-mQJ8iP_R0h" }, - "outputs": [ - - ], + "outputs": [], "source": [ "trainable_variable_ids = {id(v) for v in loaded.trainable_variables}\n", "non_trainable_variables = [v for v in loaded.variables\n", @@ -637,9 +597,7 @@ "metadata": { "id": "h-IB5Xa0NxLa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "assert len(imported.signatures) == 0" ] @@ -659,9 +617,7 @@ "metadata": { "id": "_pAdgIORR2yH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')\n", 
"call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))\n", @@ -674,9 +630,7 @@ "metadata": { "id": "nAzRHR0UT4hv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "imported_with_signatures = tf.saved_model.load(module_with_signature_path)\n", "list(imported_with_signatures.signatures.keys())\n" @@ -697,9 +651,7 @@ "metadata": { "id": "6VYAiQmLUiox" }, - "outputs": [ - - ], + "outputs": [], "source": [ "module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')\n", "signatures = {\"serving_default\": call,\n", @@ -714,9 +666,7 @@ "metadata": { "id": "8IPx_0RWEx07" }, - "outputs": [ - - ], + "outputs": [], "source": [ "imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)\n", "list(imported_with_multiple_signatures.signatures.keys())" @@ -737,9 +687,7 @@ "metadata": { "id": "ACKPl1X8G1gw" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class CustomModuleWithOutputName(tf.Module):\n", " def __init__(self):\n", @@ -763,9 +711,7 @@ "metadata": { "id": "1yGVy4MuH-V0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "imported_with_output_name = tf.saved_model.load(module_output_path)\n", "imported_with_output_name.signatures['serving_default'].structured_outputs" diff --git a/site/ko/guide/tensor.ipynb b/site/ko/guide/tensor.ipynb index f05ef9a2a7..bf42d11542 100644 --- a/site/ko/guide/tensor.ipynb +++ b/site/ko/guide/tensor.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,10 +49,10 @@ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기노트북 다운로드하기GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -64,9 +62,7 @@ "metadata": { "id": "AL2hzxorJiWy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "import numpy as np" @@ -111,9 +107,7 @@ "metadata": { "id": "d5JcgLFR6gHv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This will be an int32 tensor by default; see \"dtypes\" below.\n", "rank_0_tensor = tf.constant(4)\n", @@ -135,9 +129,7 @@ "metadata": { "id": "oZos8o_R6oE7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Let's make this a float tensor.\n", "rank_1_tensor = tf.constant([2.0, 3.0, 4.0])\n", @@ -159,9 +151,7 @@ "metadata": { "id": "cnOIA_xb6u0M" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# If you want to be specific, you can set the dtype (see below) at creation time\n", "rank_2_tensor = tf.constant([[1, 2],\n", @@ -209,9 +199,7 @@ "metadata": { "id": "sesW7gw6JkXy" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# There can be an arbitrary number of\n", "# axes (sometimes called \"dimensions\")\n", @@ -275,9 +263,7 @@ "metadata": { "id": "J5u6_6ZYaS7B" }, - "outputs": [ - - ], + "outputs": [], "source": [ "np.array(rank_2_tensor)" ] @@ -288,9 +274,7 @@ "metadata": { "id": "c6Taz2gIaZeo" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rank_2_tensor.numpy()" ] @@ -327,9 +311,7 @@ "metadata": { "id": "-DTkjwDOIIDa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "a = tf.constant([[1, 2],\n", " [3, 4]])\n", @@ -347,9 +329,7 @@ "metadata": { "id": "2smoWeUz-N2q" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(a + b, \"\\n\") # element-wise addition\n", "print(a * b, \"\\n\") # element-wise multiplication\n", @@ -371,9 +351,7 @@ "metadata": { "id": "Gp4WUYzGIbnv" }, - "outputs": [ - - ], + "outputs": [], "source": [ "c = tf.constant([[4.0, 5.0], [10.0, 1.0]])\n", "\n", @@ -400,9 +378,7 @@ "metadata": { "id": "_wch0N8xNEt-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.convert_to_tensor([1,2,3])" ] @@ -413,9 +389,7 @@ "metadata": { 
"id": "ngqIeWYeNJVI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.reduce_max([1,2,3])" ] @@ -426,9 +400,7 @@ "metadata": { "id": "ThVMxqbVNOq3" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.reduce_max(np.array([1,2,3]))" ] @@ -480,9 +452,7 @@ "metadata": { "id": "RyD3yewUKdnK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rank_4_tensor = tf.zeros([3, 2, 4, 5])" ] @@ -512,9 +482,7 @@ "metadata": { "id": "MHm9vSqogsBk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Type of every element:\", rank_4_tensor.dtype)\n", "print(\"Number of axes:\", rank_4_tensor.ndim)\n", @@ -539,9 +507,7 @@ "metadata": { "id": "Ptq0-y6APCpD" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.rank(rank_4_tensor)" ] @@ -552,9 +518,7 @@ "metadata": { "id": "HslrDOEBPICN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.shape(rank_4_tensor)" ] @@ -608,9 +572,7 @@ "metadata": { "id": "SQ-CrJxLXTIM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])\n", "print(rank_1_tensor.numpy())" @@ -631,9 +593,7 @@ "metadata": { "id": "n6tqHciOWMt5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"First:\", rank_1_tensor[0].numpy())\n", "print(\"Second:\", rank_1_tensor[1].numpy())\n", @@ -655,9 +615,7 @@ "metadata": { "id": "giVPPcfQX-cu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Everything:\", rank_1_tensor[:].numpy())\n", "print(\"Before 4:\", rank_1_tensor[:4].numpy())\n", @@ -693,9 +651,7 @@ "metadata": { "id": "Tc5X_WlsZXmd" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rank_2_tensor.numpy())" ] @@ -715,9 +671,7 @@ "metadata": { "id": "PvILXc1PjqTM" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Pull out a single value from a 2-rank tensor\n", "print(rank_2_tensor[1, 1].numpy())" @@ -738,9 +692,7 @@ "metadata": { "id": "YTqNqsfJkJP_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Get row and column 
tensors\n", "print(\"Second row:\", rank_2_tensor[1, :].numpy())\n", @@ -766,9 +718,7 @@ "metadata": { "id": "GuLoMoCVSLxK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rank_3_tensor[:, :, 4])" ] @@ -817,9 +767,7 @@ "metadata": { "id": "EMeTtga5Wq8j" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Shape returns a `TensorShape` object that shows the size along each axis\n", "x = tf.constant([[1], [2], [3]])\n", @@ -832,9 +780,7 @@ "metadata": { "id": "38jc2RXziT3W" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# You can convert this object into a Python list, too\n", "print(x.shape.as_list())" @@ -855,9 +801,7 @@ "metadata": { "id": "pa9JCgMLWy87" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# You can reshape a tensor to a new shape.\n", "# Note that you're passing in a list\n", @@ -870,9 +814,7 @@ "metadata": { "id": "Mcq7iXOkW3LK" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(x.shape)\n", "print(reshaped.shape)" @@ -893,9 +835,7 @@ "metadata": { "id": "7kMfM0RpUgI8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(rank_3_tensor)" ] @@ -915,9 +855,7 @@ "metadata": { "id": "COnHEPuaWDQp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# A `-1` passed in the `shape` argument says \"Whatever fits\".\n", "print(tf.reshape(rank_3_tensor, [-1]))" @@ -940,9 +878,7 @@ "metadata": { "id": "zP2Iqc7zWu_J" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.reshape(rank_3_tensor, [3*2, 5]), \"\\n\")\n", "print(tf.reshape(rank_3_tensor, [3, -1]))" @@ -984,9 +920,7 @@ "metadata": { "id": "I9qDL_8u7cBH" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Bad examples: don't do this\n", "\n", @@ -1059,9 +993,7 @@ "metadata": { "id": "5mSTDWbelUvu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "the_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64)\n", "the_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16)\n", @@ -1089,9 +1021,7 @@ "metadata": { "id": "P8sypqmagHQN" 
}, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant([1, 2, 3])\n", "\n", @@ -1120,9 +1050,7 @@ "metadata": { "id": "6sGmkPg3XANr" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# These are the same computations\n", "x = tf.reshape(x,[3,1])\n", @@ -1164,9 +1092,7 @@ "metadata": { "id": "elrF6v63igY8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x_stretch = tf.constant([[1, 1, 1, 1],\n", " [2, 2, 2, 2],\n", @@ -1196,9 +1122,7 @@ "metadata": { "id": "GW2Q59_r8hZ6" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))" ] @@ -1265,9 +1189,7 @@ "metadata": { "id": "VsbTjoFfNVBF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ragged_list = [\n", " [0, 1, 2, 3],\n", @@ -1282,9 +1204,7 @@ "metadata": { "id": "p4xKTo57tutG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "try:\n", " tensor = tf.constant(ragged_list)\n", @@ -1307,9 +1227,7 @@ "metadata": { "id": "XhF3QV3jiqTj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "ragged_tensor = tf.ragged.constant(ragged_list)\n", "print(ragged_tensor)" @@ -1330,9 +1248,7 @@ "metadata": { "id": "Eo_3wJUWNgqB" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(ragged_tensor.shape)" ] @@ -1365,9 +1281,7 @@ "metadata": { "id": "sBosmM8MkIh4" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Tensors can be strings, too here is a scalar string.\n", "scalar_string_tensor = tf.constant(\"Gray wolf\")\n", @@ -1406,9 +1320,7 @@ "metadata": { "id": "41Dv2kL9QrtO" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# If you have three string tensors of different lengths, this is OK.\n", "tensor_of_strings = tf.constant([\"Gray wolf\",\n", @@ -1442,9 +1354,7 @@ "metadata": { "id": "GTgL53jxSMd9" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.constant(\"🥳👍\")" ] @@ -1464,9 +1374,7 @@ "metadata": { "id": "8k2K0VTFyj8e" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# You can use split to split a 
string into a set of tensors\n", "print(tf.strings.split(scalar_string_tensor, sep=\" \"))" @@ -1478,9 +1386,7 @@ "metadata": { "id": "zgGAn1dfR-04" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# ...but it turns into a `RaggedTensor` if you split up a tensor of strings,\n", "# as each string might be split into a different number of parts.\n", @@ -1519,9 +1425,7 @@ "metadata": { "id": "3nRtx3X9WRfN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "text = tf.constant(\"1 10 100\")\n", "print(tf.strings.to_number(tf.strings.split(text, \" \")))" @@ -1542,9 +1446,7 @@ "metadata": { "id": "fo8BjmH7gyTj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "byte_strings = tf.strings.bytes_split(tf.constant(\"Duck\"))\n", "byte_ints = tf.io.decode_raw(tf.constant(\"Duck\"), tf.uint8)\n", @@ -1558,9 +1460,7 @@ "metadata": { "id": "uSQnZ7d1jCSQ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Or split it up as unicode and then decode it\n", "unicode_bytes = tf.constant(\"アヒル 🦆\")\n", @@ -1615,9 +1515,7 @@ "metadata": { "id": "B9nbO1E2kSUN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Sparse tensors store values by index in a memory-efficient manner\n", "sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],\n", diff --git a/site/ko/guide/tf_numpy.ipynb b/site/ko/guide/tf_numpy.ipynb index 8732037558..c1ab028f40 100644 --- a/site/ko/guide/tf_numpy.ipynb +++ b/site/ko/guide/tf_numpy.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "sY3Ffd83hK3b" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -50,10 +48,10 @@ "source": [ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기 Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소그 보기 노트북 다운로드하기GitHub에서 소그 보기 노트북 다운로드하기
" ] }, @@ -83,9 +81,7 @@ "metadata": { "id": "AJR558zjAZQu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", @@ -113,9 +109,7 @@ "metadata": { "id": "TfCyofpFDQxm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior()" ] @@ -150,9 +144,7 @@ "metadata": { "id": "-BHJjxigJ2H1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Create an ND array and check out different attributes.\n", "ones = tnp.ones([5, 3], dtype=tnp.float32)\n", @@ -211,9 +203,7 @@ "metadata": { "id": "uHmBi4KZI2t1" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Type promotion for operations\")\n", "values = [tnp.asarray(1, dtype=d) for d in\n", @@ -239,9 +229,7 @@ "metadata": { "id": "1m1cp8_VooNk" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Type inference during array creation\")\n", "print(\"tnp.asarray(1).dtype == tnp.%s\" % tnp.asarray(1).dtype.name)\n", @@ -263,9 +251,7 @@ "metadata": { "id": "4gKasnH0j84C" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(prefer_float32=True)\n", "print(\"When prefer_float32 is True:\")\n", @@ -295,9 +281,7 @@ "metadata": { "id": "qlyOShxIO0s2" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tnp.ones([2, 3])\n", "y = tnp.ones([3])\n", @@ -323,9 +307,7 @@ "metadata": { "id": "lRsrtnd3YyMj" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tnp.arange(24).reshape(2, 3, 4)\n", "\n", @@ -345,9 +327,7 @@ "metadata": { "id": "yRAaiGhlaNw7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Mutation is currently not supported\n", "try:\n", @@ -373,9 +353,7 @@ "metadata": { "id": "kR_KCh4kYEhm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "class Model(object):\n", " \"\"\"Model with a dense and a linear layer.\"\"\"\n", @@ -435,9 +413,7 @@ "metadata": { "id": "cMOCgzQmeXRU" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# ND array 
passed into NumPy function.\n", "np_sum = np.sum(tnp.ones([2, 3]))\n", @@ -454,9 +430,7 @@ "metadata": { "id": "ZaLPjzxft780" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# It is easy to plot ND arrays, given the __array__ interface.\n", "labels = 15 + 2 * tnp.random.randn(1, 1000)\n", @@ -497,9 +471,7 @@ "metadata": { "id": "Cbw8a3G_WUO7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tnp.ones([2]) + np.ones([2])\n", "print(\"x = %s\\nclass = %s\" % (x, x.__class__))" @@ -533,9 +505,7 @@ "metadata": { "id": "BkHVauKwnky_" }, - "outputs": [ - - ], + "outputs": [], "source": [ "x = tf.constant([1, 2])\n", "print(x)\n", @@ -568,9 +538,7 @@ "metadata": { "id": "-QvxNhrFoz09" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# ND array passed into TensorFlow function.\n", "tf_sum = tf.reduce_sum(tnp.ones([2, 3], tnp.float32))\n", @@ -600,9 +568,7 @@ "metadata": { "id": "T47C9KS8pbsP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def create_batch(batch_size=32):\n", " \"\"\"Creates a batch of input and labels.\"\"\"\n", @@ -639,9 +605,7 @@ "metadata": { "id": "TujVPDFwrdqp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Computes a batch of jacobians. Each row is the jacobian of an element in the\n", "# batch of outputs w.r.t. 
the corresponding input batch element.\n", @@ -677,9 +641,7 @@ "metadata": { "id": "05SrUulm1OlL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "inputs, labels = create_batch(512)\n", "print(\"Eager performance\")\n", @@ -713,9 +675,7 @@ "metadata": { "id": "PemSIrs5L-VJ" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def vectorized_per_example_gradients(inputs, labels):\n", @@ -746,9 +706,7 @@ "metadata": { "id": "_QZ5BjJmRAlG" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Benchmark the vectorized computation above and compare with\n", "# unvectorized sequential computation using `tf.map_fn`.\n", @@ -803,9 +761,7 @@ "metadata": { "id": "NDEAd9m9aemS" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"All logical devices:\", tf.config.list_logical_devices())\n", "print(\"All physical devices:\", tf.config.list_physical_devices())\n", @@ -834,9 +790,7 @@ "metadata": { "id": "c7ELvLmnazfV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "print(\"Using device: %s\" % str(device))\n", "# Run operations in the `tf.device` scope.\n", @@ -865,9 +819,7 @@ "metadata": { "id": "CCesyidaa-UT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with tf.device(\"/device:CPU:0\"):\n", " prediction_cpu = tnp.copy(prediction)\n", @@ -897,9 +849,7 @@ "cellView": "code", "id": "RExwjI9_pJG0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def benchmark(f, inputs, number=30, force_gpu_sync=False):\n", " \"\"\"Utility to benchmark `f` on each value in `inputs`.\"\"\"\n", @@ -938,9 +888,7 @@ "metadata": { "id": "p-fs_H1lkLfV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Define a simple implementation of `sigmoid`, and benchmark it using\n", "# NumPy and TensorFlow NumPy for different input sizes.\n", diff --git a/site/ko/guide/tf_numpy_type_promotion.ipynb b/site/ko/guide/tf_numpy_type_promotion.ipynb index 948dec6d40..5d1ea8e6e9 100644 --- a/site/ko/guide/tf_numpy_type_promotion.ipynb +++ 
b/site/ko/guide/tf_numpy_type_promotion.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "sY3Ffd83hK3b" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -51,10 +49,10 @@ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기노트북 다운로드하기GitHub에서 소스 보기노트북 다운로드하기
" ] }, @@ -79,9 +77,7 @@ "metadata": { "id": "vMvEKDFOsau7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "!pip install -q tf_nightly" ] @@ -110,9 +106,7 @@ "metadata": { "id": "AJR558zjAZQu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import numpy as np\n", "import tensorflow as tf\n", @@ -140,9 +134,7 @@ "metadata": { "id": "TfCyofpFDQxm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")" ] @@ -202,9 +194,7 @@ "metadata": { "id": "Y-yeIvstWStL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# i32 + f32 returns a f32 result in ALL mode.\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", @@ -219,9 +209,7 @@ "metadata": { "id": "JNNmZow2WY3G" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This promotion is not allowed in SAFE mode.\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"safe\")\n", @@ -250,9 +238,7 @@ "metadata": { "id": "Etbv-WoWzUXf" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# i8 + u32 returns an i64 result in ALL mode.\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", @@ -267,9 +253,7 @@ "metadata": { "id": "yKRdvtvw0Lvt" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# This promotion is not allowed in SAFE mode.\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"safe\")\n", @@ -391,9 +375,7 @@ "metadata": { "id": "abqIkV02OXEF" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# The first input is promoted to f16 in ALL mode.\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", @@ -451,9 +433,7 @@ "metadata": { "id": "eSBv_mzyTE97" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.constant(1.2) + tf.constant(3.1, tf.float16) # " ] @@ -484,9 +464,7 @@ "metadata": { "id": "fLEtMluNTsI5" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.constant(5) # " ] @@ -497,9 +475,7 @@ "metadata": { 
"id": "ZQX6MBWHTt__" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.constant([5.0, 10.0, 3]) # " ] @@ -510,9 +486,7 @@ "metadata": { "id": "ftsKSC5BTweP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# A normal Tensor is created when dtype arg is specified.\n", "tf.constant(5, tf.int32) # " @@ -533,9 +507,7 @@ "metadata": { "id": "DuwpgoQJTzE-" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.math.abs([100.0, 4.0]) # " ] @@ -588,9 +560,7 @@ "metadata": { "id": "M1Ca9v4m7z8e" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Setup\n", "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"legacy\")\n", @@ -605,9 +575,7 @@ "metadata": { "id": "WwhTzJ-a4rTc" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# (a + b) + c throws an InvalidArgumentError.\n", "try:\n", @@ -622,9 +590,7 @@ "metadata": { "id": "d3qDgVYn7ezT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# (b + a) + c returns an i32 result.\n", "tf.add(tf.add(b, a), c) # " @@ -647,9 +613,7 @@ "metadata": { "id": "BOHyJJ8z8uCN" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", "a = np.array(1, dtype=np.int8)\n", @@ -663,9 +627,7 @@ "metadata": { "id": "ZUKU70jf7E1l" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# (a + b) + c returns a f16 result.\n", "tf.add(tf.add(a, b), c) # " @@ -677,9 +639,7 @@ "metadata": { "id": "YOEycjFx7qDn" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# (b + a) + c also returns a f16 result.\n", "tf.add(tf.add(b, a), c) # " @@ -711,9 +671,7 @@ "metadata": { "id": "7L1pxyvn9MlP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"legacy\")" ] @@ -724,9 +682,7 @@ "metadata": { "id": "zMJVFdWf4XHp" }, - "outputs": [ - - ], + "outputs": [], "source": [ "np.array(3.2, np.float16) + tf.constant(1, tf.int8) + tf.constant(50) # " ] @@ -748,9 +704,7 @@ "metadata": { "id": 
"aJsj2ZyI9T9Y" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")" ] @@ -761,9 +715,7 @@ "metadata": { "id": "jj0N_Plp4X9l" }, - "outputs": [ - - ], + "outputs": [], "source": [ "np.array(3.2, np.float16) + tf.constant(1, tf.int8) + tf.constant(50) # " ] @@ -785,9 +737,7 @@ "metadata": { "id": "2c3icBUX4wNl" }, - "outputs": [ - - ], + "outputs": [], "source": [ "-tf.constant(5) # " ] @@ -798,9 +748,7 @@ "metadata": { "id": "ydJHQjid45s7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tf.constant(5, tf.int16) - tf.constant(1, tf.float32) # " ] @@ -824,9 +772,7 @@ "metadata": { "id": "QsXhyK1h-i5S" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", "a = tf.Variable(10, tf.int32)\n", @@ -850,9 +796,7 @@ "metadata": { "id": "ArrQ9Dj0_OR8" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.experimental_enable_numpy_behavior(dtype_conversion_mode=\"all\")\n", "a = tf.constant(10, tf.int16)\n", @@ -876,9 +820,7 @@ "metadata": { "id": "K1pZnYNh_ahm" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.array(1) # " ] @@ -889,9 +831,7 @@ "metadata": { "id": "QoQl2PYP_fMT" }, - "outputs": [ - - ], + "outputs": [], "source": [ "tnp.array(1.0) # " ] diff --git a/site/ko/guide/tpu.ipynb b/site/ko/guide/tpu.ipynb index a00d31fd94..d1b10960e1 100644 --- a/site/ko/guide/tpu.ipynb +++ b/site/ko/guide/tpu.ipynb @@ -16,9 +16,7 @@ "cellView": "form", "id": "tuOe1ymfHZPu" }, - "outputs": [ - - ], + "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", @@ -44,10 +42,10 @@ "\n", " \n", - " \n", - " \n", - " \n", + " \n", + " \n", "
TensorFlow.org에서 보기\n", " Google Colab에서 실행하기\n", + " Google Colab에서 실행하기\n", "GitHub에서 소스 보기 노트북 다운로드하기GitHub에서 소스 보기 노트북 다운로드하기
" ] }, @@ -88,9 +86,7 @@ "metadata": { "id": "Cw0WRaChRxTL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "import tensorflow as tf\n", "\n", @@ -124,9 +120,7 @@ "metadata": { "id": "dKPqF8d1wJCV" }, - "outputs": [ - - ], + "outputs": [], "source": [ "resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')\n", "tf.config.experimental_connect_to_cluster(resolver)\n", @@ -152,9 +146,7 @@ "metadata": { "id": "XRZ4kMoxBNND" }, - "outputs": [ - - ], + "outputs": [], "source": [ "a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n", "b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n", @@ -194,9 +186,7 @@ "metadata": { "id": "7SO23K8oRpjI" }, - "outputs": [ - - ], + "outputs": [], "source": [ "strategy = tf.distribute.TPUStrategy(resolver)" ] @@ -216,9 +206,7 @@ "metadata": { "id": "-90CL5uFPTOa" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def matmul_fn(x, y):\n", @@ -257,9 +245,7 @@ "metadata": { "id": "DiBiN-Z_R7P7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def create_model():\n", " regularizer = tf.keras.regularizers.L2(1e-5)\n", @@ -317,9 +303,7 @@ "metadata": { "id": "noAd416KSCo7" }, - "outputs": [ - - ], + "outputs": [], "source": [ "def get_dataset(batch_size, is_training=True):\n", " split = 'train' if is_training else 'test'\n", @@ -364,9 +348,7 @@ "metadata": { "id": "ubmDchPqSIx0" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with strategy.scope():\n", " model = create_model()\n", @@ -403,9 +385,7 @@ "metadata": { "id": "M6e3aVVLUorL" }, - "outputs": [ - - ], + "outputs": [], "source": [ "with strategy.scope():\n", " model = create_model()\n", @@ -448,9 +428,7 @@ "metadata": { "id": "9aHhqwao2Fxi" }, - "outputs": [ - - ], + "outputs": [], "source": [ "# Create the model, optimizer and metrics inside the `tf.distribute.Strategy`\n", "# scope, so that the variables can be mirrored on each device.\n", @@ -507,9 +485,7 @@ "metadata": { "id": "1du5cXWt6Vtw" }, - "outputs": [ - - ], + 
"outputs": [], "source": [ "steps_per_eval = 10000 // batch_size\n", "\n", @@ -546,9 +522,7 @@ "metadata": { "id": "2grYvXLzJYkP" }, - "outputs": [ - - ], + "outputs": [], "source": [ "@tf.function\n", "def train_multiple_steps(iterator, steps):\n", @@ -608,9 +582,7 @@ "metadata": { "accelerator": "TPU", "colab": { - "collapsed_sections": [ - - ], + "collapsed_sections": [], "name": "tpu.ipynb", "toc_visible": true },