diff --git a/CNAME b/CNAME new file mode 100644 index 00000000000..4993baf651d --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +forklift-docs.konveyor.io diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 00000000000..ddee4673182 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,128 @@ +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, religion, or sexual identity +and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. + +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +- Demonstrating empathy and kindness toward other people +- Being respectful of differing opinions, viewpoints, and experiences +- Giving and gracefully accepting constructive feedback +- Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +- Focusing on what is best not just for us as individuals, but for the + overall community + +Examples of unacceptable behavior include: + +- The use of sexualized language or imagery, and sexual attention or + advances of any kind +- Trolling, insulting or derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or email + address, without their explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. +Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at +konveyor.io. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. 
+ +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series +of actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or +permanent ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within +the community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.0, available at +https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. + +Community Impact Guidelines were inspired by [Mozilla's code of conduct +enforcement ladder](https://github.com/mozilla/diversity). + +[homepage]: https://www.contributor-covenant.org + +For answers to common questions about this code of conduct, see the FAQ at +https://www.contributor-covenant.org/faq. Translations are available at +https://www.contributor-covenant.org/translations. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000000..7f375065b01 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,33 @@ +# Contributing to Forklift documentation + +This project is [Apache 2.0 licensed](LICENSE) and accepts contributions via +GitHub pull requests. + +Read the [Guidelines for Red Hat Documentation](https://redhat-documentation.github.io/) before opening a pull request. + +### Upstream and downstream variables + +This document uses the following variables to ensure that upstream and downstream product names and versions are rendered correctly. + +| Variable | Upstream value | Downstream value | +| -------- | -------------- | ---------------- | +| project-full | Forklift | Migration Toolkit for Virtualization | +| project-short | Forklift | MTV | +| project-version | 2.0 | 2.0 | +| virt | KubeVirt | OpenShift Virtualization | +| ocp | OKD | Red Hat OpenShift Container Platform | +| ocp-version | 4.7 | 4.7 | +| ocp-short | OKD | OCP | + +Variables cannot be used in CLI commands or code blocks unless you include the "attributes" keyword: + + [options="nowrap" subs="+quotes,+attributes"] + ---- + # ls {VariableName} + ---- + +You can hide or show specific blocks, paragraphs, warnings or chapters with the `build` variable. 
Its value can be set to "downstream" or "upstream": + + ifeval::["{build}" == "upstream"] + This content is only relevant for Forklift. + endif::[] diff --git a/Gemfile b/Gemfile new file mode 100644 index 00000000000..c7b0183bfd4 --- /dev/null +++ b/Gemfile @@ -0,0 +1,31 @@ +# frozen_string_literal: true +# Encoding.default_external = Encoding::UTF_8 +# Encoding.default_internal = Encoding::UTF_8 + +source "https://rubygems.org" + +# gem "asciidoctor-pdf" +gem "asciidoctor" +# gem "bundle" +# gem "html-proofer" +# gem "jekyll-theme-minimal" +# gem "jekyll-feed" +gem "jekyll-paginate" +# gem "jekyll-redirect-from" +# gem "jekyll-sitemap" +# gem "jekyll-tagging" +# gem 'jekyll-seo-tag' +# gem "jekyll", ">= 3.5" +# gem "premonition", ">= 4.0.0" +# gem "pygments.rb" +# gem "rake" +# +# +gem "github-pages", group: :jekyll_plugins + +# ensures that jekyll-asciidoc is loaded first +group :jekyll_plugins do + gem 'jekyll-asciidoc' +end + +gemspec diff --git a/Gemfile.lock b/Gemfile.lock new file mode 100644 index 00000000000..0fb6fe00518 --- /dev/null +++ b/Gemfile.lock @@ -0,0 +1,318 @@ +PATH + remote: . + specs: + jekyll-theme-cayman (0.1.1) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + +GEM + remote: https://rubygems.org/ + specs: + activesupport (7.1.1) + base64 + bigdecimal + concurrent-ruby (~> 1.0, >= 1.0.2) + connection_pool (>= 2.2.5) + drb + i18n (>= 1.6, < 2) + minitest (>= 5.1) + mutex_m + tzinfo (~> 2.0) + addressable (2.8.5) + public_suffix (>= 2.0.2, < 6.0) + asciidoctor (2.0.20) + ast (2.4.2) + base64 (0.1.1) + bigdecimal (3.1.4) + coffee-script (2.4.1) + coffee-script-source + execjs + coffee-script-source (1.11.1) + colorator (1.1.0) + commonmarker (0.23.10) + concurrent-ruby (1.2.2) + connection_pool (2.4.1) + dnsruby (1.70.0) + simpleidn (~> 0.2.1) + drb (2.1.1) + ruby2_keywords + em-websocket (0.5.3) + eventmachine (>= 0.12.9) + http_parser.rb (~> 0) + ethon (0.16.0) + ffi (>= 1.15.0) + eventmachine (1.2.7) + execjs (2.9.1) + faraday (2.7.11) + base64 + faraday-net_http (>= 2.0, < 3.1) + ruby2_keywords (>= 0.0.4) + faraday-net_http (3.0.2) + ffi (1.16.3) + forwardable-extended (2.6.0) + gemoji (3.0.1) + github-pages (228) + github-pages-health-check (= 1.17.9) + jekyll (= 3.9.3) + jekyll-avatar (= 0.7.0) + jekyll-coffeescript (= 1.1.1) + jekyll-commonmark-ghpages (= 0.4.0) + jekyll-default-layout (= 0.1.4) + jekyll-feed (= 0.15.1) + jekyll-gist (= 1.5.0) + jekyll-github-metadata (= 2.13.0) + jekyll-include-cache (= 0.2.1) + jekyll-mentions (= 1.6.0) + jekyll-optional-front-matter (= 0.3.2) + jekyll-paginate (= 1.1.0) + jekyll-readme-index (= 0.3.0) + jekyll-redirect-from (= 0.16.0) + jekyll-relative-links (= 0.6.1) + jekyll-remote-theme (= 0.4.3) + jekyll-sass-converter (= 1.5.2) + jekyll-seo-tag (= 2.8.0) + jekyll-sitemap (= 1.4.0) + jekyll-swiss (= 1.0.0) + jekyll-theme-architect (= 0.2.0) + jekyll-theme-cayman (= 0.2.0) + jekyll-theme-dinky (= 0.2.0) + jekyll-theme-hacker (= 0.2.0) + jekyll-theme-leap-day (= 0.2.0) + jekyll-theme-merlot (= 0.2.0) + jekyll-theme-midnight (= 0.2.0) + jekyll-theme-minimal (= 0.2.0) + jekyll-theme-modernist (= 0.2.0) + jekyll-theme-primer (= 0.6.0) + jekyll-theme-slate (= 0.2.0) + jekyll-theme-tactile (= 0.2.0) + jekyll-theme-time-machine (= 0.2.0) + jekyll-titles-from-headings (= 0.5.3) + jemoji (= 0.12.0) + kramdown (= 2.3.2) + kramdown-parser-gfm (= 1.1.0) + liquid (= 4.0.4) + mercenary (~> 0.3) + minima (= 2.5.1) + nokogiri (>= 1.13.6, < 2.0) + rouge (= 3.26.0) + terminal-table (~> 1.4) + github-pages-health-check (1.17.9) +
addressable (~> 2.3) + dnsruby (~> 1.60) + octokit (~> 4.0) + public_suffix (>= 3.0, < 5.0) + typhoeus (~> 1.3) + html-pipeline (2.14.3) + activesupport (>= 2) + nokogiri (>= 1.4) + html-proofer (3.19.4) + addressable (~> 2.3) + mercenary (~> 0.3) + nokogiri (~> 1.13) + parallel (~> 1.10) + rainbow (~> 3.0) + typhoeus (~> 1.3) + yell (~> 2.0) + http_parser.rb (0.8.0) + i18n (1.14.1) + concurrent-ruby (~> 1.0) + jekyll (3.9.3) + addressable (~> 2.4) + colorator (~> 1.0) + em-websocket (~> 0.5) + i18n (>= 0.7, < 2) + jekyll-sass-converter (~> 1.0) + jekyll-watch (~> 2.0) + kramdown (>= 1.17, < 3) + liquid (~> 4.0) + mercenary (~> 0.3.3) + pathutil (~> 0.9) + rouge (>= 1.7, < 4) + safe_yaml (~> 1.0) + jekyll-asciidoc (3.0.0) + asciidoctor (>= 1.5.0) + jekyll (>= 3.0.0) + jekyll-avatar (0.7.0) + jekyll (>= 3.0, < 5.0) + jekyll-coffeescript (1.1.1) + coffee-script (~> 2.2) + coffee-script-source (~> 1.11.1) + jekyll-commonmark (1.4.0) + commonmarker (~> 0.22) + jekyll-commonmark-ghpages (0.4.0) + commonmarker (~> 0.23.7) + jekyll (~> 3.9.0) + jekyll-commonmark (~> 1.4.0) + rouge (>= 2.0, < 5.0) + jekyll-default-layout (0.1.4) + jekyll (~> 3.0) + jekyll-feed (0.15.1) + jekyll (>= 3.7, < 5.0) + jekyll-gist (1.5.0) + octokit (~> 4.2) + jekyll-github-metadata (2.13.0) + jekyll (>= 3.4, < 5.0) + octokit (~> 4.0, != 4.4.0) + jekyll-include-cache (0.2.1) + jekyll (>= 3.7, < 5.0) + jekyll-mentions (1.6.0) + html-pipeline (~> 2.3) + jekyll (>= 3.7, < 5.0) + jekyll-optional-front-matter (0.3.2) + jekyll (>= 3.0, < 5.0) + jekyll-paginate (1.1.0) + jekyll-readme-index (0.3.0) + jekyll (>= 3.0, < 5.0) + jekyll-redirect-from (0.16.0) + jekyll (>= 3.3, < 5.0) + jekyll-relative-links (0.6.1) + jekyll (>= 3.3, < 5.0) + jekyll-remote-theme (0.4.3) + addressable (~> 2.0) + jekyll (>= 3.5, < 5.0) + jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0) + rubyzip (>= 1.3.0, < 3.0) + jekyll-sass-converter (1.5.2) + sass (~> 3.4) + jekyll-seo-tag (2.8.0) + jekyll (>= 3.8, < 5.0) + jekyll-sitemap (1.4.0) + jekyll (>= 3.7, < 5.0) + jekyll-swiss (1.0.0) + jekyll-theme-architect (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-dinky (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-hacker (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-leap-day (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-merlot (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-midnight (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-minimal (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-modernist (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-primer (0.6.0) + jekyll (> 3.5, < 5.0) + jekyll-github-metadata (~> 2.9) + jekyll-seo-tag (~> 2.0) + jekyll-theme-slate (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-tactile (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-theme-time-machine (0.2.0) + jekyll (> 3.5, < 5.0) + jekyll-seo-tag (~> 2.0) + jekyll-titles-from-headings (0.5.3) + jekyll (>= 3.3, < 5.0) + jekyll-watch (2.2.1) + listen (~> 3.0) + jemoji (0.12.0) + gemoji (~> 3.0) + html-pipeline (~> 2.2) + jekyll (>= 3.0, < 5.0) + json (2.6.3) + kramdown (2.3.2) + rexml + kramdown-parser-gfm (1.1.0) + kramdown (~> 2.0) + liquid (4.0.4) + listen (3.8.0) + rb-fsevent (~> 0.10, >= 0.10.3) + rb-inotify (~> 0.9, >= 0.9.10) + mercenary (0.3.6) + minima (2.5.1) + jekyll (>= 3.5, < 5.0) + jekyll-feed (~> 0.9) + jekyll-seo-tag (~> 
2.1) + minitest (5.20.0) + mutex_m (0.1.2) + nokogiri (1.15.4-x86_64-linux) + racc (~> 1.4) + octokit (4.25.1) + faraday (>= 1, < 3) + sawyer (~> 0.9) + parallel (1.23.0) + parser (3.2.2.4) + ast (~> 2.4.1) + racc + pathutil (0.16.2) + forwardable-extended (~> 2.6) + public_suffix (4.0.7) + racc (1.7.3) + rainbow (3.1.1) + rb-fsevent (0.11.2) + rb-inotify (0.10.1) + ffi (~> 1.0) + regexp_parser (2.8.2) + rexml (3.2.6) + rouge (3.26.0) + rubocop (0.93.1) + parallel (~> 1.10) + parser (>= 2.7.1.5) + rainbow (>= 2.2.2, < 4.0) + regexp_parser (>= 1.8) + rexml + rubocop-ast (>= 0.6.0) + ruby-progressbar (~> 1.7) + unicode-display_width (>= 1.4.0, < 2.0) + rubocop-ast (1.30.0) + parser (>= 3.2.1.0) + ruby-progressbar (1.13.0) + ruby2_keywords (0.0.5) + rubyzip (2.3.2) + safe_yaml (1.0.5) + sass (3.7.4) + sass-listen (~> 4.0.0) + sass-listen (4.0.0) + rb-fsevent (~> 0.9, >= 0.9.4) + rb-inotify (~> 0.9, >= 0.9.7) + sawyer (0.9.2) + addressable (>= 2.3.5) + faraday (>= 0.17.3, < 3) + simpleidn (0.2.1) + unf (~> 0.1.4) + terminal-table (1.8.0) + unicode-display_width (~> 1.1, >= 1.1.1) + typhoeus (1.4.0) + ethon (>= 0.9.0) + tzinfo (2.0.6) + concurrent-ruby (~> 1.0) + unf (0.1.4) + unf_ext + unf_ext (0.0.8.2) + unicode-display_width (1.8.0) + w3c_validators (1.3.7) + json (>= 1.8) + nokogiri (~> 1.6) + rexml (~> 3.2) + yell (2.2.2) + +PLATFORMS + x86_64-linux-musl + +DEPENDENCIES + asciidoctor + github-pages + html-proofer (~> 3.0) + jekyll-asciidoc + jekyll-paginate + jekyll-theme-cayman! + rubocop (~> 0.50) + w3c_validators (~> 1.3) + +BUNDLED WITH + 2.3.25 diff --git a/LICENSE b/LICENSE new file mode 100644 index 00000000000..d6456956733 --- /dev/null +++ b/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/assets/css/style.css b/assets/css/style.css new file mode 100644 index 00000000000..266420bdba9 --- /dev/null +++ b/assets/css/style.css @@ -0,0 +1,352 @@ +/*! normalize.css v3.0.2 | MIT License | git.io/normalize */ +/** 1. Set default font family to sans-serif. 2. Prevent iOS text size adjust after orientation change, without disabling user zoom. */ +@import url("https://fonts.googleapis.com/css?family=Open+Sans:400,700"); +html { font-family: sans-serif; /* 1 */ -ms-text-size-adjust: 100%; /* 2 */ -webkit-text-size-adjust: 100%; /* 2 */ } + +/** Remove default margin. */ +body { margin: 0; } + +/* HTML5 display definitions ========================================================================== */ +/** Correct `block` display not defined for any HTML5 element in IE 8/9. Correct `block` display not defined for `details` or `summary` in IE 10/11 and Firefox. Correct `block` display not defined for `main` in IE 11. 
*/ +article, aside, details, figcaption, figure, footer, header, hgroup, main, menu, nav, section, summary { display: block; } + +/** 1. Correct `inline-block` display not defined in IE 8/9. 2. Normalize vertical alignment of `progress` in Chrome, Firefox, and Opera. */ +audio, canvas, progress, video { display: inline-block; /* 1 */ vertical-align: baseline; /* 2 */ } + +/** Prevent modern browsers from displaying `audio` without controls. Remove excess height in iOS 5 devices. */ +audio:not([controls]) { display: none; height: 0; } + +/** Address `[hidden]` styling not present in IE 8/9/10. Hide the `template` element in IE 8/9/11, Safari, and Firefox < 22. */ +[hidden], template { display: none; } + +/* Links ========================================================================== */ +/** Remove the gray background color from active links in IE 10. */ +a { background-color: transparent; } + +/** Improve readability when focused and also mouse hovered in all browsers. */ +a:active, a:hover { outline: 0; } + +/* Text-level semantics ========================================================================== */ +/** Address styling not present in IE 8/9/10/11, Safari, and Chrome. */ +abbr[title] { border-bottom: 1px dotted; } + +/** Address style set to `bolder` in Firefox 4+, Safari, and Chrome. */ +b, strong { font-weight: bold; } + +/** Address styling not present in Safari and Chrome. */ +dfn { font-style: italic; } + +/** Address variable `h1` font-size and margin within `section` and `article` contexts in Firefox 4+, Safari, and Chrome. */ +h1 { font-size: 2em; margin: 0.67em 0; } + +/** Address styling not present in IE 8/9. */ +mark { background: #ff0; color: #000; } + +/** Address inconsistent and variable font size in all browsers. */ +small { font-size: 80%; } + +/** Prevent `sub` and `sup` affecting `line-height` in all browsers. */ +sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; } + +sup { top: -0.5em; } + +sub { bottom: -0.25em; } + +/* Embedded content ========================================================================== */ +/** Remove border when inside `a` element in IE 8/9/10. */ +img { border: 0; } + +/** Correct overflow not hidden in IE 9/10/11. */ +svg:not(:root) { overflow: hidden; } + +/* Grouping content ========================================================================== */ +/** Address margin not present in IE 8/9 and Safari. */ +figure { margin: 1em 40px; } + +/** Address differences between Firefox and other browsers. */ +hr { box-sizing: content-box; height: 0; } + +/** Contain overflow in all browsers. */ +pre { overflow: auto; } + +/** Address odd `em`-unit font size rendering in all browsers. */ +code, kbd, pre, samp { font-family: monospace, monospace; font-size: 1em; } + +/* Forms ========================================================================== */ +/** Known limitation: by default, Chrome and Safari on OS X allow very limited styling of `select`, unless a `border` property is set. */ +/** 1. Correct color not being inherited. Known issue: affects color of disabled elements. 2. Correct font properties not being inherited. 3. Address margins set differently in Firefox 4+, Safari, and Chrome. */ +button, input, optgroup, select, textarea { color: inherit; /* 1 */ font: inherit; /* 2 */ margin: 0; /* 3 */ } + +/** Address `overflow` set to `hidden` in IE 8/9/10/11. */ +button { overflow: visible; } + +/** Address inconsistent `text-transform` inheritance for `button` and `select`. 
All other form control elements do not inherit `text-transform` values. Correct `button` style inheritance in Firefox, IE 8/9/10/11, and Opera. Correct `select` style inheritance in Firefox. */ +button, select { text-transform: none; } + +/** 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` and `video` controls. 2. Correct inability to style clickable `input` types in iOS. 3. Improve usability and consistency of cursor style between image-type `input` and others. */ +button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; /* 2 */ cursor: pointer; /* 3 */ } + +/** Re-set default cursor for disabled elements. */ +button[disabled], html input[disabled] { cursor: default; } + +/** Remove inner padding and border in Firefox 4+. */ +button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; } + +/** Address Firefox 4+ setting `line-height` on `input` using `!important` in the UA stylesheet. */ +input { line-height: normal; } + +/** It's recommended that you don't attempt to style these elements. Firefox's implementation doesn't respect box-sizing, padding, or width. 1. Address box sizing set to `content-box` in IE 8/9/10. 2. Remove excess padding in IE 8/9/10. */ +input[type="checkbox"], input[type="radio"] { box-sizing: border-box; /* 1 */ padding: 0; /* 2 */ } + +/** Fix the cursor style for Chrome's increment/decrement buttons. For certain `font-size` values of the `input`, it causes the cursor style of the decrement button to change from `default` to `text`. */ +input[type="number"]::-webkit-inner-spin-button, input[type="number"]::-webkit-outer-spin-button { height: auto; } + +/** 1. Address `appearance` set to `searchfield` in Safari and Chrome. 2. Address `box-sizing` set to `border-box` in Safari and Chrome (include `-moz` to future-proof). */ +input[type="search"] { -webkit-appearance: textfield; /* 1 */ /* 2 */ box-sizing: content-box; } + +/** Remove inner padding and search cancel button in Safari and Chrome on OS X. Safari (but not Chrome) clips the cancel button when the search input has padding (and `textfield` appearance). */ +input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; } + +/** Define consistent border, margin, and padding. */ +fieldset { border: 1px solid #c0c0c0; margin: 0 2px; padding: 0.35em 0.625em 0.75em; } + +/** 1. Correct `color` not being inherited in IE 8/9/10/11. 2. Remove padding so people aren't caught out if they zero out fieldsets. */ +legend { border: 0; /* 1 */ padding: 0; /* 2 */ } + +/** Remove default vertical scrollbar in IE 8/9/10/11. */ +textarea { overflow: auto; } + +/** Don't inherit the `font-weight` (applied by a rule above). NOTE: the default cannot safely be changed in Chrome and Safari on OS X. */ +optgroup { font-weight: bold; } + +/* Tables ========================================================================== */ +/** Remove most spacing between table cells. 
*/ +table { border-collapse: collapse; border-spacing: 0; } + +td, th { padding: 0; } + +.highlight table td { padding: 5px; } + +.highlight table pre { margin: 0; } + +.highlight .cm { color: #999988; font-style: italic; } + +.highlight .cp { color: #999999; font-weight: bold; } + +.highlight .c1 { color: #999988; font-style: italic; } + +.highlight .cs { color: #999999; font-weight: bold; font-style: italic; } + +.highlight .c, .highlight .cd { color: #999988; font-style: italic; } + +.highlight .err { color: #a61717; background-color: #e3d2d2; } + +.highlight .gd { color: #000000; background-color: #ffdddd; } + +.highlight .ge { color: #000000; font-style: italic; } + +.highlight .gr { color: #aa0000; } + +.highlight .gh { color: #999999; } + +.highlight .gi { color: #000000; background-color: #ddffdd; } + +.highlight .go { color: #888888; } + +.highlight .gp { color: #555555; } + +.highlight .gs { font-weight: bold; } + +.highlight .gu { color: #aaaaaa; } + +.highlight .gt { color: #aa0000; } + +.highlight .kc { color: #000000; font-weight: bold; } + +.highlight .kd { color: #000000; font-weight: bold; } + +.highlight .kn { color: #000000; font-weight: bold; } + +.highlight .kp { color: #000000; font-weight: bold; } + +.highlight .kr { color: #000000; font-weight: bold; } + +.highlight .kt { color: #445588; font-weight: bold; } + +.highlight .k, .highlight .kv { color: #000000; font-weight: bold; } + +.highlight .mf { color: #009999; } + +.highlight .mh { color: #009999; } + +.highlight .il { color: #009999; } + +.highlight .mi { color: #009999; } + +.highlight .mo { color: #009999; } + +.highlight .m, .highlight .mb, .highlight .mx { color: #009999; } + +.highlight .sb { color: #d14; } + +.highlight .sc { color: #d14; } + +.highlight .sd { color: #d14; } + +.highlight .s2 { color: #d14; } + +.highlight .se { color: #d14; } + +.highlight .sh { color: #d14; } + +.highlight .si { color: #d14; } + +.highlight .sx { color: #d14; } + +.highlight .sr { color: #009926; } + +.highlight .s1 { color: #d14; } + +.highlight .ss { color: #990073; } + +.highlight .s { color: #d14; } + +.highlight .na { color: #008080; } + +.highlight .bp { color: #999999; } + +.highlight .nb { color: #0086B3; } + +.highlight .nc { color: #445588; font-weight: bold; } + +.highlight .no { color: #008080; } + +.highlight .nd { color: #3c5d5d; font-weight: bold; } + +.highlight .ni { color: #800080; } + +.highlight .ne { color: #990000; font-weight: bold; } + +.highlight .nf { color: #990000; font-weight: bold; } + +.highlight .nl { color: #990000; font-weight: bold; } + +.highlight .nn { color: #555555; } + +.highlight .nt { color: #000080; } + +.highlight .vc { color: #008080; } + +.highlight .vg { color: #008080; } + +.highlight .vi { color: #008080; } + +.highlight .nv { color: #008080; } + +.highlight .ow { color: #000000; font-weight: bold; } + +.highlight .o { color: #000000; font-weight: bold; } + +.highlight .w { color: #bbbbbb; } + +.highlight { background-color: #f8f8f8; } + +* { box-sizing: border-box; } + +body { padding: 0; margin: 0; font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 16px; line-height: 1.5; color: #606c71; } + +#skip-to-content { height: 1px; width: 1px; position: absolute; overflow: hidden; top: -10px; } +#skip-to-content:focus { position: fixed; top: 10px; left: 10px; height: auto; width: auto; background: #e19447; outline: thick solid #e19447; } + +a { color: #1e6bb8; text-decoration: none; } +a:hover { text-decoration: underline; } + +.btn { display: 
inline-block; margin-bottom: 1rem; color: rgba(255, 255, 255, 0.7); background-color: rgba(255, 255, 255, 0.08); border-color: rgba(255, 255, 255, 0.2); border-style: solid; border-width: 1px; border-radius: 0.3rem; transition: color 0.2s, background-color 0.2s, border-color 0.2s; } +.btn:hover { color: rgba(255, 255, 255, 0.8); text-decoration: none; background-color: rgba(255, 255, 255, 0.2); border-color: rgba(255, 255, 255, 0.3); } +.btn + .btn { margin-left: 1rem; } +@media screen and (min-width: 64em) { .btn { padding: 0.75rem 1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .btn { padding: 0.6rem 0.9rem; font-size: 0.9rem; } } +@media screen and (max-width: 42em) { .btn { display: block; width: 100%; padding: 0.75rem; font-size: 0.9rem; } + .btn + .btn { margin-top: 1rem; margin-left: 0; } } + +.page-header { color: #fff; text-align: center; background-color: #1f2067; background-image: linear-gradient(90deg, #3b3c93, #1f2067); } +@media screen and (min-width: 64em) { .page-header { padding: 5rem 6rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .page-header { padding: 3rem 4rem; } } +@media screen and (max-width: 42em) { .page-header { padding: 2rem 1rem; } } + +.project-name { margin-top: 0; margin-bottom: 0.1rem; } +@media screen and (min-width: 64em) { .project-name { font-size: 3.25rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .project-name { font-size: 2.25rem; } } +@media screen and (max-width: 42em) { .project-name { font-size: 1.75rem; } } + +.project-tagline { margin-bottom: 2rem; font-weight: normal; opacity: 0.7; } +@media screen and (min-width: 64em) { .project-tagline { font-size: 1.25rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .project-tagline { font-size: 1.15rem; } } +@media screen and (max-width: 42em) { .project-tagline { font-size: 1rem; } } + +.main-content { word-wrap: break-word; } +.main-content :first-child { margin-top: 0; } +@media screen and (min-width: 64em) { .main-content { max-width: 64rem; padding: 2rem 6rem; margin: 0 auto; font-size: 1.1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .main-content { padding: 2rem 4rem; font-size: 1.1rem; } } +@media screen and (max-width: 42em) { .main-content { padding: 2rem 1rem; font-size: 1rem; } } +.main-content kbd { background-color: #fafbfc; border: 1px solid #c6cbd1; border-bottom-color: #959da5; border-radius: 3px; box-shadow: inset 0 -1px 0 #959da5; color: #444d56; display: inline-block; font-size: 11px; line-height: 10px; padding: 3px 5px; vertical-align: middle; } +.main-content img { max-width: 100%; } +.main-content h1, .main-content h2, .main-content h3, .main-content h4, .main-content h5, .main-content h6 { margin-top: 2rem; margin-bottom: 1rem; font-weight: normal; color: #3d3c93; } +.main-content p { margin-bottom: 1em; } +.main-content code { padding: 2px 4px; font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace; font-size: 0.9rem; color: #567482; background-color: #f3f6fa; border-radius: 0.3rem; } +.main-content pre { padding: 0.8rem; margin-top: 0; margin-bottom: 1rem; font: 1rem Consolas, "Liberation Mono", Menlo, Courier, monospace; color: #567482; word-wrap: normal; background-color: #f3f6fa; border: solid 1px #dce6f0; border-radius: 0.3rem; } +.main-content pre > code { padding: 0; margin: 0; font-size: 0.9rem; color: #567482; word-break: normal; white-space: pre; background: transparent; border: 0; } +.main-content .highlight { margin-bottom: 1rem; } 
+.main-content .highlight pre { margin-bottom: 0; word-break: normal; } +.main-content .highlight pre, .main-content pre { padding: 0.8rem; overflow: auto; font-size: 0.9rem; line-height: 1.45; border-radius: 0.3rem; -webkit-overflow-scrolling: touch; } +.main-content pre code, .main-content pre tt { display: inline; max-width: initial; padding: 0; margin: 0; overflow: initial; line-height: inherit; word-wrap: normal; background-color: transparent; border: 0; } +.main-content pre code:before, .main-content pre code:after, .main-content pre tt:before, .main-content pre tt:after { content: normal; } +.main-content ul, .main-content ol { margin-top: 0; } +.main-content blockquote { padding: 0 1rem; margin-left: 0; color: #819198; border-left: 0.3rem solid #dce6f0; } +.main-content blockquote > :first-child { margin-top: 0; } +.main-content blockquote > :last-child { margin-bottom: 0; } +.main-content table { display: block; width: 100%; overflow: auto; word-break: normal; word-break: keep-all; -webkit-overflow-scrolling: touch; } +.main-content table th { font-weight: bold; } +.main-content table th, .main-content table td { padding: 0.5rem 1rem; border: 1px solid #e9ebec; } +.main-content dl { padding: 0; } +.main-content dl dt { padding: 0; margin-top: 1rem; font-size: 1rem; font-weight: bold; } +.main-content dl dd { padding: 0; margin-bottom: 1rem; } +.main-content hr { height: 2px; padding: 0; margin: 1rem 0; background-color: #eff0f1; border: 0; } + +.site-footer { padding-top: 2rem; margin-top: 2rem; border-top: solid 1px #eff0f1; } +@media screen and (min-width: 64em) { .site-footer { font-size: 1rem; } } +@media screen and (min-width: 42em) and (max-width: 64em) { .site-footer { font-size: 1rem; } } +@media screen and (max-width: 42em) { .site-footer { font-size: 0.9rem; } } + +.site-footer-owner { display: block; font-weight: bold; } + +.site-footer-credits { color: #819198; } + +h1#logo img { max-width: 100%; } + +h1#logo { margin-bottom: 0; } + +.main-logo { position: relative; z-index: 9; max-width: 70%; display: block; margin: 0 auto; margin-bottom: -5em; } + +.belt { width: 100%; color: #fff; } + +@keyframes beltmove { 100% { stroke-dashoffset: 600; } } +.belt path { transform: skew(-45deg); stroke-width: 35; stroke-dasharray: 2 10 2 10 2 10 2 10 2 10; animation: beltmove 20s linear infinite; } + +.main-logo use { fill: #a73; opacity: 0; animation: convey 3s linear forwards; } + +use:nth-child(1) { animation-delay: 5s; } + +use:nth-child(2) { animation-delay: 3s; } + +use:nth-child(3) { animation-delay: 1s; } + +@keyframes convey { 0% { transform: translate(40%, 40%); opacity: 0; } + 20% { opacity: 1; } + 80% { transform: translate(0%, 40%); } + 100% { opacity: 1; } } +@keyframes convey2 { 0% { transform: translate(50%, 60%); opacity: 0; } + 20% { opacity: 1; } + 80% { transform: translate(0%, 60%); } + 100% { opacity: 1; } } +use:nth-child(1) { animation: convey2 3s linear forwards 5s; } diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot b/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot new file mode 100755 index 00000000000..03bf93fec2a Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.eot differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg b/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg new file mode 100644 index 00000000000..925fe47475a --- /dev/null +++ b/assets/fonts/Noto-Sans-700/Noto-Sans-700.svg @@ -0,0 +1,336 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf b/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf new file mode 100755 index 00000000000..4599e3ca9af Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.ttf differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff new file mode 100755 index 00000000000..9d0b78df811 Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff differ diff --git a/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 new file mode 100755 index 00000000000..55fc44bcd12 Binary files /dev/null and b/assets/fonts/Noto-Sans-700/Noto-Sans-700.woff2 differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot new file mode 100755 index 00000000000..cb97b2b4dd5 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.eot differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg new file mode 100644 index 00000000000..abdafc0f53b --- /dev/null +++ b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.svg @@ -0,0 +1,334 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf new file mode 100755 index 00000000000..6640dbeb333 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.ttf differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff new file mode 100755 index 00000000000..209739eeb09 Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff differ diff --git a/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 new file mode 100755 index 00000000000..f5525aa28be Binary files /dev/null and b/assets/fonts/Noto-Sans-700italic/Noto-Sans-700italic.woff2 differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot new file mode 100755 index 00000000000..a9973499352 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.eot differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg new file mode 100644 index 00000000000..dcd8fc89dc9 --- /dev/null +++ b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.svg @@ -0,0 +1,337 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf new file mode 100755 index 00000000000..7f75a2d9096 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.ttf differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff new file mode 100755 index 00000000000..6dce67cede1 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff differ diff --git a/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 new file mode 100755 index 00000000000..a9c14c49206 Binary files /dev/null and b/assets/fonts/Noto-Sans-italic/Noto-Sans-italic.woff2 differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot new file mode 100755 index 00000000000..15fc8bfc91a Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.eot differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg new file mode 100644 index 00000000000..bd2894d6a27 --- /dev/null +++ b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.svg @@ -0,0 +1,335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf new file mode 100755 index 00000000000..a83bbf9fc89 Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.ttf differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff new file mode 100755 index 00000000000..17c85006d0d Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff differ diff --git a/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 new file mode 100755 index 00000000000..a87d9cd7c61 Binary files /dev/null and b/assets/fonts/Noto-Sans-regular/Noto-Sans-regular.woff2 differ diff --git a/assets/img/forklift-logo-darkbg.svg b/assets/img/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/assets/img/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/img/forklift-logo-lightbg.svg b/assets/img/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/assets/img/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/assets/img/konveyor-logo-forklift.jpg b/assets/img/konveyor-logo-forklift.jpg new file mode 100644 index 00000000000..185460764ef Binary files /dev/null and 
b/assets/img/konveyor-logo-forklift.jpg differ diff --git a/assets/img/logo_location.txt b/assets/img/logo_location.txt new file mode 100644 index 00000000000..2d6d6c6b515 --- /dev/null +++ b/assets/img/logo_location.txt @@ -0,0 +1 @@ +https://github.com/konveyor/community/tree/main/brand/logo diff --git a/assets/js/scale.fix.js b/assets/js/scale.fix.js new file mode 100644 index 00000000000..2f4f8fd4d31 --- /dev/null +++ b/assets/js/scale.fix.js @@ -0,0 +1,30 @@ +(function (document) { + var metas = document.getElementsByTagName("meta"), + changeViewportContent = function (content) { + for (var i = 0; i < metas.length; i++) { + if (metas[i].name == "viewport") { + metas[i].content = content; + } + } + }, + initialize = function () { + changeViewportContent( + "width=device-width, minimum-scale=1.0, maximum-scale=1.0" + ); + }, + gestureStart = function () { + changeViewportContent( + "width=device-width, minimum-scale=0.25, maximum-scale=1.6" + ); + }, + gestureEnd = function () { + initialize(); + }; + + if (navigator.userAgent.match(/iPhone/i)) { + initialize(); + + document.addEventListener("touchstart", gestureStart, false); + document.addEventListener("touchend", gestureEnd, false); + } +})(document); diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml b/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml new file mode 100644 index 00000000000..bb612757d2b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/docinfo.xml @@ -0,0 +1,15 @@ +{user-guide-title} +{project-full} +{project-version} +{subtitle} + + {abstract} + + + + Red Hat Modernization and Migration + Documentation Team + ccs-mms-docs@redhat.com + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html new file mode 100644 index 00000000000..cd8b7cc4c0b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/master/index.html @@ -0,0 +1,5466 @@ + + + + + + + + Installing and using Forklift 2.3 | Forklift Documentation + + + + + + + + + + + + + +Installing and using Forklift 2.3 | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
# Installing and using Forklift 2.3

## About Forklift
You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

- VMware vSphere
- oVirt
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote KubeVirt clusters

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.
> **Note:** Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
>
> For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

> **Note:** Migration using OpenStack source providers supports only VMs that use Cinder volumes.
## About cold and warm migration

Forklift supports cold migration from:

- VMware vSphere
- oVirt
- OpenStack
- Remote KubeVirt clusters

Forklift supports warm migration from VMware vSphere and from oVirt.

> **Note:** Migration using OpenStack source providers supports only VMs that use Cinder volumes.
### Cold migration

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.
### Warm migration

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running. Then the VMs are shut down and the remaining data is copied during the cutover stage.

**Precopy stage**

The VMs are not shut down during the precopy stage.

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment, as sketched below.
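For example, if the deployment is managed by a `ForkliftController` custom resource, you might patch that resource rather than the deployment itself. This is a minimal sketch: the resource name, the `konveyor-forklift` namespace, and the `controller_precopy_interval` field (in minutes) are all assumptions to verify against your release.

```shell
# Hedged sketch: set the precopy snapshot interval to 30 minutes.
# Resource kind/name, namespace, and field name are assumptions.
kubectl patch forkliftcontroller forklift-controller \
  -n konveyor-forklift \
  --type merge \
  -p '{"spec": {"controller_precopy_interval": 30}}'
```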
> **Note:** You must enable CBT for each source VM and each VM disk.
>
> A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.
The precopy stage runs until the cutover stage is started manually or is scheduled to start.
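Enabling CBT, as required by the note above, happens on the vSphere side rather than in Forklift. A minimal sketch using `govc`, assuming a powered-off VM named `my-vm` whose first disk is `scsi0:0` (both placeholders); the `ctkEnabled` advanced settings are the usual vSphere mechanism, but verify them for your vSphere version:

```shell
# Hedged sketch: enable changed block tracking (CBT) for the VM and
# for its first disk via vSphere advanced settings.
govc vm.change -vm my-vm -e ctkEnabled=TRUE
govc vm.change -vm my-vm -e scsi0:0.ctkEnabled=TRUE
```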
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
+
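For example, a minimal Migration manifest that schedules the cutover time. The resource names are hypothetical, and the cutover value must be an ISO 8601 timestamp:

$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Migration
+metadata:
+  name: migration-example      # hypothetical name
+  namespace: openshift-mtv
+spec:
+  plan:
+    name: plan-example         # the migration plan to run
+    namespace: openshift-mtv
+  cutover: "2024-04-04T01:23:00Z"   # scheduled cutover, ISO 8601
+EOF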
+
+
+
+

Prerequisites

+
+
+

Review the following prerequisites to ensure that your environment is prepared for migration.

+
+
+

Software requirements

+
+

You must install compatible versions of OKD and KubeVirt.

+
+
+
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+
+ + + + + +
+ + +
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+ + +
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the reserved space for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
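As a sketch, assuming the CDI custom resource is named cdi, you might raise the global file system overhead to 10% as follows; verify the exact field against your CDI version:

$ kubectl patch cdi cdi --type merge \
+    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.1"}}}}'  # value is a string fraction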
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+
+
+

Network prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network. An example manifest follows this list.

    +
  • +
+
+
+
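For example, a minimal network attachment definition for an additional destination network. The name, namespace, bridge device, and CNI configuration are hypothetical and must match your cluster:

$ cat << EOF | kubectl apply -f -
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: vlan10-net             # hypothetical name
+  namespace: openshift-mtv
+spec:
+  config: |
+    {
+      "cniVersion": "0.3.1",
+      "name": "vlan10-net",
+      "type": "bridge",
+      "bridge": "br1"
+    }
+EOF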

Ports

+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
+
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt.

    +
  • +
  • +

    VM names must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

    +
  • +
  • +

    VM names must not duplicate the name of a VM in the KubeVirt environment.

    +
    + + + + + +
    + + +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone entered a VM name that does not follow the rules.

    +
    +
    +
    +
  • +
+
+
+
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+ +
+
+ + + + + +
+ + +
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
  • +

    Migration of Fibre Channel LUNs is not supported.

    +
  • +
+
+
+
+
+
+

OpenStack prerequisites

+
+

The following prerequisites apply to OpenStack migrations:

+
+
+ +
+
+ + + + + +
+ + +
+

Migration using OpenStack source providers only supports VMs that use only Cinder volumes.

+
+
+
+
+

Additional authentication methods for migrations with OpenStack source providers

+
+

Forklift versions 2.5 and later support the following authentication methods for migrations with OpenStack source providers in addition to the standard username and password credential set:

+
+
+
    +
  • +

    Token authentication

    +
  • +
  • +

    Application credential authentication

    +
  • +
+
+
+

You can use these methods to migrate virtual machines with OpenStack source providers using the CLI the same way you migrate other virtual machines, except for how you prepare the Secret manifest.

+
+
+
Using token authentication with an OpenStack source provider
+
+

You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

Have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+
+
+
Using application credential authentication with an OpenStack source provider
+
+

You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an OpenStack account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the OpenStack web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+
+
+
+
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    You must install VMware Tools on all source virtual machines (VMs).

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks. A CLI sketch for enabling CBT follows the notes after this list.

    +
  • +
  • +

    You must obtain the SHA-1 fingerprint of the vCenter host.

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+ + +
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.

+
+
+
+
+ + + + + +
+ + +
+

Neither Forklift nor OpenShift Virtualization supports conversion of the Btrfs file system when migrating VMs from VMware.

+
+
+
+
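As referenced in the warm migration prerequisite above, the following sketch enables CBT on a VM and on one of its disks by setting the VMware advanced configuration keys. It assumes the govc CLI is installed and that the VM has no existing snapshots; the setting takes effect after a power cycle:

$ govc vm.change -vm <vm_name> -e ctkEnabled=TRUE           # per-VM CBT
$ govc vm.change -vm <vm_name> -e scsi0:0.ctkEnabled=TRUE   # per-disk CBT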

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 4. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+ + +
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

+
+

Creating a VDDK image

+
+

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

+
+
+

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You need the VDDK init image path in order to add a VMware source provider.

+
+
+ + + + + +
+ + +
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
    + + + + + +
    + + +
    +

    In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

    +
    +
    +
    +
  6. +
  7. +

    Save the VDDK archive file in the temporary directory.

    +
  8. +
  9. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  10. +
  11. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  12. +
  13. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  14. +
  15. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  16. +
  17. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  18. +
+
+
+
+

Obtaining the SHA-1 fingerprint of a vCenter host

+
+

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

+
+
+
Procedure
+
    +
  • +

    Run the following command:

    +
    +
    +
    $ openssl s_client \
    +    -connect <vcenter_host>:443 \ (1)
    +    < /dev/null 2>/dev/null \
    +    | openssl x509 -fingerprint -noout -in /dev/stdin \
    +    | cut -d '=' -f 2
    +
    +
    +
    + + + + + +
    1Specify the IP address or FQDN of the vCenter host.
    +
    +
    +
    Example output
    +
    +
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    +
    +
    +
  • +
+
+
+
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+
+
+
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+ + +
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But, /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
+
+
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 5. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.5.1

4.12 or later

4.12 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+ + +
Migration from oVirt 4.3
+
+

MTV 2.5 was tested only with oVirt (RHV) 4.4 SP1. +Migration from oVirt 4.3 has not been tested with Forklift 2.3.

+
+
+

As oVirt 4.3 lacks the improvements that were introduced in oVirt 4.4 for Forklift, and new features were not tested with oVirt 4.3, migrations from oVirt 4.3 may not function at the same level as migrations from oVirt 4.4, and some functionality may be missing.

+
+
+

Therefore, it is recommended that you upgrade oVirt to the supported version listed above before migrating to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments. In this case, we advise upgrading oVirt Manager (RHVM) to the supported version mentioned above before migrating to KubeVirt.

+
+
+
+
+
+
+
+

Installing the Forklift Operator

+
+
+

You can install the Forklift Operator by using the OKD web console or the command line interface (CLI).

+
+
+

In Forklift version 2.4 and later, the Forklift Operator includes the Forklift plugin for the OKD web console.

+
+
+

Installing the Forklift Operator by using the OKD web console

+
+

You can install the Forklift Operator by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click OperatorsOperatorHub.

    +
  2. +
  3. +

    Use the Filter by keyword field to search for forklift-operator.

    +
    + + + + + +
    + + +
    +

    The Forklift Operator is a Community Operator. Red Hat does not support Community Operators.

    +
    +
    +
    +
  4. +
  5. +

    Click Migration Toolkit for Virtualization Operator and then click Install.

    +
  6. +
  7. +

    Click Create ForkliftController when the button becomes active.

    +
  8. +
  9. +

    Click Create.

    +
    +

    Your ForkliftController appears in the list that is displayed.

    +
    +
  10. +
  11. +

    Click WorkloadsPods to verify that the Forklift pods are running.

    +
  12. +
  13. +

    Click OperatorsInstalled Operators to verify that Migration Toolkit for Virtualization Operator appears in the konveyor-forklift project with the status Succeeded.

    +
    +

    When the plugin is ready, you will be prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the OKD web console.

    +
    +
  14. +
+
+
+
+

Installing the Forklift Operator from the command line interface

+
+

You can install the Forklift Operator from the command line interface (CLI).

+
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create the konveyor-forklift project:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: project.openshift.io/v1
    +kind: Project
    +metadata:
    +  name: konveyor-forklift
    +EOF
    +
    +
    +
  2. +
  3. +

    Create an OperatorGroup CR called migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1
    +kind: OperatorGroup
    +metadata:
    +  name: migration
    +  namespace: konveyor-forklift
    +spec:
    +  targetNamespaces:
    +    - konveyor-forklift
    +EOF
    +
    +
    +
  4. +
  5. +

    Create a Subscription CR for the Operator:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: operators.coreos.com/v1alpha1
    +kind: Subscription
    +metadata:
    +  name: forklift-operator
    +  namespace: konveyor-forklift
    +spec:
    +  channel: development
    +  installPlanApproval: Automatic
    +  name: forklift-operator
    +  source: community-operators
    +  sourceNamespace: openshift-marketplace
    +  startingCSV: "konveyor-forklift-operator.2.3.0"
    +EOF
    +
    +
    +
  6. +
  7. +

    Create a ForkliftController CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: ForkliftController
    +metadata:
    +  name: forklift-controller
    +  namespace: konveyor-forklift
    +spec:
    +  olm_managed: true
    +EOF
    +
    +
    +
  8. +
  9. +

    Verify that the Forklift pods are running:

    +
    +
    +
    $ kubectl get pods -n konveyor-forklift
    +
    +
    +
    +
    Example output
    +
    +
    NAME                                                    READY   STATUS    RESTARTS   AGE
    +forklift-api-bb45b8db4-cpzlg                            1/1     Running   0          6m34s
    +forklift-controller-7649db6845-zd25p                    2/2     Running   0          6m38s
    +forklift-must-gather-api-78fb4bcdf6-h2r4m               1/1     Running   0          6m28s
    +forklift-operator-59c87cfbdc-pmkfc                      1/1     Running   0          28m
    +forklift-ui-plugin-5c5564f6d6-zpd85                     1/1     Running   0          6m24s
    +forklift-validation-7d84c74c6f-fj9xg                    1/1     Running   0          6m30s
    +forklift-volume-populator-controller-85d5cb64b6-mrlmc   1/1     Running   0          6m36s
    +
    +
    +
  10. +
+
+
+
+
+
+

Migrating virtual machines by using the OKD web console

+
+
+

You can migrate virtual machines (VMs) to KubeVirt by using the OKD web console.

+
+
+ + + + + +
+ + +
+

You must ensure that all prerequisites are met.

+
+
+

VMware only: You must have the minimal set of VMware privileges.

+
+
+

VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image will increase migration speed.

+
+
+
+
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for virtualization, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+
+
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking MigrationOverview in the OKD web console.

+
+
+

The Overview page displays the following information:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Operator: The namespace on which the Forklift Operator is deployed and the status of the Operator.

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+
+
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 6. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500 m

Controller main container Memory limit

The memory limit allocated to the main controller container

800 Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationOverview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
+
+
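These settings can also be changed from the CLI by patching the ForkliftController CR. For example, the following sketch raises the maximum number of concurrent VM migrations; the controller_max_vm_inflight parameter name is an assumption based on the ForkliftController API:

$ kubectl patch forkliftcontroller/forklift-controller \
+    -n konveyor-forklift --type merge \
+    -p '{"spec": {"controller_max_vm_inflight": 30}}'  # assumed parameter name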
+

Adding providers

+
+

You can add source providers and destination providers for a virtual machine migration by using the OKD web console.

+
+
+

Adding source providers

+
+

You can use Forklift to migrate VMs from the following source providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    KubeVirt

    +
  • +
+
+
+

You can add a source provider by using the OKD web console.

+
+
+
Adding a VMware vSphere source provider
+
+

You can add a VMware vSphere source provider by using the OKD web console.

+
+
+ + + + + +
+ + +
+

EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.

+
+
+
+
+
Prerequisites
+
    +
  • +

    VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click vSphere.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider.

      +
    • +
    • +

      URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate.

      +
    • +
    • +

      VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image.

      +
    • +
    • +

      Username: vCenter user. For example, user@vsphere.local.

      +
    • +
    • +

      Password: vCenter user password.

      +
    • +
    • +

      SHA-1 fingerprint: The provider currently requires the SHA-1 fingerprint of the vCenter Server’s TLS certificate in all circumstances. vSphere calls this the server’s thumbprint.

      +
    • +
    +
    +
  8. +
  9. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Skip certificate validation : Migrate without validating a CA certificate.

      +
    • +
    • +

      Use the system CA certificates: Migrate after validating the system CA certificates.

      +
      +
        +
      1. +

        To skip certificate validation, select the Skip certificate validation check box.

        +
      2. +
      3. +

        To validate the system CA certificates, leave the Skip certificate validation check box cleared.

        +
      4. +
      +
      +
    • +
    +
    +
  10. +
  11. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  12. +
+
+
+
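Alternatively, you can create the provider from the CLI. The following is a minimal sketch of a vSphere Provider manifest; the names are hypothetical, and it assumes an existing Secret that holds the vCenter user, password, and SHA-1 thumbprint:

$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Provider
+metadata:
+  name: vsphere-provider       # hypothetical name
+  namespace: openshift-mtv
+spec:
+  type: vsphere
+  url: https://vCenter-host-example.com/sdk
+  settings:
+    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
+  secret:
+    name: vsphere-secret       # hypothetical; created beforehand
+    namespace: openshift-mtv
+EOF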
+
Adding an oVirt source provider
+
+

You can add an oVirt source provider by using the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Engine CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Engine Apache CA certificate

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Red Hat Virtualization

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider.

      +
    • +
    • +

      URL: URL of the API endpoint of the oVirt Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually /ovirt-engine/api. For example, https://rhv-host-example.com/ovirt-engine/api.

      +
    • +
    • +

      Username: Username.

      +
    • +
    • +

      Password: Password.

      +
    • +
    +
    +
  8. +
  9. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Skip certificate validation : Migrate without validating a CA certificate.

      +
    • +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
      +
        +
      1. +

        To skip certificate validation, select the Skip certificate validation check box.

        +
      2. +
      3. +

        To validate a custom CA certificate, leave the Skip certificate validation check box cleared and either drag the CA certificate to the text box or browse for it and click Select.

        +
      4. +
      +
      +
    • +
    +
    +
  10. +
  11. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  12. +
+
+
+
+
Adding an OpenStack source provider
+
+

You can add an OpenStack source provider by using the OKD web console.

+
+
+ + + + + +
+ + +
+

Migration using OpenStack source providers only supports VMs that use only Cinder volumes.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click OpenStack.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider.

      +
    • +
    • +

      URL: URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3.

      +
    • +
    • +

      Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.

      +
      +
        +
      • +

        Application credential ID

        +
        + +
        +
      • +
      • +

        Application credential name

        +
        +
          +
        • +

          Application credential name: OpenStack application credential name

          +
        • +
        • +

          Application credential secret: OpenStack application credential secret

          +
        • +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Domain: OpenStack domain name

          +
        • +
        +
        +
      • +
      • +

        Token with user ID

        +
        +
          +
        • +

          Token: OpenStack token

          +
        • +
        • +

          User ID: OpenStack user ID

          +
        • +
        • +

          Project ID: OpenStack project ID

          +
        • +
        +
        +
      • +
      • +

        Token with user Name

        +
        +
          +
        • +

          Token: OpenStack token

          +
        • +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Project: OpenStack project

          +
        • +
        • +

          Domain name: OpenStack domain name

          +
        • +
        +
        +
      • +
      • +

        Password

        +
        +
          +
        • +

          Username: OpenStack username

          +
        • +
        • +

          Password: OpenStack password

          +
        • +
        • +

          Project: OpenStack project

          +
        • +
        • +

          Domain: OpenStack domain name

          +
        • +
        +
        +
      • +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Choose one of the following options for validating CA certificates:

    +
    +
      +
    • +

      Skip certificate validation : Migrate without validating a CA certificate.

      +
    • +
    • +

      Use a custom CA certificate: Migrate after validating a custom CA certificate.

      +
      +
        +
      1. +

        To skip certificate validation, select the Skip certificate validation check box.

        +
      2. +
      3. +

        To validate a custom CA certificate, leave the Skip certificate validation check box cleared and either drag the CA certificate to the text box or browse for it and click Select.

        +
      4. +
      +
      +
    • +
    +
    +
  10. +
  11. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  12. +
+
+
+
+
Adding an Open Virtual Appliance (OVA) source provider
+
+

You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the OKD web console.

+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+ + +
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Open Virtual Appliance (OVA).

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider

      +
    • +
    • +

      URL: URL of the NFS file share that serves the OVA

      +
    • +
    +
    +
  8. +
  9. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
    + + + + + +
    + + +
    +

    An error message might appear that states that an error has occurred. You can ignore this message.

    +
    +
    +
    +
  10. +
+
+
+
+
Adding a Red Hat KubeVirt source provider
+
+

You can use a Red Hat KubeVirt provider as both a source provider and destination provider.

+
+
+

Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.

+
+
+

You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click KubeVirt.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider

      +
    • +
    • +

      URL: URL of the endpoint of the API server

      +
    • +
    • +

      Service account bearer token: Token for a service account with cluster-admin privileges

      +
      +

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
+
+
+
+

Adding destination providers

+
+

You can add a KubeVirt destination provider by using the OKD web console.

+
+
+
Adding a KubeVirt destination provider
+
+

You can use a Red Hat KubeVirt provider as both a source provider and destination provider.

+
+
+

Specifically, the host cluster that is automatically added as a KubeVirt provider can be used as both a source provider and a destination provider.

+
+
+

You can also add another KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the cluster where you installed Forklift.

+
+
+

You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click KubeVirt.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider resource name: Name of the source provider

      +
    • +
    • +

      URL: URL of the endpoint of the API server

      +
    • +
    • +

      Service account bearer token: Token for a service account with cluster-admin privileges

      +
      +

      If both URL and Service account bearer token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
+
+
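A minimal sketch for obtaining such a token, assuming a recent kubectl (the create token subcommand requires Kubernetes 1.24 or later) and hypothetical names:

$ kubectl create serviceaccount migration-sa -n default
$ kubectl create clusterrolebinding migration-sa-admin \
+    --clusterrole=cluster-admin \
+    --serviceaccount=default:migration-sa
$ kubectl create token migration-sa -n default --duration=24h   # prints the bearer token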
+
Selecting a migration network for a KubeVirt provider
+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+ + +
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the Options menu kebab.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+
+
+
+
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationNetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+
+
+
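A network mapping can also be expressed as a NetworkMap manifest. The following minimal sketch maps one source network to the pod network and another to a network attachment definition; all names are hypothetical:

$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: NetworkMap
+metadata:
+  name: networkmap-example     # hypothetical name
+  namespace: openshift-mtv
+spec:
+  provider:
+    source:
+      name: vsphere-provider
+      namespace: openshift-mtv
+    destination:
+      name: host               # the local KubeVirt provider
+      namespace: openshift-mtv
+  map:
+    - source:
+        name: VM Network
+      destination:
+        type: pod
+    - source:
+        name: vlan10
+      destination:
+        type: multus
+        name: vlan10-net       # a network attachment definition
+        namespace: openshift-mtv
+EOF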

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that support VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationStorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
+
+
+
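A storage mapping can likewise be expressed as a StorageMap manifest. This sketch maps a vSphere datastore to a storage class; all names are hypothetical:

$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: StorageMap
+metadata:
+  name: storagemap-example     # hypothetical name
+  namespace: openshift-mtv
+spec:
+  provider:
+    source:
+      name: vsphere-provider
+      namespace: openshift-mtv
+    destination:
+      name: host
+      namespace: openshift-mtv
+  map:
+    - source:
+        name: datastore1       # source datastore
+      destination:
+        storageClass: standard-csi
+EOF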

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines to be migrated together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationPlans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      + + +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        + + +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the Options menu kebab of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +

Running a migration plan

You can run a migration plan and view its progress in the OKD web console.

Prerequisites

  • Valid migration plan.

Procedure

  1. In the OKD web console, click Migration → Plans for virtualization.

     The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

  2. Click Start beside a migration plan to start the migration.

  3. Click Start in the confirmation window that opens.

     The Migration details by VM screen opens, displaying the migration's progress.

     Warm migration only:

     • The precopy stage starts.

     • Click Cutover to complete the migration.

  4. If the migration fails:

     1. Click Get logs to retrieve the migration logs.

     2. Click Get logs in the confirmation window that opens.

     3. Wait until Get logs changes to Download logs and then click the button to download the logs.

  5. Click a migration's Status, whether it failed or succeeded or is still ongoing, to view the details of the migration.

     The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

  6. Expand an individual VM to view its steps and the elapsed time and state of each step.

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the Options menu kebab beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    + + +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    + + +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+
+
+

Canceling a migration

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

Procedure

  1. In the OKD web console, click Plans for virtualization.

  2. Click the name of a running migration plan to view the migration details.

  3. Select one or more VMs and click Cancel.

  4. Click Yes, cancel to confirm the cancellation.

     In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

Migrating virtual machines from the command line

You can migrate virtual machines to KubeVirt from the command line.

Permissions needed by non-administrators to work with migration plan components

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

Table 7. Example migration plan roles and their privileges

  Role                                       Description
  plans.forklift.konveyor.io-v1beta1-view    Can view migration plans but cannot create, delete, or modify them
  plans.forklift.konveyor.io-v1beta1-edit    Can create, delete, or modify (all parts of edit permissions) individual migration plans
  plans.forklift.konveyor.io-v1beta1-admin   All edit privileges and the ability to delete the entire collection of migration plans

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view, edit).

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

  • Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

  • Attach providers created by administrators to storage maps, network maps, and migration plans

  • Not be able to create providers or to change system settings

Table 8. Example permissions required for non-administrators to work with migration plan components but not create providers

  Actions                                           API group              Resource
  get, list, watch, create, update, patch, delete   forklift.konveyor.io   plans
  get, list, watch, create, update, patch, delete   forklift.konveyor.io   migrations
  get, list, watch, create, update, patch, delete   forklift.konveyor.io   hooks
  get, list, watch                                  forklift.konveyor.io   providers
  get, list, watch, create, update, patch, delete   forklift.konveyor.io   networkmaps
  get, list, watch, create, update, patch, delete   forklift.konveyor.io   storagemaps
  get, list, watch                                  forklift.konveyor.io   forkliftcontrollers

Non-administrators need to have the create permissions that are part of edit roles for network maps and for storage maps to create migration plans, even when using a template for a network map or a storage map.
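For example, you can grant a non-administrator the edit privileges for migration plans in a single namespace by binding one of the pre-defined cluster roles with a namespaced role binding. This is a minimal sketch; the binding name, user, and namespace are placeholders, and it assumes the pre-defined cluster roles listed above exist on your cluster:

$ kubectl create rolebinding plans-edit \
    --clusterrole=plans.forklift.konveyor.io-v1beta1-edit \
    --user=<user> \
    -n <namespace>

Because the role binding is namespaced, the user can work with migration plans only in <namespace>.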

Migrating virtual machines

You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs).

You must specify a name for cluster-scoped CRs.

You must specify both a name and a namespace for namespace-scoped CRs.

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Migration using OpenStack source providers only supports VMs that use only Cinder volumes.
Prerequisites

  • VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

  • oVirt only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

  • Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

  • LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

  • Migration of Fibre Channel LUNs is not supported.
Procedure

  1. Create a Secret manifest for the source provider credentials:

     $ cat << EOF | kubectl apply -f -
     apiVersion: v1
     kind: Secret
     metadata:
       name: <secret>
       namespace: <namespace>
       ownerReferences: (1)
         - apiVersion: forklift.konveyor.io/v1beta1
           kind: Provider
           name: <provider_name>
           uid: <provider_uid>
       labels:
         createdForProviderType: <provider_type> (2)
         createdForResourceType: providers
     type: Opaque
     stringData: (3)
       user: <user> (4)
       password: <password> (5)
       insecureSkipVerify: <true/false> (6)
       domainName: <domain_name> (7)
       projectName: <project_name> (8)
       regionName: <region_name> (9)
       cacert: | (10)
         <ca_certificate>
       url: <api_end_point> (11)
       thumbprint: <vcenter_fingerprint> (12)
     EOF

     1   The ownerReferences section is optional.
     2   Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, and ova. This label is needed to verify that the credentials are correct when the remote system is accessible and, for oVirt, to retrieve the Engine CA certificate when a third-party certificate is specified.
     3   The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.
     4   Specify the vCenter user, the oVirt Engine user, or the OpenStack user.
     5   Specify the user password.
     6   Specify <true> to skip certificate verification; the migration is then insecure and the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specifying <false> verifies the certificate.
     7   OpenStack only: Specify the domain name.
     8   OpenStack only: Specify the project name.
     9   OpenStack only: Specify the name of the OpenStack region.
     10  oVirt and OpenStack only: For oVirt, enter the Engine CA certificate unless it was replaced by a third-party certificate, in which case enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For OpenStack, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.
     11  Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.
     12  VMware only: Specify the vCenter SHA-1 fingerprint.

     The stringData section for an OVA Secret manifest is as follows:

     stringData:
       url: <nfs_server:/nfs_path>

     where:
     nfs_server: An IP address or hostname of the server where the share was created.
     nfs_path: The path on the server where the OVA files are stored.
  2. Create a Provider manifest for the source provider:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Provider
     metadata:
       name: <source_provider>
       namespace: <namespace>
     spec:
       type: <provider_type> (1)
       url: <api_end_point> (2)
       settings:
         vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (3)
       secret:
         name: <secret> (4)
         namespace: <namespace>
     EOF

     1  Allowed values are ovirt, vsphere, and openstack.
     2  Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.
     3  VMware only: Specify the VDDK image that you created.
     4  Specify the name of the provider Secret CR.
  3. VMware only: Create a Host manifest:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Host
     metadata:
       name: <vmware_host>
       namespace: <namespace>
     spec:
       provider:
         namespace: <namespace>
         name: <source_provider> (1)
       id: <source_host_mor> (2)
       ipAddress: <source_network_ip> (3)
     EOF

     1  Specify the name of the VMware Provider CR.
     2  Specify the managed object reference (MOR) of the VMware host.
     3  Specify the IP address of the VMware migration network.
  4. Create a NetworkMap manifest to map the source and destination networks:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: NetworkMap
     metadata:
       name: <network_map>
       namespace: <namespace>
     spec:
       map:
         - destination:
             name: <network_name>
             type: pod (1)
           source: (2)
             id: <source_network_id> (3)
             name: <source_network_name>
         - destination:
             name: <network_attachment_definition> (4)
             namespace: <network_attachment_definition_namespace> (5)
             type: multus
           source:
             id: <source_network_id>
             name: <source_network_name>
       provider:
         source:
           name: <source_provider>
           namespace: <namespace>
         destination:
           name: <destination_provider>
           namespace: <namespace>
     EOF

     1  Allowed values are pod and multus.
     2  You can use either the id or the name parameter to specify the source network.
     3  Specify the VMware network MOR, the oVirt network UUID, or the OpenStack network UUID.
     4  Specify a network attachment definition for each additional KubeVirt network.
     5  Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.
  5. Create a StorageMap manifest to map source and destination storage:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: StorageMap
     metadata:
       name: <storage_map>
       namespace: <namespace>
     spec:
       map:
         - destination:
             storageClass: <storage_class>
             accessMode: <access_mode> (1)
           source:
             id: <source_datastore> (2)
         - destination:
             storageClass: <storage_class>
             accessMode: <access_mode>
           source:
             id: <source_datastore>
       provider:
         source:
           name: <source_provider>
           namespace: <namespace>
         destination:
           name: <destination_provider>
           namespace: <namespace>
     EOF

     1  Allowed values are ReadWriteOnce and ReadWriteMany.
     2  Specify the VMware data storage MOR, the oVirt storage domain UUID, or the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.
  6. Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Hook
     metadata:
       name: <hook>
       namespace: <namespace>
     spec:
       image: quay.io/konveyor/hook-runner (1)
       playbook: | (2)
         LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
         YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
         IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
         cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
         bG9hZAoK
     EOF

     1  You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.
     2  Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.
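     To produce the base64-encoded playbook value, you can encode a local playbook file. This is a minimal sketch; playbook.yml is a placeholder file name:

     $ base64 -w0 playbook.yml

     Paste the command output into the playbook field of the Hook manifest.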
  7. Create a Plan manifest for the migration:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Plan
     metadata:
       name: <plan> (1)
       namespace: <namespace>
     spec:
       warm: true (2)
       provider:
         source:
           name: <source_provider>
           namespace: <namespace>
         destination:
           name: <destination_provider>
           namespace: <namespace>
       map: (3)
         network: (4)
           name: <network_map> (5)
           namespace: <namespace>
         storage: (6)
           name: <storage_map> (7)
           namespace: <namespace>
       targetNamespace: <target_namespace>
       vms: (8)
         - id: <source_vm> (9)
         - name: <source_vm>
           namespace: <namespace> (10)
           hooks: (11)
             - hook:
                 namespace: <namespace>
                 name: <hook> (12)
               step: <step> (13)
     EOF

     1   Specify the name of the Plan CR.
     2   Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.
     3   Specify only one network map and one storage map per plan.
     4   Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.
     5   Specify the name of the NetworkMap CR.
     6   Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.
     7   Specify the name of the StorageMap CR.
     8   For all source providers except KubeVirt, you can use either the id or the name parameter to specify the source VMs. KubeVirt source provider only: You can use only the name parameter, not the id parameter, to specify the source VMs.
     9   Specify the VMware VM MOR, the oVirt VM UUID, or the OpenStack VM UUID.
     10  KubeVirt source provider only.
     11  Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.
     12  Specify the name of the Hook CR.
     13  Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.
  8. Create a Migration manifest to run the Plan CR:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Migration
     metadata:
       name: <migration> (1)
       namespace: <namespace>
     spec:
       plan:
         name: <plan> (2)
         namespace: <namespace>
       cutover: <cutover_time> (3)
     EOF

     1  Specify the name of the Migration CR.
     2  Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.
     3  Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

     You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.
  9. Retrieve the Migration CR to monitor the progress of the migration:

     $ kubectl get migration/<migration> -n <namespace> -o yaml

Obtaining the SHA-1 fingerprint of a vCenter host

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

Procedure

  • Run the following command:

    $ openssl s_client \
        -connect <vcenter_host>:443 \ (1)
        < /dev/null 2>/dev/null \
        | openssl x509 -fingerprint -noout -in /dev/stdin \
        | cut -d '=' -f 2

    1  Specify the IP address or FQDN of the vCenter host.

    Example output

    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67

Canceling a migration

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

Canceling an entire migration

  • Delete the Migration CR:

    $ kubectl delete migration <migration> -n <namespace> (1)

    1  Specify the name of the Migration CR.

Canceling the migration of individual VMs

  1. Add the individual VMs to the spec.cancel block of the Migration manifest:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Migration
     metadata:
       name: <migration>
       namespace: <namespace>
     ...
     spec:
       cancel:
       - id: vm-102 (1)
       - id: vm-203
       - name: rhel8-vm
     EOF

     1  You can specify a VM by using the id key or the name key.

     The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

  2. Retrieve the Migration CR to monitor the progress of the remaining VMs:

     $ kubectl get migration/<migration> -n <namespace> -o yaml

Advanced migration options

Changing precopy intervals for warm migration

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

Procedure

  • Patch the ForkliftController CR:

    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)

    1  Specify the precopy interval in minutes. The default value is 60.

    You do not need to restart the forklift-controller pod.

Creating custom rules for the Validation service

The Validation service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The Validation service generates a list of concerns for each VM, which are stored in the Provider Inventory service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

You can create custom rules to extend the default ruleset of the Validation service. For example, you can create a rule that checks whether a VM has multiple disks.

About Rego files

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

drs_enabled.rego example

package io.konveyor.forklift.vmware (1)

has_drs_enabled {
    input.host.cluster.drsEnabled (2)
}

concerns[flag] {
    has_drs_enabled
    flag := {
        "category": "Information",
        "label": "VM running in a DRS-enabled cluster",
        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
    }
}

1  Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
2  Query parameters are based on the input key of the Validation service JSON.

Checking the default validation rules

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.
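For illustration, a default rule pairs a default value with one or more rule bodies that can override it. The following is a minimal sketch, not one of the shipped rules:

package io.konveyor.forklift.vmware

# The value is false unless the rule body below proves otherwise.
default has_multiple_disks = false

has_multiple_disks {
    count(input.disks) > 1
}

A custom rule that declares default has_multiple_disks = true would redefine this default, and the Validation service would not start.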
Procedure

  1. Connect to the terminal of the Validation pod:

     $ oc rsh <validation_pod>

  2. Go to the OPA policies directory for your provider:

     $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)

     1  Specify vmware or ovirt.

  3. Search for the default policies:

     $ grep -R "default" *

Retrieving the Inventory service JSON

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.
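For example, a rule condition that tests whether the VM has a snapshot might look like the following minimal sketch (the rule name is a placeholder):

has_snapshot {
    input.snapshot.kind == "VirtualMachineSnapshot"
}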
Procedure

  1. Retrieve the routes for the project:

     $ kubectl get route -n konveyor-forklift

  2. Retrieve the Inventory service route:

     $ kubectl get route <inventory_service> -n konveyor-forklift

  3. Retrieve the access token:

     $ TOKEN=$(oc whoami -t)

  4. Trigger an HTTP GET request (for example, by using curl):

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k

  5. Retrieve the UUID of a provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider> -k (1)

     1  Allowed values for the provider are vsphere, ovirt, and openstack.

  6. Retrieve the VMs of a provider:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k

  7. Retrieve the details of a VM:

     $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k

     Example output

     {
         "input": {
             "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
             "id": "vm-431",
             "parent": {
                 "kind": "Folder",
                 "id": "group-v22"
             },
             "revision": 1,
             "name": "iscsi-target",
             "revisionValidated": 1,
             "isTemplate": false,
             "networks": [
                 {
                     "kind": "Network",
                     "id": "network-31"
                 },
                 {
                     "kind": "Network",
                     "id": "network-33"
                 }
             ],
             "disks": [
                 {
                     "key": 2000,
                     "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
                     "datastore": {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     },
                     "capacity": 17179869184,
                     "shared": false,
                     "rdm": false
                 },
                 {
                     "key": 2001,
                     "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
                     "datastore": {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     },
                     "capacity": 10737418240,
                     "shared": false,
                     "rdm": false
                 }
             ],
             "concerns": [],
             "policyVersion": 5,
             "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
             "firmware": "bios",
             "powerState": "poweredOn",
             "connectionState": "connected",
             "snapshot": {
                 "kind": "VirtualMachineSnapshot",
                 "id": "snapshot-3034"
             },
             "changeTrackingEnabled": false,
             "cpuAffinity": [
                 0,
                 2
             ],
             "cpuHotAddEnabled": true,
             "cpuHotRemoveEnabled": false,
             "memoryHotAddEnabled": false,
             "faultToleranceEnabled": false,
             "cpuCount": 2,
             "coresPerSocket": 1,
             "memoryMB": 2048,
             "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
             "balloonedMemory": 0,
             "ipAddress": "10.19.2.96",
             "storageUsed": 30436770129,
             "numaNodeAffinity": [
                 "0",
                 "1"
             ],
             "devices": [
                 {
                     "kind": "RealUSBController"
                 }
             ],
             "host": {
                 "id": "host-29",
                 "parent": {
                     "kind": "Cluster",
                     "id": "domain-c26"
                 },
                 "revision": 1,
                 "name": "IP address or host name of the vCenter host or oVirt Engine host",
                 "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
                 "status": "green",
                 "inMaintenance": false,
                 "managementServerIp": "10.19.2.96",
                 "thumbprint": <thumbprint>,
                 "timezone": "UTC",
                 "cpuSockets": 2,
                 "cpuCores": 16,
                 "productName": "VMware ESXi",
                 "productVersion": "6.5.0",
                 "networking": {
                     "pNICs": [
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic0",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic1",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic2",
                             "linkSpeed": 10000
                         },
                         {
                             "key": "key-vim.host.PhysicalNic-vmnic3",
                             "linkSpeed": 10000
                         }
                     ],
                     "vNICs": [
                         {
                             "key": "key-vim.host.VirtualNic-vmk2",
                             "portGroup": "VM_Migration",
                             "dPortGroup": "",
                             "ipAddress": "192.168.79.13",
                             "subnetMask": "255.255.255.0",
                             "mtu": 9000
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk0",
                             "portGroup": "Management Network",
                             "dPortGroup": "",
                             "ipAddress": "10.19.2.13",
                             "subnetMask": "255.255.255.128",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk1",
                             "portGroup": "Storage Network",
                             "dPortGroup": "",
                             "ipAddress": "172.31.2.13",
                             "subnetMask": "255.255.0.0",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk3",
                             "portGroup": "",
                             "dPortGroup": "dvportgroup-48",
                             "ipAddress": "192.168.61.13",
                             "subnetMask": "255.255.255.0",
                             "mtu": 1500
                         },
                         {
                             "key": "key-vim.host.VirtualNic-vmk4",
                             "portGroup": "VM_DHCP_Network",
                             "dPortGroup": "",
                             "ipAddress": "10.19.2.231",
                             "subnetMask": "255.255.255.128",
                             "mtu": 1500
                         }
                     ],
                     "portGroups": [
                         {
                             "key": "key-vim.host.PortGroup-VM Network",
                             "name": "VM Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                         },
                         {
                             "key": "key-vim.host.PortGroup-Management Network",
                             "name": "Management Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_10G_Network",
                             "name": "VM_10G_Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Storage",
                             "name": "VM_Storage",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_DHCP_Network",
                             "name": "VM_DHCP_Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-Storage Network",
                             "name": "Storage Network",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Isolated_67",
                             "name": "VM_Isolated_67",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                         },
                         {
                             "key": "key-vim.host.PortGroup-VM_Migration",
                             "name": "VM_Migration",
                             "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
                         }
                     ],
                     "switches": [
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch0",
                             "name": "vSwitch0",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM Network",
                                 "key-vim.host.PortGroup-Management Network"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic4"
                             ]
                         },
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch1",
                             "name": "vSwitch1",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM_10G_Network",
                                 "key-vim.host.PortGroup-VM_Storage",
                                 "key-vim.host.PortGroup-VM_DHCP_Network",
                                 "key-vim.host.PortGroup-Storage Network"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic2",
                                 "key-vim.host.PhysicalNic-vmnic0"
                             ]
                         },
                         {
                             "key": "key-vim.host.VirtualSwitch-vSwitch2",
                             "name": "vSwitch2",
                             "portGroups": [
                                 "key-vim.host.PortGroup-VM_Isolated_67",
                                 "key-vim.host.PortGroup-VM_Migration"
                             ],
                             "pNICs": [
                                 "key-vim.host.PhysicalNic-vmnic3",
                                 "key-vim.host.PhysicalNic-vmnic1"
                             ]
                         }
                     ]
                 },
                 "networks": [
                     {
                         "kind": "Network",
                         "id": "network-31"
                     },
                     {
                         "kind": "Network",
                         "id": "network-34"
                     },
                     {
                         "kind": "Network",
                         "id": "network-57"
                     },
                     {
                         "kind": "Network",
                         "id": "network-33"
                     },
                     {
                         "kind": "Network",
                         "id": "dvportgroup-47"
                     }
                 ],
                 "datastores": [
                     {
                         "kind": "Datastore",
                         "id": "datastore-35"
                     },
                     {
                         "kind": "Datastore",
                         "id": "datastore-63"
                     }
                 ],
                 "vms": null,
                 "networkAdapters": [],
                 "cluster": {
                     "id": "domain-c26",
                     "parent": {
                         "kind": "Folder",
                         "id": "group-h23"
                     },
                     "revision": 1,
                     "name": "mycluster",
                     "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
                     "folder": "group-h23",
                     "networks": [
                         {
                             "kind": "Network",
                             "id": "network-31"
                         },
                         {
                             "kind": "Network",
                             "id": "network-34"
                         },
                         {
                             "kind": "Network",
                             "id": "network-57"
                         },
                         {
                             "kind": "Network",
                             "id": "network-33"
                         },
                         {
                             "kind": "Network",
                             "id": "dvportgroup-47"
                         }
                     ],
                     "datastores": [
                         {
                             "kind": "Datastore",
                             "id": "datastore-35"
                         },
                         {
                             "kind": "Datastore",
                             "id": "datastore-63"
                         }
                     ],
                     "hosts": [
                         {
                             "kind": "Host",
                             "id": "host-44"
                         },
                         {
                             "kind": "Host",
                             "id": "host-29"
                         }
                     ],
                     "dasEnabled": false,
                     "dasVms": [],
                     "drsEnabled": true,
                     "drsBehavior": "fullyAutomated",
                     "drsVms": [],
                     "datacenter": null
                 }
             }
         }
     }

Creating a validation rule

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

  • If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

  • If you create a rule that contradicts a default rule, the Validation service will not start.
Validation rule example

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

"numaNodeAffinity": [
    "0",
    "1"
],

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

`count(input.numaNodeAffinity) != 0`
Procedure

  1. Create a config map CR according to the following example:

     $ cat << EOF | kubectl apply -f -
     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: <forklift-validation-config>
       namespace: konveyor-forklift
     data:
       vmware_multiple_disks.rego: |-
         package <provider_package> (1)

         has_multiple_disks { (2)
           count(input.disks) > 1
         }

         concerns[flag] {
           has_multiple_disks (3)
           flag := {
             "category": "<Information>", (4)
             "label": "Multiple disks detected",
             "assessment": "Multiple disks detected on this VM."
           }
         }
     EOF

     1  Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.
     2  Specify the concerns name and Rego query.
     3  Specify the concerns name and flag parameter values.
     4  Allowed values are Critical, Warning, and Information.

  2. Stop the Validation pod by scaling the forklift-controller deployment to 0:

     $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller

  3. Start the Validation pod by scaling the forklift-controller deployment to 1:

     $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller

  4. Check the Validation pod log to verify that the pod started:

     $ kubectl logs -f <validation_pod>

     If the custom rule conflicts with a default rule, the Validation pod will not start.

  5. Remove the source provider:

     $ kubectl delete provider <provider> -n konveyor-forklift

  6. Add the source provider to apply the new rule:

     $ cat << EOF | kubectl apply -f -
     apiVersion: forklift.konveyor.io/v1beta1
     kind: Provider
     metadata:
       name: <provider>
       namespace: konveyor-forklift
     spec:
       type: <provider_type> (1)
       url: <api_end_point> (2)
       secret:
         name: <secret> (3)
         namespace: konveyor-forklift
     EOF

     1  Allowed values are ovirt, vsphere, and openstack.
     2  Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.
     3  Specify the name of the provider Secret CR.

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

Updating the inventory rules version

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

The rules version is recorded in a rules_version.rego file for each provider.
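For illustration, the file content might look like the following minimal sketch, assuming the VMware provider and a current version of 5 (the exact file content in your pod may differ):

package io.konveyor.forklift.vmware

rules_version = 5

Incrementing the value (for example, to 6) is what signals the change to the Provider Inventory service.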
Procedure

  1. Retrieve the current rules version:

     $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version (1)

     1  Specify vmware or ovirt.

     Example output

     {
        "result": {
            "rules_version": 5
        }
     }

  2. Connect to the terminal of the Validation pod:

     $ oc rsh <validation_pod>

  3. Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

  4. Log out of the Validation pod terminal.

  5. Verify the updated rules version:

     $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version (1)

     1  Specify vmware or ovirt.

     Example output

     {
        "result": {
            "rules_version": 6
        }
     }

Upgrading Forklift

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

Procedure

  1. In the OKD web console, click Operators → Installed Operators → Migration Toolkit for Virtualization Operator → Subscription.

  2. Change the update channel to the correct release.

     See Changing update channel in the OKD documentation.

  3. Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

     1. Note the catalog source, for example, redhat-operators.

     2. From the command line, retrieve the catalog source pod:

        $ kubectl get pod -n openshift-marketplace | grep <catalog_source>

     3. Delete the pod:

        $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>

        Upgrade status changes from Up to date to Upgrade available.

        If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

  4. If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

     See Manually approving a pending upgrade in the OKD documentation.

  5. If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider.

  6. If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

Uninstalling Forklift

You can uninstall Forklift by using the OKD web console or the command line interface (CLI).

Uninstalling Forklift by using the OKD web console

You can uninstall Forklift by using the OKD web console to delete the konveyor-forklift project and custom resource definitions (CRDs).

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Click Home → Projects.

  2. Locate the konveyor-forklift project.

  3. On the right side of the project, select Delete Project from the Options menu kebab.

  4. In the Delete Project pane, enter the project name and click Delete.

  5. Click Administration → CustomResourceDefinitions.

  6. Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

  7. On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu kebab.

Uninstalling Forklift from the command line interface

You can uninstall Forklift from the command line interface (CLI) by deleting the konveyor-forklift project and the forklift.konveyor.io custom resource definitions (CRDs).

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Delete the project:

     $ kubectl delete project konveyor-forklift

  2. Delete the CRDs:

     $ kubectl get crd -o name | grep 'forklift' | xargs kubectl delete

  3. Delete the OAuthClient:

     $ kubectl delete oauthclient/forklift-ui

Troubleshooting

This section provides information for troubleshooting common migration issues.

Error messages

This section describes error messages and how to resolve them.

warm import retry limit reached

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.
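For example, you can list and remove snapshots with the govc CLI. This is a minimal sketch, assuming govc is installed and configured to connect to your vCenter; the VM and snapshot names are placeholders:

$ govc snapshot.tree -vm <vm_name>
$ govc snapshot.remove -vm <vm_name> <snapshot_name>

You can also remove the snapshots by using the vSphere client.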
Unable to resize disk image to required size

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the reserved space for the root partition.

To resolve this problem, increase the file system overhead in CDI to more than 10%.
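For example, you can raise the overhead by patching the CDI custom resource. This is a minimal sketch, assuming the Containerized Data Importer CR is named cdi and that a global overhead of 15% is sufficient for your storage:

$ kubectl patch cdi cdi --type merge \
    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'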

Using the must-gather tool

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

Prerequisites

  • You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

  • You must have the OKD CLI (oc) installed.

Collecting logs and CR information

  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command:

     $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest

     The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

  3. Optional: Run the oc adm must-gather command with the following options to gather filtered data:

     • Namespace:

       $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
         -- NS=<namespace> /usr/bin/targeted

     • Migration plan:

       $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
         -- PLAN=<migration_plan> /usr/bin/targeted

     • Virtual machine:

       $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
         -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)

       1  Specify the VM ID as it appears in the Plan CR.

Architecture

+
+

This section describes Forklift custom resources, services, and workflows.

+
+
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR. (A minimal example Migration CR follows this list.)

    +
    +
  • +
+
+
+
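A minimal Migration CR sketch that runs an existing migration plan; the names and namespace shown are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: konveyor-forklift
spec:
  plan:
    name: <plan>                # the Plan CR that this Migration runs
    namespace: konveyor-forklift
EOF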
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed. (A CLI sketch for checking the plan status follows this list.)

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and Containerized Data Importer (CDI) Controller services handle most technical operations.

    +
  • +
+
+
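To observe these status transitions from the CLI, you can inspect the conditions of the Plan CR. A minimal sketch, assuming the plan was created in the konveyor-forklift namespace; the condition names you see depend on the plan state:

$ kubectl get plan <plan> -n konveyor-forklift -o jsonpath='{.status.conditions[*].type}'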
+
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+
+
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote OpenShift cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from oVirt or OpenStack to the local OpenShift cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, along with an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is OpenStack.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local OpenShift cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
+
+
+
+

Logs and custom resources

+
+

You can download logs and custom resource (CR) information for troubleshooting. For more information, see the detailed migration workflow.

+
+
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+
+
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+
+
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
      (1) You must specify the VM name, not the VM ID, as it appears in the Plan CR.
      +
      +
    • +
    +
    +
  6. +
+
+
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..fa80d03228d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-cold-warm-migration/index.html @@ -0,0 +1,159 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+ + + + + +
+
Note
+
+
+

Migration using OpenStack source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
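You can enable CBT from the command line instead of the vSphere UI. A minimal sketch, assuming the govc CLI is configured against your vCenter and the VM is powered off; scsi0:0 is an example key for the first disk:

$ govc vm.change -vm <vm_name> -e ctkEnabled=true -e scsi0:0.ctkEnabled=true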
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
+
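For example, a scheduled cutover is expressed as a timestamp in the Migration manifest. A minimal sketch with placeholder names and an illustrative time:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: konveyor-forklift
spec:
  plan:
    name: <plan>
    namespace: konveyor-forklift
  cutover: "2024-04-01T01:00:00Z"    # ISO 8601 timestamp; omit this field to start the cutover manually
EOF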
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html new file mode 100644 index 00000000000..62d65f3c50b --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks whether distributed resource scheduling (DRS) is enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..167ca55c4da --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..47c3a4ddc1d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..f60a5a86fbc --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hooks/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hooks/index.html new file mode 100644 index 00000000000..ae251d4b064 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-hooks/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding hooks

+
+

Hooks are custom code that you can run at certain stages of the migration. You can define a hook by using an Ansible playbook or a custom hook container.

+
+
+

You can create a hook before a migration plan or while creating a migration plan.

+
+
+
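For reference, a hook playbook can be minimal. The following sketch only logs a message; a real hook would replace the debug task with, for example, application quiescing or connectivity checks:

- hosts: localhost
  gather_facts: false
  tasks:
  - name: Log the migration stage
    debug:
      msg: "Running migration hook"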
Prerequisites
+
    +
  • +

    You must create an Ansible playbook or a custom hook container.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the web console, click Hooks.

    +
  2. +
  3. +

    Click Create hook.

    +
  4. +
  5. +

    Specify the hook Name.

    +
  6. +
  7. +

    Select Ansible playbook or Custom container image as the Hook definition.

    +
  8. +
  9. +

    If you select Custom container image, specify the image location, for example, quay.io/github_project/container_name:container_id.

    +
  10. +
  11. +

    Select a migration step and click Add.

    +
    +

    The new migration hook appears in the Hooks list.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..dd1d6b96eb9 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..37f40e5ac63 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..f624bf5369e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
    2. +
    +
    +
    +

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..dff546e7d68 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..f3004ac3c70 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
    2. +
    +
    +
    +

    You do not need to restart the forklift-controller pod.

    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..48cede2a567 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html new file mode 100644 index 00000000000..27df2f79454 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..85f85bf94dc --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/compatibility-guidelines/index.html @@ -0,0 +1,125 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions

| Forklift | OKD           | KubeVirt      | VMware vSphere | oVirt            | OpenStack     |
| -------- | ------------- | ------------- | -------------- | ---------------- | ------------- |
| 2.5.1    | 4.12 or later | 4.12 or later | 6.5 or later   | 4.4 SP1 or later | 16.1 or later |
+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift 2.5 was tested only with oVirt 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.5.

+
+
+

As oVirt 4.3 lacks the improvements that were introduced in oVirt 4.4 for Forklift, and new features were not tested with oVirt 4.3, migrations from oVirt 4.3 may not function at the same level as migrations from oVirt 4.4, and some functionality might be missing.

+
+
+

Therefore, we recommend upgrading oVirt to the supported version listed above before migrating to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments that use Forklift 2.3. In this case, we advise upgrading oVirt Manager to the supported version listed above before migrating to KubeVirt.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..901d9cfbe21 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines that you want to migrate together or with the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the Options menu of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
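If you prefer the CLI, the steps above roughly correspond to creating a Plan CR. A minimal sketch with placeholder names; the field layout reflects the forklift.konveyor.io/v1beta1 API as we understand it, so verify it against your installed CRDs:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan>
  namespace: konveyor-forklift
spec:
  warm: false                        # set to true for warm migration
  targetNamespace: <target_namespace>
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    network:
      name: <network_map>
      namespace: konveyor-forklift
    storage:
      name: <storage_map>
      namespace: konveyor-forklift
  vms:
    - name: <vm_name>
EOF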
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..781d64c5e07 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → NetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+ + +
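The same mapping can be expressed as a NetworkMap CR from the CLI. A minimal sketch with placeholder names, mapping one source network to the pod network; verify the field layout against your installed CRDs:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        name: <source_network>
      destination:
        type: pod              # or type: multus with a network attachment definition
EOF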
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..54e81c28cf1 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that support VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → StorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
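The same mapping can also be created as a StorageMap CR from the CLI. A minimal sketch with placeholder names, mapping one source datastore to a storage class; verify the field layout against your installed CRDs:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: <source_provider>
      namespace: konveyor-forklift
    destination:
      name: <destination_provider>
      namespace: konveyor-forklift
  map:
    - source:
        name: <source_datastore>
      destination:
        storageClass: <storage_class>
EOF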
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..178eea51384 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
count(input.numaNodeAffinity) != 0
+
+
+
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..1498111059d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/creating-vddk-image/index.html @@ -0,0 +1,177 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

+
+
+

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You need the VDDK init image path to add a VMware source provider (see the Provider CR sketch at the end of this section).

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
    + + + + + +
    +
    Note
    +
    +
    +

To migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

    +
    +
    +
    +
  6. +
  7. +

    Save the VDDK archive file in the temporary directory.

    +
  8. +
  9. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  10. +
  11. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  12. +
  13. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  14. +
  15. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  16. +
  17. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  18. +
+
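The image path that you pushed is the value you later supply as vddkInitImage when you create the VMware Provider CR. A minimal excerpt, taken from the Provider manifest in Migrating virtual machines:

spec:
+  type: vsphere
+  settings:
+    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>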
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html new file mode 100644 index 00000000000..e4e6a339d1d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
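For example, if you manage vSphere with the govc CLI, a hedged sketch of listing a VM's snapshot tree and removing one snapshot (assumes govc is already configured to reach your vCenter):

$ govc snapshot.tree -vm <vm_name>
$ govc snapshot.remove -vm <vm_name> <snapshot_name>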
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not fully account for the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to be more than 10%.

+
+ + +
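A minimal sketch of raising the global file system overhead to 15%, assuming the cluster-scoped CDI custom resource has the usual default name, cdi:

$ kubectl patch cdi cdi --type merge \
+    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'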
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ 
b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/kebab.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png new file mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/images/mtv-ui.png differ diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..e467e148360 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..b6caa0f9721 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..379dcde3c9d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..fd6f8e8eac5 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migrating-virtual-machines-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..dd9a7600d40 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migrating-virtual-machines-cli/index.html @@ -0,0 +1,549 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migrating virtual machines

+
+

You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs).

+
+
+ + + + + +
+
Important
+
+
+

You must specify a name for cluster-scoped CRs.

+
+
+

You must specify both a name and a namespace for namespace-scoped CRs.

+
+
+
+
+


+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes exclusively.

+
+
+
+
+
Prerequisites
+
    +
  • +

    VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

    +
  • +
  • +

    oVirt (oVirt) only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

    +
  • +
+
+
+


+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: <provider_type> (2)
    +    createdForResourceType: providers
    +type: Opaque
    +stringData: (3)
    +  user: <user> (4)
    +  password: <password> (5)
    +  insecureSkipVerify: <true/false> (6)
    +  domainName: <domain_name> (7)
    +  projectName: <project_name> (8)
    +  regionName: <region name> (9)
    +  cacert: | (10)
    +    <ca_certificate>
    +  url: <api_end_point> (11)
    +  thumbprint: <vcenter_fingerprint> (12)
    +EOF
    +
    +
    +
    +
      +
    1. +

      The ownerReferences section is optional.

      +
    2. +
    3. +

Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, and ova. This label is needed to verify that the credentials are correct when the remote system is accessible and, for oVirt, to retrieve the Engine CA certificate when a third-party certificate is specified.

      +
    4. +
    5. +

      The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.

      +
    6. +
    7. +

      Specify the vCenter user, the oVirt Engine user, or the {osp} user.

      +
    8. +
    9. +

      Specify the user password.

      +
    10. +
    11. +

Specify <true> to skip certificate verification; the migration proceeds over an insecure connection and the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specify <false> to verify the certificate.

      +
    12. +
    13. +

      {osp} only: Specify the domain name.

      +
    14. +
    15. +

      {osp} only: Specify the project name.

      +
    16. +
    17. +

      {osp} only: Specify the name of the {osp} region.

      +
    18. +
    19. +

      oVirt and {osp} only: For oVirt, enter the Engine CA certificate unless it was replaced by a third-party certificate, in which case enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For {osp}, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.

      +
    20. +
    21. +

Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    22. +
    23. +

      VMware only: Specify the vCenter SHA-1 fingerprint.

      +
    24. +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    The stringData section for an OVA Secret manifest is as follows:

    +
    +
    +
    +
    stringData:
    +  url: <nfs_server:/nfs_path>
    +
    +
    +
    +

    where:
+nfs_server: The IP address or hostname of the server where the share was created.
+nfs_path: The path on the server where the OVA files are stored.

    +
    +
    +
    +
  2. +
  3. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  settings:
    +    vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (3)
    +  secret:
    +    name: <secret> (4)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

Specify the API endpoint URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      VMware only: Specify the VDDK image that you created.

      +
    6. +
    7. +

Specify the name of the provider Secret CR.

      +
    8. +
    +
    +
  4. +
  5. +

    VMware only: Create a Host manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Host
    +metadata:
    +  name: <vmware_host>
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    namespace: <namespace>
    +    name: <source_provider> (1)
    +  id: <source_host_mor> (2)
    +  ipAddress: <source_network_ip> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the VMware Provider CR.

      +
    2. +
    3. +

      Specify the managed object reference (MOR) of the VMware host.

      +
    4. +
    5. +

      Specify the IP address of the VMware migration network.

      +
    6. +
    +
    +
  6. +
  7. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id> (3)
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (4)
    +        namespace: <network_attachment_definition_namespace> (5)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are pod and multus.

      +
    2. +
    3. +

      You can use either the id or the name parameter to specify the source network.

      +
    4. +
    5. +

      Specify the VMware network MOR, the oVirt network UUID, or the {osp} network UUID.

      +
    6. +
    7. +

      Specify a network attachment definition for each additional KubeVirt network.

      +
    8. +
    9. +

      Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

      +
    10. +
    +
    +
  8. +
  9. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_datastore> (2)
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode>
    +      source:
    +        id: <source_datastore>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ReadWriteOnce and ReadWriteMany.

      +
    2. +
    3. +

      Specify the VMware data storage MOR, the oVirt storage domain UUID, or the {osp} volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.

      +
    4. +
    +
    +
  10. +
  11. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner (1)
    +  playbook: | (2)
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

      +
    2. +
    3. +

      Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

      +
    4. +
    +
    +
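For reference, the Base64 string in this example decodes to the following playbook, which loads the plan and workload data from /tmp/hook in the hook pod:

---
+- name: Main
+  hosts: localhost
+  tasks:
+  - name: Load Plan
+    include_vars:
+      file: "/tmp/hook/plan.yml"
+      name: plan
+  - name: Load Workload
+    include_vars:
+      file: "/tmp/hook/workload.yml"
+      name: workload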
  12. +
  13. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  warm: true (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (3)
    +    network: (4)
    +      name: <network_map> (5)
    +      namespace: <namespace>
    +    storage: (6)
    +      name: <storage_map> (7)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (8)
    +    - id: <source_vm> (9)
    +    - name: <source_vm>
    +      namespace: <namespace> (10)
    +      hooks: (11)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (12)
    +          step: <step> (13)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Plan CR.

      +
    2. +
    3. +

      Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.

      +
    4. +
    5. +

      Specify only one network map and one storage map per plan.

      +
    6. +
    7. +

      Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.

      +
    8. +
    9. +

      Specify the name of the NetworkMap CR.

      +
    10. +
    11. +

Specify a storage mapping even if the VMs to be migrated have no disk images assigned. The mapping can be empty in this case.

      +
    12. +
    13. +

      Specify the name of the StorageMap CR.

      +
    14. +
    15. +

For all source providers except KubeVirt, you can use either the id or the name parameter to specify the source VMs.
      +KubeVirt source provider only: You can use only the name parameter, not the id parameter, to specify the source VMs.

      +
    16. +
    17. +

Specify the VMware VM MOR, the oVirt VM UUID, or the {osp} VM UUID.

      +
    18. +
    19. +

      KubeVirt source provider only.

      +
    20. +
    21. +

      Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.

      +
    22. +
    23. +

      Specify the name of the Hook CR.

      +
    24. +
    25. +

Allowed values are PreHook, to run the hook before the migration plan starts, and PostHook, to run it after the migration is complete.

      +
    26. +
    +
    +
  14. +
  15. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration> (1)
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <plan> (2)
    +    namespace: <namespace>
    +  cutover: <cutover_time> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    3. +

      Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.

      +
    4. +
    5. +

      Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

      +
    6. +
    +
    +
    +

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

    +
    +
  16. +
  17. +

    Retrieve the Migration CR to monitor the progress of the migration:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  18. +
+
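To watch only the per-VM progress instead of the full YAML, you can use a jsonpath query; this sketch assumes that the Migration status lists each VM with name and phase fields:

$ kubectl get migration/<migration> -n <namespace> \
+    -o jsonpath='{range .status.vms[*]}{.name}{"\t"}{.phase}{"\n"}{end}'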
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..190e9bbf3f1 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the {kebab} beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..d819cd9b8ab --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-overview-page/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking MigrationOverview in the OKD web console.

+
+
+

The Overview page displays the following information:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator.

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..bb3be8f4bcc --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The Kubevirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html new file mode 100644 index 00000000000..7f1124ae96c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500 m

Controller main container Memory limit

The memory limit allocated to the main controller container

800 Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during warm migration

10

+
+
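If you prefer the CLI, these settings map to fields in the spec section of the ForkliftController custom resource. The following is a hedged sketch of raising the concurrent-migration limit; the field name controller_max_vm_inflight and the controller name are assumptions, so verify them against your ForkliftController spec:

$ kubectl patch forkliftcontroller/forklift-controller -n konveyor-forklift \
+    --type merge -p '{"spec": {"controller_max_vm_inflight": 30}}'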
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationOverview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html new file mode 100644 index 00000000000..aa6f822aede --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

On pages related to components, you can click the Projects list in the upper-left portion of the page to see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..8aafa5d8f2f --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..6e0c4c026f4 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.

    +
  • +
+
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
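Before starting a migration, you can spot-check that a required port is reachable from the network of the OpenShift nodes. A hedged example, assuming nc (netcat) is available:

$ nc -zv <esxi_host> 902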
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..1a1aa11bed5 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,187 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
RoleDescription

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but not create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that predefined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view or edit).

+
+
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

No permission to create providers or to change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
ActionsAPI groupResource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.

+
+
+
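As an illustration, the following sketch grants a user the view role from Table 1 in a single namespace, using standard Kubernetes RBAC; the binding name, namespace, and user are placeholders:

$ kubectl create rolebinding plan-viewer -n <namespace> \
+    --clusterrole=plans.forklift.konveyor.io-v1beta1-view \
+    --user=<user>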
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..b31c1171e99 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

If you are using the OKD web console, the Forklift web console URL is the host of the route that exposes the console; you can find it by navigating to NetworkingRoutes in the project in which Forklift is installed.

    +
  • +
+
+
+


+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+


+
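A hedged example, assuming that the web console is exposed by a route named virt in the konveyor-forklift namespace; adjust both names to match your deployment:

$ kubectl get route virt -n konveyor-forklift \
+    -o custom-columns=:.spec.host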
+
+

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-vmware-fingerprint/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-vmware-fingerprint/index.html new file mode 100644 index 00000000000..c6b06c34902 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/obtaining-vmware-fingerprint/index.html @@ -0,0 +1,99 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Obtaining the SHA-1 fingerprint of a vCenter host

+
+

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

+
+
+
Procedure
+
    +
  • +

    Run the following command:

    +
    +
    +
    $ openssl s_client \
    +    -connect <vcenter_host>:443 \ (1)
    +    < /dev/null 2>/dev/null \
    +    | openssl x509 -fingerprint -noout -in /dev/stdin \
    +    | cut -d '=' -f 2
    +
    +
    +
    +
      +
    1. +

      Specify the IP address or FQDN of the vCenter host.

      +
    2. +
    +
    +
    +
    Example output
    +
    +
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..a31c1308244 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/openstack-prerequisites/index.html @@ -0,0 +1,90 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes exclusively.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/osh-adding-source-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/osh-adding-source-provider/index.html new file mode 100644 index 00000000000..619e1c280e0 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/osh-adding-source-provider/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding an {osp} source provider

+
+

You can add an {osp} source provider by using the OKD web console.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes exclusively.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationProviders for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select Red Hat OpenStack Platform from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Name to display in the list of providers

      +
    • +
    • +

      {osp} Identity server URL: {osp} Identity (Keystone) endpoint, for example, http://controller:5000/v3

      +
    • +
    • +

      {osp} username: For example, admin

      +
    • +
    • +

      {osp} password:

      +
    • +
    • +

      Domain:

      +
    • +
    • +

      Project:

      +
    • +
    • +

      Region:

      +
    • +
    +
    +
  8. +
  9. +

To allow a migration without validating the provider’s CA certificate, select the Skip certificate validation checkbox. By default, the checkbox is cleared, meaning that the certificate is validated.

    +
  10. +
  11. +

If you did not select Skip certificate validation, the CA certificate field is visible. Drag the CA certificate used to connect to the source environment into the text box, or browse for it and click Select. If you selected the checkbox, the CA certificate text box is not visible.

    +
  12. +
  13. +

    Click Create to add and save the provider.

    +
    +

    The source provider appears in the list of providers.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..d8edeab2e76 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider." A minimal sketch of such a Provider manifest follows this procedure.

    +
  10. +
+
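The following is a minimal sketch of such a Provider manifest, assuming the forklift.konveyor.io/v1beta1 Provider API and referencing the Secret for the application credential ID variant created above; the provider name is illustrative. For the application credential name variant, reference openstack-secret-appname instead.
cat << EOF | oc apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Provider
+metadata:
+  name: openstack-appid-provider
+  namespace: openshift-mtv
+spec:
+  type: openstack
+  url: <OS_AUTH_URL_from_openstack_rc_file>
+  secret:
+    name: openstack-secret-appid
+    namespace: openshift-mtv
+EOF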
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..475d2068470 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID. A sketch of capturing these values into shell variables follows this procedure.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
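If you prefer to capture the values from <openstack_token_output> in shell variables instead of copying them from the table, you can parse the client's JSON output. A minimal sketch, assuming the standard python-openstackclient field names (id, project_id, user_id) and that jq is installed; because each openstack token issue call issues a new token, the output is saved once and then parsed:
$ openstack token issue -f json > token.json
+$ TOKEN=$(jq -r '.id' token.json)
+$ PROJECT_ID=$(jq -r '.project_id' token.json)
+$ USER_ID=$(jq -r '.user_id' token.json)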
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html new file mode 100644 index 00000000000..a4d57935259 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/ova-prerequisites/index.html @@ -0,0 +1,130 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.
      +A combined layout sketch follows this list.

      +
      +
    • +
    +
    +
  • +
+
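The following illustrative layout (file names are hypothetical) combines both structures and marks what Forklift scans under an NFS share mounted at /nfs:
/nfs/vm1.ova                                   # scanned: compressed package in the root folder
+/nfs/subfolder1/vm2.ova                        # scanned: compressed package in a first-level subfolder
+/nfs/subfolder1/subfolder2/vm3.ova             # not scanned: compressed packages are detected only one level deep
+/nfs/subfolder1/subfolder2/vm4.ovf             # scanned: extracted OVF in a second-level subfolder
+/nfs/subfolder1/subfolder2/subfolder3/vm5.ovf  # not scanned: extracted OVFs are detected at most two levels deep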
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html new file mode 100644 index 00000000000..16c8bd2ce65 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/retrieving-validation-service-json/index.html @@ -0,0 +1,483 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    $ oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ oc get route <inventory_service> -n openshift-mtv
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using Curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
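To pick out a single inventory attribute for use in a validation rule, for example the input.snapshot.kind attribute shown in the output above, you can filter the response with jq, assuming jq is installed:
$ curl -sk -H "Authorization: Bearer $TOKEN" \
+    https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> \
+    | jq -r '.input.snapshot.kind'
+VirtualMachineSnapshot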
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..b2672ca7206 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rhv-prerequisites/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+ +
+
+


+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html new file mode 100644 index 00000000000..a3a27705991 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
    +  -n openshift-cnv \
    +  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html new file mode 100644 index 00000000000..078bd2b1d09 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
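A hook is defined in a Hook custom resource and referenced from a migration plan. The following minimal sketch assumes the forklift.konveyor.io/v1beta1 Hook API with the default hook-runner image and a Base64-encoded playbook; the hook name and the playbook.yml file are illustrative:
cat << EOF | oc apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Hook
+metadata:
+  name: example-premigration-hook
+  namespace: openshift-mtv
+spec:
+  image: quay.io/konveyor/hook-runner
+  playbook: $(base64 -w0 playbook.yml)
+EOF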
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
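For example, to collect data for a single migration plan rather than the entire namespace, the targeted gathering mode can be invoked as follows; this is a sketch, and the upstream must-gather image and the plan name are illustrative:
$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
+    -- PLAN=<migration_plan> /usr/bin/targeted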
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads > Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html new file mode 100644 index 00000000000..9486f8cf82c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run it as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Issues that do not block migration
Assessment | Result

The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).

The migrated VM will have a virtio disk if the source interface is not recognized.

The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).

The migrated VM will have a virtio NIC if the source interface is not recognized.

The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.

The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

One or more of the VM’s disks has an illegal or locked status condition.

The migration will proceed but the disk transfer is likely to fail.

The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.

The migrated VM will not have USB devices.

The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.

The migrated VM will not have a watchdog device.

The VM’s status is not up or down.

The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
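From the CLI, you can archive a plan before deleting it by patching the Plan CR. A sketch, assuming the plan is in the openshift-mtv namespace:
$ oc patch plan <migration_plan> -n openshift-mtv --type merge -p '{"spec": {"archived": true}}'
+$ oc delete plan <migration_plan> -n openshift-mtv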
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html new file mode 100644 index 00000000000..3a60d777211 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

+
+
+
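A minimal sketch of the relevant part of such a Provider CR, assuming the spec.settings.vddkInitImage field; the provider name, URL, secret, and image location are illustrative:
apiVersion: forklift.konveyor.io/v1beta1
+kind: Provider
+metadata:
+  name: vmware-provider
+  namespace: openshift-mtv
+spec:
+  type: vsphere
+  url: <vCenter_API_endpoint>
+  secret:
+    name: <vmware_secret>
+    namespace: openshift-mtv
+  settings:
+    vddkInitImage: <registry_route_or_server_path>/vddk:latest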
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.

+
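A sketch of such an update for an NFS storage class, assuming the CDI StorageProfile API with claimPropertySets; the access mode and volume mode values are illustrative and must match your storage:
$ oc patch storageprofile <storage_class> --type merge \
+    -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'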
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage
+

Possible workaround: Delete the browser cache or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html new file mode 100644 index 00000000000..55582a18d17 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also enables migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only VDDK version 7 for the VDDK image. Forklift now supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. The plugin is introduced in version 2.4, and the old UI is disabled. You can enable the old UI by setting feature_ui: true in ForkliftController, as sketched below. (MTV-427)

+
+
+
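A sketch of enabling the old UI by patching the ForkliftController CR; the CR name and namespace are the usual defaults and might differ in your installation:
$ oc patch forkliftcontroller forklift-controller -n openshift-mtv \
+    --type merge -p '{"spec": {"feature_ui": true}}'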
Skip certificate validation option
+

A 'Skip certificate validation' option was added to the VMware and oVirt providers. If it is selected, the provider's certificate is not validated, and the UI does not ask you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider that is set with the Manager CA certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration of source virtual machines in OpenStack. Workaround: Remove the snapshots manually in OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots from oVirt manually. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When you migrate a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When you remove a migrated VM, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately so that the VM boots from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the ForkliftController custom resource named forklift-controller from the installed namespace and recreate it, as sketched below. You must refresh the OCP console after the forklift-console-plugin pod runs in order to load the upgraded Forklift web console. (MTV-518)

+
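A sketch of this workaround from the CLI, assuming the default CR name and installation namespace:
$ oc get forkliftcontroller forklift-controller -n openshift-mtv -o yaml > forklift-controller-cr.yaml
+# Remove the status section and server-generated metadata, such as resourceVersion, before reapplying.
+$ oc delete forkliftcontroller forklift-controller -n openshift-mtv
+$ oc apply -f forklift-controller-cr.yaml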
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to comply with RFC 1123 has been improved. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after the initial failure that is caused by the incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html new file mode 100644 index 00000000000..20e04078e3c --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/rn-2.5/index.html @@ -0,0 +1,325 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController custom resource.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration using OVA files created by VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

Important: Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a Red Hat KubeVirt provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from RHV
+

During the migration from RHV, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If deleting a migration plan and running a new migration plan with the same name, or if deleting a migrated VM and remigrating the source VM, then the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migration of virtual machines with encrypted partitions fails during conversion. Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while a snapshot operation is performed on the source VM
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be scheduled. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed may instantly appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, the OVA server pod creation has failed. (MTV-671)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function.  A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
CVE-2023-26144 mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV 2.5 versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In previous releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration is completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during the migration are removed after a successful migration, and the original snapshots are retained. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In previous releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is accepted but is not performed while the VM is locked. Once the precopy operation completes, the cutover operation starts. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In previous releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

+
+
+

This issue is resolved in Forklift 2.5: warm migration no longer fails when an operation that locks the VM is performed in oVirt. Instead, the migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In previous releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In previous releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

In previous releases of Forklift, migrated VMs with multiple disks might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: migrated VMs with multiple disks are able to boot on the target OKD cluster. (MTV-433)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
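A sketch of this workaround from the command line, assuming Forklift is installed in the konveyor-forklift namespace and that you save the custom resource before deleting it:

$ kubectl get forkliftcontroller forklift-controller -n konveyor-forklift -o yaml > forklift-controller.yaml
$ kubectl delete forkliftcontroller forklift-controller -n konveyor-forklift
$ kubectl apply -f forklift-controller.yaml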
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..e71c52c20bf --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

    The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

    Click a migration’s Status, whether it failed or succeeded or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..1b035508ba2 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the Options menu (⋮).

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..5ce83d5f49a --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network must have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..1c6e32c077d --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks that are accessible to all the selected hosts.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..4368a1af7f6 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip-migrating-luns/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time, as simultaneous use might lead to data corruption.

    +
  • +
  • +

    Migration of Fibre Channel LUNs is not supported.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..49f2186b95e --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..9d977712fa8 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com.
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..0bf52b711ca --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

    Click Networking → Routes.

    +
  4. +
  5. +

    Select the konveyor-forklift project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..7375fb22403 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..3703538acbe --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/source-vm-prerequisites/index.html @@ -0,0 +1,121 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt.

    +
  • +
  • +

    VM names must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

    +
  • +
  • +

    VM names must not duplicate the name of a VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone entered a VM name that does not follow the rules.

    +
    +
    +
    +
  • +
+
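As an illustration only, the renaming described in the note above is conceptually similar to the following shell pipeline; it is not the actual implementation:

$ echo "My_Legacy.VM" | tr '[:upper:]' '[:lower:]' | tr '_' '-' | tr -cd 'a-z0-9-'
my-legacyvm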
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html new file mode 100644 index 00000000000..9257a8ae05f --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/storage-support/index.html @@ -0,0 +1,188 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the reserved space for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
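For example, assuming the cluster-scoped CDI custom resource is named cdi, the global file system overhead could be raised to 12% with a patch along these lines (the value is a fraction expressed as a string):

$ kubectl patch cdi cdi --type merge \
  -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.12"}}}}'

A per-storage-class override can be set under filesystemOverhead.storageClass instead of global.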
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
Provisioner | Volume mode | Access mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html new file mode 100644 index 00000000000..ba25626ad8a --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..9c6d71ede52 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI) by deleting the konveyor-forklift project and the forklift.konveyor.io custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the project:

    +
    +
    +
    $ kubectl delete project konveyor-forklift
    +
    +
    +
  2. +
  3. +

    Delete the CRDs:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift' | xargs kubectl delete
    +
    +
    +
  4. +
  5. +

    Delete the OAuthClient:

    +
    +
    +
    $ kubectl delete oauthclient/forklift-ui
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..4a229c8bd2f --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console to delete the konveyor-forklift project and custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Home → Projects.

    +
  2. +
  3. +

    Locate the konveyor-forklift project.

    +
  4. +
  5. +

    On the right side of the project, select Delete Project from the Options menu (⋮).

    +
  6. +
  7. +

    In the Delete Project pane, enter the project name and click Delete.

    +
  8. +
  9. +

    Click Administration → CustomResourceDefinitions.

    +
  10. +
  11. +

    Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

    +
  12. +
  13. +

    On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu (⋮).

    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..5115ce48784 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl exec -it <validation_pod> -- /bin/bash
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file. A sketch of this file's expected contents follows this procedure.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
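For reference, the rules_version.rego file is expected to be a small Rego module along the following lines; the package path is assumed to mirror the URL used in the steps above:

$ cat /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego
package io.konveyor.forklift.<provider>

rules_version = 6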
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..22c33426e47 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators → Installed Operators → {operator-name-ui} → Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html new file mode 100644 index 00000000000..9072c313f69 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..16a40ea92c8 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote OKD cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk; a sketch of such a CR appears after this list.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller powers on the VM, the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
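For illustration, inspecting one of the generated DataVolume CRs for a vSphere disk might show a spec roughly like the following sketch. The Migration Controller creates these CRs itself; every name and value here is hypothetical, and the generated spec can differ:

$ kubectl get dv example-vm-disk-0 -n my-migration-project -o yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk-0          # hypothetical name
  namespace: my-migration-project  # hypothetical namespace
spec:
  source:
    vddk:                          # vSphere disks are streamed through VDDK
      backingFile: "[datastore1] example-vm/example-vm.vmdk"
      url: "https://vcenter.example.com"
      uuid: "<source_vm_uuid>"
      thumbprint: "<vcenter_sha1_fingerprint>"
      secretRef: "example-vsphere-credentials"
  storage:
    resources:
      requests:
        storage: 20Gi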
+
+

Cold migration from oVirt or OpenStack to the local OKD cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, and an OvirtVolumePopulator CR when the source is oVirt, or an OpenstackVolumePopulator CR when the source is OpenStack.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller powers on the VM, the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local OKD cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM ran on the source environment, the Migration Controller powers on the VM, the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..c3ff705bf62 --- /dev/null +++ b/documentation/doc-Migration_Toolkit_for_Virtualization/modules/vmware-prerequisites/index.html @@ -0,0 +1,248 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    You must install VMware Tools on all source virtual machines (VMs).

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.

    +
  • +
  • +

    You must obtain the SHA-1 fingerprint of the vCenter host; one way to retrieve it is shown after this list.

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
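One generic way to retrieve the fingerprint, assuming the vCenter host is reachable at vcenter.example.com on port 443 (this uses standard OpenSSL and is not Forklift-specific):

$ openssl s_client -connect vcenter.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha1
SHA1 Fingerprint=01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67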
+
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs file systems when migrating VMs from VMware.

+
+
+
+

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. VMware privileges
Privilege | Description

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+
Note
+
+
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

+ + +
+ + diff --git a/documentation/doc-Release_notes/docinfo.xml b/documentation/doc-Release_notes/docinfo.xml new file mode 100644 index 00000000000..b35cd5a2260 --- /dev/null +++ b/documentation/doc-Release_notes/docinfo.xml @@ -0,0 +1,15 @@ +{rn-title} +{project-full} +{project-version} +Version {project-version} + + This document describes new features, known issues, and resolved issues for {the-lc} {project-full} {project-version}. + + + + Red Hat Modernization and Migration + Documentation Team + ccs-mms-docs@redhat.com + + + diff --git a/documentation/doc-Release_notes/master/index.html b/documentation/doc-Release_notes/master/index.html new file mode 100644 index 00000000000..3c49379fac9 --- /dev/null +++ b/documentation/doc-Release_notes/master/index.html @@ -0,0 +1,1018 @@ + + + + + + + + Release notes | Forklift Documentation + + + + + + + + + + + + + +Release notes | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Release notes

+ +
+

Forklift 2.5

+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt (oVirt)

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+

Technical changes

+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in the ForkliftController custom resource.

+
+
+
+

New features and enhancements

+
+

This release has the following features and improvements:

+
+
+
Migration using OVA files created by VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+ + +
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+ + +
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a Red Hat KubeVirt provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from RHV
+

During the migration from RHV, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not work yet for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.

+
+
+
+

Known issues

+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)
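
A minimal CLI sketch of archiving a plan before deleting it, assuming the Plan CR exposes a spec.archived flag (the plan name and namespace are placeholders):

$ kubectl patch plan/<migration_plan> -n <namespace> \
    -p '{"spec": {"archived": true}}' --type=merge
$ kubectl delete plan/<migration_plan> -n <namespace>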

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue applies to vSphere migrations only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while a snapshot operation is performed on the source VM
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case. See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed may instantly appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the OVA server pod creation has failed. (MTV-671)
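
As a rough diagnostic sketch (the namespace and pod name are placeholders; the exact OVA server pod naming may differ), you can check whether the OVA server pod failed to start:

$ kubectl get pods -n <forklift_namespace> | grep ova
$ kubectl describe pod <ova_server_pod> -n <forklift_namespace>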

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+

Resolved issues

+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
CVE-2023-26144 mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV 2.5 versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In previous releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during migration are removed after a successful migration, and the original snapshots are not removed after a successful migration. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In previous releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered, but it is not performed at that time because the VM is locked. Once the precopy operation completes, the cutover operation is triggered. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In previous releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

+
+
+

This issue is resolved in Forklift 2.5: warm migration does not fail when an operation that locks the VM is performed in oVirt. Instead, the migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In previous releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when deleting a migrated VM. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In previous releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when archiving and deleting a migration plan. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

In previous releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: migrated VMs with multiple disks are able to boot on the target OKD cluster. (MTV-433)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+

Upgrade notes

+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
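
A hedged sketch of this workaround from the CLI (names and namespace are placeholders; the saved manifest must have server-generated fields removed before recreating):

$ kubectl get forkliftcontroller/<forklift-controller> -n <namespace> -o yaml > forklift-controller.yaml
$ kubectl delete forkliftcontroller/<forklift-controller> -n <namespace>
# Edit forklift-controller.yaml to remove resourceVersion, uid, and status before recreating.
$ kubectl apply -f forklift-controller.yaml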

+
+
+
+
+
+

Forklift 2.4

+
+
+

Migrate virtual machines (VMs) from VMware vSphere, oVirt, or OpenStack to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+

Technical changes

+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of the Containerized Data Importer (CDI) when migrating from RHV to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.
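
As a minimal sketch, assuming the CDI StorageProfile schema with spec.claimPropertySets (the storage class name and the values are placeholders to adapt to your storage):

$ kubectl patch storageprofile <storage_class> --type=merge \
    -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'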

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+

New features and enhancements

+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with OpenStack as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)
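
A minimal sketch of re-enabling the old UI, assuming feature_ui is set as a string value in the ForkliftController spec (the name and namespace are placeholders):

$ kubectl patch forkliftcontroller/<forklift-controller> -n <namespace> \
    -p '{"spec": {"feature_ui": "true"}}' --type=merge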

+
+
+
Skip certificate validation option
+

A skip certificate validation option was added to VMware and oVirt providers. If selected, the provider’s certificate is not validated and the UI does not require a CA certificate to be specified.
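
A hedged sketch of what the equivalent provider Secret might look like, assuming the secret accepts an insecureSkipVerify key (all values are placeholders):

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
type: Opaque
stringData:
  user: <user>
  password: <password>
  insecureSkipVerify: "true"
  url: <api_end_point>
EOF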

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider whose Manager is set with a third-party certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+

Known issues

+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue applies to vSphere migrations only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not delete snapshots that are created during the migration for source virtual machines in OpenStack automatically. Workaround: the snapshots can be removed manually on OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Snapshots can be removed from oVirt instead. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case. See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field spec.selector of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+

Resolved issues

+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration was improved to conform to RFC 1123. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. If the oVirt Manager is accessible, an error is returned when the provider is added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after the first failure caused by the incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+
+
+

Forklift 2.3

+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+

Technical changes

+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure of adding a VMware provider.
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
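
A minimal sketch of such a Provider CR, assuming the settings.vddkInitImage field (all values are placeholders):

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <vsphere_provider>
  namespace: <namespace>
spec:
  type: vsphere
  url: <api_end_point>
  settings:
    vddkInitImage: <registry_path>/vddk:<tag>
  secret:
    name: <secret>
    namespace: <namespace>
EOF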

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.

+
+
+
+

New features and enhancements

+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+

Known issues

+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting migration plan does not remove temporary resources.
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete cache in the browser or restart the browser. (BZ#2143191)

+
+
+
+
+
+

Forklift 2.2

+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+

Technical changes

+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
+
+
+

New features and enhancements

+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run it as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+

Known issues

+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
Table 1. Issues that do not block migration

Assessment: The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).
Result: The migrated VM will have a virtio disk if the source interface is not recognized.

Assessment: The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).
Result: The migrated VM will have a virtio NIC if the source interface is not recognized.

Assessment: The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

Assessment: One or more of the VM’s disks has an illegal or locked status condition.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.
Result: The migration will proceed but the disk transfer is likely to fail.

Assessment: The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have USB devices.

Assessment: The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.
Result: The migrated VM will not have a watchdog device.

Assessment: The VM’s status is not up or down.
Result: The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting migration plan does not remove temporary resources.
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console.
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the Convert image to kubevirt step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+
+
+

Forklift 2.1

+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+

Technical changes

+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+

New features and enhancements

+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+

Known issues

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads → Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+
+
+

Forklift 2.0

+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+

New features and enhancements

+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+ + +
+

The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+

Known issues

+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
    +  -n openshift-cnv \
    +  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html b/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..36b02f3444d --- /dev/null +++ b/documentation/doc-Release_notes/modules/about-cold-warm-migration/index.html @@ -0,0 +1,159 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    OpenStack

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+ + + + + +
+
Note
+
+
+

Migration using OpenStack source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+
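
One possible way to enable CBT on a source VM, sketched here with the govc CLI (govc is not referenced elsewhere in this document; the VM name and disk key are placeholders, and the VM should be powered off):

# Enable CBT at the VM level.
$ govc vm.change -vm <vm_name> -e ctkEnabled=true
# Enable CBT for an individual disk, for example the first SCSI disk.
$ govc vm.change -vm <vm_name> -e scsi0:0.ctkEnabled=true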

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.
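
A minimal sketch of scheduling a cutover in the Migration manifest, assuming spec.cutover takes an ISO 8601 timestamp (the names and the timestamp are placeholders):

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  plan:
    name: <plan>
    namespace: <namespace>
  cutover: "2024-04-04T01:23:00.000+00:00"
EOF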

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/about-rego-files/index.html b/documentation/doc-Release_notes/modules/about-rego-files/index.html new file mode 100644 index 00000000000..371ee5aa7cf --- /dev/null +++ b/documentation/doc-Release_notes/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.
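
For illustration, a hypothetical excerpt of a VM inventory record after a rule fires (the values mirror the drs_enabled.rego example below):

concerns:
- category: Information
  label: VM running in a DRS-enabled cluster
  assessment: Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment.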

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html b/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..6ade6be05fb --- /dev/null +++ b/documentation/doc-Release_notes/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html b/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..e45a54be3ed --- /dev/null +++ b/documentation/doc-Release_notes/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html b/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..e9192fcc7c7 --- /dev/null +++ b/documentation/doc-Release_notes/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/adding-hooks/index.html b/documentation/doc-Release_notes/modules/adding-hooks/index.html new file mode 100644 index 00000000000..9145116dd31 --- /dev/null +++ b/documentation/doc-Release_notes/modules/adding-hooks/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding hooks

+
+

Hooks are custom code that you can run at certain stages of the migration. You can define a hook by using an Ansible playbook or a custom hook container.

+
+
+

You can create a hook before a migration plan or while creating a migration plan.
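
A hedged sketch of a Hook CR, assuming the spec takes a hook-runner image and a Base64-encoded Ansible playbook (all values are placeholders):

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: <base64_encoded_playbook>
EOF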

+
+
+
Prerequisites
+
    +
  • +

    You must create an Ansible playbook or a custom hook container.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the web console, click Hooks.

    +
  2. +
  3. +

    Click Create hook.

    +
  4. +
  5. +

    Specify the hook Name.

    +
  6. +
  7. +

    Select Ansible playbook or Custom container image as the Hook definition.

    +
  8. +
  9. +

    If you select Custom container image, specify the image location, for example, quay.io/github_project/container_name:container_id.

    +
  10. +
  11. +

    Select a migration step and click Add.

    +
    +

    The new migration hook appears in the Hooks list.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/adding-source-provider/index.html b/documentation/doc-Release_notes/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..6a0cd50db66 --- /dev/null +++ b/documentation/doc-Release_notes/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/adding-virt-provider/index.html b/documentation/doc-Release_notes/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..47a7ae62b9b --- /dev/null +++ b/documentation/doc-Release_notes/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

By using the OKD web console, you can add a KubeVirt destination provider in addition to the default KubeVirt destination provider, which is the cluster where Forklift is installed.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used. A CLI sketch of the equivalent resources follows this procedure.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
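
As referenced in the procedure above, a rough CLI sketch of the equivalent resources, assuming a Provider CR of type openshift backed by a Secret that holds the service account token (all values are placeholders):

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret>
  namespace: <namespace>
type: Opaque
stringData:
  token: <service_account_token>
  url: <kubernetes_api_server_url>
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: openshift
  url: <kubernetes_api_server_url>
  secret:
    name: <secret>
    namespace: <namespace>
EOF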
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html b/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..01a2aae2708 --- /dev/null +++ b/documentation/doc-Release_notes/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
    2. +
    +
    +
    +

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html b/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..ab5808a2daa --- /dev/null +++ b/documentation/doc-Release_notes/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html b/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..96df19ddef4 --- /dev/null +++ b/documentation/doc-Release_notes/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
    2. +
    +
    +
    +

    You do not need to restart the forklift-controller pod.

    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html b/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..8f13257c8af --- /dev/null +++ b/documentation/doc-Release_notes/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
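An archive like the example above is produced by a targeted must-gather run. The following is a sketch that assumes the upstream forklift-must-gather image and a plan named mig-plan-cold; substitute your own image tag and filter (PLAN=, VM=, or NS=) as needed:

$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest -- PLAN=mig-plan-cold /usr/bin/targeted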
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/common-attributes/index.html b/documentation/doc-Release_notes/modules/common-attributes/index.html new file mode 100644 index 00000000000..9c89a393f9f --- /dev/null +++ b/documentation/doc-Release_notes/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html b/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..f74c3a5e3e5 --- /dev/null +++ b/documentation/doc-Release_notes/modules/compatibility-guidelines/index.html @@ -0,0 +1,125 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.5.1

4.12 or later

4.12 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift 2.5 was tested only with oVirt (RHV) 4.4 SP1. +Migration from oVirt 4.3 has not been tested with Forklift 2.5.

+
+
+

Because oVirt 4.3 lacks the improvements that were introduced in oVirt 4.4 for Forklift, and because new features were not tested with oVirt 4.3, migrations from oVirt 4.3 may not function at the same level as migrations from oVirt 4.4, and some functionality may be missing.

+
+
+

Therefore, we recommend upgrading oVirt to the supported version listed above before migrating to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments that use Forklift 2.3. In this case, we advise upgrading oVirt Manager (RHVM) to the supported version mentioned above before migrating to KubeVirt.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/creating-migration-plan/index.html b/documentation/doc-Release_notes/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..263a3d3154c --- /dev/null +++ b/documentation/doc-Release_notes/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines that you want to migrate together or that share the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationPlans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the {kebab} of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/creating-network-mapping/index.html b/documentation/doc-Release_notes/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..93ead0816e3 --- /dev/null +++ b/documentation/doc-Release_notes/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition; see the example definition after this list.

    +
  • +
+
+
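A network attachment definition is a standard Multus object in the target namespace. The following is a minimal sketch of a bridge-type definition; the metadata values and the br1 bridge interface are assumptions to adapt to your cluster:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: <network_attachment_definition>
  namespace: <namespace>
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "br1",
    "type": "bridge",
    "bridge": "br1"
  }'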
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationNetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/creating-storage-mapping/index.html b/documentation/doc-Release_notes/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..5b297df084a --- /dev/null +++ b/documentation/doc-Release_notes/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storage to KubeVirt storage classes.

+
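Before mapping, it can be helpful to list the storage classes that are available on the target cluster, for example:

$ kubectl get storageclass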
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that supports VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click MigrationStorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/creating-validation-rule/index.html b/documentation/doc-Release_notes/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..a32e25f2bed --- /dev/null +++ b/documentation/doc-Release_notes/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
`count(input.numaNodeAffinity) != 0`
+
+
+
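If you have the Open Policy Agent CLI (opa) installed, you can sanity-check a query against sample inventory data before loading it into the config map. This is an optional local test, assuming input.json contains a VM record with the numaNodeAffinity attribute shown above and rule.rego contains your rule package:

$ opa eval --input input.json --data rule.rego 'data.io.konveyor.forklift.vmware.concerns'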
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/creating-vddk-image/index.html b/documentation/doc-Release_notes/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..2e609e962cd --- /dev/null +++ b/documentation/doc-Release_notes/modules/creating-vddk-image/index.html @@ -0,0 +1,177 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

+
+
+

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push it to your image registry. You need the VDDK init image path to add a VMware source provider.

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    To migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

    +
    +
    +
    +
  6. +
  7. +

    Save the VDDK archive file in the temporary directory.

    +
  8. +
  9. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  10. +
  11. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  12. +
  13. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  14. +
  15. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  16. +
  17. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  18. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/error-messages/index.html b/documentation/doc-Release_notes/modules/error-messages/index.html new file mode 100644 index 00000000000..370d7541b72 --- /dev/null +++ b/documentation/doc-Release_notes/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to be more than 10%.
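The overhead is configured on the CDI custom resource. The following sketch raises the global overhead to 12%, assuming the cluster-scoped CDI CR is named cdi; the overhead can also be set per storage class under spec.config.filesystemOverhead.storageClass:

$ kubectl patch cdi cdi --type=merge -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.12"}}}}'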

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.png differ diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.svg b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files 
/dev/null and b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.png differ diff --git a/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.svg b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/documentation/doc-Release_notes/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/doc-Release_notes/modules/images/kebab.png b/documentation/doc-Release_notes/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/kebab.png differ diff --git a/documentation/doc-Release_notes/modules/images/mtv-ui.png b/documentation/doc-Release_notes/modules/images/mtv-ui.png new file mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/documentation/doc-Release_notes/modules/images/mtv-ui.png differ diff --git a/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..58a2c71374c --- /dev/null +++ b/documentation/doc-Release_notes/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/installing-mtv-operator/index.html b/documentation/doc-Release_notes/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..60b1f72f2e3 --- /dev/null +++ b/documentation/doc-Release_notes/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OKD migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/issue_templates/issue.md b/documentation/doc-Release_notes/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/documentation/doc-Release_notes/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/documentation/doc-Release_notes/modules/issue_templates/issue/index.html b/documentation/doc-Release_notes/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..feb5f253958 --- /dev/null +++ b/documentation/doc-Release_notes/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/index.html b/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..fe5612f1ed1 --- /dev/null +++ b/documentation/doc-Release_notes/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/migrating-virtual-machines-cli/index.html b/documentation/doc-Release_notes/modules/migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..09021cac8db --- /dev/null +++ b/documentation/doc-Release_notes/modules/migrating-virtual-machines-cli/index.html @@ -0,0 +1,549 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migrating virtual machines

+
+

You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs).

+
+
+ + + + + +
+
Important
+
+
+

You must specify a name for cluster-scoped CRs.

+
+
+

You must specify both a name and a namespace for namespace-scoped CRs.

+
+
+
+
+


+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers only supports VMs that use only Cinder volumes.

+
+
+
+
+
Prerequisites
+
    +
  • +

    VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

    +
  • +
  • +

    oVirt (oVirt) only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

    +
  • +
+
+
+


+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: <provider_type> (2)
    +    createdForResourceType: providers
    +type: Opaque
    +stringData: (3)
    +  user: <user> (4)
    +  password: <password> (5)
    +  insecureSkipVerify: <true/false> (6)
    +  domainName: <domain_name> (7)
    +  projectName: <project_name> (8)
    +  regionName: <region name> (9)
    +  cacert: | (10)
    +    <ca_certificate>
    +  url: <api_end_point> (11)
    +  thumbprint: <vcenter_fingerprint> (12)
    +EOF
    +
    +
    +
    +
      +
    1. +

      The ownerReferences section is optional.

      +
    2. +
    3. +

      Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, and ova. This label is needed to verify that the credentials are correct when the remote system is accessible and, for oVirt, to retrieve the Engine CA certificate when a third-party certificate is specified.

      +
    4. +
    5. +

      The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.

      +
    6. +
    7. +

      Specify the vCenter user, the oVirt Engine user, or the {osp} user.

      +
    8. +
    9. +

      Specify the user password.

      +
    10. +
    11. +

      Specify <true> to skip certificate verification; the migration proceeds over an insecure connection and the CA certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specify <false> to verify the certificate.

      +
    12. +
    13. +

      {osp} only: Specify the domain name.

      +
    14. +
    15. +

      {osp} only: Specify the project name.

      +
    16. +
    17. +

      {osp} only: Specify the name of the {osp} region.

      +
    18. +
    19. +

      oVirt and {osp} only: For oVirt, enter the Engine CA certificate unless it was replaced by a third-party certificate, in which case enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For {osp}, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.

      +
    20. +
    21. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    22. +
    23. +

      VMware only: Specify the vCenter SHA-1 fingerprint; see the example command after this list for one way to obtain it.

      +
    24. +
    +
    +
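      One common way to obtain the vCenter SHA-1 fingerprint, assuming openssl is available and <vCenter_host> is reachable on port 443:

      $ openssl s_client -connect <vCenter_host>:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -sha1 -noout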
    + + + + + +
    +
    Note
    +
    +
    +

    The stringData section for an OVA Secret manifest is as follows:

    +
    +
    +
    +
    stringData:
    +  url: <nfs_server:/nfs_path>
    +
    +
    +
    +

    where:
    +nfs_server: An IP address or hostname of the server where the share was created.
    +nfs_path: The path on the server where the OVA files are stored.

    +
    +
    +
    +
  2. +
  3. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  settings:
    +    vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (3)
    +  secret:
    +    name: <secret> (4)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      VMware only: Specify the VDDK image that you created.

      +
    6. +
    7. +

      Specify the name of provider Secret CR.

      +
    8. +
    +
    +
  4. +
  5. +

    VMware only: Create a Host manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Host
    +metadata:
    +  name: <vmware_host>
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    namespace: <namespace>
    +    name: <source_provider> (1)
    +  id: <source_host_mor> (2)
    +  ipAddress: <source_network_ip> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the VMware Provider CR.

      +
    2. +
    3. +

      Specify the managed object reference (MOR) of the VMware host.

      +
    4. +
    5. +

      Specify the IP address of the VMware migration network.

      +
    6. +
    +
    +
  6. +
  7. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id> (3)
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (4)
    +        namespace: <network_attachment_definition_namespace> (5)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are pod and multus.

      +
    2. +
    3. +

      You can use either the id or the name parameter to specify the source network.

      +
    4. +
    5. +

      Specify the VMware network MOR, the oVirt network UUID, or the {osp} network UUID.

      +
    6. +
    7. +

      Specify a network attachment definition for each additional KubeVirt network.

      +
    8. +
    9. +

      Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

      +
    10. +
    +
    +
  8. +
  9. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_datastore> (2)
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode>
    +      source:
    +        id: <source_datastore>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ReadWriteOnce and ReadWriteMany.

      +
    2. +
    3. +

      Specify the VMware data storage MOR, the oVirt storage domain UUID, or the {osp} volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.

      +
    4. +
    +
    +
  10. +
  11. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner (1)
    +  playbook: | (2)
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

      +
    2. +
    3. +

      Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

      +
    4. +
    +
    +
  12. +
  13. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  warm: true (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (3)
    +    network: (4)
    +      name: <network_map> (5)
    +      namespace: <namespace>
    +    storage: (6)
    +      name: <storage_map> (7)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (8)
    +    - id: <source_vm> (9)
    +    - name: <source_vm>
    +      namespace: <namespace> (10)
    +      hooks: (11)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (12)
    +          step: <step> (13)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Plan CR.

      +
    2. +
    3. +

      Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.

      +
    4. +
    5. +

      Specify only one network map and one storage map per plan.

      +
    6. +
    7. +

      Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.

      +
    8. +
    9. +

      Specify the name of the NetworkMap CR.

      +
    10. +
    11. +

      Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.

      +
    12. +
    13. +

      Specify the name of the StorageMap CR.

      +
    14. +
    15. +

      For all source providers except for KubeVirt, you can use either the id or the name parameter to specify the source VMs.
      +KubeVirt source provider only: You can use only the name parameter, not the id parameter, to specify the source VMs.

      +
    16. +
    17. +

      Specify the VMware VM MOR, oVirt VM UUID or the {osp} VM UUID.

      +
    18. +
    19. +

      KubeVirt source provider only.

      +
    20. +
    21. +

      Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.

      +
    22. +
    23. +

      Specify the name of the Hook CR.

      +
    24. +
    25. +

      Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.

      +
    26. +
    +
    +
  14. +
  15. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration> (1)
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <plan> (2)
    +    namespace: <namespace>
    +  cutover: <cutover_time> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    3. +

      Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.

      +
    4. +
    5. +

      Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

      +
    6. +
    +
    +
    +

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

    +
    +
  16. +
  17. +

    Retrieve the Migration CR to monitor the progress of the migration:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
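    To follow the progress continuously instead of polling, you can add the watch flag; the status block is printed again whenever a VM advances through its migration phases:

    $ kubectl get migration/<migration> -n <namespace> -o yaml -w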
    +
  18. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/migration-plan-options-ui/index.html b/documentation/doc-Release_notes/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..c7402a508eb --- /dev/null +++ b/documentation/doc-Release_notes/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the {kebab} beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes (BZ#2018974). You must archive a migration plan before deleting it in order to clean up these temporary resources.

    +
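    You can also archive a plan from the command line by setting the archived flag on the Plan CR; this sketch assumes your own plan name and namespace:

    $ kubectl patch plan/<plan> -n <namespace> --type=merge -p '{"spec": {"archived": true}}'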
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-overview-page/index.html b/documentation/doc-Release_notes/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..a9cf037d463 --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-overview-page/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking MigrationOverview in the OKD web console.

+
+
+

The Overview page displays the following information:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Operator: The namespace on which the Forklift Operator is deployed and the status of the Operator.

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html b/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..7c16fe1917d --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-settings/index.html b/documentation/doc-Release_notes/modules/mtv-settings/index.html new file mode 100644 index 00000000000..a4daf77498d --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have administrator privileges, you can access the Overview page and change the following settings:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500m

Controller main container Memory limit

The memory limit allocated to the main controller container

800Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a new value for the setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
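
Most of these settings can also be changed from the CLI by patching the ForkliftController CR. A sketch, assuming the default CR name forklift-controller, the konveyor-forklift namespace, and the controller_max_vm_inflight field for the maximum number of concurrent migrations (verify the field name for your version):

+
+
$ oc patch forkliftcontroller forklift-controller \
+  -n konveyor-forklift --type merge \
+  -p '{"spec": {"controller_max_vm_inflight": 30}}'
+
+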
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-ui/index.html b/documentation/doc-Release_notes/modules/mtv-ui/index.html new file mode 100644 index 00000000000..f31f68f1e05 --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration. If you are an administrator, you can also choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

On pages related to components, you can click the Projects list in the upper-left portion of the page to see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/mtv-workflow/index.html b/documentation/doc-Release_notes/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..7b0e31b860b --- /dev/null +++ b/documentation/doc-Release_notes/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources (a Plan CR sketch follows this procedure):

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
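
The following Plan CR sketch illustrates the resources listed in step 2. The field layout is an assumption to verify against your Forklift version, and all names are placeholders:

+
+
cat << EOF | oc apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Plan
+metadata:
+  name: <plan_name>
+  namespace: <namespace>
+spec:
+  provider:
+    source:
+      name: <source_provider>
+      namespace: <namespace>
+    destination:
+      name: <destination_provider>
+      namespace: <namespace>
+  map:
+    network:
+      name: <network_map>
+      namespace: <namespace>
+    storage:
+      name: <storage_map>
+      namespace: <namespace>
+  targetNamespace: <target_namespace>
+  vms:
+    - id: <source_vm_id>
+EOF
+
+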
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/network-prerequisites/index.html b/documentation/doc-Release_notes/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..a574385a6a4 --- /dev/null +++ b/documentation/doc-Release_notes/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network. A sketch follows this list.

    +
  • +
+
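
A minimal network attachment definition sketch for an additional destination network. It uses the standard k8s.cni.cncf.io/v1 API with a Linux bridge backend as an example; the bridge name, network name, and namespace are placeholders:

+
+
cat << EOF | oc apply -f -
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: <destination_network>
+  namespace: <target_namespace>
+spec:
+  config: |
+    {
+      "cniVersion": "0.3.1",
+      "name": "<destination_network>",
+      "type": "bridge",
+      "bridge": "br1"
+    }
+EOF
+
+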
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
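
As a quick preflight check, you can probe a required port from a machine on the node network, for example with netcat; the host names below are examples:

+
+
$ nc -zv vcenter.example.com 443
+$ nc -zv esxi01.example.com 902
+
+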
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/index.html b/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..ed05e0a0040 --- /dev/null +++ b/documentation/doc-Release_notes/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,187 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
RoleDescription

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but cannot create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify individual migration plans (all parts of the edit permissions)

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).

+
+
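
For example, to grant a user edit access to migration plans in a single namespace, you can bind one of these cluster roles with a namespaced role binding; the user and namespace are placeholders:

+
+
$ oc create rolebinding forklift-plans-edit \
+  --clusterrole=plans.forklift.konveyor.io-v1beta1-edit \
+  --user=<user> -n <namespace>
+
+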
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    However, they cannot create providers or change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
ActionsAPI groupResource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators must also have the create permissions that are part of the edit roles for network maps and storage maps, even when they use a template for a network map or a storage map.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/obtaining-console-url/index.html b/documentation/doc-Release_notes/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..5809fbb9084 --- /dev/null +++ b/documentation/doc-Release_notes/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

    If you are using the OKD web console, follow these steps:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_web.adoc[]

+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_cli.adoc[]

+
+
+
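
Because the snippet include above did not resolve in this build, here is a rough sketch, assuming the web console is exposed through a route named virt in the konveyor-forklift namespace (verify both names on your cluster):

+
+
$ oc get route virt -n konveyor-forklift \
+  -o jsonpath='{.spec.host}'
+
+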

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/obtaining-vmware-fingerprint/index.html b/documentation/doc-Release_notes/modules/obtaining-vmware-fingerprint/index.html new file mode 100644 index 00000000000..31bb36e1b4d --- /dev/null +++ b/documentation/doc-Release_notes/modules/obtaining-vmware-fingerprint/index.html @@ -0,0 +1,99 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Obtaining the SHA-1 fingerprint of a vCenter host

+
+

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

+
+
+
Procedure
+
    +
  • +

    Run the following command:

    +
    +
    +
    $ openssl s_client \
    +    -connect <vcenter_host>:443 \ (1)
    +    < /dev/null 2>/dev/null \
    +    | openssl x509 -fingerprint -noout -in /dev/stdin \
    +    | cut -d '=' -f 2
    +
    +
    +
    +
      +
    1. +

      Specify the IP address or FQDN of the vCenter host.

      +
    2. +
    +
    +
    +
    Example output
    +
    +
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    +
    +
    +
  • +
+
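
The fingerprint is then supplied as the thumbprint value of the provider Secret. A minimal sketch that reuses the example output above; the field names are assumptions to verify against the vSphere provider secrets in your Forklift version:

+
+
cat << EOF | oc apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: vsphere-secret
+  namespace: openshift-mtv
+  labels:
+    createdForProviderType: vsphere
+type: Opaque
+stringData:
+  user: <vcenter_user>
+  password: <vcenter_password>
+  thumbprint: 01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
+  url: https://<vcenter_host>/sdk
+EOF
+
+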
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/openstack-prerequisites/index.html b/documentation/doc-Release_notes/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..03236009cd2 --- /dev/null +++ b/documentation/doc-Release_notes/modules/openstack-prerequisites/index.html @@ -0,0 +1,90 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/osh-adding-source-provider/index.html b/documentation/doc-Release_notes/modules/osh-adding-source-provider/index.html new file mode 100644 index 00000000000..334b535eec1 --- /dev/null +++ b/documentation/doc-Release_notes/modules/osh-adding-source-provider/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding an {osp} source provider

+
+

You can add an {osp} source provider by using the OKD web console.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select Red Hat OpenStack Platform from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Name to display in the list of providers

      +
    • +
    • +

      {osp} Identity server URL: {osp} Identity (Keystone) endpoint, for example, http://controller:5000/v3

      +
    • +
    • +

      {osp} username: For example, admin

      +
    • +
    • +

      {osp} password:

      +
    • +
    • +

      Domain:

      +
    • +
    • +

      Project:

      +
    • +
    • +

      Region:

      +
    • +
    +
    +
  8. +
  9. +

    To allow a migration without validating the provider’s CA certificate, select the Skip certificate validation check box. By default, the check box is cleared, meaning that the certificate is validated.

    +
  10. +
  11. +

    If you did not select Skip certificate validation, the CA certificate field is visible. Drag the CA certificate used to connect to the source environment into the text box, or browse for it and click Select. If you selected the check box, the CA certificate text box is not visible.

    +
  12. +
  13. +

    Click Create to add and save the provider.

    +
    +

    The source provider appears in the list of providers.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/ostack-app-cred-auth/index.html b/documentation/doc-Release_notes/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..4ebf471dba0 --- /dev/null +++ b/documentation/doc-Release_notes/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/ostack-token-auth/index.html b/documentation/doc-Release_notes/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..420700dcdf0 --- /dev/null +++ b/documentation/doc-Release_notes/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/ova-prerequisites/index.html b/documentation/doc-Release_notes/modules/ova-prerequisites/index.html new file mode 100644 index 00000000000..552daf7c115 --- /dev/null +++ b/documentation/doc-Release_notes/modules/ova-prerequisites/index.html @@ -0,0 +1,130 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files must be created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, Forklift supports only OVA files created by VMware vSphere; migration of other OVA files is unsupported.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But, /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But, the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
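
To make the scanning rules concrete, the following example layout (all names are illustrative) marks what Forklift scans in each structure:

+
+
/nfs/vm1.ova                        # scanned: compressed, root folder
+/nfs/dir1/vm2.ova                   # scanned: compressed, first level
+/nfs/dir1/dir2/vm3.ova              # not scanned: compressed, second level
+/nfs/vm4.ovf                        # scanned: extracted, root folder
+/nfs/dir1/dir2/vm5.ovf              # scanned: extracted, second level
+/nfs/dir1/dir2/dir3/vm6.ovf         # not scanned: extracted, third level
+
+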
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/retrieving-validation-service-json/index.html b/documentation/doc-Release_notes/modules/retrieving-validation-service-json/index.html new file mode 100644 index 00000000000..abeb0f6da2a --- /dev/null +++ b/documentation/doc-Release_notes/modules/retrieving-validation-service-json/index.html @@ -0,0 +1,483 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    $ oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, by using curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
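
To extract a single attribute from this output for use in a validation rule, for example input.snapshot.kind, you can filter the response with jq (assuming jq is installed):

+
+
$ curl -sk -H "Authorization: Bearer $TOKEN" \
+  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> \
+  | jq '.input.snapshot.kind'
+
+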
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rhv-prerequisites/index.html b/documentation/doc-Release_notes/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..87af267afab --- /dev/null +++ b/documentation/doc-Release_notes/modules/rhv-prerequisites/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+ +
+
+

Unresolved directive in rhv-prerequisites.adoc - include::snip-migrating-luns.adoc[]

+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.0/index.html b/documentation/doc-Release_notes/modules/rn-2.0/index.html new file mode 100644 index 00000000000..7a0f14a7374 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
    +  -n openshift-cnv \
    +  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.1/index.html b/documentation/doc-Release_notes/modules/rn-2.1/index.html new file mode 100644 index 00000000000..dbb65ef5e77 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) SDK image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
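
A sketch of adding the image by patching the HyperConverged CR. The spec.vddkInitImage field name and the image reference are assumptions to verify against your OpenShift Virtualization version:

+
+
$ oc patch hyperconverged kubevirt-hyperconverged \
+  -n openshift-cnv --type merge \
+  -p '{"spec": {"vddkInitImage": "<registry>/vddk:<tag>"}}'
+
+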
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
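
A minimal Hook CR sketch. The hook-runner image and field names are assumptions to verify for your release, and the playbook value must be base64-encoded:

+
+
cat << EOF | oc apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Hook
+metadata:
+  name: <hook_name>
+  namespace: <namespace>
+spec:
+  image: quay.io/konveyor/hook-runner
+  playbook: <base64_encoded_ansible_playbook>
+EOF
+
+

A hook is then referenced from the VM entries of a Plan CR; check the release documentation for the exact field.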
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
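
A sketch of a filtered collection that gathers data for a single migration plan. The image and the targeted entry point are examples to verify against the must-gather documentation for your release:

+
+
$ oc adm must-gather \
+  --image=quay.io/konveyor/forklift-must-gather:latest \
+  -- PLAN=<migration_plan> /usr/bin/targeted
+
+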
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM migration does not progress, and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads > Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.2/index.html b/documentation/doc-Release_notes/modules/rn-2.2/index.html new file mode 100644 index 00000000000..48ef0d30125 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, in order to edit the copy and run as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated but cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Issues that do not block migration
AssessmentResult

The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).

The migrated VM will have a virtio disk if the source interface is not recognized.

The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).

The migrated VM will have a virtio NIC if the source interface is not recognized.

The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.

The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

One or more of the VM’s disks has an illegal or locked status condition.

The migration will proceed but the disk transfer is likely to fail.

The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.

The migrated VM will not have USB devices.

The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.

The migrated VM will not have a watchdog device.

The VM’s status is not up or down.

The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up these temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VMs referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.3/index.html b/documentation/doc-Release_notes/modules/rn-2.3/index.html new file mode 100644 index 00000000000..c839b6c69f5 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.
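A sketch of where the path lives in the Provider CR; the metadata values and the image reference are placeholders:

apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider
  namespace: konveyor-forklift
spec:
  type: vsphere
  url: https://<vcenter_host>/sdk
  secret:
    name: <provider_secret>
    namespace: konveyor-forklift
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>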

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
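A minimal sketch of such an update, assuming an NFS storage class named nfs; the chosen modes are examples:

$ kubectl patch storageprofile nfs --type merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'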

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges has been established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM when a migration snapshot is scheduled, the migration fails instead of waiting for the user's snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete the browser cache or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.4/index.html b/documentation/doc-Release_notes/modules/rn-2.4/index.html new file mode 100644 index 00000000000..34d805b5e15 --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migrating guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of the Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. The new UI is introduced in version 2.4, and the old UI is disabled. You can re-enable the old UI by setting feature_ui: true in the ForkliftController CR, as in the sketch below. (MTV-427)
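A minimal sketch of that setting, assuming the default resource name and namespace; only the feature_ui flag named in the text is shown:

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: konveyor-forklift
spec:
  feature_ui: true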

+
+
+
Skip certificate validation option
+

A 'Skip certificate validation' option was added to VMware and oVirt providers. If selected, the provider's certificate is not validated, and the UI does not require a CA certificate to be specified.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider that is set with the Manager CA certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue affects vSphere only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration of source virtual machines in OpenStack. Workaround: Remove the snapshots manually in OpenStack.

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots manually in oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail: when you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM when a migration snapshot is scheduled, the migration fails instead of waiting for the user's snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When you migrate a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs, as in the sketch below. (MTV-492)
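A sketch of the cleanup with placeholder names:

$ kubectl delete pod -n <namespace> <cdi_importer_pod>
$ kubectl delete pvc -n <namespace> <pvc_name>
$ kubectl delete pv <pv_name>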

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A migrated VM with multiple disks might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration was improved to conform to RFC 1123. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after a failure caused by incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/rn-2.5/index.html b/documentation/doc-Release_notes/modules/rn-2.5/index.html new file mode 100644 index 00000000000..0f61133a97a --- /dev/null +++ b/documentation/doc-Release_notes/modules/rn-2.5/index.html @@ -0,0 +1,325 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration using OVA files created by VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature.

+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can use a KubeVirt provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from oVirt
+

During the migration from oVirt, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and run a new migration plan with the same name, or if you delete a migrated VM and remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue affects vSphere only: migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while a snapshot operation is performed on the source VM
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM when a migration snapshot is scheduled, the migration fails instead of waiting for the user's snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When you migrate a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed might appear instantly, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, the OVA server pod creation has failed. (MTV-671)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework: the filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw could allow a remote attacker to bypass security restrictions caused by improper input validation. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
CVE-2023-26144 mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the GraphQL package in versions from 16.3.0 and before 16.8.1. This flaw means MTV 2.5 versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In previous releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In previous releases of Forklift, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration is completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during the migration are removed after a successful migration, while the original snapshots are not removed. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In previous releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation can be triggered while the VM is locked, but it is not performed at that time. Once the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In previous releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

+
+
+

This issue is resolved in Forklift 2.5: warm migration does not fail when an operation that locks the VM is performed in oVirt. Instead, the migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In previous releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In previous releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

In previous releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: migrated VMs with multiple disks can boot on the target OKD cluster. (MTV-433)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
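A sketch of the workaround from the CLI, assuming the default resource name forklift-controller and the konveyor-forklift namespace:

$ kubectl get forkliftcontroller/forklift-controller -n konveyor-forklift -o yaml > fc.yaml
$ kubectl delete forkliftcontroller/forklift-controller -n konveyor-forklift
# Remove server-generated fields (status, resourceVersion, uid) from fc.yaml, then:
$ kubectl apply -f fc.yaml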

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/running-migration-plan/index.html b/documentation/doc-Release_notes/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..6d71cdd1892 --- /dev/null +++ b/documentation/doc-Release_notes/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

The Migration details by VM screen opens, displaying the migration's progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

Click a migration's Status, whether it failed, succeeded, or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..09c1c2dc3ff --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..59df7e003ac --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+
Prerequisites
+
    +
  • +

The migration network must have sufficient throughput (a minimum speed of 10 Gbps) for disk transfer.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network must have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/selecting-migration-network/index.html b/documentation/doc-Release_notes/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..f4e81a5777d --- /dev/null +++ b/documentation/doc-Release_notes/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

The network list displays only the networks that are accessible to all the selected hosts.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html b/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..8753c5fa698 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip-migrating-luns/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

LUNs are not removed from the source provider during the migration, in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not being used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.

    +
  • +
  • +

    Migration of Fibre Channel LUNs is not supported.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snip_permissions-info/index.html b/documentation/doc-Release_notes/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..a5fdb126c9b --- /dev/null +++ b/documentation/doc-Release_notes/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..9f1953b896a --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..8bfb5f0b0e3 --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

Click Networking → Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html b/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..2b48521a80b --- /dev/null +++ b/documentation/doc-Release_notes/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html b/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..c0fdceddc74 --- /dev/null +++ b/documentation/doc-Release_notes/modules/source-vm-prerequisites/index.html @@ -0,0 +1,121 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt.

    +
  • +
  • +

    VM names must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

    +
  • +
  • +

    VM names must not duplicate the name of a VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules. A minimal sketch of these rules appears after this list.

    +
    +
    +
    +
  • +
+
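A minimal shell sketch of the renaming rules above; it is illustrative only, as Forklift performs the renaming internally:

$ echo "My_VM.Name" | tr '[:upper:]' '[:lower:]' | tr '_' '-' | tr -cd 'a-z0-9-'
my-vmname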
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/storage-support/index.html b/documentation/doc-Release_notes/modules/storage-support/index.html new file mode 100644 index 00000000000..9bb07e135c7 --- /dev/null +++ b/documentation/doc-Release_notes/modules/storage-support/index.html @@ -0,0 +1,188 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail. A sketch of the change follows this note.

+
+
+
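A minimal sketch of raising the overhead globally in the CDI custom resource; the resource name cdi and the value 0.1 are assumptions for illustration:

$ kubectl patch cdi cdi --type merge \
  -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.1"}}}}'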
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
Provisioner | Volume mode | Access mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/technology-preview/index.html b/documentation/doc-Release_notes/modules/technology-preview/index.html new file mode 100644 index 00000000000..ee05b2d4293 --- /dev/null +++ b/documentation/doc-Release_notes/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html b/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..7203fd570ad --- /dev/null +++ b/documentation/doc-Release_notes/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI) by deleting the {namespace} project and the forklift.konveyor.io custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the project:

    +
    +
    +
    $ kubectl delete project konveyor-forklift
    +
    +
    +
  2. +
  3. +

    Delete the CRDs:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift' | xargs kubectl delete
    +
    +
    +
  4. +
  5. +

    Delete the OAuthClient:

    +
    +
    +
    $ kubectl delete oauthclient/forklift-ui
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html b/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..c6487e07b0f --- /dev/null +++ b/documentation/doc-Release_notes/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console to delete the {namespace} project and custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

Click Home → Projects.

    +
  2. +
  3. +

    Locate the konveyor-forklift project.

    +
  4. +
  5. +

    On the right side of the project, select Delete Project from the {kebab}.

    +
  6. +
  7. +

    In the Delete Project pane, enter the project name and click Delete.

    +
  8. +
  9. +

Click Administration → CustomResourceDefinitions.

    +
  10. +
  11. +

    Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

    +
  12. +
  13. +

    On the right side of each CRD, select Delete CustomResourceDefinition from the {kebab}.

    +
  14. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html b/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..80f71bd045c --- /dev/null +++ b/documentation/doc-Release_notes/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.
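A sketch of what such a file might contain; the provider segment of the package path and the version value are illustrative:

package io.konveyor.forklift.vmware

default rules_version = 5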

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
$ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html b/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..43586de90e1 --- /dev/null +++ b/documentation/doc-Release_notes/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Operators → Installed Operators → {operator-name-ui} → Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider.

    +
  10. +
  11. +

If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the accessModes and volumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/using-must-gather/index.html b/documentation/doc-Release_notes/modules/using-must-gather/index.html new file mode 100644 index 00000000000..95573622938 --- /dev/null +++ b/documentation/doc-Release_notes/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
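
Optional: Before attaching the archive to a case, you can list its contents to confirm that the expected logs and CR files were collected. This is a sketch, assuming the archive name shown above:

$ tar -tzf must-gather.tar.gz | head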
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html b/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..93bdf12cc18 --- /dev/null +++ b/documentation/doc-Release_notes/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running in the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+
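
You can follow these steps on a live migration by watching the resources as the controllers create them. This is a sketch, assuming the target namespace of the migration plan:

$ kubectl get dv,pvc,pods -n <target_namespace> --watch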

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a PersistentVolumeClaim CR for each source VM disk, as well as an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM was running in the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM was running in the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
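
If a migration fails during one of these steps, inspecting the pods involved is often the quickest way to find the cause, for example:

$ kubectl logs -f <conversion_pod> -n <target_namespace>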
+ + +
+ + diff --git a/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html b/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..1b7219c6d96 --- /dev/null +++ b/documentation/doc-Release_notes/modules/vmware-prerequisites/index.html @@ -0,0 +1,248 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    You must install VMware Tools on all source virtual machines (VMs).

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks (see the example commands after the notes below).

    +
  • +
  • +

    You must obtain the SHA-1 fingerprint of the vCenter host (see the example commands after the notes below).

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports conversion of the Btrfs file system when migrating VMs from VMware.

+
+
+
+
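
The following commands are sketches showing one way to satisfy two of these prerequisites; the host and VM names are placeholders. To obtain the SHA-1 fingerprint of the vCenter host:

$ openssl s_client -connect <vcenter_host>:443 </dev/null 2>/dev/null \
+    | openssl x509 -noout -fingerprint -sha1

To enable CBT on a VM and on its first disk with the govc CLI (assuming govc is installed and configured, and the VM is powered off):

$ govc vm.change -vm <vm_path> -e ctkEnabled=TRUE
+$ govc vm.change -vm <vm_path> -e scsi0:0.ctkEnabled=TRUE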

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+
Note
+
+
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

+ + +
+ + diff --git a/documentation/modules/about-cold-warm-migration/index.html b/documentation/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..570c1a2c540 --- /dev/null +++ b/documentation/modules/about-cold-warm-migration/index.html @@ -0,0 +1,159 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.

+
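
For example, a Migration manifest with a scheduled cutover might look like the following sketch; the names are placeholders and the cutover value is an ISO 8601 timestamp:

$ cat << EOF | kubectl apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Migration
+metadata:
+  name: <migration>
+  namespace: <namespace>
+spec:
+  plan:
+    name: <plan>
+    namespace: <namespace>
+  cutover: "2024-01-01T12:00:00Z"
+EOF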
+
+
+ + +
+ + diff --git a/documentation/modules/about-rego-files/index.html b/documentation/modules/about-rego-files/index.html new file mode 100644 index 00000000000..c44dfe6b827 --- /dev/null +++ b/documentation/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks whether distributed resource scheduling (DRS) is enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
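
When this rule matches a VM, the inventory record of that VM would contain an entry similar to the following sketch:

"concerns": [
+    {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+]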
+ + +
+ + diff --git a/documentation/modules/accessing-default-validation-rules/index.html b/documentation/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..4da29dd9f9b --- /dev/null +++ b/documentation/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl exec -it <validation_pod> -- /bin/bash
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
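
Each match shows the file and the default value that you must not redefine in a custom rule. The output resembles the following sketch; the actual file and rule names vary by release:

<rule_file>.rego:default <rule_name> = false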
+ + +
+ + diff --git a/documentation/modules/accessing-logs-cli/index.html b/documentation/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..2ae05fe47a7 --- /dev/null +++ b/documentation/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/accessing-logs-ui/index.html b/documentation/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..152c7a68696 --- /dev/null +++ b/documentation/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/adding-hooks/index.html b/documentation/modules/adding-hooks/index.html new file mode 100644 index 00000000000..fa0fcf52ac1 --- /dev/null +++ b/documentation/modules/adding-hooks/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding hooks

+
+

Hooks are custom code that you can run at certain stages of the migration. You can define a hook by using an Ansible playbook or a custom hook container.

+
+
+

You can create a hook before a migration plan or while creating a migration plan.

+
+
+
Prerequisites
+
    +
  • +

    You must create an Ansible playbook or a custom hook container (a minimal playbook example follows this list).

    +
  • +
+
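
A minimal Ansible playbook for a hook might look like the following sketch; the task is a placeholder for your own logic:

---
+- name: Example migration hook
+  hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Print a message
+      ansible.builtin.debug:
+        msg: "Running migration hook"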
+
+
Procedure
+
    +
  1. +

    In the web console, click Hooks.

    +
  2. +
  3. +

    Click Create hook.

    +
  4. +
  5. +

    Specify the hook Name.

    +
  6. +
  7. +

    Select Ansible playbook or Custom container image as the Hook definition.

    +
  8. +
  9. +

    If you select Custom container image, specify the image location, for example, quay.io/github_project/container_name:container_id.

    +
  10. +
  11. +

    Select a migration step and click Add.

    +
    +

    The new migration hook appears in the Hooks list.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/adding-source-provider/index.html b/documentation/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..95af66e39b7 --- /dev/null +++ b/documentation/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/adding-virt-provider/index.html b/documentation/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..c3c16f71a00 --- /dev/null +++ b/documentation/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token (see the example command after this procedure).

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
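
To obtain a value for the Service account token field, you can create a token for a service account that has cluster-admin permissions. This is a sketch, assuming Kubernetes 1.24 or later and an existing service account:

$ kubectl create token <service_account> -n <namespace>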
+ + +
+ + diff --git a/documentation/modules/canceling-migration-cli/index.html b/documentation/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..b6e25ae1b2d --- /dev/null +++ b/documentation/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
    2. +
    +
    +
    +

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/documentation/modules/canceling-migration-ui/index.html b/documentation/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..1f6034a313a --- /dev/null +++ b/documentation/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/documentation/modules/changing-precopy-intervals/index.html b/documentation/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..80791383a68 --- /dev/null +++ b/documentation/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
    2. +
    +
    +
    +

    You do not need to restart the forklift-controller pod.

    +
    +
  • +
+
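
You can verify the new value by reading it back from the CR, for example:

$ kubectl get forkliftcontroller/<forklift-controller> -n konveyor-forklift \
+    -o jsonpath='{.spec.controller_precopy_interval}'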
+ + +
+ + diff --git a/documentation/modules/collected-logs-cr-info/index.html b/documentation/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..5f392e15266 --- /dev/null +++ b/documentation/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) yaml files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
+ + diff --git a/documentation/modules/common-attributes/index.html b/documentation/modules/common-attributes/index.html new file mode 100644 index 00000000000..63411784d3b --- /dev/null +++ b/documentation/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/documentation/modules/compatibility-guidelines/index.html b/documentation/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..c11d0b8bd86 --- /dev/null +++ b/documentation/modules/compatibility-guidelines/index.html @@ -0,0 +1,125 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.5.1

4.12 or later

4.12 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift 2.5 was tested only with oVirt 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.3.

+
+
+

As oVirt 4.3 lacks the improvements that were introduced in oVirt 4.4 for Forklift, and new features were not tested with oVirt 4.3, migrations from oVirt 4.3 may not function at the same level as migrations from oVirt 4.4, and some functionality may be missing.

+
+
+

Therefore, it is recommended that you upgrade oVirt to the supported version listed above before migrating to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments that use Forklift 2.3. In this case, we advise upgrading oVirt Manager to the supported version mentioned previously before migrating to KubeVirt.

+
+
+
+ + +
+ + diff --git a/documentation/modules/creating-migration-plan/index.html b/documentation/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..9f61ef87ebf --- /dev/null +++ b/documentation/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines that should be migrated together or that share the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the {kebab} of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/documentation/modules/creating-network-mapping/index.html b/documentation/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..273de7f0d57 --- /dev/null +++ b/documentation/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition (a minimal example follows this list).

    +
  • +
+
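
A minimal network attachment definition might look like the following sketch; the bridge name and CNI configuration are placeholders that depend on your cluster network setup:

$ cat << EOF | kubectl apply -f -
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: <network_name>
+  namespace: <namespace>
+spec:
+  config: '{"cniVersion": "0.3.1", "name": "<network_name>", "type": "bridge", "bridge": "br1"}'
+EOF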
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → NetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
+ + +
+ + diff --git a/documentation/modules/creating-storage-mapping/index.html b/documentation/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..188cb4f4b1e --- /dev/null +++ b/documentation/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that support VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → StorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is {osp}, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
+ + +
+ + diff --git a/documentation/modules/creating-validation-rule/index.html b/documentation/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..b1105e63a43 --- /dev/null +++ b/documentation/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
`count(input.numaNodeAffinity) != 0`
+
+
+
+
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/documentation/modules/creating-vddk-image/index.html b/documentation/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..192fe08a353 --- /dev/null +++ b/documentation/modules/creating-vddk-image/index.html @@ -0,0 +1,177 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

+
+
+

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You need the VDDK init image path in order to add a VMware source provider.

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

    +
    +
    +
    +
  6. +
  7. +

    Save the VDDK archive file in the temporary directory.

    +
  8. +
  9. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  10. +
  11. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  12. +
  13. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  14. +
  15. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  16. +
  17. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  18. +
+
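
When you add the VMware source provider, you reference the pushed image. In a Provider CR this can be expressed as a setting; the following is a sketch, assuming the registry path used above:

spec:
+  settings:
+    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>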
+ + +
+ + diff --git a/documentation/modules/error-messages/index.html b/documentation/modules/error-messages/index.html new file mode 100644 index 00000000000..344310291f2 --- /dev/null +++ b/documentation/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to be more than 10%.

+
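
One way to increase the overhead is to set the filesystemOverhead value in the CDI custom resource. The following is a sketch, assuming the CDI CR is named cdi; the value is a fraction, so 0.15 corresponds to 15%:

$ kubectl patch cdi cdi --type=merge \
+    -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'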
+ + +
+ + diff --git a/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/documentation/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/documentation/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/documentation/modules/images/forklift-logo-darkbg.png b/documentation/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/documentation/modules/images/forklift-logo-darkbg.png differ diff --git a/documentation/modules/images/forklift-logo-darkbg.svg b/documentation/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/documentation/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/documentation/modules/images/forklift-logo-lightbg.png b/documentation/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/documentation/modules/images/forklift-logo-lightbg.png differ diff --git a/documentation/modules/images/forklift-logo-lightbg.svg b/documentation/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/documentation/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + 
diff --git a/documentation/modules/images/kebab.png b/documentation/modules/images/kebab.png
new file mode 100644
index 00000000000..81893bd4ad1
Binary files /dev/null and b/documentation/modules/images/kebab.png differ
diff --git a/documentation/modules/images/mtv-ui.png b/documentation/modules/images/mtv-ui.png
new file mode 100644
index 00000000000..009c9b46386
Binary files /dev/null and b/documentation/modules/images/mtv-ui.png differ
diff --git a/documentation/modules/increasing-nfc-memory-vmware-host/index.html b/documentation/modules/increasing-nfc-memory-vmware-host/index.html
new file mode 100644
index 00000000000..6589aa39318
--- /dev/null
+++ b/documentation/modules/increasing-nfc-memory-vmware-host/index.html
@@ -0,0 +1,103 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
  1. Log in to the ESXi host as root.

  2. Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

     ...
        <nfcsvc>
           <path>libnfcsvc.so</path>
           <enabled>true</enabled>
           <maxMemory>1000000000</maxMemory>
           <maxStreamMemory>10485760</maxStreamMemory>
        </nfcsvc>
     ...

  3. Restart hostd:

     # /etc/init.d/hostd restart

     You do not need to reboot the host. A quick check of the new value is sketched after this procedure.
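To confirm that the new value is in place, you can print the nfcsvc block back from the configuration file. This is a convenience sketch; it assumes the standard BusyBox grep available in the ESXi shell:

    # grep -A 4 '<nfcsvc>' /etc/vmware/hostd/config.xml

The output should show maxMemory set to 1000000000.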
+ + +
diff --git a/documentation/modules/installing-mtv-operator/index.html b/documentation/modules/installing-mtv-operator/index.html
new file mode 100644
index 00000000000..e130a4f3b9a
--- /dev/null
+++ b/documentation/modules/installing-mtv-operator/index.html
@@ -0,0 +1,79 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
diff --git a/documentation/modules/issue_templates/issue.md b/documentation/modules/issue_templates/issue.md
new file mode 100644
index 00000000000..30d52ab9cba
--- /dev/null
+++ b/documentation/modules/issue_templates/issue.md
@@ -0,0 +1,15 @@
+## Summary
+
+(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)
+
+## What is the problem?
+
+(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.)
+
+## What is the solution?
+
+(Correct text, link, or task.)
+
+## Notes
+
+(Do we need to fix something else?)
diff --git a/documentation/modules/issue_templates/issue/index.html b/documentation/modules/issue_templates/issue/index.html
new file mode 100644
index 00000000000..4ec54cc086a
--- /dev/null
+++ b/documentation/modules/issue_templates/issue/index.html
@@ -0,0 +1,79 @@
+(HTML page header: Summary | Forklift Documentation)
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
diff --git a/documentation/modules/making-open-source-more-inclusive/index.html b/documentation/modules/making-open-source-more-inclusive/index.html
new file mode 100644
index 00000000000..93dbbbf38bb
--- /dev/null
+++ b/documentation/modules/making-open-source-more-inclusive/index.html
@@ -0,0 +1,69 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
diff --git a/documentation/modules/migrating-virtual-machines-cli/index.html b/documentation/modules/migrating-virtual-machines-cli/index.html
new file mode 100644
index 00000000000..efed172710f
--- /dev/null
+++ b/documentation/modules/migrating-virtual-machines-cli/index.html
@@ -0,0 +1,549 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Migrating virtual machines

+
+

You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs).

+
+
+ + + + + +
+
Important
+
+
+

You must specify a name for cluster-scoped CRs.

+
+
+

You must specify both a name and a namespace for namespace-scoped CRs.

+
+
+
+
+

Unresolved directive in migrating-virtual-machines-cli.adoc - include::snippet_ova_tech_preview.adoc[]

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs whose volumes are all Cinder volumes.

+
+
+
+
+
Prerequisites
+
    +
  • +

    VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

    +
  • +
  • +

oVirt only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster that the VM is expected to run on can access the backend storage.

    +
  • +
+
+
+

Unresolved directive in migrating-virtual-machines-cli.adoc - include::snip-migrating-luns.adoc[]

+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: <provider_type> (2)
    +    createdForResourceType: providers
    +type: Opaque
    +stringData: (3)
    +  user: <user> (4)
    +  password: <password> (5)
    +  insecureSkipVerify: <true/false> (6)
    +  domainName: <domain_name> (7)
    +  projectName: <project_name> (8)
+  regionName: <region_name> (9)
    +  cacert: | (10)
    +    <ca_certificate>
    +  url: <api_end_point> (11)
    +  thumbprint: <vcenter_fingerprint> (12)
    +EOF
    +
    +
    +
    +
      +
    1. +

      The ownerReferences section is optional.

      +
    2. +
    3. +

      Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, and ova. This label is needed to verify the credentials are correct when the remote system is accessible and, for oVirt, to retrieve the Engine CA certificate when a third-party certificate is specified.

      +
    4. +
    5. +

      The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.

      +
    6. +
    7. +

      Specify the vCenter user, the oVirt Engine user, or the {osp} user.

      +
    8. +
    9. +

      Specify the user password.

      +
    10. +
    11. +

Specify <true> to skip certificate verification; the migration then proceeds over an insecure connection and the certificate is not required. An insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specify <false> to verify the certificate.

      +
    12. +
    13. +

      {osp} only: Specify the domain name.

      +
    14. +
    15. +

      {osp} only: Specify the project name.

      +
    16. +
    17. +

      {osp} only: Specify the name of the {osp} region.

      +
    18. +
    19. +

      oVirt and {osp} only: For oVirt, enter the Engine CA certificate unless it was replaced by a third-party certificate, in which case enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For {osp}, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.

      +
    20. +
    21. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    22. +
    23. +

      VMware only: Specify the vCenter SHA-1 fingerprint.

      +
    24. +
    +
    +
    + + + + + +
    +
    Note
    +
    +
    +

    The stringData section for an OVA Secret manifest is as follows:

    +
    +
    +
    +
    stringData:
    +  url: <nfs_server:/nfs_path>
    +
    +
    +
    +

    where:
    nfs_server: The IP address or hostname of the server where the share was created.
    nfs_path: The path on the server where the OVA files are stored.

    +
    +
    +
    +
  2. +
  3. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  settings:
    +    vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (3)
    +  secret:
    +    name: <secret> (4)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for {osp}.

      +
    4. +
    5. +

      VMware only: Specify the VDDK image that you created.

      +
    6. +
    7. +

      Specify the name of provider Secret CR.

      +
    8. +
    +
    +
  4. +
  5. +

    VMware only: Create a Host manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Host
    +metadata:
    +  name: <vmware_host>
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    namespace: <namespace>
    +    name: <source_provider> (1)
    +  id: <source_host_mor> (2)
    +  ipAddress: <source_network_ip> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the VMware Provider CR.

      +
    2. +
    3. +

      Specify the managed object reference (MOR) of the VMware host.

      +
    4. +
    5. +

      Specify the IP address of the VMware migration network.

      +
    6. +
    +
    +
  6. +
  7. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id> (3)
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (4)
    +        namespace: <network_attachment_definition_namespace> (5)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are pod and multus.

      +
    2. +
    3. +

      You can use either the id or the name parameter to specify the source network.

      +
    4. +
    5. +

      Specify the VMware network MOR, the oVirt network UUID, or the {osp} network UUID.

      +
    6. +
    7. +

      Specify a network attachment definition for each additional KubeVirt network.

      +
    8. +
    9. +

      Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

      +
    10. +
    +
    +
  8. +
  9. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_datastore> (2)
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode>
    +      source:
    +        id: <source_datastore>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ReadWriteOnce and ReadWriteMany.

      +
    2. +
    3. +

      Specify the VMware data storage MOR, the oVirt storage domain UUID, or the {osp} volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.

      +
    4. +
    +
    +
  10. +
  11. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner (1)
    +  playbook: | (2)
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

      +
    2. +
    3. +

      Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

      +
    4. +
    +
    +
  12. +
  13. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  warm: true (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (3)
    +    network: (4)
    +      name: <network_map> (5)
    +      namespace: <namespace>
    +    storage: (6)
    +      name: <storage_map> (7)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (8)
    +    - id: <source_vm> (9)
    +    - name: <source_vm>
    +      namespace: <namespace> (10)
    +      hooks: (11)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (12)
    +          step: <step> (13)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Plan CR.

      +
    2. +
    3. +

      Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.

      +
    4. +
    5. +

      Specify only one network map and one storage map per plan.

      +
    6. +
    7. +

      Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.

      +
    8. +
    9. +

      Specify the name of the NetworkMap CR.

      +
    10. +
    11. +

      Specify a storage mapping even if the VMs to be migrated are not assigned disk images. The mapping can be empty in this case.

      +
    12. +
    13. +

      Specify the name of the StorageMap CR.

      +
    14. +
    15. +

      For all source providers except KubeVirt, you can use either the id or the name parameter to specify the source VMs.
      KubeVirt source provider only: You can use only the name parameter, not the id parameter, to specify the source VMs.

      +
    16. +
    17. +

      Specify the VMware VM MOR, the oVirt VM UUID, or the {osp} VM UUID.

      +
    18. +
    19. +

      KubeVirt source provider only.

      +
    20. +
    21. +

      Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.

      +
    22. +
    23. +

      Specify the name of the Hook CR.

      +
    24. +
    25. +

      Allowed values are PreHook, to run the hook before the migration plan starts, or PostHook, to run the hook after the migration is complete.

      +
    26. +
    +
    +
  14. +
  15. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration> (1)
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <plan> (2)
    +    namespace: <namespace>
    +  cutover: <cutover_time> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    3. +

      Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.

      +
    4. +
    5. +

      Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

      +
    6. +
    +
    +
    +

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

    +
    +
  16. +
  17. +

    Retrieve the Migration CR to monitor the progress of the migration (a watch-based alternative is sketched after this procedure):

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  18. +
+
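If you prefer a live view while the migration runs, you can watch the Migration CRs instead of polling. This is a minimal sketch, assuming the same <namespace> as above:

    $ kubectl get migration -n <namespace> -w

The -w flag streams updates as the status of each Migration CR changes; press Ctrl+C to stop watching.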
+ + +
diff --git a/documentation/modules/migration-plan-options-ui/index.html b/documentation/modules/migration-plan-options-ui/index.html
new file mode 100644
index 00000000000..e96a60fb003
--- /dev/null
+++ b/documentation/modules/migration-plan-options-ui/index.html
@@ -0,0 +1,141 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the {kebab} beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
diff --git a/documentation/modules/mtv-overview-page/index.html b/documentation/modules/mtv-overview-page/index.html
new file mode 100644
index 00000000000..3ae2b0a167d
--- /dev/null
+++ b/documentation/modules/mtv-overview-page/index.html
@@ -0,0 +1,142 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration > Overview in the OKD web console.

+
+
+

The Overview page displays the following information:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator.

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+ + +
diff --git a/documentation/modules/mtv-resources-and-services/index.html b/documentation/modules/mtv-resources-and-services/index.html
new file mode 100644
index 00000000000..e3092e368a8
--- /dev/null
+++ b/documentation/modules/mtv-resources-and-services/index.html
@@ -0,0 +1,131 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR. A minimal Migration manifest is sketched after this list.

    +
    +
  • +
+
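The relationship between the Plan and Migration CRs is easiest to see in a manifest. The following minimal sketch, with placeholder names, shows a Migration CR that references a Plan CR; the full manifests are described in Migrating virtual machines:

    apiVersion: forklift.konveyor.io/v1beta1
    kind: Migration
    metadata:
      name: <migration>
      namespace: <namespace>
    spec:
      plan:
        name: <plan>
        namespace: <namespace>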
+
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, its status is Not ready and it cannot be used to perform a migration. If the plan passes validation, its status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed. A CLI sketch for reading this status follows this list.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and Containerized Data Import (CDI) Controller services handle most technical operations.

    +
  • +
+
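As an example of reading the plan status described above, the following sketch retrieves a plan and its conditions from the CLI; the plan name and namespace are placeholders, and the exact layout of the status block can vary between Forklift versions:

    $ kubectl get plan/<plan> -n <namespace> -o yaml

Look in status.conditions of the output for the condition that corresponds to the Ready, Not ready, or Completed state.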
+ + +
diff --git a/documentation/modules/mtv-settings/index.html b/documentation/modules/mtv-settings/index.html
new file mode 100644
index 00000000000..a9be098b48b
--- /dev/null
+++ b/documentation/modules/mtv-settings/index.html
@@ -0,0 +1,133 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500m

Controller main container Memory limit

The memory limit allocated to the main controller container

800Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during warm migration

10

+
+
Procedure
+
  1. In the OKD web console, click Migration > Overview. The Settings list is on the right-hand side of the page.

  2. In the Settings list, click the Edit icon of the setting you want to change.

  3. Choose a value for the setting from the list.

  4. Click Save.
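If you manage clusters from the command line, these settings correspond to fields on the ForkliftController custom resource. The following is a sketch only: the instance name, namespace, and field name (shown for the maximum concurrent VM migrations setting) are assumptions that can differ in your installation:

    $ kubectl patch forkliftcontroller/<forklift_controller> -n <namespace> \
        --type merge -p '{"spec": {"controller_max_vm_inflight": 30}}'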
+ + +
diff --git a/documentation/modules/mtv-ui/index.html b/documentation/modules/mtv-ui/index.html
new file mode 100644
index 00000000000..0b9037351af
--- /dev/null
+++ b/documentation/modules/mtv-ui/index.html
@@ -0,0 +1,91 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration, or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
diff --git a/documentation/modules/mtv-workflow/index.html b/documentation/modules/mtv-workflow/index.html
new file mode 100644
index 00000000000..ef520b7d657
--- /dev/null
+++ b/documentation/modules/mtv-workflow/index.html
@@ -0,0 +1,113 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR.

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
+ + +
diff --git a/documentation/modules/network-prerequisites/index.html b/documentation/modules/network-prerequisites/index.html
new file mode 100644
index 00000000000..ca7abe814d5
--- /dev/null
+++ b/documentation/modules/network-prerequisites/index.html
@@ -0,0 +1,196 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.

    +
  • +
+
+
+
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
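Before migrating, you can spot-check that a cluster node reaches the source environment on a required port. The following sketch uses a node debug shell; the node name and host are placeholders, and the availability of curl in the debug image is an assumption:

    $ oc debug node/<node> -- curl -k --max-time 5 https://<vcenter_host>:443

Any HTTP response indicates that the port is reachable; a timeout or a refused connection indicates that it is blocked.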
+
+ + +
diff --git a/documentation/modules/non-admin-permissions-for-ui/index.html b/documentation/modules/non-admin-permissions-for-ui/index.html
new file mode 100644
index 00000000000..e1f54b4124e
--- /dev/null
+++ b/documentation/modules/non-admin-permissions-for-ui/index.html
@@ -0,0 +1,187 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
RoleDescription

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but cannot create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that predefined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view, edit). A sample role binding is sketched below.

+
+
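For example, the following is a minimal sketch of a RoleBinding that grants one user the view role for migration plans in a single namespace; the user name and namespace are placeholders:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: plans-view
      namespace: <namespace>
    subjects:
      - kind: User
        name: <user>
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: plans.forklift.konveyor.io-v1beta1-view
      apiGroup: rbac.authorization.k8s.io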
+

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    Not be able to create providers or to change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
ActionsAPI groupResource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators must have the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.

+
+
+
+ + +
diff --git a/documentation/modules/obtaining-console-url/index.html b/documentation/modules/obtaining-console-url/index.html
new file mode 100644
index 00000000000..3b9d408c93f
--- /dev/null
+++ b/documentation/modules/obtaining-console-url/index.html
@@ -0,0 +1,107 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

    If you are using the OKD web console, follow these steps:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_web.adoc[]

+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_cli.adoc[]

+
+
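If the snippet above does not render in your build, a route lookup of the following general shape returns the console host; the route name and namespace are placeholders, not confirmed values:

    $ kubectl get route <forklift_ui_route> -n <namespace> -o jsonpath='{.spec.host}'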
+

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
diff --git a/documentation/modules/obtaining-vmware-fingerprint/index.html b/documentation/modules/obtaining-vmware-fingerprint/index.html
new file mode 100644
index 00000000000..cc512d19b16
--- /dev/null
+++ b/documentation/modules/obtaining-vmware-fingerprint/index.html
@@ -0,0 +1,99 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Obtaining the SHA-1 fingerprint of a vCenter host

+
+

You must obtain the SHA-1 fingerprint of a vCenter host in order to create a Secret CR.

+
+
+
Procedure
+
    +
  • +

    Run the following command:

    +
    +
    +
    $ openssl s_client \
    +    -connect <vcenter_host>:443 \ (1)
    +    < /dev/null 2>/dev/null \
    +    | openssl x509 -fingerprint -noout -in /dev/stdin \
    +    | cut -d '=' -f 2
    +
    +
    +
    +
      +
    1. +

      Specify the IP address or FQDN of the vCenter host.

      +
    2. +
    +
    +
    +
    Example output
    +
    +
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    +
    +
    +
  • +
+
+ + +
diff --git a/documentation/modules/openstack-prerequisites/index.html b/documentation/modules/openstack-prerequisites/index.html
new file mode 100644
index 00000000000..1c85782f7c5
--- /dev/null
+++ b/documentation/modules/openstack-prerequisites/index.html
@@ -0,0 +1,90 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs whose volumes are all Cinder volumes.

+
+
+
+ + +
diff --git a/documentation/modules/osh-adding-source-provider/index.html b/documentation/modules/osh-adding-source-provider/index.html
new file mode 100644
index 00000000000..600966f0f2f
--- /dev/null
+++ b/documentation/modules/osh-adding-source-provider/index.html
@@ -0,0 +1,137 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Adding an {osp} source provider

+
+

You can add an {osp} source provider by using the OKD web console.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs whose volumes are all Cinder volumes.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select Red Hat OpenStack Platform from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Name to display in the list of providers

      +
    • +
    • +

      {osp} Identity server URL: {osp} Identity (Keystone) endpoint, for example, http://controller:5000/v3

      +
    • +
    • +

      {osp} username: For example, admin

      +
    • +
    • +

      {osp} password: The password of the {osp} user

      +
    • +
    • +

      Domain: The {osp} domain name

      +
    • +
    • +

      Project: The {osp} project name

      +
    • +
    • +

      Region: The {osp} region name

      +
    • +
    +
    +
  8. +
  9. +

    To allow a migration without validating the provider’s CA certificate, select the Skip certificate validation check box. By default, the checkbox is cleared, meaning that the certificate will be validated.

    +
  10. +
  11. +

    If you did not select Skip certificate validation, the CA certificate field is visible. Drag the CA certificate used to connect to the source environment to the text box or browse for it and click Select. If you did select the check box, the CA certificate text box is not visible.

    +
  12. +
  13. +

    Click Create to add and save the provider.

    +
    +

    The source provider appears in the list of providers.

    +
    +
  14. +
+
+ + +
diff --git a/documentation/modules/ostack-app-cred-auth/index.html b/documentation/modules/ostack-app-cred-auth/index.html
new file mode 100644
index 00000000000..cf4ab553078
--- /dev/null
+++ b/documentation/modules/ostack-app-cred-auth/index.html
@@ -0,0 +1,189 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
diff --git a/documentation/modules/ostack-token-auth/index.html b/documentation/modules/ostack-token-auth/index.html
new file mode 100644
index 00000000000..a612d1b3231
--- /dev/null
+++ b/documentation/modules/ostack-token-auth/index.html
@@ -0,0 +1,180 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
diff --git a/documentation/modules/ova-prerequisites/index.html b/documentation/modules/ova-prerequisites/index.html
new file mode 100644
index 00000000000..d4e290ec50c
--- /dev/null
+++ b/documentation/modules/ova-prerequisites/index.html
@@ -0,0 +1,130 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed; however, such migrations are not supported by Forklift, which supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory, in one of the following structures (an illustrative layout is sketched after this list):

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      The folder /nfs is scanned.
      The folder /nfs/subfolder1 is scanned.
      But /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      The OVF file /nfs/vm.ovf is scanned.
      The OVF file /nfs/subfolder1/vm.ovf is scanned.
      The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
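To make the scanning rules concrete, here is an illustrative share layout; the file names are examples only. vm1.ova and vm2.ova are found as compressed packages, and vm3/vm3.ovf is found as an extracted package:

    /nfs/vm1.ova
    /nfs/subfolder1/vm2.ova
    /nfs/subfolder1/vm3/vm3.ovf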
+ + +
diff --git a/documentation/modules/retrieving-validation-service-json/index.html b/documentation/modules/retrieving-validation-service-json/index.html
new file mode 100644
index 00000000000..5f43191fd2e
--- /dev/null
+++ b/documentation/modules/retrieving-validation-service-json/index.html
@@ -0,0 +1,483 @@
+(HTML page header: Forklift Documentation | Migrating VMware virtual machines to KubeVirt)
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind.

+
+
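The VM details command at the end of the following procedure returns the full JSON document. If jq is installed, you can narrow the output to the "input" key; this is a convenience sketch, not required by the procedure:

    $ curl -sk -H "Authorization: Bearer $TOKEN" \
        https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> | jq '.input'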
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    $ oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
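For example, to check the attribute that a rule such as input.snapshot.kind evaluates, you can filter the VM details with jq. This is a minimal sketch that assumes jq is installed and reuses the token and placeholders from the steps above; the value shown is taken from the example output:

$ curl -sk -H "Authorization: Bearer $TOKEN" \
    https://<inventory_service_route>/providers/vsphere/<UUID>/workloads/<vm> \
    | jq '.input.snapshot.kind'
"VirtualMachineSnapshot"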
+ + +
+ + diff --git a/documentation/modules/rhv-prerequisites/index.html b/documentation/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..7ff1f4bb45d --- /dev/null +++ b/documentation/modules/rhv-prerequisites/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+ +
+
+


+
+ + +
+ + diff --git a/documentation/modules/rn-2.0/index.html b/documentation/modules/rn-2.0/index.html new file mode 100644 index 00000000000..b6dfd0227c4 --- /dev/null +++ b/documentation/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
    +  -n openshift-cnv \
    +  -p '{"data": {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
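To confirm the new interval, you can read the value back from the config map. This is a sketch that assumes the config map exists in the openshift-cnv namespace, as in the patch command above:

$ kubectl get configmap/vm-import-controller-config \
    -n openshift-cnv -o jsonpath='{.data.warmImport\.intervalMinutes}'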
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.1/index.html b/documentation/modules/rn-2.1/index.html new file mode 100644 index 00000000000..3e4cac5b505 --- /dev/null +++ b/documentation/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.
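For example, the following sketch shows a filtered run. The image reference and the PLAN variable are assumptions based on the upstream forklift-must-gather tool and should be checked against your release:

$ oc adm must-gather \
    --image=quay.io/konveyor/forklift-must-gather:latest \
    -- PLAN=<migration_plan> /usr/bin/targeted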

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster. See the check after this list.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
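To rule out the first condition, list the storage classes on the target cluster and verify that the class referenced by the storage map exists. This is a minimal check:

$ oc get storageclass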
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads → Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.2/index.html b/documentation/modules/rn-2.2/index.html new file mode 100644 index 00000000000..1badf198b9b --- /dev/null +++ b/documentation/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.
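For example, a custom rule file can be packaged into a config map for the Validation service to consume. This is a hedged sketch; the config map name and file name are illustrative, and the procedure for attaching the rules to the Validation service is described in the custom-rules documentation:

$ oc create configmap validation-rules \
    --from-file=rules.rego -n konveyor-forklift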

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan, including its VMs, mappings, and hooks, by using the web console, and then edit the copy and run it as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text, The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
Table 1. Issues that do not block migration

| Assessment | Result |
| ---------- | ------ |
| The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported). | The migrated VM will have a virtio disk if the source interface is not recognized. |
| The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported). | The migrated VM will have a virtio NIC if the source interface is not recognized. |
| The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization. | The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly. |
| One or more of the VM’s disks has an illegal or locked status condition. | The migration will proceed but the disk transfer is likely to fail. |
| The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization. | The migration will proceed but the disk transfer is likely to fail. |
| The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization. | The migration will proceed but the disk transfer is likely to fail. |
| The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization. | The migrated VM will not have USB devices. |
| The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization. | The migrated VM will not have a watchdog device. |
| The VM’s status is not up or down. | The migration will proceed but it might hang if the VM cannot be powered off. |

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.
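A sketch of capturing the log during the precopy stage, assuming the importer pod runs in the migration plan's target namespace and its name contains "importer":

$ oc get pods -n <namespace> | grep importer
$ oc logs -f <cdi-importer_pod> -n <namespace>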

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM resources referenced by name in the Plan CR are not displayed in the web console
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.3/index.html b/documentation/modules/rn-2.3/index.html new file mode 100644 index 00000000000..2c8d727c80e --- /dev/null +++ b/documentation/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure for adding a VMware provider
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.
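The following sketch shows such an update for a hypothetical NFS storage class named nfs. StorageProfile resources are cluster-scoped and named after their storage class; the appropriate access mode depends on your storage:

$ oc patch storageprofile nfs --type=merge \
    -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'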

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not have to have full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Delete the browser cache or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.4/index.html b/documentation/modules/rn-2.4/index.html new file mode 100644 index 00000000000..6c2a954858d --- /dev/null +++ b/documentation/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere, oVirt, or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also enables migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of the Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP console plugin that adds the Migration sub-menu to the navigation bar. It is implemented in version 2.4, disabling the old UI. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)
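A sketch of re-enabling the old UI, assuming the ForkliftController resource is named forklift-controller and installed in the konveyor-forklift namespace:

$ oc patch forkliftcontroller/forklift-controller -n konveyor-forklift \
    --type=merge -p '{"spec": {"feature_ui": true}}'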

+
+
+
Skip certificate validation option
+

A 'Skip certificate validation' option was added to VMware and oVirt providers. If selected, the provider’s certificate is not validated, and the UI does not require you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider whose Manager is configured with a third-party CA certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

This issue affects vSphere migrations only. Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration of source virtual machines in OpenStack. Workaround: The snapshots can be removed manually in OpenStack.
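A sketch of the manual cleanup with the OpenStack CLI, assuming you can identify the snapshots that the migration created by their names or timestamps:

$ openstack volume snapshot list
$ openstack volume snapshot delete <snapshot_id>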

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots manually in oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)
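A sketch of the workaround for both of these issues, assuming the leftover importer pods and claims are in the migration plan's target namespace; the exact resource names vary:

$ oc get pods -n <namespace> | grep importer
$ oc delete pod <importer_pod> -n <namespace>
$ oc delete pvc <pvc> -n <namespace>
$ oc delete pv <pv>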

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A migrated VM with multiple disks might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. It is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP console after the forklift-console-plugin pod runs in order to load the upgraded Forklift web console. (MTV-518)
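A sketch of the workaround, assuming the resource is named forklift-controller and Forklift is installed in the konveyor-forklift namespace. Save the spec before deleting so that you can recreate the resource:

$ oc get forkliftcontroller/forklift-controller -n konveyor-forklift -o yaml > forklift-controller.yaml
$ oc delete forkliftcontroller/forklift-controller -n konveyor-forklift
$ oc apply -f forklift-controller.yaml    # after removing the status and server-generated metadata fields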

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to conform with RFC 1123 has been improved. This feature, introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. If the oVirt Manager is accessible, an error is returned when the provider is added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after the first failure due to incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/documentation/modules/rn-2.5/index.html b/documentation/modules/rn-2.5/index.html new file mode 100644 index 00000000000..0b08ac79cc5 --- /dev/null +++ b/documentation/modules/rn-2.5/index.html @@ -0,0 +1,325 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration using OVA files created by VMware vSphere
+

In Forklift 2.5, you can use Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+


+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can use a KubeVirt provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from oVirt
+

During the migration from oVirt, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while a snapshot operation is performed on the source VM
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed may instantly appear, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, this means that the OVA server pod creation has failed. (MTV-671)
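A sketch for checking whether the OVA server pod failed, assuming it runs in the Forklift namespace and its name contains "ova":

$ oc get pods -n konveyor-forklift | grep ova
$ oc describe pod <ova_server_pod> -n konveyor-forklift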

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework. The filename parameter of the Context.FileAttachment function was not properly sanitized. This flaw in the package could allow a remote attacker to bypass security restrictions caused by improper input validation by the filename parameter of the Context.FileAttachment function.  A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
CVE-2023-26144 mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV 2.5 versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In previous releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In previous releases of Forklift, the Migration Controller service did not automatically delete snapshots that were created during a migration of source virtual machines in OpenStack.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration is completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during migration are removed after a successful migration, while the original snapshots are not removed. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In previous releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt and therefore the ovirt-engine rejected the snapshot creation, or disk transfer, operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered but is not performed while the VM is locked. After the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In previous releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

+
+
+

This issue is resolved in Forklift 2.5: warm migration does not fail when an operation that locks the VM is performed in oVirt. Instead, the migration starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In previous releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In previous releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when a migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

In previous releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: migrated VMs with multiple disks are able to boot on the target OKD cluster. (MTV-433)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+ + +
+ + diff --git a/documentation/modules/running-migration-plan/index.html b/documentation/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..2e8a17c31e0 --- /dev/null +++ b/documentation/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    A valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

    The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

    Click a migration’s Status, whether it failed, succeeded, or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
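If you prefer to follow the migration from the command line, you can inspect the underlying custom resources instead. A minimal sketch, with placeholder plan and migration names; the status blocks of both CRs report per-VM progress:

$ kubectl get plan/<plan> -n <namespace> -o yaml
$ kubectl get migration/<migration> -n <namespace> -o yaml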
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network-for-virt-provider/index.html b/documentation/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..3129ae65545 --- /dev/null +++ b/documentation/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
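From the CLI, the equivalent setting is expressed as an annotation on the target namespace. A minimal sketch, assuming the annotation key forklift.konveyor.io/defaultTransferNetwork and a NetworkAttachmentDefinition whose name replaces the placeholder:

$ kubectl annotate namespace <namespace> \
  forklift.konveyor.io/defaultTransferNetwork=<network_attachment_definition>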
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html b/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..03293b9fc49 --- /dev/null +++ b/documentation/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+
Prerequisites
+
    +
  • +

    The migration network must have sufficient throughput for disk transfer, with a minimum speed of 10 Gbps.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network must have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
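Each host-network selection is stored in a Host CR. The following is a minimal sketch of such a CR, assuming the forklift.konveyor.io/v1beta1 API; all names and values are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Host
metadata:
  name: <host>
  namespace: <namespace>
spec:
  provider:
    name: <provider>
    namespace: <namespace>
  id: <host_id>                      # host ID in the provider inventory
  ipAddress: <migration_network_ip>  # IP address of the host on the migration network
  secret:
    name: <secret>
    namespace: <namespace>
EOF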
+ + +
+ + diff --git a/documentation/modules/selecting-migration-network/index.html b/documentation/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..cbc168fc91c --- /dev/null +++ b/documentation/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

    The network list displays only the networks that are accessible to all the selected hosts.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/documentation/modules/snip-migrating-luns/index.html b/documentation/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..0116952331a --- /dev/null +++ b/documentation/modules/snip-migrating-luns/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs in the target environment at the same time, because concurrent use might lead to data corruption.

    +
  • +
  • +

    Migration of Fibre Channel LUNs is not supported.

    +
  • +
+
+
+
+ + +
+ + diff --git a/documentation/modules/snip_permissions-info/index.html b/documentation/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..a43580fbb34 --- /dev/null +++ b/documentation/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects you have permissions for.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/documentation/modules/snippet_getting_web_console_url_cli/index.html b/documentation/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..71ca3948fa7 --- /dev/null +++ b/documentation/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com
+
+
+ + +
+ + diff --git a/documentation/modules/snippet_getting_web_console_url_web/index.html b/documentation/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..99daf2f6e99 --- /dev/null +++ b/documentation/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

    Click Networking → Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/snippet_ova_tech_preview/index.html b/documentation/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..e45b29470e4 --- /dev/null +++ b/documentation/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/modules/source-vm-prerequisites/index.html b/documentation/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..0a4fef8d82f --- /dev/null +++ b/documentation/modules/source-vm-prerequisites/index.html @@ -0,0 +1,121 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt.

    +
  • +
  • +

    VM names must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

    +
  • +
  • +

    VM names must not duplicate the name of a VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone entered a VM name that does not follow the rules.

    +
    +
    +
    +
  • +
+
+ + +
+ + diff --git a/documentation/modules/storage-support/index.html b/documentation/modules/storage-support/index.html new file mode 100644 index 00000000000..583a0e552c4 --- /dev/null +++ b/documentation/modules/storage-support/index.html @@ -0,0 +1,188 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to be more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.

+
+
+
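A minimal sketch of increasing the overhead from the CLI, assuming the cluster-wide CDI custom resource is named cdi and exposes the spec.config.filesystemOverhead setting; the 0.15 value is an example:

$ kubectl patch cdi/cdi --type=merge \
  -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'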
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+ + +
+ + diff --git a/documentation/modules/technology-preview/index.html b/documentation/modules/technology-preview/index.html new file mode 100644 index 00000000000..65933a5533c --- /dev/null +++ b/documentation/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/documentation/modules/uninstalling-mtv-cli/index.html b/documentation/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..659b8bcd733 --- /dev/null +++ b/documentation/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI) by deleting the {namespace} project and the forklift.konveyor.io custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the project:

    +
    +
    +
    $ kubectl delete project konveyor-forklift
    +
    +
    +
  2. +
  3. +

    Delete the CRDs:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift' | xargs kubectl delete
    +
    +
    +
  4. +
  5. +

    Delete the OAuthClient:

    +
    +
    +
    $ kubectl delete oauthclient/forklift-ui
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/uninstalling-mtv-ui/index.html b/documentation/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..e62b67792e5 --- /dev/null +++ b/documentation/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console to delete the {namespace} project and custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Home → Projects.

    +
  2. +
  3. +

    Locate the konveyor-forklift project.

    +
  4. +
  5. +

    On the right side of the project, select Delete Project from the {kebab}.

    +
  6. +
  7. +

    In the Delete Project pane, enter the project name and click Delete.

    +
  8. +
  9. +

    Click Administration → CustomResourceDefinitions.

    +
  10. +
  11. +

    Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

    +
  12. +
  13. +

    On the right side of each CRD, select Delete CustomResourceDefinition from the {kebab}.

    +
  14. +
+
+ + +
+ + diff --git a/documentation/modules/updating-validation-rules-version/index.html b/documentation/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..60da97e133b --- /dev/null +++ b/documentation/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file, for example, with sed, as sketched after this procedure.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
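To bump the version in place, a minimal sketch that assumes the Validation pod image provides sed and that the file currently assigns rules_version = 5:

$ oc rsh <validation_pod>
$ sed -i 's/rules_version = 5/rules_version = 6/' \
  /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego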
+ + +
+ + diff --git a/documentation/modules/upgrading-mtv-ui/index.html b/documentation/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..7faf60c4cd5 --- /dev/null +++ b/documentation/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Operators → Installed Operators → {operator-name-ui} → Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

    If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider and the CLI sketch after this procedure.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
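A minimal sketch of adding a VDDK init image from the CLI, assuming the Provider CR accepts a vddkInitImage key under spec.settings; the image path is a placeholder:

$ kubectl patch provider/<provider> -n <namespace> --type=merge \
  -p '{"spec": {"settings": {"vddkInitImage": "<registry_path>/vddk:<tag>"}}}'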
+ + +
+ + diff --git a/documentation/modules/using-must-gather/index.html b/documentation/modules/using-must-gather/index.html new file mode 100644 index 00000000000..bf24f8e483b --- /dev/null +++ b/documentation/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/documentation/modules/virt-migration-workflow/index.html b/documentation/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..67f7467fe13 --- /dev/null +++ b/documentation/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and either an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from VMware to the local {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
+
+
+

For all VM disks:

+
+
+
    +
  1. +

    The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

    +
  2. +
  3. +

    The Migration Controller service creates a conversion pod for all PVCs.

    +
  4. +
  5. +

    The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

    +
    +

    After the VM disks are transferred:

    +
    +
  6. +
  7. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  8. +
  9. +

    If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  10. +
+
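You can observe most of these steps from the CLI while a migration runs. A minimal sketch, with the target namespace as a placeholder:

# Watch importer, conversion, and virt-launcher pods as they are created.
$ kubectl get pods -n <namespace> --watch
# Inspect the DataVolume and PVC objects created for each source disk.
$ kubectl get datavolumes,pvc -n <namespace>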
+ + +
+ + diff --git a/documentation/modules/vmware-prerequisites/index.html b/documentation/modules/vmware-prerequisites/index.html new file mode 100644 index 00000000000..f68acbc92cc --- /dev/null +++ b/documentation/modules/vmware-prerequisites/index.html @@ -0,0 +1,248 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

VMware prerequisites

+
+

It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

+
+
+

The following prerequisites apply to VMware migrations:

+
+
+
    +
  • +

    You must use a compatible version of VMware vSphere.

    +
  • +
  • +

    You must be logged in as a user with at least the minimal set of VMware privileges.

    +
  • +
  • +

    You must install VMware Tools on all source virtual machines (VMs).

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    +
  • +
  • +

    If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.

    +
  • +
  • +

    You must obtain the SHA-1 fingerprint of the vCenter host (see the openssl sketch after the notes below).

    +
  • +
  • +

    If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    +
  • +
  • +

    It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.

    +
  • +
+
+
+ + + + + +
+
Important
+
+
+

In the event of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, the migration will fail.

+
+
+
+
+ + + + + +
+
Note
+
+
+

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs for migrating VMs from VMware.

+
+
+
+
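To retrieve the SHA-1 fingerprint mentioned in the prerequisites, the following is a minimal sketch using openssl, assuming the vCenter host is reachable on port 443; the host name is a placeholder:

$ openssl s_client -connect <vcenter_host>:443 < /dev/null 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout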

VMware privileges

+
+

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. VMware privileges
PrivilegeDescription

Virtual machine.Interaction privileges:

Virtual machine.Interaction.Power Off

Allows powering off a powered-on virtual machine. This operation powers down the guest operating system.

Virtual machine.Interaction.Power On

Allows powering on a powered-off virtual machine and resuming a suspended virtual machine.

+

Virtual machine.Provisioning privileges:

+
+
+ + + + + +
+
Note
+
+
+

All Virtual machine.Provisioning privileges are required.

+
+
+

Virtual machine.Provisioning.Allow disk access

Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow file access

Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow read-only disk access

Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting.

Virtual machine.Provisioning.Allow virtual machine download

Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Allow virtual machine files upload

Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM.

Virtual machine.Provisioning.Clone template

Allows cloning of a template.

Virtual machine.Provisioning.Clone virtual machine

Allows cloning of an existing virtual machine and allocation of resources.

Virtual machine.Provisioning.Create template from virtual machine

Allows creation of a new template from a virtual machine.

Virtual machine.Provisioning.Customize guest

Allows customization of a virtual machine’s guest operating system without moving the virtual machine.

Virtual machine.Provisioning.Deploy template

Allows deployment of a virtual machine from a template.

Virtual machine.Provisioning.Mark as template

Allows marking an existing powered-off virtual machine as a template.

Virtual machine.Provisioning.Mark as virtual machine

Allows marking an existing template as a virtual machine.

Virtual machine.Provisioning.Modify customization specification

Allows creation, modification, or deletion of customization specifications.

Virtual machine.Provisioning.Promote disks

Allows promote operations on a virtual machine’s disks.

Virtual machine.Provisioning.Read customization specifications

Allows reading a customization specification.

Virtual machine.Snapshot management privileges:

Virtual machine.Snapshot management.Create snapshot

Allows creation of a snapshot from the virtual machine’s current state.

Virtual machine.Snapshot management.Remove Snapshot

Allows removal of a snapshot from the snapshot history.

+ + +
+ + diff --git a/feed.xml b/feed.xml new file mode 100644 index 00000000000..686a7cef3de --- /dev/null +++ b/feed.xml @@ -0,0 +1 @@ +Jekyll2023-11-05T18:34:46-06:00/feed.xmlForklift DocumentationMigrating VMware virtual machines to KubeVirt \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000000..1ecf94bb568 --- /dev/null +++ b/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation | Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift Documentation

+
+

What is Forklift?

+
+
+

Forklift is a tool in the Konveyor community for migrating virtual machines from VMware or oVirt to KubeVirt.

+
+
+
+
+

Documentation

+ +
+ + +
+ + diff --git a/jekyll-theme-cayman.gemspec b/jekyll-theme-cayman.gemspec new file mode 100644 index 00000000000..4a1c2d28f03 --- /dev/null +++ b/jekyll-theme-cayman.gemspec @@ -0,0 +1,22 @@ +# frozen_string_literal: true + +Gem::Specification.new do |s| + s.name = 'jekyll-theme-cayman' + s.version = '0.1.1' + s.license = 'CC0-1.0' + s.authors = ['Jason Long', 'GitHub, Inc.'] + s.email = ['opensource+jekyll-theme-cayman@github.com'] + s.homepage = 'https://github.com/pages-themes/cayman' + s.summary = 'Cayman is a Jekyll theme for GitHub Pages' + + s.files = `git ls-files -z`.split("\x0").select do |f| + f.match(%r{^((_includes|_layouts|_sass|assets)/|(LICENSE|README)((\.(txt|md|markdown)|$)))}i) + end + + s.platform = Gem::Platform::RUBY + s.add_runtime_dependency 'jekyll', '> 3.5', '< 5.0' + s.add_runtime_dependency 'jekyll-seo-tag', '~> 2.0' + s.add_development_dependency 'html-proofer', '~> 3.0' + s.add_development_dependency 'rubocop', '~> 0.50' + s.add_development_dependency 'w3c_validators', '~> 1.3' +end diff --git a/modules/about-cold-warm-migration/index.html b/modules/about-cold-warm-migration/index.html new file mode 100644 index 00000000000..6eeaebf4702 --- /dev/null +++ b/modules/about-cold-warm-migration/index.html @@ -0,0 +1,159 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About cold and warm migration

+
+
+
+

Forklift supports cold migration from:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

    oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

Forklift supports warm migration from VMware vSphere and from oVirt.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
+
+

Cold migration

+
+
+

Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.

+
+
+
+
+

Warm migration

+
+
+

Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.

+
+
+

Then the VMs are shut down and the remaining data is copied during the cutover stage.

+
+
+
Precopy stage
+

The VMs are not shut down during the precopy stage.

+
+
+

The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.

+
+
+ + + + + +
+
Important
+
+
+

You must enable CBT for each source VM and each VM disk.

+
+
+

A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.

+
+
+
+
+

The precopy stage runs until the cutover stage is started manually or is scheduled to start.

+
+
+
Cutover stage
+

The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.

+
+
+

You can start the cutover stage manually by using the Forklift console or you can schedule a cutover time in the Migration manifest.
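A minimal sketch of scheduling a cutover in the Migration manifest, assuming the spec.cutover field accepts an ISO 8601 timestamp; all names and the timestamp are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: <migration>
  namespace: <namespace>
spec:
  plan:
    name: <plan>
    namespace: <namespace>
  cutover: "2024-04-01T01:00:00Z"
EOF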

+
+
+
+ + +
+ + diff --git a/modules/about-rego-files/index.html b/modules/about-rego-files/index.html new file mode 100644 index 00000000000..c0c774cf6a2 --- /dev/null +++ b/modules/about-rego-files/index.html @@ -0,0 +1,104 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

About Rego files

+
+

Validation rules are written in Rego, the Open Policy Agent (OPA) native query language. The rules are stored as .rego files in the /usr/share/opa/policies/io/konveyor/forklift/<provider> directory of the Validation pod.

+
+
+

Each validation rule is defined in a separate .rego file and tests for a specific condition. If the condition evaluates as true, the rule adds a {"category", "label", "assessment"} hash to the concerns. The concerns content is added to the concerns key in the inventory record of the VM. The web console displays the content of the concerns key for each VM in the provider inventory.

+
+
+

The following .rego file example checks for distributed resource scheduling enabled in the cluster of a VMware VM:

+
+
+
drs_enabled.rego example
+
+
package io.konveyor.forklift.vmware (1)
+
+has_drs_enabled {
+    input.host.cluster.drsEnabled (2)
+}
+
+concerns[flag] {
+    has_drs_enabled
+    flag := {
+        "category": "Information",
+        "label": "VM running in a DRS-enabled cluster",
+        "assessment": "Distributed resource scheduling is not currently supported by OpenShift Virtualization. The VM can be migrated but it will not have this feature in the target environment."
+    }
+}
+
+
+
+
    +
  1. +

    Each validation rule is defined within a package. The package namespaces are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

    +
  2. +
  3. +

    Query parameters are based on the input key of the Validation service JSON.

    +
  4. +
+
+ + +
+ + diff --git a/modules/accessing-default-validation-rules/index.html b/modules/accessing-default-validation-rules/index.html new file mode 100644 index 00000000000..5e7dde309fb --- /dev/null +++ b/modules/accessing-default-validation-rules/index.html @@ -0,0 +1,108 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Checking the default validation rules

+
+

Before you create a custom rule, you must check the default rules of the Validation service to ensure that you do not create a rule that redefines an existing default value.

+
+
+

Example: If a default rule contains the line default valid_input = false and you create a custom rule that contains the line default valid_input = true, the Validation service will not start.

+
+
+
Procedure
+
    +
  1. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ oc rsh <validation_pod>
    +
    +
    +
  2. +
  3. +

    Go to the OPA policies directory for your provider:

    +
    +
    +
    $ cd /usr/share/opa/policies/io/konveyor/forklift/<provider> (1)
    +
    +
    +
    +
      +
    1. +

      Specify vmware or ovirt.

      +
    2. +
    +
    +
  4. +
  5. +

    Search for the default policies:

    +
    +
    +
    $ grep -R "default" *
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/accessing-logs-cli/index.html b/modules/accessing-logs-cli/index.html new file mode 100644 index 00000000000..a1563f0f8bb --- /dev/null +++ b/modules/accessing-logs-cli/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Accessing logs and custom resource information from the command line interface

+
+

You can access logs and information about custom resources (CRs) from the command line interface by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, a completed, failed, or canceled migration plan, or a migrated virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_name> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        You must specify the VM name, not the VM ID, as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/accessing-logs-ui/index.html b/modules/accessing-logs-ui/index.html new file mode 100644 index 00000000000..4425e685748 --- /dev/null +++ b/modules/accessing-logs-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Downloading logs and custom resource information from the web console

+
+

You can download logs and information about custom resources (CRs) for a completed, failed, or canceled migration plan or for migrated virtual machines (VMs) by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Get logs beside a migration plan name.

    +
  4. +
  5. +

    In the Get logs window, click Get logs.

    +
    +

    The logs are collected. A Log collection complete message is displayed.

    +
    +
  6. +
  7. +

    Click Download logs to download the archive file.

    +
  8. +
  9. +

    To download logs for a migrated VM, click a migration plan name and then click Get logs beside the VM.

    +
  10. +
+
+ + +
+ + diff --git a/modules/adding-hooks/index.html b/modules/adding-hooks/index.html new file mode 100644 index 00000000000..735e228b56f --- /dev/null +++ b/modules/adding-hooks/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding hooks

+
+

Hooks are custom code that you can run at certain stages of the migration. You can define a hook by using an Ansible playbook or a custom hook container.

+
+
+

You can create a hook before you create a migration plan or while you are creating one.

+
+
+
Prerequisites
+
    +
  • +

    You must create an Ansible playbook or a custom hook container.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the web console, click Hooks.

    +
  2. +
  3. +

    Click Create hook.

    +
  4. +
  5. +

    Specify the hook Name.

    +
  6. +
  7. +

    Select Ansible playbook or Custom container image as the Hook definition.

    +
  8. +
  9. +

    If you select Custom container image, specify the image location, for example, quay.io/github_project/container_name:container_id.

    +
  10. +
  11. +

    Select a migration step and click Add.

    +
    +

    The new migration hook appears in the Hooks list.

    +
    +
  12. +
+
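Behind the console form, each hook is stored as a Hook CR. A minimal sketch, assuming the quay.io/konveyor/hook-runner image and a base64-encoded Ansible playbook; the name, namespace, and playbook value are placeholders:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: <hook>
  namespace: <namespace>
spec:
  image: quay.io/konveyor/hook-runner
  playbook: <base64_encoded_playbook>
EOF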
+ + +
+ + diff --git a/modules/adding-source-provider/index.html b/modules/adding-source-provider/index.html new file mode 100644 index 00000000000..ef9262c6ea9 --- /dev/null +++ b/modules/adding-source-provider/index.html @@ -0,0 +1,82 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Click Create to add and save the provider.

    +
    +

    The provider appears in the list of providers.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/adding-virt-provider/index.html b/modules/adding-virt-provider/index.html new file mode 100644 index 00000000000..f4879e24e17 --- /dev/null +++ b/modules/adding-virt-provider/index.html @@ -0,0 +1,116 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding a KubeVirt destination provider

+
+

You can add a KubeVirt destination provider to the OKD web console in addition to the default KubeVirt destination provider, which is the provider where you installed Forklift.

+
+
+
Prerequisites
+ +
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select KubeVirt from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Specify the provider name to display in the list of target providers.

      +
    • +
    • +

      Kubernetes API server URL: Specify the OKD cluster API endpoint.

      +
    • +
    • +

      Service account token: Specify the cluster-admin service account token.

      +
      +

      If both URL and Service account token are left blank, the local OKD cluster is used.

      +
      +
    • +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The provider appears in the list of providers.

    +
    +
  10. +
+
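A minimal sketch of the equivalent Provider CR, assuming type openshift and a secret that holds the service account token; all names are placeholders. If url and secret are omitted, the local cluster is used, matching the console behavior described above:

$ cat << EOF | kubectl apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <provider>
  namespace: <namespace>
spec:
  type: openshift
  url: <api_server_url>
  secret:
    name: <secret>
    namespace: <namespace>
EOF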
+ + +
+ + diff --git a/modules/canceling-migration-cli/index.html b/modules/canceling-migration-cli/index.html new file mode 100644 index 00000000000..e64caa40a31 --- /dev/null +++ b/modules/canceling-migration-cli/index.html @@ -0,0 +1,132 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel an entire migration or individual virtual machines (VMs) while a migration is in progress from the command line interface (CLI).

+
+
+
Canceling an entire migration
+
    +
  • +

    Delete the Migration CR:

    +
    +
    +
    $ kubectl delete migration <migration> -n <namespace> (1)
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    +
    +
  • +
+
+
+
Canceling the migration of individual VMs
+
    +
  1. +

    Add the individual VMs to the spec.cancel block of the Migration manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration>
    +  namespace: <namespace>
    +...
    +spec:
    +  cancel:
    +  - id: vm-102 (1)
    +  - id: vm-203
    +  - name: rhel8-vm
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can specify a VM by using the id key or the name key.

      +
    2. +
    +
    +
    +

    The value of the id key is the managed object reference for a VMware VM, or the VM UUID for an oVirt VM.

    +
    +
  2. +
  3. +

    Retrieve the Migration CR to monitor the progress of the remaining VMs:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  4. +
+
+ + +
+ + diff --git a/modules/canceling-migration-ui/index.html b/modules/canceling-migration-ui/index.html new file mode 100644 index 00000000000..daf6baf0e10 --- /dev/null +++ b/modules/canceling-migration-ui/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Canceling a migration

+
+

You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the OKD web console.

+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Plans for virtualization.

    +
  2. +
  3. +

    Click the name of a running migration plan to view the migration details.

    +
  4. +
  5. +

    Select one or more VMs and click Cancel.

    +
  6. +
  7. +

    Click Yes, cancel to confirm the cancellation.

    +
    +

    In the Migration details by VM list, the status of the canceled VMs is Canceled. The unmigrated and the migrated virtual machines are not affected.

    +
    +
  8. +
+
+
+

You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.

+
+ + +
+ + diff --git a/modules/changing-precopy-intervals/index.html b/modules/changing-precopy-intervals/index.html new file mode 100644 index 00000000000..8bb7791396c --- /dev/null +++ b/modules/changing-precopy-intervals/index.html @@ -0,0 +1,92 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Changing precopy intervals for warm migration

+
+

You can change the snapshot interval by patching the ForkliftController custom resource (CR).

+
+
+
Procedure
+
    +
  • +

    Patch the ForkliftController CR:

    +
    +
    +
    $ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_precopy_interval": <60>}}' --type=merge (1)
    +
    +
    +
    +
      +
    1. +

      Specify the precopy interval in minutes. The default value is 60.

      +
    2. +
    +
    +
    +

    You do not need to restart the forklift-controller pod.

    +
    +
  • +
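    To verify the change, you can read the value back from the ForkliftController CR. This sketch assumes the interval is stored under spec.controller_precopy_interval, as set by the patch above:

    $ kubectl get forkliftcontroller/<forklift-controller> -n konveyor-forklift -o jsonpath='{.spec.controller_precopy_interval}'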
+
+ + +
+ + diff --git a/modules/collected-logs-cr-info/index.html b/modules/collected-logs-cr-info/index.html new file mode 100644 index 00000000000..d2ce172c8cf --- /dev/null +++ b/modules/collected-logs-cr-info/index.html @@ -0,0 +1,183 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Collected logs and custom resource information

+
+

You can download logs and custom resource (CR) YAML files for the following targets by using the OKD web console or the command line interface (CLI):

+
+
+
    +
  • +

    Migration plan: Web console or CLI.

    +
  • +
  • +

    Virtual machine: Web console or CLI.

    +
  • +
  • +

    Namespace: CLI only.

    +
  • +
+
+
+
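For reference, a typical targeted must-gather invocation from the CLI looks like the following sketch. The image path and the PLAN, VM, and NS filter variables are assumptions based on common Forklift must-gather usage; check the image documentation for the exact names:

$ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest -- PLAN=<migration_plan> /usr/bin/targeted

To filter by virtual machine instead of by plan, replace the filter, for example, VM=<vm_name> NS=<namespace>.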

The must-gather tool collects the following logs and CR files in an archive file:

+
+
+
    +
  • +

    CRs:

    +
    +
      +
    • +

      DataVolume CR: Represents a disk mounted on a migrated VM.

      +
    • +
    • +

      VirtualMachine CR: Represents a migrated VM.

      +
    • +
    • +

      Plan CR: Defines the VMs and storage and network mapping.

      +
    • +
    • +

      Job CR: Optional: Represents a pre-migration hook, a post-migration hook, or both.

      +
    • +
    +
    +
  • +
  • +

    Logs:

    +
    +
      +
    • +

      importer pod: Disk-to-data-volume conversion log. The importer pod naming convention is importer-<migration_plan>-<vm_id><5_char_id>, for example, importer-mig-plan-ed90dfc6-9a17-4a8btnfh, where ed90dfc6-9a17-4a8 is a truncated oVirt VM ID and btnfh is the generated 5-character ID.

      +
    • +
    • +

      conversion pod: VM conversion log. The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the VM. The conversion pod naming convention is <migration_plan>-<vm_id><5_char_id>.

      +
    • +
    • +

      virt-launcher pod: VM launcher log. When a migrated VM is powered on, the virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

      +
    • +
    • +

      forklift-controller pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      forklift-must-gather-api pod: The log is filtered for the migration plan, virtual machine, or namespace specified by the must-gather command.

      +
    • +
    • +

      hook-job pod: The log is filtered for hook jobs. The hook-job naming convention is <migration_plan>-<vm_id><5_char_id>, for example, plan2j-vm-3696-posthook-4mx85 or plan2j-vm-3696-prehook-mwqnl.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Empty or excluded log files are not included in the must-gather archive file.

      +
      +
      +
      +
    • +
    +
    +
  • +
+
+
+
Example must-gather archive structure for a VMware migration plan
+
+
must-gather
+└── namespaces
+    ├── target-vm-ns
+    │   ├── crs
+    │   │   ├── datavolume
+    │   │   │   ├── mig-plan-vm-7595-tkhdz.yaml
+    │   │   │   ├── mig-plan-vm-7595-5qvqp.yaml
+    │   │   │   └── mig-plan-vm-8325-xccfw.yaml
+    │   │   └── virtualmachine
+    │   │       ├── test-test-rhel8-2disks2nics.yaml
+    │   │       └── test-x2019.yaml
+    │   └── logs
+    │       ├── importer-mig-plan-vm-7595-tkhdz
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-7595-5qvqp
+    │       │   └── current.log
+    │       ├── importer-mig-plan-vm-8325-xccfw
+    │       │   └── current.log
+    │       ├── mig-plan-vm-7595-4glzd
+    │       │   └── current.log
+    │       └── mig-plan-vm-8325-4zw49
+    │           └── current.log
+    └── openshift-mtv
+        ├── crs
+        │   └── plan
+        │       └── mig-plan-cold.yaml
+        └── logs
+            ├── forklift-controller-67656d574-w74md
+            │   └── current.log
+            └── forklift-must-gather-api-89fc7f4b6-hlwb6
+                └── current.log
+
+
+ + +
+ + diff --git a/modules/common-attributes/index.html b/modules/common-attributes/index.html new file mode 100644 index 00000000000..135ad6e249e --- /dev/null +++ b/modules/common-attributes/index.html @@ -0,0 +1,66 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+ + diff --git a/modules/compatibility-guidelines/index.html b/modules/compatibility-guidelines/index.html new file mode 100644 index 00000000000..cc44cebf406 --- /dev/null +++ b/modules/compatibility-guidelines/index.html @@ -0,0 +1,125 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Software compatibility guidelines

+
+

You must install compatible software versions.

+
+ + ++++++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Compatible software versions
ForkliftOKDKubeVirtVMware vSphereoVirtOpenStack

2.5.1

4.12 or later

4.12 or later

6.5 or later

4.4 SP1 or later

16.1 or later

+
+ + + + + +
+
Note
+
+
Migration from oVirt 4.3
+
+

Forklift 2.5 was tested only with oVirt 4.4 SP1. Migration from oVirt 4.3 has not been tested with Forklift 2.5.

+
+
+

Because oVirt 4.3 lacks the improvements that were introduced in oVirt 4.4 for Forklift, and because new features were not tested with oVirt 4.3, migrations from oVirt 4.3 may not function at the same level as migrations from oVirt 4.4, and some functionality may be missing.

+
+
+

Therefore, it is recommended that you upgrade oVirt to the supported version listed above before migrating to KubeVirt.

+
+
+

However, migrations from oVirt 4.3.11 were tested with Forklift 2.3 and may work in practice in many environments that use Forklift 2.3. In this case, we advise upgrading oVirt Manager to the supported version mentioned previously before migrating to KubeVirt.

+
+
+
+ + +
+ + diff --git a/modules/creating-migration-plan/index.html b/modules/creating-migration-plan/index.html new file mode 100644 index 00000000000..b493ba879d5 --- /dev/null +++ b/modules/creating-migration-plan/index.html @@ -0,0 +1,270 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a migration plan

+
+

You can create a migration plan by using the OKD web console.

+
+
+

A migration plan allows you to group virtual machines that should be migrated together or that share the same migration parameters, for example, a percentage of the members of a cluster or a complete application.

+
+
+

You can configure a hook to run an Ansible playbook or custom container image during a specified stage of the migration plan.

+
+
+
Prerequisites
+
    +
  • +

    If Forklift is not installed on the target cluster, you must add a target provider on the Providers page of the web console.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Plans for virtualization.

    +
  2. +
  3. +

    Click Create plan.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Plan name: Enter a migration plan name to display in the migration plan list.

      +
    • +
    • +

      Plan description: Optional: Brief description of the migration plan.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    • +

      Target namespace: Do one of the following:

      +
      +
        +
      • +

        Select a target namespace from the list

        +
      • +
      • +

        Create a target namespace by typing its name in the text box, and then clicking create "<the_name_you_entered>"

        +
      • +
      +
      +
    • +
    • +

      You can change the migration transfer network for this plan by clicking Select a different network, selecting a network from the list, and then clicking Select.

      +
      +

      If you defined a migration transfer network for the KubeVirt provider and if the network is in the target namespace, the network that you defined is the default network for all migration plans. Otherwise, the pod network is used.

      +
      +
    • +
    +
    +
  6. +
  7. +

    Click Next.

    +
  8. +
  9. +

    Select options to filter the list of source VMs and click Next.

    +
  10. +
  11. +

    Select the VMs to migrate and then click Next.

    +
  12. +
  13. +

    Select an existing network mapping or create a new network mapping.

    +
  14. +
  15. +

    Optional: Click Add to add an additional network mapping.

    +
    +

    To create a new network mapping:

    +
    +
    +
      +
    • +

      Select a target network for each source network.

      +
    • +
    • +

      Optional: Select Save current mapping as a template and enter a name for the network mapping.

      +
    • +
    +
    +
  16. +
  17. +

    Click Next.

    +
  18. +
  19. +

    Select an existing storage mapping, which you can modify, or create a new storage mapping.

    +
    +

    To create a new storage mapping:

    +
    +
    +
      +
    1. +

      If your source provider is VMware, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    +
    +
  20. +
  21. +

    Optional: Select Save current mapping as a template and enter a name for the storage mapping.

    +
  22. +
  23. +

    Click Next.

    +
  24. +
  25. +

    Select a migration type and click Next.

    +
    +
      +
    • +

      Cold migration: The source VMs are stopped while the data is copied.

      +
    • +
    • +

      Warm migration: The source VMs run while the data is copied incrementally. Later, you will run the cutover, which stops the VMs and copies the remaining VM data and metadata.

      +
      + + + + + +
      +
      Note
      +
      +
      +

      Warm migration is supported only from vSphere and oVirt.

      +
      +
      +
      +
    • +
    +
    +
  26. +
  27. +

    Click Next.

    +
  28. +
  29. +

    Optional: You can create a migration hook to run an Ansible playbook before or after migration:

    +
    +
      +
    1. +

      Click Add hook.

      +
    2. +
    3. +

      Select the Step when the hook will be run: pre-migration or post-migration.

      +
    4. +
    5. +

      Select a Hook definition:

      +
      +
        +
      • +

        Ansible playbook: Browse to the Ansible playbook or paste it into the field.

        +
      • +
      • +

        Custom container image: If you do not want to use the default hook-runner image, enter the image path: <registry_path>/<image_name>:<tag>.

        +
        + + + + + +
        +
        Note
        +
        +
        +

        The registry must be accessible to your OKD cluster.

        +
        +
        +
        +
      • +
      +
      +
    6. +
    +
    +
  30. +
  31. +

    Click Next.

    +
  32. +
  33. +

    Review your migration plan and click Finish.

    +
    +

    The migration plan is saved on the Plans page.

    +
    +
    +

    You can click the Options menu (⋮) of the migration plan and select View details to verify the migration plan details.

    +
    +
  34. +
+
+ + +
+ + diff --git a/modules/creating-network-mapping/index.html b/modules/creating-network-mapping/index.html new file mode 100644 index 00000000000..3b885abd028 --- /dev/null +++ b/modules/creating-network-mapping/index.html @@ -0,0 +1,122 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a network mapping

+
+

You can create one or more network mappings by using the OKD web console to map source networks to KubeVirt networks.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    If you map more than one source and target network, each additional KubeVirt network requires its own network attachment definition.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → NetworkMaps for virtualization.

    +
  2. +
  3. +

    Click Create NetworkMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the network mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    Select a Source network and a Target namespace/network.

    +
  8. +
  9. +

    Optional: Click Add to create additional network mappings or to map multiple source networks to a single target network.

    +
  10. +
  11. +

    If you create an additional network mapping, select the network attachment definition as the target network.

    +
  12. +
  13. +

    Click Create.

    +
    +

    The network mapping is displayed on the NetworkMaps screen.

    +
    +
  14. +
+
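You can also verify the result from the CLI; this assumes the NetworkMap custom resource is created in the project you are working with, with placeholder names shown:

$ kubectl get networkmap <network_map> -n <namespace> -o yaml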
+ + +
+ + diff --git a/modules/creating-storage-mapping/index.html b/modules/creating-storage-mapping/index.html new file mode 100644 index 00000000000..9b1830de616 --- /dev/null +++ b/modules/creating-storage-mapping/index.html @@ -0,0 +1,138 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a storage mapping

+
+

You can create a storage mapping by using the OKD web console to map source disk storages to KubeVirt storage classes.

+
+
+
Prerequisites
+
    +
  • +

    Source and target providers added to the OKD web console.

    +
  • +
  • +

    Local and shared persistent storage that supports VM migration.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → StorageMaps for virtualization.

    +
  2. +
  3. +

    Click Create StorageMap.

    +
  4. +
  5. +

    Specify the following fields:

    +
    +
      +
    • +

      Name: Enter a name to display in the storage mappings list.

      +
    • +
    • +

      Source provider: Select a source provider.

      +
    • +
    • +

      Target provider: Select a target provider.

      +
    • +
    +
    +
  6. +
  7. +

    To create a storage mapping, click Add and map storage sources to target storage classes as follows:

    +
    +
      +
    1. +

      If your source provider is VMware vSphere, select a Source datastore and a Target storage class.

      +
    2. +
    3. +

      If your source provider is oVirt, select a Source storage domain and a Target storage class.

      +
    4. +
    5. +

      If your source provider is OpenStack, select a Source volume type and a Target storage class.

      +
    6. +
    7. +

      If your source provider is a set of one or more OVA files, select a Source and a Target storage class for the dummy storage that applies to all virtual disks within the OVA files.

      +
    8. +
    9. +

      If your source provider is KubeVirt, select a Source storage class and a Target storage class.

      +
    10. +
    11. +

      Optional: Click Add to create additional storage mappings, including mapping multiple storage sources to a single target storage class.

      +
    12. +
    +
    +
  8. +
  9. +

    Click Create.

    +
    +

    The mapping is displayed on the StorageMaps page.

    +
    +
  10. +
+
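As a quick CLI check, you can retrieve the generated StorageMap custom resource; the resource name and namespace below are placeholders:

$ kubectl get storagemap <storage_map> -n <namespace> -o yaml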
+ + +
+ + diff --git a/modules/creating-validation-rule/index.html b/modules/creating-validation-rule/index.html new file mode 100644 index 00000000000..aa5dcffab8c --- /dev/null +++ b/modules/creating-validation-rule/index.html @@ -0,0 +1,238 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a validation rule

+
+

You create a validation rule by applying a config map custom resource (CR) containing the rule to the Validation service.

+
+
+ + + + + +
+
Important
+
+
+
    +
  • +

    If you create a rule with the same name as an existing rule, the Validation service performs an OR operation with the rules.

    +
  • +
  • +

    If you create a rule that contradicts a default rule, the Validation service will not start.

    +
  • +
+
+
+
+
+
Validation rule example
+

Validation rules are based on virtual machine (VM) attributes collected by the Provider Inventory service.

+
+
+

For example, the VMware API uses this path to check whether a VMware VM has NUMA node affinity configured: MOR:VirtualMachine.config.extraConfig["numa.nodeAffinity"].

+
+
+

The Provider Inventory service simplifies this configuration and returns a testable attribute with a list value:

+
+
+
+
"numaNodeAffinity": [
+    "0",
+    "1"
+],
+
+
+
+

You create a Rego query, based on this attribute, and add it to the forklift-validation-config config map:

+
+
+
+
count(input.numaNodeAffinity) != 0
+
+
+
+
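For instance, a complete rule built around this query could look like the following sketch, which reuses the package and concern structure from the procedure below; the category and wording are illustrative:

package io.konveyor.forklift.vmware

has_numa_affinity {
    count(input.numaNodeAffinity) != 0
}

concerns[flag] {
    has_numa_affinity
    flag := {
        "category": "Warning",
        "label": "NUMA node affinity detected",
        "assessment": "NUMA node affinity detected on this VM."
    }
}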
Procedure
+
    +
  1. +

    Create a config map CR according to the following example:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: ConfigMap
    +metadata:
    +  name: <forklift-validation-config>
    +  namespace: konveyor-forklift
    +data:
    +  vmware_multiple_disks.rego: |-
    +    package <provider_package> (1)
    +
    +    has_multiple_disks { (2)
    +      count(input.disks) > 1
    +    }
    +
    +    concerns[flag] {
    +      has_multiple_disks (3)
    +        flag := {
    +          "category": "<Information>", (4)
    +          "label": "Multiple disks detected",
    +          "assessment": "Multiple disks detected on this VM."
    +        }
    +    }
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the provider package name. Allowed values are io.konveyor.forklift.vmware for VMware and io.konveyor.forklift.ovirt for oVirt.

      +
    2. +
    3. +

      Specify the concerns name and Rego query.

      +
    4. +
    5. +

      Specify the concerns name and flag parameter values.

      +
    6. +
    7. +

      Allowed values are Critical, Warning, and Information.

      +
    8. +
    +
    +
  2. +
  3. +

    Stop the Validation pod by scaling the forklift-controller deployment to 0:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=0 deployment/forklift-controller
    +
    +
    +
  4. +
  5. +

    Start the Validation pod by scaling the forklift-controller deployment to 1:

    +
    +
    +
    $ kubectl scale -n konveyor-forklift --replicas=1 deployment/forklift-controller
    +
    +
    +
  6. +
  7. +

    Check the Validation pod log to verify that the pod started:

    +
    +
    +
    $ kubectl logs -f <validation_pod>
    +
    +
    +
    +

    If the custom rule conflicts with a default rule, the Validation pod will not start.

    +
    +
  8. +
  9. +

    Remove the source provider:

    +
    +
    +
    $ kubectl delete provider <provider> -n konveyor-forklift
    +
    +
    +
  10. +
  11. +

    Add the source provider to apply the new rule:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <provider>
    +  namespace: konveyor-forklift
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  secret:
    +    name: <secret> (3)
    +    namespace: konveyor-forklift
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    4. +
    5. +

      Specify the name of the provider Secret CR.

      +
    6. +
    +
    +
  12. +
+
+
+

You must update the rules version after creating a custom rule so that the Inventory service detects the changes and validates the VMs.

+
+ + +
+ + diff --git a/modules/creating-vddk-image/index.html b/modules/creating-vddk-image/index.html new file mode 100644 index 00000000000..5fda67a908b --- /dev/null +++ b/modules/creating-vddk-image/index.html @@ -0,0 +1,177 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Creating a VDDK image

+
+

Forklift uses the VMware Virtual Disk Development Kit (VDDK) SDK to transfer virtual disks from VMware vSphere.

+
+
+

You must download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You need the VDDK init image path in order to add a VMware source provider.

+
+
+ + + + + +
+
Note
+
+
+

Storing the VDDK image in a public registry might violate the VMware license terms.

+
+
+
+
+
Prerequisites
+
    +
  • +

    OKD image registry.

    +
  • +
  • +

    podman installed.

    +
  • +
  • +

    If you are using an external registry, KubeVirt must be able to access it.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Create and navigate to a temporary directory:

    +
    +
    +
    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
    +
    +
    +
  2. +
  3. +

    In a browser, navigate to the VMware VDDK version 8 download page.

    +
  4. +
  5. +

    Select version 8.0.1 and click Download.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    In order to migrate to KubeVirt 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.

    +
    +
    +
    +
  6. +
  7. +

    Save the VDDK archive file in the temporary directory.

    +
  8. +
  9. +

    Extract the VDDK archive:

    +
    +
    +
    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
    +
    +
    +
  10. +
  11. +

    Create a Dockerfile:

    +
    +
    +
    $ cat > Dockerfile <<EOF
    +FROM registry.access.redhat.com/ubi8/ubi-minimal
    +USER 1001
    +COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    +RUN mkdir -p /opt
    +ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    +EOF
    +
    +
    +
  12. +
  13. +

    Build the VDDK image:

    +
    +
    +
    $ podman build . -t <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  14. +
  15. +

    Push the VDDK image to the registry:

    +
    +
    +
    $ podman push <registry_route_or_server_path>/vddk:<tag>
    +
    +
    +
  16. +
  17. +

    Ensure that the image is accessible to your KubeVirt environment.

    +
  18. +
+
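When you later add the VMware source provider, you reference this image in the Provider CR, as in the following fragment (shown in full in the CLI migration procedure):

spec:
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>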
+ + +
+ + diff --git a/modules/error-messages/index.html b/modules/error-messages/index.html new file mode 100644 index 00000000000..da35b7ad434 --- /dev/null +++ b/modules/error-messages/index.html @@ -0,0 +1,83 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Error messages

+
+

This section describes error messages and how to resolve them.

+
+
+
warm import retry limit reached
+

The warm import retry limit reached error message is displayed during a warm migration if a VMware virtual machine (VM) has reached the maximum number (28) of changed block tracking (CBT) snapshots during the precopy stage.

+
+
+

To resolve this problem, delete some of the CBT snapshots from the VM and restart the migration plan.

+
+
+
Unable to resize disk image to required size
+

The Unable to resize disk image to required size error message is displayed when migration fails because a virtual machine on the target provider uses persistent volumes with an EXT4 file system on block storage. The problem occurs because the default overhead that is assumed by CDI does not completely include the space reserved for the root partition.

+
+
+

To resolve this problem, increase the file system overhead in CDI to be more than 10%.

+
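For example, you can raise the global overhead by patching the CDI custom resource. This is a sketch that assumes a cluster-scoped CDI CR named cdi; the value is the fraction of each persistent volume reserved as overhead:

$ kubectl patch cdi cdi --type=merge -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.13"}}}}'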
+ + +
+ + diff --git a/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg b/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..999c62adec4 --- /dev/null +++ b/modules/images/136_OpenShift_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_OpenShift_0121 diff --git a/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg b/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..473e21ba4e2 --- /dev/null +++ b/modules/images/136_OpenShift_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_OpenShift_0121 diff --git a/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg b/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg new file mode 100644 index 00000000000..33a031a0909 --- /dev/null +++ b/modules/images/136_Upstream_Migration_Toolkit_0121_mtv-workflow.svg @@ -0,0 +1 @@ +NetworkmappingTargetproviderVirtualmachines1UserVirtual-Machine-Import4MigrationControllerPlan2Migration3StoragemappingSourceprovider136_0121 diff --git a/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg b/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg new file mode 100644 index 00000000000..e73192c0102 --- /dev/null +++ b/modules/images/136_Upstream_Migration_Toolkit_0121_virt-workflow.svg @@ -0,0 +1 @@ +Virtual-Machine-ImportProviderAPIVirtualmachineCDIControllerKubeVirtController<VM_name>podDataVolumeSourceProviderConversionpodPersistentVolumeDynamicallyprovisionedstoragePersistentVolume Claim163438710ProviderCredentialsUserVMdisk29VirtualMachineImportControllerVirtual-Machine-InstanceVirtual-Machine57Importerpod136_0121 diff --git a/modules/images/forklift-logo-darkbg.png b/modules/images/forklift-logo-darkbg.png new file mode 100644 index 00000000000..06e9d1b2494 Binary files /dev/null and b/modules/images/forklift-logo-darkbg.png differ diff --git a/modules/images/forklift-logo-darkbg.svg b/modules/images/forklift-logo-darkbg.svg new file mode 100644 index 00000000000..8a846e6361a --- /dev/null +++ b/modules/images/forklift-logo-darkbg.svg @@ -0,0 +1,164 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/modules/images/forklift-logo-lightbg.png b/modules/images/forklift-logo-lightbg.png new file mode 100644 index 00000000000..8dba83d97f8 Binary files /dev/null and b/modules/images/forklift-logo-lightbg.png differ diff --git a/modules/images/forklift-logo-lightbg.svg b/modules/images/forklift-logo-lightbg.svg new file mode 100644 index 00000000000..a8038cdf923 --- /dev/null +++ b/modules/images/forklift-logo-lightbg.svg @@ -0,0 +1,159 @@ + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/modules/images/kebab.png b/modules/images/kebab.png new file mode 100644 index 00000000000..81893bd4ad1 Binary files /dev/null and b/modules/images/kebab.png differ diff --git a/modules/images/mtv-ui.png b/modules/images/mtv-ui.png new file 
mode 100644 index 00000000000..009c9b46386 Binary files /dev/null and b/modules/images/mtv-ui.png differ diff --git a/modules/increasing-nfc-memory-vmware-host/index.html b/modules/increasing-nfc-memory-vmware-host/index.html new file mode 100644 index 00000000000..35a6fdc7044 --- /dev/null +++ b/modules/increasing-nfc-memory-vmware-host/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Increasing the NFC service memory of an ESXi host

+
+

If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host. Otherwise, the migration will fail because the NFC service memory is limited to 10 parallel connections.

+
+
+
Procedure
+
    +
  1. +

    Log in to the ESXi host as root.

    +
  2. +
  3. +

    Change the value of maxMemory to 1000000000 in /etc/vmware/hostd/config.xml:

    +
    +
    +
    ...
    +      <nfcsvc>
    +         <path>libnfcsvc.so</path>
    +         <enabled>true</enabled>
    +         <maxMemory>1000000000</maxMemory>
    +         <maxStreamMemory>10485760</maxStreamMemory>
    +      </nfcsvc>
    +...
    +
    +
    +
  4. +
  5. +

    Restart hostd:

    +
    +
    +
    # /etc/init.d/hostd restart
    +
    +
    +
    +

    You do not need to reboot the host.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/installing-mtv-operator/index.html b/modules/installing-mtv-operator/index.html new file mode 100644 index 00000000000..9c8dc2fc6cc --- /dev/null +++ b/modules/installing-mtv-operator/index.html @@ -0,0 +1,79 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
Prerequisites
+
    +
  • +

    OKD 4.10 or later installed.

    +
  • +
  • +

    KubeVirt Operator installed on an OpenShift migration target cluster.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin permissions.

    +
  • +
+
+ + +
+ + diff --git a/modules/issue_templates/issue.md b/modules/issue_templates/issue.md new file mode 100644 index 00000000000..30d52ab9cba --- /dev/null +++ b/modules/issue_templates/issue.md @@ -0,0 +1,15 @@ +## Summary + +(Describe the problem. Don't worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.) + +## What is the problem? + +(Paste the text or a screenshot here. Remember to include the **task number** so that we know which module is affected.) + +## What is the solution? + +(Correct text, link, or task.) + +## Notes + +(Do we need to fix something else?) diff --git a/modules/issue_templates/issue/index.html b/modules/issue_templates/issue/index.html new file mode 100644 index 00000000000..a8a9794534d --- /dev/null +++ b/modules/issue_templates/issue/index.html @@ -0,0 +1,79 @@ + + + + + + + + Summary | Forklift Documentation + + + + + + + + + + + + + +Summary | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + +
+

Summary

+ +

(Describe the problem. Don’t worry if the problem occurs in more than one checklist. You only need to mention the checklist where you see a problem. We will fix the module.)

+ +

What is the problem?

+ +

(Paste the text or a screenshot here. Remember to include the task number so that we know which module is affected.)

+ +

What is the solution?

+ +

(Correct text, link, or task.)

+ +

Notes

+ +

(Do we need to fix something else?)

+ + + +
+ + diff --git a/modules/making-open-source-more-inclusive/index.html b/modules/making-open-source-more-inclusive/index.html new file mode 100644 index 00000000000..00caa97834b --- /dev/null +++ b/modules/making-open-source-more-inclusive/index.html @@ -0,0 +1,69 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Making open source more inclusive

+
+

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

+
+ + +
+ + diff --git a/modules/migrating-virtual-machines-cli/index.html b/modules/migrating-virtual-machines-cli/index.html new file mode 100644 index 00000000000..3f7dfc669e4 --- /dev/null +++ b/modules/migrating-virtual-machines-cli/index.html @@ -0,0 +1,549 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migrating virtual machines

+
+

You migrate virtual machines (VMs) from the command line (CLI) by creating Forklift custom resources (CRs).

+
+
+ + + + + +
+
Important
+
+
+

You must specify a name for cluster-scoped CRs.

+
+
+

You must specify both a name and a namespace for namespace-scoped CRs.

+
+
+
+
+


+
+
+ + + + + +
+
Note
+
+
+

Migration using OpenStack source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
Prerequisites
+
    +
  • +

    VMware only: You must have a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters.

    +
  • +
  • +

    oVirt only: If you are migrating a virtual machine with a direct LUN disk, ensure that the nodes in the KubeVirt destination cluster on which the VM is expected to run can access the backend storage.

    +
  • +
+
+
+


+
+
+
Procedure
+
    +
  1. +

    Create a Secret manifest for the source provider credentials:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: v1
    +kind: Secret
    +metadata:
    +  name: <secret>
    +  namespace: <namespace>
    +  ownerReferences: (1)
    +    - apiVersion: forklift.konveyor.io/v1beta1
    +      kind: Provider
    +      name: <provider_name>
    +      uid: <provider_uid>
    +  labels:
    +    createdForProviderType: <provider_type> (2)
    +    createdForResourceType: providers
    +type: Opaque
    +stringData: (3)
    +  user: <user> (4)
    +  password: <password> (5)
    +  insecureSkipVerify: <true/false> (6)
    +  domainName: <domain_name> (7)
    +  projectName: <project_name> (8)
    +  regionName: <region_name> (9)
    +  cacert: | (10)
    +    <ca_certificate>
    +  url: <api_end_point> (11)
    +  thumbprint: <vcenter_fingerprint> (12)
    +EOF
    +
    +
    +
    +
      +
    1. +

      The ownerReferences section is optional.

      +
    2. +
    3. +

      Specify the type of source provider. Allowed values are ovirt, vsphere, openstack, and ova. This label is needed to verify the credentials are correct when the remote system is accessible and, for oVirt, to retrieve the Engine CA certificate when a third-party certificate is specified.

      +
    4. +
    5. +

      The stringData section for OVA is different and is described in a note that follows the description of the Secret manifest.

      +
    6. +
    7. +

      Specify the vCenter user, the oVirt Engine user, or the OpenStack user.

      +
    8. +
    9. +

      Specify the user password.

      +
    10. +
    11. +

      Specify <true> to skip certificate verification, in which case the migration proceeds over an insecure connection and the CA certificate is not required. Insecure migration means that the transferred data is sent over an insecure connection and potentially sensitive data could be exposed. Specifying <false> verifies the certificate.

      +
    12. +
    13. +

      OpenStack only: Specify the domain name.

      +
    14. +
    15. +

      OpenStack only: Specify the project name.

      +
    16. +
    17. +

      OpenStack only: Specify the name of the OpenStack region.

      +
    18. +
    19. +

      oVirt and OpenStack only: For oVirt, enter the Engine CA certificate unless it was replaced by a third-party certificate, in which case enter the Engine Apache CA certificate. You can retrieve the Engine CA certificate at https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA. For OpenStack, enter the CA certificate for connecting to the source environment. The certificate is not used when insecureSkipVerify is set to <true>.

      +
    20. +
    21. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    22. +
    23. +

      VMware only: Specify the vCenter SHA-1 fingerprint.

      +
    24. +
    +
    +
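    One common way to obtain the SHA-1 fingerprint is to read it from the vCenter TLS certificate, for example:

    $ openssl s_client -connect <vcenter_host>:443 </dev/null 2>/dev/null | openssl x509 -fingerprint -sha1 -noout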
    + + + + + +
    +
    Note
    +
    +
    +

    The stringData section for an OVA Secret manifest is as follows:

    +
    +
    +
    +
    stringData:
    +  url: <nfs_server:/nfs_path>
    +
    +
    +
    +

    where:
    +nfs_server: An IP or hostname of the server where the share was created.
    +nfs_path: The path on the server where the OVA files are stored.

    +
    +
    +
    +
  2. +
  3. +

    Create a Provider manifest for the source provider:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Provider
    +metadata:
    +  name: <source_provider>
    +  namespace: <namespace>
    +spec:
    +  type: <provider_type> (1)
    +  url: <api_end_point> (2)
    +  settings:
    +    vddkInitImage: <registry_route_or_server_path>/vddk:<tag> (3)
    +  secret:
    +    name: <secret> (4)
    +    namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ovirt, vsphere, and openstack.

      +
    2. +
    3. +

      Specify the API end point URL, for example, https://<vCenter_host>/sdk for vSphere, https://<engine_host>/ovirt-engine/api for oVirt, or https://<identity_service>/v3 for OpenStack.

      +
    4. +
    5. +

      VMware only: Specify the VDDK image that you created.

      +
    6. +
    7. +

      Specify the name of the provider Secret CR.

      +
    8. +
    +
    +
  4. +
  5. +

    VMware only: Create a Host manifest:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Host
    +metadata:
    +  name: <vmware_host>
    +  namespace: <namespace>
    +spec:
    +  provider:
    +    namespace: <namespace>
    +    name: <source_provider> (1)
    +  id: <source_host_mor> (2)
    +  ipAddress: <source_network_ip> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the VMware Provider CR.

      +
    2. +
    3. +

      Specify the managed object reference (MOR) of the VMware host.

      +
    4. +
    5. +

      Specify the IP address of the VMware migration network.

      +
    6. +
    +
    +
  6. +
  7. +

    Create a NetworkMap manifest to map the source and destination networks:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: NetworkMap
    +metadata:
    +  name: <network_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        name: <network_name>
    +        type: pod (1)
    +      source: (2)
    +        id: <source_network_id> (3)
    +        name: <source_network_name>
    +    - destination:
    +        name: <network_attachment_definition> (4)
    +        namespace: <network_attachment_definition_namespace> (5)
    +        type: multus
    +      source:
    +        id: <source_network_id>
    +        name: <source_network_name>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are pod and multus.

      +
    2. +
    3. +

      You can use either the id or the name parameter to specify the source network.

      +
    4. +
    5. +

      Specify the VMware network MOR, the oVirt network UUID, or the OpenStack network UUID.

      +
    6. +
    7. +

      Specify a network attachment definition for each additional KubeVirt network.

      +
    8. +
    9. +

      Required only when type is multus. Specify the namespace of the KubeVirt network attachment definition.

      +
    10. +
    +
    +
  8. +
  9. +

    Create a StorageMap manifest to map source and destination storage:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: StorageMap
    +metadata:
    +  name: <storage_map>
    +  namespace: <namespace>
    +spec:
    +  map:
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode> (1)
    +      source:
    +        id: <source_datastore> (2)
    +    - destination:
    +        storageClass: <storage_class>
    +        accessMode: <access_mode>
    +      source:
    +        id: <source_datastore>
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +EOF
    +
    +
    +
    +
      +
    1. +

      Allowed values are ReadWriteOnce and ReadWriteMany.

      +
    2. +
    3. +

      Specify the VMware data storage MOR, the oVirt storage domain UUID, or the OpenStack volume_type UUID. For example, f2737930-b567-451a-9ceb-2887f6207009.

      +
    4. +
    +
    +
  10. +
  11. +

    Optional: Create a Hook manifest to run custom code on a VM during the phase specified in the Plan CR:

    +
    +
    +
    $  cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Hook
    +metadata:
    +  name: <hook>
    +  namespace: <namespace>
    +spec:
    +  image: quay.io/konveyor/hook-runner (1)
    +  playbook: | (2)
    +    LS0tCi0gbmFtZTogTWFpbgogIGhvc3RzOiBsb2NhbGhvc3QKICB0YXNrczoKICAtIG5hbWU6IExv
    +    YWQgUGxhbgogICAgaW5jbHVkZV92YXJzOgogICAgICBmaWxlOiAiL3RtcC9ob29rL3BsYW4ueW1s
    +    IgogICAgICBuYW1lOiBwbGFuCiAgLSBuYW1lOiBMb2FkIFdvcmtsb2FkCiAgICBpbmNsdWRlX3Zh
    +    cnM6CiAgICAgIGZpbGU6ICIvdG1wL2hvb2svd29ya2xvYWQueW1sIgogICAgICBuYW1lOiB3b3Jr
    +    bG9hZAoK
    +EOF
    +
    +
    +
    +
      +
    1. +

      You can use the default hook-runner image or specify a custom image. If you specify a custom image, you do not have to specify a playbook.

      +
    2. +
    3. +

      Optional: Base64-encoded Ansible playbook. If you specify a playbook, the image must be hook-runner.

      +
    4. +
    +
    +
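    To author your own playbook, base64-encode it before pasting it into the manifest; to inspect a stored playbook, decode it from the Hook CR. The -w0 flag (no line wrapping) is GNU coreutils syntax:

    $ base64 -w0 playbook.yml
    $ kubectl get hook/<hook> -n <namespace> -o jsonpath='{.spec.playbook}' | base64 -d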
  12. +
  13. +

    Create a Plan manifest for the migration:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Plan
    +metadata:
    +  name: <plan> (1)
    +  namespace: <namespace>
    +spec:
    +  warm: true (2)
    +  provider:
    +    source:
    +      name: <source_provider>
    +      namespace: <namespace>
    +    destination:
    +      name: <destination_provider>
    +      namespace: <namespace>
    +  map: (3)
    +    network: (4)
    +      name: <network_map> (5)
    +      namespace: <namespace>
    +    storage: (6)
    +      name: <storage_map> (7)
    +      namespace: <namespace>
    +  targetNamespace: <target_namespace>
    +  vms: (8)
    +    - id: <source_vm> (9)
    +    - name: <source_vm>
    +      namespace: <namespace> (10)
    +      hooks: (11)
    +        - hook:
    +            namespace: <namespace>
    +            name: <hook> (12)
    +          step: <step> (13)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Plan CR.

      +
    2. +
    3. +

      Specify whether the migration is warm or cold. If you specify a warm migration without specifying a value for the cutover parameter in the Migration manifest, only the precopy stage will run.

      +
    4. +
    5. +

      Specify only one network map and one storage map per plan.

      +
    6. +
    7. +

      Specify a network mapping even if the VMs to be migrated are not assigned to a network. The mapping can be empty in this case.

      +
    8. +
    9. +

      Specify the name of the NetworkMap CR.

      +
    10. +
    11. +

      Specify a storage mapping even if the VMs to be migrated are not assigned with disk images. The mapping can be empty in this case.

      +
    12. +
    13. +

      Specify the name of the StorageMap CR.

      +
    14. +
    15. +

      For all source providers except for KubeVirt, you can use either the id or the name parameter to specify the source VMs.
      +KubeVirt source provider only: You can use only the name parameter, not the id parameter, to specify the source VMs.

      +
    16. +
    17. +

      Specify the VMware VM MOR, the oVirt VM UUID, or the OpenStack VM UUID.

      +
    18. +
    19. +

      KubeVirt source provider only.

      +
    20. +
    21. +

      Optional: You can specify up to two hooks for a VM. Each hook must run during a separate migration step.

      +
    22. +
    23. +

      Specify the name of the Hook CR.

      +
    24. +
    25. +

      Allowed values are PreHook, before the migration plan starts, or PostHook, after the migration is complete.

      +
    26. +
    +
    +
  14. +
  15. +

    Create a Migration manifest to run the Plan CR:

    +
    +
    +
    $ cat << EOF | kubectl apply -f -
    +apiVersion: forklift.konveyor.io/v1beta1
    +kind: Migration
    +metadata:
    +  name: <migration> (1)
    +  namespace: <namespace>
    +spec:
    +  plan:
    +    name: <plan> (2)
    +    namespace: <namespace>
    +  cutover: <cutover_time> (3)
    +EOF
    +
    +
    +
    +
      +
    1. +

      Specify the name of the Migration CR.

      +
    2. +
    3. +

      Specify the name of the Plan CR that you are running. The Migration CR creates a VirtualMachine CR for each VM that is migrated.

      +
    4. +
    5. +

      Optional: Specify a cutover time according to the ISO 8601 format with the UTC time offset, for example, 2021-04-04T01:23:45.678+09:00.

      +
    6. +
    +
    +
    +

    You can associate multiple Migration CRs with a single Plan CR. If a migration does not complete, you can create a new Migration CR, without changing the Plan CR, to migrate the remaining VMs.

    +
    +
  16. +
  17. +

    Retrieve the Migration CR to monitor the progress of the migration:

    +
    +
    +
    $ kubectl get migration/<migration> -n <namespace> -o yaml
    +
    +
    +
  18. +
+
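If you prefer to block until the migration finishes, you can wait on a Migration CR condition. This sketch assumes the controller sets a Succeeded condition on completion, which can vary by Forklift version:

$ kubectl wait migration/<migration> -n <namespace> --for=condition=Succeeded --timeout=60m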
+ + +
+ + diff --git a/modules/migration-plan-options-ui/index.html b/modules/migration-plan-options-ui/index.html new file mode 100644 index 00000000000..68b4779e4e0 --- /dev/null +++ b/modules/migration-plan-options-ui/index.html @@ -0,0 +1,141 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Migration plan options

+
+

On the Plans for virtualization page of the OKD web console, you can click the Options menu (⋮) beside a migration plan to access the following options:

+
+
+
    +
  • +

    Get logs: Retrieves the logs of a migration. When you click Get logs, a confirmation window opens. After you click Get logs in the window, wait until Get logs changes to Download logs and then click the button to download the logs.

    +
  • +
  • +

    Edit: Edit the details of a migration plan. You cannot edit a migration plan while it is running or after it has completed successfully.

    +
  • +
  • +

    Duplicate: Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks:

    +
    +
      +
    • +

      Migrate VMs to a different namespace.

      +
    • +
    • +

      Edit an archived migration plan.

      +
    • +
    • +

      Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready.

      +
    • +
    +
    +
  • +
  • +

    Archive: Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Archive option is irreversible. However, you can duplicate an archived plan.

    +
    +
    +
    +
  • +
  • +

    Delete: Permanently remove a migration plan. You cannot delete a running migration plan.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The Delete option is irreversible.

    +
    +
    +

    Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it in order to clean up the temporary resources.

    +
    +
    +
    +
  • +
  • +

    View details: Display the details of a migration plan.

    +
  • +
  • +

    Restart: Restart a failed or canceled migration plan.

    +
  • +
  • +

    Cancel scheduled cutover: Cancel a scheduled cutover migration for a warm migration plan.

    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-overview-page/index.html b/modules/mtv-overview-page/index.html new file mode 100644 index 00000000000..9290ccc7abe --- /dev/null +++ b/modules/mtv-overview-page/index.html @@ -0,0 +1,142 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV Overview page

+
+

The Forklift Overview page displays system-wide information about migrations and a list of Settings you can change.

+
+
+

If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the OKD web console.

+
+
+

The Overview page displays the following information:

+
+
+
    +
  • +

    Migrations: The number of migrations performed using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Virtual Machine Migrations: The number of VMs migrated using Forklift:

    +
    +
      +
    • +

      Total

      +
    • +
    • +

      Running

      +
    • +
    • +

      Failed

      +
    • +
    • +

      Succeeded

      +
    • +
    • +

      Canceled

      +
    • +
    +
    +
  • +
  • +

    Operator: The namespace in which the Forklift Operator is deployed and the status of the Operator.

    +
  • +
  • +

    Conditions: Status of the Forklift Operator:

    +
    +
      +
    • +

      Failure: Last failure. False indicates no failure since deployment.

      +
    • +
    • +

      Running: Whether the Operator is currently running and waiting for the next reconciliation.

      +
    • +
    • +

      Successful: Last successful reconciliation.

      +
    • +
    +
    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-resources-and-services/index.html b/modules/mtv-resources-and-services/index.html new file mode 100644 index 00000000000..90a26606516 --- /dev/null +++ b/modules/mtv-resources-and-services/index.html @@ -0,0 +1,131 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift custom resources and services

+
+

Forklift is provided as an OKD Operator. It creates and manages the following custom resources (CRs) and services.

+
+
+
Forklift custom resources
+
    +
  • +

    Provider CR stores attributes that enable Forklift to connect to and interact with the source and target providers.

    +
  • +
  • +

    NetworkMapping CR maps the networks of the source and target providers.

    +
  • +
  • +

    StorageMapping CR maps the storage of the source and target providers.

    +
  • +
  • +

    Plan CR contains a list of VMs with the same migration parameters and associated network and storage mappings.

    +
  • +
  • +

    Migration CR runs a migration plan.

    +
    +

    Only one Migration CR per migration plan can run at a given time. You can create multiple Migration CRs for a single Plan CR.

    +
    +
  • +
+
+
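You can list these resources with standard tooling; the plural resource names below assume the forklift.konveyor.io CRDs are installed on the cluster:

$ kubectl get providers,networkmaps,storagemaps,plans,migrations -n <namespace>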
+
Forklift services
+
    +
  • +

    The Inventory service performs the following actions:

    +
    +
      +
    • +

      Connects to the source and target providers.

      +
    • +
    • +

      Maintains a local inventory for mappings and plans.

      +
    • +
    • +

      Stores VM configurations.

      +
    • +
    • +

      Runs the Validation service if a VM configuration change is detected.

      +
    • +
    +
    +
  • +
  • +

    The Validation service checks the suitability of a VM for migration by applying rules.

    +
  • +
  • +

    The Migration Controller service orchestrates migrations.

    +
    +

    When you create a migration plan, the Migration Controller service validates the plan and adds a status label. If the plan fails validation, the plan status is Not ready and the plan cannot be used to perform a migration. If the plan passes validation, the plan status is Ready and it can be used to perform a migration. After a successful migration, the Migration Controller service changes the plan status to Completed.

    +
    +
  • +
  • +

    The Populator Controller service orchestrates disk transfers using Volume Populators.

    +
  • +
  • +

    The KubeVirt Controller and the Containerized Data Importer (CDI) Controller services handle most technical operations.

    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-settings/index.html b/modules/mtv-settings/index.html new file mode 100644 index 00000000000..18a0c08df4c --- /dev/null +++ b/modules/mtv-settings/index.html @@ -0,0 +1,133 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Configuring MTV settings

+
+

If you have Administrator privileges, you can access the Overview page and change the following settings in it:

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Forklift settings
SettingDescriptionDefault value

Max concurrent virtual machine migrations

The maximum number of VMs per plan that can be migrated simultaneously

20

Must gather cleanup after (hours)

The duration for retaining must gather reports before they are automatically deleted

Disabled

Controller main container CPU limit

The CPU limit allocated to the main controller container

500 m

Controller main container Memory limit

The memory limit allocated to the main controller container

800 Mi

Precopy interval (minutes)

The interval at which a new snapshot is requested before initiating a warm migration

60

Snapshot polling interval (seconds)

The frequency with which the system checks the status of snapshot creation or removal during warm migration

10

+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration → Overview. The Settings list is on the right-hand side of the page.

    +
  2. +
  3. +

    In the Settings list, click the Edit icon of the setting you want to change.

    +
  4. +
  5. +

    Choose a setting from the list.

    +
  6. +
  7. +

    Click Save.

    +
  8. +
+
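The same settings map to fields on the ForkliftController CR, so they can also be changed from the CLI. The following sketch patches the maximum number of concurrent VM migrations; the key name controller_max_vm_inflight is an assumption based on the operator's naming conventions:

$ kubectl patch forkliftcontroller/<forklift-controller> -n konveyor-forklift -p '{"spec": {"controller_max_vm_inflight": 30}}' --type=merge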
+ + +
+ + diff --git a/modules/mtv-ui/index.html b/modules/mtv-ui/index.html new file mode 100644 index 00000000000..3431fbd7a23 --- /dev/null +++ b/modules/mtv-ui/index.html @@ -0,0 +1,91 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

The MTV user interface

+
+

The Forklift user interface is integrated into the OKD web console.

+
+
+

In the left-hand panel, you can choose a page related to a component of the migration process, for example, Providers for Migration. If you are an administrator, you can also choose Overview, which contains information about migrations and lets you configure Forklift settings.

+
+
+
+Forklift user interface +
+
Figure 1. Forklift extension interface
+
+
+

In pages related to components, you can click the Projects list in the upper-left portion of the page to see which projects (namespaces) you are allowed to work with.

+
+
+
    +
  • +

    If you are an administrator, you can see all projects.

    +
  • +
  • +

    If you are a non-administrator, you can see only the projects that you have permissions to work with.

    +
  • +
+
+ + +
+ + diff --git a/modules/mtv-workflow/index.html b/modules/mtv-workflow/index.html new file mode 100644 index 00000000000..95e93cacfb0 --- /dev/null +++ b/modules/mtv-workflow/index.html @@ -0,0 +1,113 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

High-level migration workflow

+
+

The high-level workflow shows the migration process from the point of view of the user:

+
+
+
    +
  1. +

    You create a source provider, a target provider, a network mapping, and a storage mapping.

    +
  2. +
  3. +

    You create a Plan custom resource (CR) that includes the following resources:

    +
    +
      +
    • +

      Source provider

      +
    • +
    • +

      Target provider, if Forklift is not installed on the target cluster

      +
    • +
    • +

      Network mapping

      +
    • +
    • +

      Storage mapping

      +
    • +
    • +

      One or more virtual machines (VMs)

      +
    • +
    +
    +
  4. +
  5. +

    You run a migration plan by creating a Migration CR that references the Plan CR (see the sketch after this list).

    +
    +

    If you cannot migrate all the VMs for any reason, you can create multiple Migration CRs for the same Plan CR until all VMs are migrated.

    +
    +
  6. +
  7. +

    For each VM in the Plan CR, the Migration Controller service records the VM migration progress in the Migration CR.

    +
  8. +
  9. +

    Once the data transfer for each VM in the Plan CR completes, the Migration Controller service creates a VirtualMachine CR.

    +
    +

    When all VMs have been migrated, the Migration Controller service updates the status of the Plan CR to Completed. The power state of each source VM is maintained after migration.

    +
    +
  10. +
+
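A minimal Migration CR might look like the following sketch, which follows the manifest conventions used elsewhere in this document; the CR names and the openshift-mtv namespace are placeholders:

cat << EOF | oc apply -f -
+apiVersion: forklift.konveyor.io/v1beta1
+kind: Migration
+metadata:
+  name: migration-example
+  namespace: openshift-mtv
+spec:
+  plan:
+    name: plan-example
+    namespace: openshift-mtv
+EOF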
+ + +
+ + diff --git a/modules/network-prerequisites/index.html b/modules/network-prerequisites/index.html new file mode 100644 index 00000000000..9f5a2e5a0b6 --- /dev/null +++ b/modules/network-prerequisites/index.html @@ -0,0 +1,196 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Network prerequisites

+
+
+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    IP addresses, VLANs, and other network configuration settings must not be changed before or during migration. The MAC addresses of the virtual machines are preserved during migration.

    +
  • +
  • +

    The network connections between the source environment, the KubeVirt cluster, and the replication repository must be reliable and uninterrupted.

    +
  • +
  • +

    If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network (see the sketch after this list).

    +
  • +
+
+
+
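A network attachment definition is a NetworkAttachmentDefinition CR. The following is a minimal Linux bridge sketch; the name, namespace, and bridge device (br1) are illustrative and depend on your cluster:

cat << EOF | oc apply -f -
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: vlan-example
+  namespace: openshift-mtv
+spec:
+  config: |
+    {
+      "cniVersion": "0.3.1",
+      "name": "vlan-example",
+      "type": "cnv-bridge",
+      "bridge": "br1"
+    }
+EOF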
+
+

Ports

+
+
+

The firewalls must enable traffic over the following ports:

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Network ports required for migrating from VMware vSphere
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

VMware vCenter

+

VMware provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer authentication

+

902

TCP

OpenShift nodes

VMware ESXi hosts

+

Disk transfer data copy

+
+ + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Network ports required for migrating from oVirt
PortProtocolSourceDestinationPurpose

443

TCP

OpenShift nodes

oVirt Engine

+

oVirt provider inventory

+
+
+

Disk transfer authentication

+

443

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer authentication

+

54322

TCP

OpenShift nodes

oVirt hosts

+

Disk transfer data copy

+
+
+
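To spot-check that the firewalls allow this traffic before starting a migration, you can probe the ports from a host on the same network path as the cluster nodes. These commands are illustrative; substitute your own host names:

$ curl -k -s -o /dev/null -w '%{http_code}\n' https://<vcenter_host>:443/sdk
+$ timeout 5 bash -c '</dev/tcp/<esxi_host>/902' && echo "port 902 open"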
+ + +
+ + diff --git a/modules/non-admin-permissions-for-ui/index.html b/modules/non-admin-permissions-for-ui/index.html new file mode 100644 index 00000000000..5a3bc4aaa2c --- /dev/null +++ b/modules/non-admin-permissions-for-ui/index.html @@ -0,0 +1,187 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Permissions needed by non-administrators to work with migration plan components

+
+

If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).

+
+
+

By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.

+
+
+

For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + +
Table 1. Example migration plan roles and their privileges
RoleDescription

plans.forklift.konveyor.io-v1beta1-view

Can view migration plans but not create, delete, or modify them

plans.forklift.konveyor.io-v1beta1-edit

Can create, delete, or modify (all parts of edit permissions) individual migration plans

plans.forklift.konveyor.io-v1beta1-admin

All edit privileges and the ability to delete the entire collection of migration plans

+
+

Note that pre-defined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1), and an action (for example, view, edit).

+
+
+
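For example, to bind one of these cluster roles to a user within a single namespace, an administrator can run a command like the following; the user name and namespace are placeholders:

$ oc adm policy add-role-to-user \
+    plans.forklift.konveyor.io-v1beta1-edit <user_name> -n <namespace>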

As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:

+
+
+
    +
  • +

    Create and modify storage maps, network maps, and migration plans for the namespaces they have access to

    +
  • +
  • +

    Attach providers created by administrators to storage maps, network maps, and migration plans

    +
  • +
  • +

    Not create providers or change system settings

    +
  • +
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2. Example permissions required for non-administrators to work with migration plan components but not create providers
ActionsAPI groupResource

get, list, watch, create, update, patch, delete

forklift.konveyor.io

plans

get, list, watch, create, update, patch, delete

forklift.konveyor.io

migrations

get, list, watch, create, update, patch, delete

forklift.konveyor.io

hooks

get, list, watch

forklift.konveyor.io

providers

get, list, watch, create, update, patch, delete

forklift.konveyor.io

networkmaps

get, list, watch, create, update, patch, delete

forklift.konveyor.io

storagemaps

get, list, watch

forklift.konveyor.io

forkliftcontrollers

+
+ + + + + +
+
Note
+
+
+

To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.

+
+
+
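The permissions in the table above can be expressed as a namespaced Role. The following sketch assumes the forklift.konveyor.io API group; the role name and namespace are illustrative:

cat << EOF | oc apply -f -
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: migration-plan-editor
+  namespace: <namespace>
+rules:
+- apiGroups: ["forklift.konveyor.io"]
+  resources: ["plans", "migrations", "hooks", "networkmaps", "storagemaps"]
+  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+- apiGroups: ["forklift.konveyor.io"]
+  resources: ["providers", "forkliftcontrollers"]
+  verbs: ["get", "list", "watch"]
+EOF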
+ + +
+ + diff --git a/modules/obtaining-console-url/index.html b/modules/obtaining-console-url/index.html new file mode 100644 index 00000000000..d7d9665812c --- /dev/null +++ b/modules/obtaining-console-url/index.html @@ -0,0 +1,107 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Getting the Forklift web console URL

+
+

You can get the Forklift web console URL at any time by using either the OKD web console, or the command line.

+
+
+
Prerequisites
+
    +
  • +

    KubeVirt Operator installed.

    +
  • +
  • +

    Forklift Operator installed.

    +
  • +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  • +

    If you are using the OKD web console, follow these steps:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_web.adoc[]

+
+
+
    +
  • +

    If you are using the command line, get the Forklift web console URL with the following command:

    +
  • +
+
+
+

Unresolved directive in obtaining-console-url.adoc - include::snippet_getting_web_console_url_cli.adoc[]

+
+
+
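As a sketch, assuming the web console route is named virt and Forklift is installed in the openshift-mtv namespace, the URL can be retrieved with:

$ oc get route virt -n openshift-mtv \
+    -o custom-columns=:.spec.host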

You can now launch a browser and navigate to the Forklift web console.

+
+ + +
+ + diff --git a/modules/obtaining-vmware-fingerprint/index.html b/modules/obtaining-vmware-fingerprint/index.html new file mode 100644 index 00000000000..0757c3508c7 --- /dev/null +++ b/modules/obtaining-vmware-fingerprint/index.html @@ -0,0 +1,99 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Obtaining the SHA-1 fingerprint of a vCenter host

+
+

You must obtain the SHA-1 fingerprint of a vCenter host to create a Secret CR.

+
+
+
Procedure
+
    +
  • +

    Run the following command:

    +
    +
    +
    $ openssl s_client \
    +    -connect <vcenter_host>:443 \ (1)
    +    < /dev/null 2>/dev/null \
    +    | openssl x509 -fingerprint -noout -in /dev/stdin \
    +    | cut -d '=' -f 2
    +
    +
    +
    +
      +
    1. +

      Specify the IP address or FQDN of the vCenter host.

      +
    2. +
    +
    +
    +
    Example output
    +
    +
    01:23:45:67:89:AB:CD:EF:01:23:45:67:89:AB:CD:EF:01:23:45:67
    +
    +
    +
  • +
+
+ + +
+ + diff --git a/modules/openstack-prerequisites/index.html b/modules/openstack-prerequisites/index.html new file mode 100644 index 00000000000..874ed2e23ae --- /dev/null +++ b/modules/openstack-prerequisites/index.html @@ -0,0 +1,90 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

OpenStack prerequisites

+
+

The following prerequisites apply to {osp} migrations:

+
+
+ +
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+ + +
+ + diff --git a/modules/osh-adding-source-provider/index.html b/modules/osh-adding-source-provider/index.html new file mode 100644 index 00000000000..e0018d73108 --- /dev/null +++ b/modules/osh-adding-source-provider/index.html @@ -0,0 +1,137 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Adding an {osp} source provider

+
+

You can add an {osp} source provider by using the OKD web console.

+
+
+ + + + + +
+
Note
+
+
+

Migration using {osp} source providers supports only VMs that use Cinder volumes.

+
+
+
+
+
Procedure
+
    +
  1. +

    In the OKD web console, click Migration > Providers for virtualization.

    +
  2. +
  3. +

    Click Create Provider.

    +
  4. +
  5. +

    Select Red Hat OpenStack Platform from the Provider type list.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Provider name: Name to display in the list of providers

      +
    • +
    • +

      {osp} Identity server URL: {osp} Identity (Keystone) endpoint, for example, http://controller:5000/v3

      +
    • +
    • +

      {osp} username: For example, admin

      +
    • +
    • +

      {osp} password:

      +
    • +
    • +

      Domain:

      +
    • +
    • +

      Project:

      +
    • +
    • +

      Region:

      +
    • +
    +
    +
  8. +
  9. +

    To allow a migration without validating the provider’s CA certificate, select the Skip certificate validation check box. By default, the check box is cleared, meaning that the certificate will be validated.

    +
  10. +
  11. +

    If you did not select Skip certificate validation, the CA certificate field is visible. Drag the CA certificate used to connect to the source environment to the text box or browse for it and click Select. If you did select the check box, the CA certificate text box is not visible.

    +
  12. +
  13. +

    Click Create to add and save the provider.

    +
    +

    The source provider appears in the list of providers.

    +
    +
  14. +
+
+ + +
+ + diff --git a/modules/ostack-app-cred-auth/index.html b/modules/ostack-app-cred-auth/index.html new file mode 100644 index 00000000000..e987c9be706 --- /dev/null +++ b/modules/ostack-app-cred-auth/index.html @@ -0,0 +1,189 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using application credential authentication with an {osp} source provider

+
+

You can use application credential authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of application credential authentication:

+
+
+
    +
  • +

    Application credential ID

    +
  • +
  • +

    Application credential name

    +
  • +
+
+
+

For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for application credential authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for application credential authentication, run the following command:

    +
    +
    +
    $ openstack application credential create --role member --role reader --secret redhat forklift
    +
    +
    +
    +

    The output, referred to here as <openstack_credential_output>, includes:

    +
    +
    +
      +
    • +

      The id and secret that you need for authentication using an application credential ID

      +
    • +
    • +

      The name and secret that you need for authentication using an application credential name

      +
    • +
    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using the application credential ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialID: <id_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using the application credential name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-appname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: applicationcredential
      +  applicationCredentialName: <name_from_openstack_credential_output>
      +  applicationCredentialSecret: <secret_from_openstack_credential_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/modules/ostack-token-auth/index.html b/modules/ostack-token-auth/index.html new file mode 100644 index 00000000000..204bb244f3f --- /dev/null +++ b/modules/ostack-token-auth/index.html @@ -0,0 +1,180 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using token authentication with an {osp} source provider

+
+

You can use token authentication, instead of username and password authentication, when you create an {osp} source provider.

+
+
+

Forklift supports both of the following types of token authentication:

+
+
+
    +
  • +

    Token with user ID

    +
  • +
  • +

    Token with user name

    +
  • +
+
+
+

For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.

+
+
+
Prerequisites
+

You have an {osp} account.

+
+
+
Procedure
+
    +
  1. +

    In the dashboard of the {osp} web console, click Project > API Access.

    +
  2. +
  3. +

    Expand Download OpenStack RC file and click OpenStack RC file.

    +
    +

    The file that is downloaded, referred to here as <openstack_rc_file>, includes the following fields used for token authentication:

    +
    +
    +
    +
    OS_AUTH_URL
    +OS_PROJECT_ID
    +OS_PROJECT_NAME
    +OS_DOMAIN_NAME
    +OS_USERNAME
    +
    +
    +
  4. +
  5. +

    To get the data needed for token authentication, run the following command:

    +
    +
    +
    $ openstack token issue
    +
    +
    +
    +

    The output, referred to here as <openstack_token_output>, includes the token, userID, and projectID that you need for authentication using a token with user ID.

    +
    +
  6. +
  7. +

    Create a Secret manifest similar to the following:

    +
    +
      +
    • +

      For authentication using a token with user ID:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenid
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  projectID: <projectID_from_openstack_token_output>
      +  userID: <userID_from_openstack_token_output>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    • +

      For authentication using a token with user name:

      +
      +
      +
      cat << EOF | oc apply -f -
      +apiVersion: v1
      +kind: Secret
      +metadata:
      +  name: openstack-secret-tokenname
      +  namespace: openshift-mtv
      +  labels:
      +    createdForProviderType: openstack
      +type: Opaque
      +stringData:
      +  authType: token
      +  token: <token_from_openstack_token_output>
      +  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
      +  projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
      +  username: <OS_USERNAME_from_openstack_rc_file>
      +  url: <OS_AUTH_URL_from_openstack_rc_file>
      +EOF
      +
      +
      +
    • +
    +
    +
  8. +
  9. +

    Continue migrating your virtual machine according to the procedure in Migrating virtual machines, starting with step 2, "Create a Provider manifest for the source provider."

    +
  10. +
+
+ + +
+ + diff --git a/modules/ova-prerequisites/index.html b/modules/ova-prerequisites/index.html new file mode 100644 index 00000000000..05b17620c44 --- /dev/null +++ b/modules/ova-prerequisites/index.html @@ -0,0 +1,130 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Open Virtual Appliance (OVA) prerequisites

+
+

The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:

+
+
+
    +
  • +

    All OVA files are created by VMware vSphere.

    +
  • +
+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift; Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+
    +
  • +

    The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

    +
    +
      +
    • +

      In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

      +
      +

      The filename of each compressed package must have the .ova extension. Several compressed packages can be stored in the same folder.

      +
      +
      +

      When this structure is used, Forklift scans the root folder and the first-level subfolders for compressed packages.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The folder /nfs is scanned.
      +The folder /nfs/subfolder1 is scanned.
      +But /nfs/subfolder1/subfolder2 is not scanned.

      +
      +
    • +
    • +

      In extracted OVF packages.

      +
      +

      When this structure is used, Forklift scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. +However, there can be only one .ovf file in a folder. Otherwise, the migration will fail.

      +
      +
      +

      For example, if the NFS share is /nfs, then:
      +The OVF file /nfs/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/vm.ovf is scanned.
      +The OVF file /nfs/subfolder1/subfolder2/vm.ovf is scanned.
      +But the OVF file /nfs/subfolder1/subfolder2/subfolder3/vm.ovf is not scanned.

      +
      +
    • +
    +
    +
  • +
+
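Combining the two structures, a share laid out as follows would be handled as indicated; the file names are illustrative:

/nfs/vm1.ova                           scanned (compressed package, root folder)
+/nfs/subfolder1/vm2.ova                scanned (compressed package, first level)
+/nfs/subfolder1/subfolder2/vm3.ova     not scanned (compressed package, second level)
+/nfs/vm4.ovf                           scanned (extracted package, root folder)
+/nfs/subfolder1/subfolder2/vm5.ovf     scanned (extracted package, second level)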
+ + +
+ + diff --git a/modules/retrieving-validation-service-json/index.html b/modules/retrieving-validation-service-json/index.html new file mode 100644 index 00000000000..cffdac43e0b --- /dev/null +++ b/modules/retrieving-validation-service-json/index.html @@ -0,0 +1,483 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Retrieving the Inventory service JSON

+
+

You retrieve the Inventory service JSON by sending an Inventory service query to a virtual machine (VM). The output contains an "input" key, which contains the inventory attributes that are queried by the Validation service rules.

+
+
+

You can create a validation rule based on any attribute in the "input" key, for example, input.snapshot.kind. A sketch of such a rule appears after the procedure below.

+
+
+
Procedure
+
    +
  1. +

    Retrieve the routes for the project:

    +
    +
    +
    $ oc get route -n openshift-mtv
    +
    +
    +
  2. +
  3. +

    Retrieve the Inventory service route:

    +
    +
    +
    $ kubectl get route <inventory_service> -n konveyor-forklift
    +
    +
    +
  4. +
  5. +

    Retrieve the access token:

    +
    +
    +
    $ TOKEN=$(oc whoami -t)
    +
    +
    +
  6. +
  7. +

    Trigger an HTTP GET request (for example, using curl):

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN" https://<inventory_service_route>/providers -k
    +
    +
    +
  8. +
  9. +

    Retrieve the UUID of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider> -k (1)
    +
    +
    +
    +
      +
    1. +

      Allowed values for the provider are vsphere, ovirt, and openstack.

      +
    2. +
    +
    +
  10. +
  11. +

    Retrieve the VMs of a provider:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/vms -k
    +
    +
    +
  12. +
  13. +

    Retrieve the details of a VM:

    +
    +
    +
    $ curl -H "Authorization: Bearer $TOKEN"  https://<inventory_service_route>/providers/<provider>/<UUID>/workloads/<vm> -k
    +
    +
    +
    +
    Example output
    +
    +
    {
    +    "input": {
    +        "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/workloads/vm-431",
    +        "id": "vm-431",
    +        "parent": {
    +            "kind": "Folder",
    +            "id": "group-v22"
    +        },
    +        "revision": 1,
    +        "name": "iscsi-target",
    +        "revisionValidated": 1,
    +        "isTemplate": false,
    +        "networks": [
    +            {
    +                "kind": "Network",
    +                "id": "network-31"
    +            },
    +            {
    +                "kind": "Network",
    +                "id": "network-33"
    +            }
    +        ],
    +        "disks": [
    +            {
    +                "key": 2000,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 17179869184,
    +                "shared": false,
    +                "rdm": false
    +            },
    +            {
    +                "key": 2001,
    +                "file": "[iSCSI_Datastore] iscsi-target/iscsi-target_1-000001.vmdk",
    +                "datastore": {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                },
    +                "capacity": 10737418240,
    +                "shared": false,
    +                "rdm": false
    +            }
    +        ],
    +        "concerns": [],
    +        "policyVersion": 5,
    +        "uuid": "42256329-8c3a-2a82-54fd-01d845a8bf49",
    +        "firmware": "bios",
    +        "powerState": "poweredOn",
    +        "connectionState": "connected",
    +        "snapshot": {
    +            "kind": "VirtualMachineSnapshot",
    +            "id": "snapshot-3034"
    +        },
    +        "changeTrackingEnabled": false,
    +        "cpuAffinity": [
    +            0,
    +            2
    +        ],
    +        "cpuHotAddEnabled": true,
    +        "cpuHotRemoveEnabled": false,
    +        "memoryHotAddEnabled": false,
    +        "faultToleranceEnabled": false,
    +        "cpuCount": 2,
    +        "coresPerSocket": 1,
    +        "memoryMB": 2048,
    +        "guestName": "Red Hat Enterprise Linux 7 (64-bit)",
    +        "balloonedMemory": 0,
    +        "ipAddress": "10.19.2.96",
    +        "storageUsed": 30436770129,
    +        "numaNodeAffinity": [
    +            "0",
    +            "1"
    +        ],
    +        "devices": [
    +            {
    +                "kind": "RealUSBController"
    +            }
    +        ],
    +        "host": {
    +            "id": "host-29",
    +            "parent": {
    +                "kind": "Cluster",
    +                "id": "domain-c26"
    +            },
    +            "revision": 1,
    +            "name": "IP address or host name of the vCenter host or oVirt Engine host",
    +            "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/hosts/host-29",
    +            "status": "green",
    +            "inMaintenance": false,
    +            "managementServerIp": "10.19.2.96",
    +            "thumbprint": <thumbprint>,
    +            "timezone": "UTC",
    +            "cpuSockets": 2,
    +            "cpuCores": 16,
    +            "productName": "VMware ESXi",
    +            "productVersion": "6.5.0",
    +            "networking": {
    +                "pNICs": [
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic0",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic1",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic2",
    +                        "linkSpeed": 10000
    +                    },
    +                    {
    +                        "key": "key-vim.host.PhysicalNic-vmnic3",
    +                        "linkSpeed": 10000
    +                    }
    +                ],
    +                "vNICs": [
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk2",
    +                        "portGroup": "VM_Migration",
    +                        "dPortGroup": "",
    +                        "ipAddress": "192.168.79.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 9000
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk0",
    +                        "portGroup": "Management Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.13",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk1",
    +                        "portGroup": "Storage Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "172.31.2.13",
    +                        "subnetMask": "255.255.0.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk3",
    +                        "portGroup": "",
    +                        "dPortGroup": "dvportgroup-48",
    +                        "ipAddress": "192.168.61.13",
    +                        "subnetMask": "255.255.255.0",
    +                        "mtu": 1500
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualNic-vmk4",
    +                        "portGroup": "VM_DHCP_Network",
    +                        "dPortGroup": "",
    +                        "ipAddress": "10.19.2.231",
    +                        "subnetMask": "255.255.255.128",
    +                        "mtu": 1500
    +                    }
    +                ],
    +                "portGroups": [
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM Network",
    +                        "name": "VM Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Management Network",
    +                        "name": "Management Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch0"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_10G_Network",
    +                        "name": "VM_10G_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Storage",
    +                        "name": "VM_Storage",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_DHCP_Network",
    +                        "name": "VM_DHCP_Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-Storage Network",
    +                        "name": "Storage Network",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch1"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Isolated_67",
    +                        "name": "VM_Isolated_67",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    },
    +                    {
    +                        "key": "key-vim.host.PortGroup-VM_Migration",
    +                        "name": "VM_Migration",
    +                        "vSwitch": "key-vim.host.VirtualSwitch-vSwitch2"
    +                    }
    +                ],
    +                "switches": [
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch0",
    +                        "name": "vSwitch0",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM Network",
    +                            "key-vim.host.PortGroup-Management Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic4"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch1",
    +                        "name": "vSwitch1",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_10G_Network",
    +                            "key-vim.host.PortGroup-VM_Storage",
    +                            "key-vim.host.PortGroup-VM_DHCP_Network",
    +                            "key-vim.host.PortGroup-Storage Network"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic2",
    +                            "key-vim.host.PhysicalNic-vmnic0"
    +                        ]
    +                    },
    +                    {
    +                        "key": "key-vim.host.VirtualSwitch-vSwitch2",
    +                        "name": "vSwitch2",
    +                        "portGroups": [
    +                            "key-vim.host.PortGroup-VM_Isolated_67",
    +                            "key-vim.host.PortGroup-VM_Migration"
    +                        ],
    +                        "pNICs": [
    +                            "key-vim.host.PhysicalNic-vmnic3",
    +                            "key-vim.host.PhysicalNic-vmnic1"
    +                        ]
    +                    }
    +                ]
    +            },
    +            "networks": [
    +                {
    +                    "kind": "Network",
    +                    "id": "network-31"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-34"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-57"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "network-33"
    +                },
    +                {
    +                    "kind": "Network",
    +                    "id": "dvportgroup-47"
    +                }
    +            ],
    +            "datastores": [
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-35"
    +                },
    +                {
    +                    "kind": "Datastore",
    +                    "id": "datastore-63"
    +                }
    +            ],
    +            "vms": null,
    +            "networkAdapters": [],
    +            "cluster": {
    +                "id": "domain-c26",
    +                "parent": {
    +                    "kind": "Folder",
    +                    "id": "group-h23"
    +                },
    +                "revision": 1,
    +                "name": "mycluster",
    +                "selfLink": "providers/vsphere/c872d364-d62b-46f0-bd42-16799f40324e/clusters/domain-c26",
    +                "folder": "group-h23",
    +                "networks": [
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-31"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-34"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-57"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "network-33"
    +                    },
    +                    {
    +                        "kind": "Network",
    +                        "id": "dvportgroup-47"
    +                    }
    +                ],
    +                "datastores": [
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-35"
    +                    },
    +                    {
    +                        "kind": "Datastore",
    +                        "id": "datastore-63"
    +                    }
    +                ],
    +                "hosts": [
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-44"
    +                    },
    +                    {
    +                        "kind": "Host",
    +                        "id": "host-29"
    +                    }
    +                ],
    +                "dasEnabled": false,
    +                "dasVms": [],
    +                "drsEnabled": true,
    +                "drsBehavior": "fullyAutomated",
    +                "drsVms": [],
    +                "datacenter": null
    +            }
    +        }
    +    }
    +}
    +
    +
    +
  14. +
+
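With the inventory JSON retrieved, you can write a Validation service rule against any attribute under the "input" key. The following Rego sketch flags VMs that have a snapshot; the package name follows the provider-specific naming convention used by the Validation service, and the rule name, label, and assessment strings are illustrative:

package io.konveyor.forklift.vmware
+
+has_snapshot {
+    input.snapshot.kind == "VirtualMachineSnapshot"
+}
+
+concerns[flag] {
+    has_snapshot
+    flag := {
+        "category": "Information",
+        "label": "VM has a snapshot",
+        "assessment": "The VM has at least one snapshot, which might affect migration."
+    }
+}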
+ + +
+ + diff --git a/modules/rhv-prerequisites/index.html b/modules/rhv-prerequisites/index.html new file mode 100644 index 00000000000..8c1b6389731 --- /dev/null +++ b/modules/rhv-prerequisites/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

oVirt prerequisites

+
+

The following prerequisites apply to oVirt migrations:

+
+
+ +
+
+

Unresolved directive in rhv-prerequisites.adoc - include::snip-migrating-luns.adoc[]

+
+ + +
+ + diff --git a/modules/rn-2.0/index.html b/modules/rn-2.0/index.html new file mode 100644 index 00000000000..5d0d671a389 --- /dev/null +++ b/modules/rn-2.0/index.html @@ -0,0 +1,163 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.0

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Warm migration
+

Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied.

+
+
+
Cancel migration
+

You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted to migrate the remaining VMs.

+
+
+
Migration network
+

You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the OKD pod network.

+
+
+
Validation service
+

The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan.

+
+
+ + + + + +
+
Important
+
+
+

The validation service is a Technology Preview feature only. Technology Preview features +are not supported with Red Hat production service level agreements (SLAs) and +might not be functionally complete. Red Hat does not recommend using them +in production. These features provide early access to upcoming product +features, enabling customers to test functionality and provide feedback during +the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview +features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+
+
+
+

Known issues

+
+
+

This section describes known issues and mitigations.

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Network map displays a "Destination network not found" error
+

If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. (BZ#1971259)

+
+
+
Warm migration gets stuck during third precopy
+

Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. (BZ#1969894)

+
+
+

You can do one of the following to mitigate this issue:

+
+
+
    +
  • +

    Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created.

    +
  • +
  • +

    Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes:

    +
    +
    +
    $ kubectl patch configmap/vm-import-controller-config \
    +  -n openshift-cnv -p '{"data": \
    +  {"warmImport.intervalMinutes": "720"}}'
    +
    +
    +
  • +
+
+
+
+ + +
+ + diff --git a/modules/rn-2.1/index.html b/modules/rn-2.1/index.html new file mode 100644 index 00000000000..07d0993ae02 --- /dev/null +++ b/modules/rn-2.1/index.html @@ -0,0 +1,191 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.1

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe new features and enhancements, known issues, and technical changes.

+
+
+
+
+

Technical changes

+
+
+
VDDK image added to HyperConverged custom resource
+

The VMware Virtual Disk Development Kit (VDDK) image must be added to the HyperConverged custom resource. Before this release, it was referenced in the v2v-vmware config map.

+
+
+
+
+

New features and enhancements

+
+
+

This release adds the following features and improvements.

+
+
+
Cold migration from oVirt
+

You can perform a cold migration of VMs from oVirt.

+
+
+
Migration hooks
+

You can create migration hooks to run Ansible playbooks or custom code before or after migration.

+
+
+
Filtered must-gather data collection
+

You can specify options for the must-gather tool that enable you to filter the data by namespace, migration plan, or VMs.

+
+
+
SR-IOV network support
+

You can migrate VMs with a single root I/O virtualization (SR-IOV) network interface if the KubeVirt environment has an SR-IOV network.

+
+
+
+
+

Known issues

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Disk copy stage does not progress
+

The disk copy stage of an oVirt VM does not progress and the Forklift web console does not display an error message. (BZ#1990596)

+
+
+

The cause of this problem might be one of the following conditions:

+
+
+
    +
  • +

    The storage class does not exist on the target cluster.

    +
  • +
  • +

    The VDDK image has not been added to the HyperConverged custom resource.

    +
  • +
  • +

    The VM does not have a disk.

    +
  • +
  • +

    The VM disk is locked.

    +
  • +
  • +

    The VM time zone is not set to UTC.

    +
  • +
  • +

    The VM is configured for a USB device.

    +
  • +
+
+
+

To disable USB devices, see Configuring USB Devices in the Red Hat Virtualization documentation.

+
+
+

To determine the cause:

+
+
+
    +
  1. +

    Click Workloads > Virtualization in the OKD web console.

    +
  2. +
  3. +

    Click the Virtual Machines tab.

    +
  4. +
  5. +

    Select a virtual machine to open the Virtual Machine Overview screen.

    +
  6. +
  7. +

    Click Status to view the status of the virtual machine.

    +
  8. +
+
+
+
VM time zone must be UTC with no offset
+

The time zone of the source VMs must be UTC with no offset. You can set the time zone to GMT Standard Time after first assessing the potential impact on the workload. (BZ#1993259)

+
+
+
oVirt resource UUID causes a "Provider not found" error
+

If an oVirt resource UUID is used in a Host, NetworkMap, StorageMap, or Plan custom resource (CR), a "Provider not found" error is displayed.

+
+
+

You must use the resource name. (BZ#1994037)

+
+
+
Same oVirt resource name in different data centers causes ambiguous reference
+

If an oVirt resource name is used in a NetworkMap, StorageMap, or Plan custom resource (CR) and if the same resource name exists in another data center, the Plan CR displays a critical "Ambiguous reference" condition. You must rename the resource or use the resource UUID in the CR.

+
+
+

In the web console, the resource name appears twice in the same list without a data center reference to distinguish them. You must rename the resource. (BZ#1993089)

+
+
+
Snapshots are not deleted after warm migration
+

Snapshots are not deleted automatically after a successful warm migration of a VMware VM. You must delete the snapshots manually in VMware vSphere. (BZ#2001270)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.2/index.html b/modules/rn-2.2/index.html new file mode 100644 index 00000000000..fed46ccf01b --- /dev/null +++ b/modules/rn-2.2/index.html @@ -0,0 +1,219 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.2

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the precopy time interval for warm migration
+

You can set the time interval between snapshots taken during the precopy stage of warm migration.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Creating validation rules
+

You can create custom validation rules to check the suitability of VMs for migration. Validation rules are based on the VM attributes collected by the Provider Inventory service and written in Rego, the Open Policy Agent native query language.

+
+
+
Downloading logs by using the web console
+

You can download logs for a migration plan or a migrated VM by using the Forklift web console.

+
+
+
Duplicating a migration plan by using the web console
+

You can duplicate a migration plan by using the web console, including its VMs, mappings, and hooks, so that you can edit the copy and run it as a new migration plan.

+
+
+
Archiving a migration plan by using the web console
+

You can archive a migration plan by using the Forklift web console. Archived plans can be viewed or duplicated. They cannot be run, edited, or unarchived.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Certain Validation service issues do not block migration
+

Certain Validation service issues, which are marked as Critical and display the assessment text The VM will not be migrated, do not block migration. (BZ#2025977)

+
+
+

The following Validation service assessments do not block migration:

+
+ + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Issues that do not block migration
AssessmentResult

The disk interface type is not supported by OpenShift Virtualization (only sata, virtio_scsi and virtio interface types are currently supported).

The migrated VM will have a virtio disk if the source interface is not recognized.

The NIC interface type is not supported by OpenShift Virtualization (only e1000, rtl8139 and virtio interface types are currently supported).

The migrated VM will have a virtio NIC if the source interface is not recognized.

The VM is using a vNIC profile configured for host device passthrough, which is not currently supported by OpenShift Virtualization.

The migrated VM will have an SR-IOV NIC. The destination network must be set up correctly.

One or more of the VM’s disks has an illegal or locked status condition.

The migration will proceed but the disk transfer is likely to fail.

The VM has a disk with a storage type other than image, and this is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has one or more snapshots with disks in ILLEGAL state. This is not currently supported by OpenShift Virtualization.

The migration will proceed but the disk transfer is likely to fail.

The VM has USB support enabled, but USB devices are not currently supported by OpenShift Virtualization.

The migrated VM will not have USB devices.

The VM is configured with a watchdog device, which is not currently supported by OpenShift Virtualization.

The migrated VM will not have a watchdog device.

The VM’s status is not up or down.

The migration will proceed but it might hang if the VM cannot be powered off.

+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)

+
+
+
Missing resource causes error message in current.log file
+

If a resource does not exist, for example, if the virt-launcher pod does not exist because the migrated VM is powered off, its log is unavailable.

+
+
+

The following error appears in the missing resource’s current.log file when it is downloaded from the web console or created with the must-gather tool: error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'. (BZ#2023260)

+
+
+
Importer pod log is unavailable after warm migration
+

Retaining the importer pod for debug purposes causes warm migration to hang during the precopy stage. (BZ#2016290)

+
+
+

As a temporary workaround, the importer pod is removed at the end of the precopy stage so that the precopy succeeds. However, this means that the importer pod log is not retained after warm migration is complete. You can only view the importer pod log by using the oc logs -f <cdi-importer_pod> command during the precopy stage.

+
+
+

This issue only affects the importer pod log and warm migration. Cold migration and the virt-v2v logs are not affected.

+
+
+
Deleting a migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. (BZ#2018974) You must archive a migration plan before deleting it to clean up the temporary resources.

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Network, storage, and VM referenced by name in the Plan CR are not displayed in the web console.
+

If a Plan CR references storage, network, or VMs by name instead of by ID, the resources do not appear in the Forklift web console. The migration plan cannot be edited or duplicated. (BZ#1986020)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
If a target VM is deleted during migration, its migration status is Succeeded in the Plan CR
+

If you delete a target VirtualMachine CR during the 'Convert image to kubevirt' step of the migration, the Migration details page of the web console displays the state of the step as VirtualMachine CR not found. However, the status of the VM migration is Succeeded in the Plan CR file and in the web console. (BZ#2031529)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.3/index.html b/modules/rn-2.3/index.html new file mode 100644 index 00000000000..137473be59f --- /dev/null +++ b/modules/rn-2.3/index.html @@ -0,0 +1,156 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.3

+
+
+
+

You can migrate virtual machines (VMs) from VMware vSphere or oVirt to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Setting the VddkInitImage path is part of the procedure of adding a VMware provider.
+

In the web console, you enter the VddkInitImage path when adding a VMware provider. Alternatively, from the CLI, you add the VddkInitImage path to the Provider CR for VMware migrations.

+
+
+
The StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS. The documentation includes a link to the relevant procedure.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Forklift 2.3 supports warm migration from oVirt
+

You can use warm migration to migrate VMs from both VMware and oVirt.

+
+
+
The minimal sufficient set of privileges for VMware users is established
+

VMware users do not need full cluster-admin privileges to perform a VM migration. The minimal sufficient set of user privileges is established and documented.

+
+
+
Forklift documentation is updated with instructions on using hooks
+

Forklift documentation includes instructions on adding hooks to migration plans and running hooks on VMs.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Some warm migrations from oVirt might fail
+

When you run a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run. (BZ#2063531)

+
+
+
Snapshots are not deleted after warm migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. You can delete the snapshots manually. (BZ#2053183)

+
+
+
Warm migration from oVirt fails if a snapshot operation is performed on the source VM
+

If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (BZ#2057459)

+
+
+
QEMU guest agent is not installed on migrated VMs
+

The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. (BZ#2018062)
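A minimal sketch of such a hook, assuming a hypothetical Hook CR named install-qemu-ga and the hook-runner image path used by Forklift, might look as follows; the playbook value must be a Base64-encoded Ansible playbook that installs the qemu-guest-agent package:

apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: install-qemu-ga           # hypothetical name
  namespace: konveyor-forklift
spec:
  image: quay.io/konveyor/hook-runner
  playbook: <base64-encoded playbook that installs qemu-guest-agent>

The hook is then referenced in the Plan CR for the relevant VMs with step: PostHook.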

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it in order to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Migration plan details page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

The problem occurs for both vSphere and oVirt migrations.

+
+
+
Forklift 2.3.4 only: When the source provider is oVirt, duplicating a migration plan fails in either the network mapping stage or the storage mapping stage.
+

Possible workaround: Clear the browser cache or restart the browser. (BZ#2143191)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.4/index.html b/modules/rn-2.4/index.html new file mode 100644 index 00000000000..0aa0c9c23df --- /dev/null +++ b/modules/rn-2.4/index.html @@ -0,0 +1,260 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.4

+
+
+
+

Migrate virtual machines (VMs) from VMware vSphere or oVirt or {osp} to KubeVirt with Forklift.

+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Faster disk image migration from oVirt
+

Disk images are no longer converted by using virt-v2v when migrating from oVirt. This change speeds up migrations and also allows migration of guest operating systems that are not supported by virt-v2v. (forklift-controller#403)

+
+
+
Faster disk transfers by ovirt-imageio client (ovirt-img)
+

Disk transfers use the ovirt-imageio client (ovirt-img) instead of Containerized Data Importer (CDI) when migrating from oVirt to the local OpenShift Container Platform cluster, accelerating the migration.

+
+
+
Faster migration using conversion pod disk transfer
+

When migrating from vSphere to the local OpenShift Container Platform cluster, the conversion pod transfers the disk data instead of Containerized Data Importer (CDI), accelerating the migration.

+
+
+
Migrated virtual machines are not scheduled on the target OCP cluster
+

The migrated virtual machines are no longer scheduled on the target OpenShift Container Platform cluster. This enables migrating VMs that cannot start due to limit constraints on the target at migration time.

+
+
+
StorageProfile resource needs to be updated for a non-provisioner storage class
+

You must update the StorageProfile resource with accessModes and volumeMode for non-provisioner storage classes such as NFS.

+
+
+
VDDK 8 can be used in the VDDK image
+

Previous versions of Forklift supported only using VDDK version 7 for the VDDK image. Forklift supports both versions 7 and 8, as follows:

+
+
+
    +
  • +

    If you are migrating to OCP 4.12 or earlier, use VDDK version 7.

    +
  • +
  • +

    If you are migrating to OCP 4.13 or later, use VDDK version 8.

    +
  • +
+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
OpenStack migration
+

Forklift now supports migrations with {osp} as a source provider. This feature is provided as a Technology Preview and supports only cold migrations.

+
+
+
OCP console plugin
+

The Forklift Operator now integrates the Forklift web console into the OKD web console. The new UI operates as an OCP Console plugin that adds the sub-menu Migration to the navigation bar. It is introduced in version 2.4, and the old UI is disabled. You can enable the old UI by setting feature_ui: true in ForkliftController. (MTV-427)
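As a sketch, assuming the default CR name and namespace, enabling the old UI might look like the following; the exact placement of the flag can vary between releases:

apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller      # assumed default name
  namespace: konveyor-forklift
spec:
  feature_ui: true               # re-enables the old web console UI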

+
+
+
Skip certificate validation option
+

A 'Skip certificate validation' option was added to the VMware and oVirt providers. If selected, the provider’s certificate is not validated, and the UI does not ask you to specify a CA certificate.

+
+
+
Only third-party certificate required
+

Only the third-party certificate needs to be specified when defining an oVirt provider that is set with the Manager CA certificate.

+
+
+
Conversion of VMs with RHEL9 guest operating system
+

Cold migrations from vSphere to a local Red Hat OpenShift cluster use virt-v2v on RHEL 9. (MTV-332)

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and then run a new migration plan with the same name, or if you delete a migrated VM and then remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key might be missing on the target OCP cluster.

+
+
+
Snapshots that are created during the migration in OpenStack are not deleted
+

The Migration Controller service does not automatically delete snapshots that are created during the migration for source virtual machines in OpenStack. Workaround: The snapshots can be removed manually in OpenStack.
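For example, assuming the OpenStack CLI client is available, the leftover snapshots might be listed and removed as follows; the snapshot ID is a placeholder:

$ openstack volume snapshot list
$ openstack volume snapshot delete <snapshot_id>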

+
+
+
oVirt snapshots are not deleted after a successful migration
+

The Migration Controller service does not delete snapshots automatically after a successful warm migration of an oVirt VM. Workaround: Remove the snapshots manually in oVirt. (MTV-349)

+
+
+
Migration fails during precopy/cutover while a snapshot operation is executed on the source VM
+

Some warm migrations from oVirt might fail. When running a migration plan for warm migration of multiple VMs from oVirt, the migrations of some VMs might fail during the cutover stage. In that case, restart the migration plan and set the cutover time for the VM migrations that failed in the first run.

+
+
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If the user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Cannot schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OCP cluster.

+
+
+
Deleting migrated VM does not remove PVC and PV
+

When removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) are not deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-492)
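A sketch of the cleanup, assuming the importer pods contain importer in their names and that the placeholders are filled in for your environment:

$ kubectl get pods -n <namespace> | grep importer
$ kubectl delete pod <importer_pod> -n <namespace>
$ kubectl delete pvc <pvc> -n <namespace>
$ kubectl delete pv <pv>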

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

When a migration fails, its PVCs and PVs are not deleted as expected when its migration plan is archived and deleted. Workaround: Remove the CDI importer pods and then remove the remaining PVCs and PVs. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

A VM with multiple disks that was migrated might not be able to boot on the target OCP cluster. Workaround: Set the boot order appropriately to boot from the bootable disk. (MTV-433)

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OCP clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OCP cluster. It is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, their network interfaces could be disabled when they start in OpenShift Virtualization. (MTV-491)

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OCP console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.4.3 and 2.5.2. It is advised to update to one of these versions of MTV or later.

+
+ +
+
Improve invalid/conflicting VM name handling
+

The automatic renaming of VMs during migration to fit RFC 1123 has been improved. This feature, which was introduced in 2.3.4, is enhanced to cover more special cases. (MTV-212)

+
+
+
Prevent locking user accounts due to incorrect credentials
+

If a user specifies an incorrect password for an oVirt provider, the user account is no longer locked in oVirt. An error is returned when the oVirt Manager is accessible while the provider is being added. If the oVirt Manager is inaccessible, the provider is added, but no further connection attempts are made after the initial failure due to incorrect credentials. (MTV-324)

+
+
+
Users without cluster-admin role can create new providers
+

Previously, the cluster-admin role was required to browse and create providers. In this release, users with sufficient permissions on MTV resources (providers, plans, migrations, NetworkMaps, StorageMaps, hooks) can operate MTV without cluster-admin permissions. (MTV-334)

+
+
+
Convert i440fx to q35
+

Migration of virtual machines with i440fx chipset is now supported. The chipset is converted to q35 during the migration. (MTV-430)

+
+
+
Preserve the UUID setting in SMBIOS for a VM that is migrated from oVirt
+

The Universal Unique ID (UUID) number within the System Management BIOS (SMBIOS) no longer changes for VMs that are migrated from oVirt. This enhancement enables applications that operate within the guest operating system and rely on this setting, such as for licensing purposes, to operate on the target OCP cluster in a manner similar to that of oVirt. (MTV-597)

+
+
+
Do not expose password for oVirt in error messages
+

Previously, the password that was specified for oVirt manager appeared in error messages that were displayed in the web console and logs when failing to connect to oVirt. In this release, error messages that are generated when failing to connect to oVirt do not reveal the password for oVirt manager.

+
+
+
QEMU guest agent is now installed on migrated VMs
+

The QEMU guest agent is installed on VMs during cold migration from vSphere. (BZ#2018062)

+
+
+
+ + +
+ + diff --git a/modules/rn-2.5/index.html b/modules/rn-2.5/index.html new file mode 100644 index 00000000000..212a29044c2 --- /dev/null +++ b/modules/rn-2.5/index.html @@ -0,0 +1,325 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Forklift 2.5

+
+
+
+

You can use Forklift to migrate virtual machines from the following source providers to KubeVirt destination providers:

+
+
+
    +
  • +

    VMware vSphere

    +
  • +
  • +

oVirt

    +
  • +
  • +

    {osp}

    +
  • +
  • +

    Open Virtual Appliances (OVAs) that were created by VMware vSphere

    +
  • +
  • +

    Remote KubeVirt clusters

    +
  • +
+
+
+

The release notes describe technical changes, new features and enhancements, and known issues.

+
+
+
+
+

Technical changes

+
+
+

This release has the following technical changes:

+
+
+
Migration from OpenStack moves to being a fully supported feature
+

In this version, migration using OpenStack source providers graduated from a Technology Preview feature to a fully supported feature.

+
+
+
Disabling FIPS
+

EMS enforcement is disabled for migrations with VMware vSphere source providers to enable migrations from versions of vSphere that are supported by Forklift but do not comply with the 2023 FIPS requirements.

+
+
+
Integration of the create and update provider user interface
+

The user interface of create and update providers now aligns with the look and feel of the OKD web console and displays up-to-date data.

+
+
+
Standalone UI
+

The old UI of MTV 2.3 can no longer be enabled by setting feature_ui: true in ForkliftController.

+
+
+
+
+

New features and enhancements

+
+
+

This release has the following features and improvements:

+
+
+
Migration using OVA files created by VMware vSphere
+

In Forklift 2.5, you can migrate using Open Virtual Appliance (OVA) files that were created by VMware vSphere as source providers. (MTV-336)

+
+
+ + + + + +
+
Note
+
+
+

Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by Forklift. Forklift supports only OVA files created by VMware vSphere.

+
+
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+
Migrating VMs between OKD clusters
+

In Forklift 2.5, you can now use a KubeVirt provider as a source provider as well as a destination provider. You can migrate VMs from the cluster that Forklift is deployed on to another cluster, or from a remote cluster to the cluster that Forklift is deployed on. (MTV-571)

+
+
+
Migration of VMs with direct LUNs from oVirt
+

During the migration from oVirt, direct LUNs are detached from the source virtual machines and attached to the target virtual machines. Note that this mechanism does not yet work for Fibre Channel. (MTV-329)

+
+
+
Additional authentication methods for OpenStack
+

In addition to standard password authentication, the following authentication methods are supported: Token authentication and Application credential authentication. (MTV-539)

+
+
+
Validation rules for OpenStack
+

The validation service includes default validation rules for virtual machines from OpenStack. (MTV-508)

+
+
+
VDDK is now optional for VMware vSphere providers
+

The VMware vSphere source provider can now be created without specifying a VDDK init image. It is strongly recommended to create a VDDK init image to accelerate migrations.

+
+
+
+
+

Known issues

+
+
+

This release has the following known issues:

+
+
+
Deleting migration plan does not remove temporary resources
+

Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974)

+
+
+
Unclear error status message for VM with no operating system
+

The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#2008846)

+
+
+
Log archive file includes logs of a deleted migration plan or VM
+

If you delete a migration plan and run a new migration plan with the same name, or if you delete a migrated VM and remigrate the source VM, the log archive file created by the Forklift web console might include the logs of the deleted migration plan or VM. (BZ#2023764)

+
+
+
Migration of virtual machines with encrypted partitions fails during conversion
+

vSphere only: Migrations from oVirt and OpenStack do not fail, but the encryption key may be missing on the target OKD cluster.

+
+
+
Migration fails during precopy/cutover while a snapshot operation is performed on the source VM
+

Warm migration from oVirt fails if a snapshot operation is performed on the source VM. If a user performs a snapshot operation on the source VM at the time when a migration snapshot is scheduled, the migration fails instead of waiting for the user’s snapshot operation to finish. (MTV-456)

+
+
+
Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath
+

When migrating a VM with multiple disks to more than one storage class of type hostPath, the VM might not be schedulable. Workaround: Use shared storage on the target OKD cluster.

+
+
+
Non-supported guest operating systems in warm migrations
+

Warm migrations and migrations to remote OKD clusters from vSphere do not support all types of guest operating systems that are supported in cold migrations to the local OKD cluster. This is a consequence of using RHEL 8 in the former case and RHEL 9 in the latter case.
+See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems.

+
+
+
VMs from vSphere with RHEL 9 guest operating system may start with network interfaces that are down
+

When migrating VMs that are installed with RHEL 9 as guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in {ocp-name} Virtualization. (MTV-491)

+
+
+
Import OVA: ConnectionTestFailed message appears when adding OVA provider
+

When adding an OVA provider, the error message ConnectionTestFailed might appear immediately, although the provider is created successfully. If the message does not disappear after a few minutes and the provider status does not move to Ready, the OVA server pod creation has failed. (MTV-671)

+
+
+

For a complete list of all known issues in this release, see the list of Known Issues in Jira.

+
+
+
+
+

Resolved issues

+
+
+

This release has the following resolved issues:

+
+
+
Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
+

A flaw was found in handling multiplexed streams in the HTTP/2 protocol. In previous releases of MTV, the HTTP/2 protocol allowed a denial of service (server resource consumption) because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection, which resulted in a denial of service due to server resource consumption.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
Gin Web Framework does not properly sanitize filename parameter of Context.FileAttachment function
+

A flaw was found in the Gin-Gonic Gin Web Framework. The filename parameter of the Context.FileAttachment function was not properly sanitized, which could allow a remote attacker to bypass security restrictions caused by improper input validation. A maliciously created filename could cause the Content-Disposition header to be sent with an unexpected filename value, or otherwise modify the Content-Disposition header.

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+ +
+
CVE-2023-26144 mtv-console-plugin-container: graphql: Insufficient checks in the OverlappingFieldsCanBeMergedRule.ts
+

A flaw was found in the package GraphQL from 16.3.0 and before 16.8.1. This flaw means MTV 2.5 versions before MTV 2.5.2 are vulnerable to Denial of Service (DoS) due to insufficient checks in the OverlappingFieldsCanBeMergedRule.ts file when parsing large queries. This issue may allow an attacker to degrade system performance. (MTV-712)

+
+
+

This issue has been resolved in MTV 2.5.2. It is advised to update to this version of MTV or later.

+
+
+

For more information, see CVE-2023-26144.

+
+
+
Ensure up-to-date data is displayed in the create and update provider forms
+

In previous releases of Forklift, the create and update provider forms could have presented stale data.

+
+
+

This issue is resolved in Forklift 2.5: the new create and update provider forms display up-to-date properties of the provider. (MTV-603)

+
+
+
Snapshots that are created during a migration in OpenStack are not deleted
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots that were created during a migration of source virtual machines in OpenStack automatically.

+
+
+

This issue is resolved in Forklift 2.5: all the snapshots created during the migration are removed after the migration has been completed. (MTV-620)

+
+
+
oVirt snapshots are not deleted after a successful migration
+

In previous releases of Forklift, the Migration Controller service did not delete snapshots automatically after a successful warm migration of a VM from oVirt.

+
+
+

This issue is resolved in Forklift 2.5: the snapshots generated during the migration are removed after a successful migration, and the original snapshots are not removed. (MTV-349)

+
+
+
Warm migration fails when cutover conflicts with precopy
+

In previous releases of Forklift, the cutover operation failed when it was triggered while precopy was being performed. The VM was locked in oVirt, and therefore the ovirt-engine rejected the snapshot creation or disk transfer operation.

+
+
+

This issue is resolved in Forklift 2.5: the cutover operation is triggered but is not performed while the VM is locked. Once the precopy operation completes, the cutover operation is performed. (MTV-686)

+
+
+
Warm migration fails when VM is locked
+

In previous releases of Forklift, triggering a warm migration while there was an ongoing operation in oVirt that locked the VM caused the migration to fail because the snapshot creation could not be triggered.

+
+
+

This issue is resolved in Forklift 2.5: warm migration does not fail when an operation that locks the VM is performed in oVirt. The migration does not fail; it starts when the VM is unlocked. (MTV-687)

+
+
+
Deleting migrated VM does not remove PVC and PV
+

In previous releases of Forklift, when removing a VM that was migrated, its persistent volume claims (PVCs) and persistent volumes (PVs) were not deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs and PVs are deleted when a migrated VM is deleted. (MTV-492)

+
+
+
PVC deletion hangs after archiving and deleting migration plan
+

In previous releases of Forklift, when a migration failed, its PVCs and PVs were not deleted as expected when its migration plan was archived and deleted.

+
+
+

This issue is resolved in Forklift 2.5: PVCs are deleted when the migration plan is archived and deleted. (MTV-493)

+
+
+
VM with multiple disks may boot from non-bootable disk after migration
+

In previous releases of Forklift, VMs with multiple disks that were migrated might not have been able to boot on the target OKD cluster.

+
+
+

This issue is resolved in Forklift 2.5: VMs with multiple disks that are migrated are able to boot on the target OKD cluster. (MTV-433)

+
+
+

For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira.

+
+
+
+
+

Upgrade notes

+
+
+

It is recommended to upgrade from Forklift 2.4.2 to Forklift 2.5.

+
+
+
Upgrade from 2.4.0 fails
+

When upgrading from MTV 2.4.0 to a later version, the operation fails with an error that says the field 'spec.selector' of deployment forklift-controller is immutable. Workaround: Remove the custom resource forklift-controller of type ForkliftController from the installed namespace, and recreate it. Refresh the OKD console once the forklift-console-plugin pod runs to load the upgraded Forklift web console. (MTV-518)
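A sketch of the workaround, assuming the default CR name forklift-controller in the konveyor-forklift namespace and that you save a copy of the manifest to recreate the CR from:

$ kubectl get forkliftcontroller forklift-controller -n konveyor-forklift -o yaml > forklift-controller.yaml
$ kubectl delete forkliftcontroller forklift-controller -n konveyor-forklift
$ kubectl apply -f forklift-controller.yaml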

+
+
+
+ + +
+ + diff --git a/modules/running-migration-plan/index.html b/modules/running-migration-plan/index.html new file mode 100644 index 00000000000..3647d0509e7 --- /dev/null +++ b/modules/running-migration-plan/index.html @@ -0,0 +1,135 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Running a migration plan

+
+

You can run a migration plan and view its progress in the OKD web console.

+
+
+
Prerequisites
+
    +
  • +

    Valid migration plan.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Plans for virtualization.

    +
    +

    The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, and the description of each plan.

    +
    +
  2. +
  3. +

    Click Start beside a migration plan to start the migration.

    +
  4. +
  5. +

    Click Start in the confirmation window that opens.

    +
    +

The Migration details by VM screen opens, displaying the migration’s progress.

    +
    +
    +

    Warm migration only:

    +
    +
    +
      +
    • +

      The precopy stage starts.

      +
    • +
    • +

      Click Cutover to complete the migration.

      +
    • +
    +
    +
  6. +
  7. +

    If the migration fails:

    +
    +
      +
    1. +

      Click Get logs to retrieve the migration logs.

      +
    2. +
    3. +

      Click Get logs in the confirmation window that opens.

      +
    4. +
    5. +

      Wait until Get logs changes to Download logs and then click the button to download the logs.

      +
    6. +
    +
    +
  8. +
  9. +

Click a migration’s Status, whether it failed, succeeded, or is still ongoing, to view the details of the migration.

    +
    +

    The Migration details by VM screen opens, displaying the start and end times of the migration, the amount of data copied, and a progress pipeline for each VM being migrated.

    +
    +
  10. +
  11. +

    Expand an individual VM to view its steps and the elapsed time and state of each step.

    +
  12. +
+
+ + +
+ + diff --git a/modules/selecting-migration-network-for-virt-provider/index.html b/modules/selecting-migration-network-for-virt-provider/index.html new file mode 100644 index 00000000000..e79a1de5b77 --- /dev/null +++ b/modules/selecting-migration-network-for-virt-provider/index.html @@ -0,0 +1,100 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a KubeVirt provider

+
+

You can select a default migration network for a KubeVirt provider in the OKD web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.

+
+
+

If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer.

+
+
+ + + + + +
+
Note
+
+
+

You can override the default migration network of the provider by selecting a different network when you create a migration plan.
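When working directly with the CRs, the override can be expressed with the transferNetwork field of the Plan CR, as in the following partial sketch; the network name and namespace are placeholders, and the other required Plan fields are omitted:

apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: my-plan                   # hypothetical name
  namespace: konveyor-forklift
spec:
  transferNetwork:                # overrides the provider default migration network
    name: my-migration-network    # placeholder NetworkAttachmentDefinition name
    namespace: my-namespace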

+
+
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    On the right side of the provider, select Select migration network from the {kebab}.

    +
  4. +
  5. +

    Select a network from the list of available networks and click Select.

    +
  6. +
+
+ + +
+ + diff --git a/modules/selecting-migration-network-for-vmware-source-provider/index.html b/modules/selecting-migration-network-for-vmware-source-provider/index.html new file mode 100644 index 00000000000..08c2e2aafd2 --- /dev/null +++ b/modules/selecting-migration-network-for-vmware-source-provider/index.html @@ -0,0 +1,139 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a VMware source provider

+
+

You can select a migration network in the OKD web console for a source provider to reduce risk to the source environment and to improve performance.

+
+
+

Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.

+
+
+
Prerequisites
+
    +
  • +

The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.

    +
  • +
  • +

    The migration network must be accessible to the KubeVirt nodes through the default gateway.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.

    +
    +
    +
    +
  • +
  • +

    The migration network must have jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Migration → Providers for virtualization.

    +
  2. +
  3. +

    Click the host number in the Hosts column beside a provider to view a list of hosts.

    +
  4. +
  5. +

    Select one or more hosts and click Select migration network.

    +
  6. +
  7. +

    Specify the following fields:

    +
    +
      +
    • +

      Network: Network name

      +
    • +
    • +

      ESXi host admin username: For example, root

      +
    • +
    • +

      ESXi host admin password: Password

      +
    • +
    +
    +
  8. +
  9. +

    Click Save.

    +
  10. +
  11. +

    Verify that the status of each host is Ready.

    +
    +

    If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.

    +
    +
  12. +
+
+ + +
+ + diff --git a/modules/selecting-migration-network/index.html b/modules/selecting-migration-network/index.html new file mode 100644 index 00000000000..2f11eacc7d5 --- /dev/null +++ b/modules/selecting-migration-network/index.html @@ -0,0 +1,118 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Selecting a migration network for a source provider

+
+

You can select a migration network for a source provider in the Forklift web console for improved performance.

+
+
+

If a source network is not optimal for migration, a Warning icon is displayed beside the host number in the Hosts column of the provider list.

+
+
+
Prerequisites
+

The migration network has the following prerequisites:

+
+
+
    +
  • +

    Minimum speed of 10 Gbps.

    +
  • +
  • +

    Accessible to the OpenShift nodes through the default gateway. The source disks are copied by a pod that is connected to the pod network of the target namespace.

    +
  • +
  • +

    Jumbo frames enabled.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Click Providers.

    +
  2. +
  3. +

    Click the host number of a provider to view the host list and network details.

    +
  4. +
  5. +

    Select the host to be updated and click Select migration network.

    +
  6. +
  7. +

    Select a Network from the list of available networks.

    +
    +

The network list displays only the networks that are accessible to all the selected hosts.

    +
    +
  8. +
  9. +

    Click Check connection to verify the credentials.

    +
  10. +
  11. +

    Click Select to select the migration network.

    +
    +

    The migration network appears in the network details of the updated hosts.

    +
    +
  12. +
+
+ + +
+ + diff --git a/modules/snip-migrating-luns/index.html b/modules/snip-migrating-luns/index.html new file mode 100644 index 00000000000..589b0ee46ab --- /dev/null +++ b/modules/snip-migrating-luns/index.html @@ -0,0 +1,89 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Note
+
+
+
    +
  • +

    Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.

    +
  • +
  • +

    LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.

    +
  • +
  • +

    Migration of Fibre Channel LUNs is not supported.

    +
  • +
+
+
+
+ + +
+ + diff --git a/modules/snip_permissions-info/index.html b/modules/snip_permissions-info/index.html new file mode 100644 index 00000000000..2a99710a92e --- /dev/null +++ b/modules/snip_permissions-info/index.html @@ -0,0 +1,85 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

If you are an administrator, you can see and work with components (providers, plans, etc.) for all projects.

+
+
+

If you are a non-administrator, you can see and work only with the components of projects for which you have permissions.

+
+
+ + + + + +
+
Tip
+
+
+

You can see which projects you have permissions for by clicking the Project list, which is in the upper-left of every page in the Migrations section except for the Overview.

+
+
+
+ + +
+ + diff --git a/modules/snippet_getting_web_console_url_cli/index.html b/modules/snippet_getting_web_console_url_cli/index.html new file mode 100644 index 00000000000..0a03f063259 --- /dev/null +++ b/modules/snippet_getting_web_console_url_cli/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

+

+
+
+
+
$ kubectl get route virt -n konveyor-forklift \
+  -o custom-columns=:.spec.host
+
+
+
+

The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

+
+
+

Example output

+
+
+
+
https://virt-konveyor-forklift.apps.cluster.openshift.com
+
+
+ + +
+ + diff --git a/modules/snippet_getting_web_console_url_web/index.html b/modules/snippet_getting_web_console_url_web/index.html new file mode 100644 index 00000000000..8d010c50f82 --- /dev/null +++ b/modules/snippet_getting_web_console_url_web/index.html @@ -0,0 +1,84 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
    +
  1. +

    Log in to the OKD web console.

    +
  2. +
  3. +

Click Networking → Routes.

    +
  4. +
  5. +

    Select the {namespace} project in the Project: list.

    +
    +

    The URL for the forklift-ui service that opens the login page for the Forklift web console is displayed.

    +
    +
    +

    Click the URL to navigate to the Forklift web console.

    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/snippet_ova_tech_preview/index.html b/modules/snippet_ova_tech_preview/index.html new file mode 100644 index 00000000000..ba037358f08 --- /dev/null +++ b/modules/snippet_ova_tech_preview/index.html @@ -0,0 +1,87 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview.

+
+
+ + + + + +
+
Important
+
+
+

Migration using one or more Open Virtual Appliance (OVA) files as a source provider is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/modules/source-vm-prerequisites/index.html b/modules/source-vm-prerequisites/index.html new file mode 100644 index 00000000000..4feca529326 --- /dev/null +++ b/modules/source-vm-prerequisites/index.html @@ -0,0 +1,121 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Source virtual machine prerequisites

+
+

The following prerequisites apply to all migrations:

+
+
+
    +
  • +

    ISO/CDROM disks must be unmounted.

    +
  • +
  • +

    Each NIC must contain one IPv4 and/or one IPv6 address.

    +
  • +
  • +

    The VM operating system must be certified and supported for use as a guest operating system with KubeVirt.

    +
  • +
  • +

    VM names must contain only lowercase letters (a-z), numbers (0-9), or hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

    +
  • +
  • +

    VM names must not duplicate the name of a VM in the KubeVirt environment.

    +
    + + + + + +
    +
    Note
    +
    +
    +

    Forklift automatically assigns a new name to a VM that does not comply with the rules.

    +
    +
    +

    Forklift makes the following changes when it automatically generates a new VM name:

    +
    +
    +
      +
    • +

      Excluded characters are removed.

      +
    • +
    • +

      Uppercase letters are switched to lowercase letters.

      +
    • +
    • +

      Any underscore (_) is changed to a dash (-).

      +
    • +
    +
    +
    +

    This feature allows a migration to proceed smoothly even if someone entered a VM name that does not follow the rules.
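For example, a source VM hypothetically named My_Web.Server1 would be renamed to my-webserver1: the period is removed, the uppercase letters are switched to lowercase, and the underscore is changed to a dash.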

    +
    +
    +
    +
  • +
+
+ + +
+ + diff --git a/modules/storage-support/index.html b/modules/storage-support/index.html new file mode 100644 index 00000000000..a71fc1d00c6 --- /dev/null +++ b/modules/storage-support/index.html @@ -0,0 +1,188 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Storage support and default modes

+
+

Forklift uses the following default volume and access modes for supported storage.

+
+
+ + + + + +
+
Note
+
+
+

If the KubeVirt storage does not support dynamic provisioning, you must apply the following settings:

+
+
+
    +
  • +

    Filesystem volume mode

    +
    +

    Filesystem volume mode is slower than Block volume mode.

    +
    +
  • +
  • +

    ReadWriteOnce access mode

    +
    +

    ReadWriteOnce access mode does not support live virtual machine migration.

    +
    +
  • +
+
+
+

See Enabling a statically-provisioned storage class for details on editing the storage profile.

+
+
+
+
+ + + + + +
+
Note
+
+
+

If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in CDI to more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI, your migration might fail.
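As a sketch, the global file system overhead can be raised by patching the CDI custom resource; the value 0.15 (15%) is an illustrative choice above the 10% threshold:

$ kubectl patch cdi cdi --type=merge \
  -p '{"spec": {"config": {"filesystemOverhead": {"global": "0.15"}}}}'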

+
+
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1. Default volume and access modes
ProvisionerVolume modeAccess mode

kubernetes.io/aws-ebs

Block

ReadWriteOnce

kubernetes.io/azure-disk

Block

ReadWriteOnce

kubernetes.io/azure-file

Filesystem

ReadWriteMany

kubernetes.io/cinder

Block

ReadWriteOnce

kubernetes.io/gce-pd

Block

ReadWriteOnce

kubernetes.io/hostpath-provisioner

Filesystem

ReadWriteOnce

manila.csi.openstack.org

Filesystem

ReadWriteMany

openshift-storage.cephfs.csi.ceph.com

Filesystem

ReadWriteMany

openshift-storage.rbd.csi.ceph.com

Block

ReadWriteOnce

kubernetes.io/rbd

Block

ReadWriteOnce

kubernetes.io/vsphere-volume

Block

ReadWriteOnce

+ + +
+ + diff --git a/modules/technology-preview/index.html b/modules/technology-preview/index.html new file mode 100644 index 00000000000..48b7c12c45c --- /dev/null +++ b/modules/technology-preview/index.html @@ -0,0 +1,88 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + + + + +
+
Important
+
+
+

{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

+
+
+

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

+
+
+
+ + +
+ + diff --git a/modules/uninstalling-mtv-cli/index.html b/modules/uninstalling-mtv-cli/index.html new file mode 100644 index 00000000000..ab56058f343 --- /dev/null +++ b/modules/uninstalling-mtv-cli/index.html @@ -0,0 +1,106 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift from the command line interface

+
+

You can uninstall Forklift from the command line interface (CLI) by deleting the {namespace} project and the forklift.konveyor.io custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

    Delete the project:

    +
    +
    +
    $ kubectl delete project konveyor-forklift
    +
    +
    +
  2. +
  3. +

    Delete the CRDs:

    +
    +
    +
    $ kubectl get crd -o name | grep 'forklift' | xargs kubectl delete
    +
    +
    +
  4. +
  5. +

    Delete the OAuthClient:

    +
    +
    +
    $ kubectl delete oauthclient/forklift-ui
    +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/uninstalling-mtv-ui/index.html b/modules/uninstalling-mtv-ui/index.html new file mode 100644 index 00000000000..860e3200cec --- /dev/null +++ b/modules/uninstalling-mtv-ui/index.html @@ -0,0 +1,103 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Uninstalling Forklift by using the OKD web console

+
+

You can uninstall Forklift by using the OKD web console to delete the {namespace} project and custom resource definitions (CRDs).

+
+
+
Prerequisites
+
    +
  • +

    You must be logged in as a user with cluster-admin privileges.

    +
  • +
+
+
+
Procedure
+
    +
  1. +

Click Home → Projects.

    +
  2. +
  3. +

    Locate the konveyor-forklift project.

    +
  4. +
  5. +

    On the right side of the project, select Delete Project from the {kebab}.

    +
  6. +
  7. +

    In the Delete Project pane, enter the project name and click Delete.

    +
  8. +
  9. +

Click Administration → CustomResourceDefinitions.

    +
  10. +
  11. +

    Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group.

    +
  12. +
  13. +

    On the right side of each CRD, select Delete CustomResourceDefinition from the {kebab}.

    +
  14. +
+
+ + +
+ + diff --git a/modules/updating-validation-rules-version/index.html b/modules/updating-validation-rules-version/index.html new file mode 100644 index 00000000000..a42c939bd3d --- /dev/null +++ b/modules/updating-validation-rules-version/index.html @@ -0,0 +1,127 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Updating the inventory rules version

+
+

You must update the inventory rules version each time you update the rules so that the Provider Inventory service detects the changes and triggers the Validation service.

+
+
+

The rules version is recorded in a rules_version.rego file for each provider.
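As an illustration, a rules_version.rego file might contain little more than the package declaration and the version rule; the provider name and version number below are placeholders:

package io.konveyor.forklift.vmware

rules_version = 5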

+
+
+
Procedure
+
    +
  1. +

    Retrieve the current rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 5
    +   }
    +}
    +
    +
    +
  2. +
  3. +

    Connect to the terminal of the Validation pod:

    +
    +
    +
    $ kubectl rsh <validation_pod>
    +
    +
    +
  4. +
  5. +

    Update the rules version in the /usr/share/opa/policies/io/konveyor/forklift/<provider>/rules_version.rego file.

    +
  6. +
  7. +

    Log out of the Validation pod terminal.

    +
  8. +
  9. +

    Verify the updated rules version:

    +
    +
    +
    $ GET https://forklift-validation/v1/data/io/konveyor/forklift/<provider>/rules_version
    +
    +
    +
    +
    Example output
    +
    +
    {
    +   "result": {
    +       "rules_version": 6
    +   }
    +}
    +
    +
    +
  10. +
+
+ + +
+ + diff --git a/modules/upgrading-mtv-ui/index.html b/modules/upgrading-mtv-ui/index.html new file mode 100644 index 00000000000..5a4c246985b --- /dev/null +++ b/modules/upgrading-mtv-ui/index.html @@ -0,0 +1,127 @@ + + + + + + + + Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + +Upgrading Forklift | Forklift Documentation + + + + + + + + + + + + + + + + + + + + + + + + +
+

Upgrading Forklift

+
+

You can upgrade the Forklift Operator by using the OKD web console to install the new version.

+
+
+
Procedure
+
    +
  1. +

In the OKD web console, click Operators → Installed Operators → {operator-name-ui} → Subscription.

    +
  2. +
  3. +

    Change the update channel to the correct release.

    +
    +

    See Changing update channel in the OKD documentation.

    +
    +
  4. +
  5. +

    Confirm that Upgrade status changes from Up to date to Upgrade available. If it does not, restart the CatalogSource pod:

    +
    +
      +
    1. +

      Note the catalog source, for example, redhat-operators.

      +
    2. +
    3. +

      From the command line, retrieve the catalog source pod:

      +
      +
      +
      $ kubectl get pod -n openshift-marketplace | grep <catalog_source>
      +
      +
      +
    4. +
    5. +

      Delete the pod:

      +
      +
      +
      $ kubectl delete pod -n openshift-marketplace <catalog_source_pod>
      +
      +
      +
      +

      Upgrade status changes from Up to date to Upgrade available.

      +
      +
      +

      If you set Update approval on the Subscriptions tab to Automatic, the upgrade starts automatically.

      +
      +
    6. +
    +
    +
  6. +
  7. +

    If you set Update approval on the Subscriptions tab to Manual, approve the upgrade.

    +
    +

    See Manually approving a pending upgrade in the OKD documentation.

    +
    +
  8. +
  9. +

If you are upgrading from Forklift 2.2 and have defined VMware source providers, edit the VMware provider by adding a VDDK init image. Otherwise, the update will change the state of any VMware providers to Critical. For more information, see Adding a VMware source provider.

    +
  10. +
  11. +

    If you mapped to NFS on the OKD destination provider in Forklift 2.2, edit the AccessModes and VolumeMode parameters in the NFS storage profile. Otherwise, the upgrade will invalidate the NFS mapping. For more information, see Customizing the storage profile.

    +
  12. +
+
+ + +
+ + diff --git a/modules/using-must-gather/index.html b/modules/using-must-gather/index.html new file mode 100644 index 00000000000..9e5144cdc29 --- /dev/null +++ b/modules/using-must-gather/index.html @@ -0,0 +1,157 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Using the must-gather tool

+
+

You can collect logs and information about Forklift custom resources (CRs) by using the must-gather tool. You must attach a must-gather data file to all customer cases.

+
+
+

You can gather data for a specific namespace, migration plan, or virtual machine (VM) by using the filtering options.

+
+
+ + + + + +
+
Note
+
+
+

If you specify a non-existent resource in the filtered must-gather command, no archive file is created.

+
+
+
+
+
Prerequisites
+
    +
  • +

    You must be logged in to the KubeVirt cluster as a user with the cluster-admin role.

    +
  • +
  • +

    You must have the OKD CLI (oc) installed.

    +
  • +
+
+
+
Collecting logs and CR information
+
    +
  1. +

    Navigate to the directory where you want to store the must-gather data.

    +
  2. +
  3. +

    Run the oc adm must-gather command:

    +
    +
    +
    $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest
    +
    +
    +
    +

    The data is saved as /must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    +
    +
  4. +
  5. +

    Optional: Run the oc adm must-gather command with the following options to gather filtered data:

    +
    +
      +
    • +

      Namespace:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- NS=<namespace> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Migration plan:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- PLAN=<migration_plan> /usr/bin/targeted
      +
      +
      +
    • +
    • +

      Virtual machine:

      +
      +
      +
      $ oc adm must-gather --image=quay.io/konveyor/forklift-must-gather:latest \
      +  -- VM=<vm_id> NS=<namespace> /usr/bin/targeted (1)
      +
      +
      +
      +
        +
      1. +

        Specify the VM ID as it appears in the Plan CR.

        +
      2. +
      +
      +
    • +
    +
    +
  6. +
+
+ + +
+ + diff --git a/modules/virt-migration-workflow/index.html b/modules/virt-migration-workflow/index.html new file mode 100644 index 00000000000..e6b65d3fc5c --- /dev/null +++ b/modules/virt-migration-workflow/index.html @@ -0,0 +1,209 @@ + + + + + + + + Forklift Documentation + + + + + + + + + + + + + +Forklift Documentation | Migrating VMware virtual machines to KubeVirt + + + + + + + + + + + + + + + + + + + + + + + + +
+

Detailed migration workflow

+
+

You can use the detailed migration workflow to troubleshoot a failed migration.

+
+
+

The workflow describes the following steps:

+
+
+

Warm migration or migration to a remote {ocp-name} cluster:

+
+
+
    +
  1. +

    When you create the Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

    The Containerized Data Importer (CDI) Controller service creates a persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.



    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
  6. +
  7. +

    The CDI Controller service creates an importer pod.

    +
  8. +
  9. +

    The importer pod streams the VM disk to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

The Migration Controller service creates a conversion pod with the PVCs attached to it when importing from VMware.

    +
    +

    The conversion pod runs virt-v2v, which installs and configures device drivers on the PVCs of the target VM.

    +
    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

If the VM ran on the source environment, the Migration Controller service powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+

Cold migration from oVirt or {osp} to the local {ocp-name} cluster:

+
+
+
    +
  1. +

When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates, for each source VM disk, a PersistentVolumeClaim CR and either an OvirtVolumePopulator CR when the source is oVirt or an OpenstackVolumePopulator CR when the source is {osp}.

    +
    +

    For each VM disk:

    +
    +
  2. +
  3. +

The Populator Controller service creates a temporary persistent volume claim (PVC).

    +
  4. +
  5. +

    If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.

    +
    +
      +
    • +

      The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

      +
    • +
    +
    +
  6. +
  7. +

    The Populator Controller service creates a populator pod.

    +
  8. +
  9. +

    The populator pod transfers the disk data to the PV.

    +
    +

    After the VM disks are transferred:

    +
    +
  10. +
  11. +

    The temporary PVC is deleted, and the initial PVC points to the PV with the data.

    +
  12. +
  13. +

    The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

    +
  14. +
  15. +

    If the VM ran on the source environment, the Migration Controller powers on the VM, the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

    +
    +

    The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.

    +
    +
  16. +
+
+
+
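The populator flow relies on the standard Kubernetes volume-populator pattern: the initial PVC references the populator CR through dataSourceRef. A minimal sketch, assuming the populator CRs live in the forklift.konveyor.io API group and using illustrative names and size:

----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-disk-pvc           # illustrative name
  namespace: example-ns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # size of the source disk
  dataSourceRef:                   # points at the populator CR
    apiGroup: forklift.konveyor.io # assumed API group
    kind: OvirtVolumePopulator
    name: example-disk-populator
EOF
----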

Cold migration from VMware to the local {ocp-name} cluster:
  1. When you create a Migration custom resource (CR) to run a migration plan, the Migration Controller service creates a DataVolume CR for each source VM disk. A blank DataVolume is sketched after this list.

     For each VM disk:

  2. The Containerized Data Importer (CDI) Controller service creates a blank persistent volume claim (PVC) based on the parameters specified in the DataVolume CR.

  3. If the StorageClass has a dynamic provisioner, the persistent volume (PV) is dynamically provisioned by the StorageClass provisioner.
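For a cold VMware migration, the DataVolume requests empty storage that virt-v2v later fills. A minimal sketch using the CDI blank source, with illustrative names and disk size:

----
$ cat << EOF | oc apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk-0          # illustrative name
  namespace: example-ns
spec:
  source:
    blank: {}                      # CDI creates an empty PVC
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi              # match the source disk size
EOF
----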
For all VM disks:
  1. The Migration Controller service creates a dummy pod to bind all PVCs. The name of the pod contains pvcinit.

  2. The Migration Controller service creates a conversion pod for all PVCs.

  3. The conversion pod runs virt-v2v, which converts the VM to the KVM hypervisor and transfers the disks' data to their corresponding PVs.

     After the VM disks are transferred:

  4. The Migration Controller service creates a VirtualMachine CR for each source virtual machine (VM), connected to the PVCs.

  5. If the VM ran on the source environment, the Migration Controller powers on the VM, and the KubeVirt Controller service creates a virt-launcher pod and a VirtualMachineInstance CR.

     The virt-launcher pod runs QEMU-KVM with the PVCs attached as VM disks.
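When a migration fails, the pods named in the steps above are the first place to look. A minimal troubleshooting sketch, assuming the plan runs in an illustrative namespace and using placeholder pod names; substitute the real names reported by the first command:

----
$ # List migration-related pods in the plan's namespace.
$ oc get pods -n example-ns

$ # Inspect the importer, populator, or conversion pod that failed.
$ oc logs importer-example-vm-disk-0 -n example-ns
$ oc describe pod importer-example-vm-disk-0 -n example-ns
----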
diff --git a/modules/vmware-prerequisites/index.html b/modules/vmware-prerequisites/index.html
new file mode 100644
index 00000000000..62c8a02c719
--- /dev/null
+++ b/modules/vmware-prerequisites/index.html
@@ -0,0 +1,248 @@
VMware prerequisites

It is strongly recommended that you create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.

The following prerequisites apply to VMware migrations:
    • You must use a compatible version of VMware vSphere.

    • You must be logged in as a user with at least the minimal set of VMware privileges.

    • You must install VMware Tools on all source virtual machines (VMs).

    • The VM operating system must be certified and supported for use as a guest operating system with KubeVirt and for conversion to KVM with virt-v2v.

    • If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks. See the example commands after this list.

    • You must obtain the SHA-1 fingerprint of the vCenter host. See the example commands after this list.

    • If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the NFC service memory of the host.

    • It is strongly recommended to disable hibernation because Forklift does not support migrating hibernated VMs.
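Two of these prerequisites are scriptable. A minimal sketch: the CBT keys follow VMware's documented ctkEnabled advanced settings and assume the govc CLI is configured against your vCenter; the fingerprint command assumes network access to the vCenter host; all host, VM, and disk names are illustrative:

----
$ # Enable CBT on the VM and on its first disk (repeat per disk).
$ govc vm.change -vm example-vm -e "ctkEnabled=TRUE"
$ govc vm.change -vm example-vm -e "scsi0:0.ctkEnabled=TRUE"

$ # Obtain the SHA-1 fingerprint of the vCenter host certificate.
$ openssl s_client -connect vcenter.example.com:443 </dev/null 2>/dev/null \
    | openssl x509 -fingerprint -sha1 -noout
----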
Important

In the event of a power outage, data might be lost for a VM with hibernation disabled. However, if hibernation is not disabled, migration will fail.
Note

Neither Forklift nor OpenShift Virtualization supports conversion of Btrfs file systems when migrating VMs from VMware.

VMware privileges

The following minimal set of VMware privileges is required to migrate virtual machines to KubeVirt with Forklift.
Table 1. VMware privileges

Virtual machine.Interaction privileges:

| Privilege | Description |
| --------- | ----------- |
| Virtual machine.Interaction.Power Off | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| Virtual machine.Interaction.Power On | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |

Virtual machine.Provisioning privileges:

Note: All Virtual machine.Provisioning privileges are required.

| Privilege | Description |
| --------- | ----------- |
| Virtual machine.Provisioning.Allow disk access | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow file access | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow read-only disk access | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| Virtual machine.Provisioning.Allow virtual machine download | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Allow virtual machine files upload | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| Virtual machine.Provisioning.Clone template | Allows cloning of a template. |
| Virtual machine.Provisioning.Clone virtual machine | Allows cloning of an existing virtual machine and allocation of resources. |
| Virtual machine.Provisioning.Create template from virtual machine | Allows creation of a new template from a virtual machine. |
| Virtual machine.Provisioning.Customize guest | Allows customization of a virtual machine's guest operating system without moving the virtual machine. |
| Virtual machine.Provisioning.Deploy template | Allows deployment of a virtual machine from a template. |
| Virtual machine.Provisioning.Mark as template | Allows marking an existing powered-off virtual machine as a template. |
| Virtual machine.Provisioning.Mark as virtual machine | Allows marking an existing template as a virtual machine. |
| Virtual machine.Provisioning.Modify customization specification | Allows creation, modification, or deletion of customization specifications. |
| Virtual machine.Provisioning.Promote disks | Allows promote operations on a virtual machine's disks. |
| Virtual machine.Provisioning.Read customization specifications | Allows reading a customization specification. |

Virtual machine.Snapshot management privileges:

| Privilege | Description |
| --------- | ----------- |
| Virtual machine.Snapshot management.Create snapshot | Allows creation of a snapshot from the virtual machine's current state. |
| Virtual machine.Snapshot management.Remove Snapshot | Allows removal of a snapshot from the snapshot history. |
diff --git a/redirects.json b/redirects.json
new file mode 100644
index 00000000000..9e26dfeeb6e
--- /dev/null
+++ b/redirects.json
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/robots.txt b/robots.txt
new file mode 100644
index 00000000000..e087884e682
--- /dev/null
+++ b/robots.txt
@@ -0,0 +1 @@
+Sitemap: /sitemap.xml
diff --git a/sitemap.xml b/sitemap.xml
new file mode 100644
index 00000000000..0341b35bd67
--- /dev/null
+++ b/sitemap.xml
@@ -0,0 +1,816 @@
(generated sitemap: one <url><loc> entry per module page under /modules/, /documentation/modules/, /documentation/doc-Release_notes/modules/, and /documentation/doc-Migration_Toolkit_for_Virtualization/modules/)