
[V2V] Run the playbook on the appliance with the conversion host in inventory #18613

Merged · 13 commits · Apr 18, 2019

Conversation

@ghost commented Apr 2, 2019

In the existing implementation, CloudForms connects to the conversion host via SSH to execute the ansible-runner command. The Ansible playbooks are now shipped in the appliance to keep them in sync with the backend capabilities. This PR updates the extra_vars hash to match the v2v-conversion-host-ansible-1.12 requirements and calls ansible-runner on the CloudForms appliance.

To do so, it generates the runtime directory for the playbook:

  • the inventory contains the conversion host hostname or IP address
  • the credentials are passed in temporary files, which are deleted as soon as ansible-runner ends
  • the extra vars are passed in a file

The ansible-runner command is then run via AwesomeSpawn.run.
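The runtime-directory layout described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names, not the PR's actual code; the directory layout follows ansible-runner's input conventions:

```ruby
require 'tmpdir'
require 'json'

# Hypothetical sketch, not the PR's code: build an ansible-runner style
# runtime directory holding the inline inventory and the extra-vars file.
# Credential files would be written the same way and deleted after the run.
def build_runner_dir(host, extra_vars)
  dir = Dir.mktmpdir("v2v-runner")
  Dir.mkdir(File.join(dir, "inventory"))
  Dir.mkdir(File.join(dir, "env"))
  # The inventory contains only the conversion host hostname or IP address
  File.write(File.join(dir, "inventory", "hosts"), "#{host}\n")
  # The extra vars are passed in a file
  File.write(File.join(dir, "env", "extravars"), extra_vars.to_json)
  dir
end

dir = build_runner_dir("conv-host.example.com", {"v2v_vm_name" => "my_vm"})
```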

Note: this PR is dedicated to Hammer. Another one will be created to use Ansible::Runner. This will require some changes to Ansible::Runner class to allow specifying the inventory and the credentials.

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1622728

Depends on: ManageIQ/manageiq-api#535, #18541

def ansible_playbook(playbook, extra_vars = {}, auth_type = nil)
host = hostname || ipaddress

command = "ansible-playbook #{playbook} --inventory #{host}, --become -vvv"
Member:

Should the comma be there after the host?

Author:

Yes, that's how you define the inventory from a list of hostnames / IP addresses. Otherwise, ansible-playbook looks for an inventory file.
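The comma's effect can be illustrated with a tiny, hypothetical helper that builds the argument (the trailing comma makes ansible-playbook treat the value as an inline host list rather than a path to an inventory file):

```ruby
# Hypothetical helper, for illustration only: the trailing comma marks the
# value as an inline host list instead of an inventory file path.
def inventory_arg(host)
  "--inventory #{host},"
end

puts inventory_arg("10.0.0.5")
```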

command = "ansible-playbook #{playbook} --inventory #{host}, --become -vvv"

auth = authentication_type(auth_type) || authentications.first
command += " --user #{auth.userid}"
Member:

Minor, but prefer << over += for String building to avoid intermediate strings.

Author:

Done
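As an aside, the << vs += difference the reviewer mentions can be demonstrated in plain Ruby (illustrative snippet):

```ruby
# << appends in place (same String object); += allocates a fresh String
# on every call, leaving intermediate objects behind.
s = "ansible-playbook play.yml"
before = s.object_id
s << " --become"                       # mutates s in place
same_object = (s.object_id == before)

t = "ansible-playbook play.yml"
before = t.object_id
t += " --become"                       # builds a new String
new_object = (t.object_id != before)
```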


connect_ssh { |ssu| ssu.shell_exec(command) }
_log.info("FDUPONT - Calling Ansible playbook: #{command}")
Member:

Don't forget to remove these FDUPONT debug lines.

Author:

Yes, and also because they might contain some sensitive data.

@ghost commented Apr 5, 2019

@miq-bot add-label transformation, bug, hammer/yes, wip

@miq-bot miq-bot changed the title Run the playbook on the appliance with the conversion host in inventory [WIP] Run the playbook on the appliance with the conversion host in inventory Apr 5, 2019
@ghost ghost force-pushed the v2v_conversion_host_playbook branch from 17134e5 to 30cb4db Compare April 5, 2019 14:42
@ghost ghost force-pushed the v2v_conversion_host_playbook branch from 55e6b92 to 2565bc5 Compare April 15, 2019 21:02
@ghost commented Apr 15, 2019

@miq-bot remove-label wip

@miq-bot miq-bot changed the title [WIP] Run the playbook on the appliance with the conversion host in inventory Run the playbook on the appliance with the conversion host in inventory Apr 15, 2019
@miq-bot miq-bot removed the wip label Apr 15, 2019
@ghost commented Apr 15, 2019

@miq-bot add-reviewer @djberg96
@miq-bot add-reviewer @agrare

@miq-bot miq-bot requested review from djberg96 and agrare April 15, 2019 21:37
@agrare commented Apr 16, 2019

@fdupont-redhat now that this is intended for backport to Hammer, I thought we were going to continue to use ansible-playbook on Hammer and refactor to use Ansible::Runner with added inventory support on master.

@agrare commented Apr 16, 2019

Spoke offline to @fdupont-redhat and there seems to be an issue with becoming root over ssh, it should be working and isn't. I think we should be fixing that instead of replacing with ansible-runner for a backport to fix an issue that isn't understood.

@djberg96 (Contributor) commented:

@agrare @fdupont-redhat What's the issue with root over ssh? Does it happen for both userid/password and private key?

@dmetzger57 (Contributor) commented:

@agrare @fdupont-redhat Refactoring to use Runner is substantial; I'd prefer not to see such a large refactor (and change for QE) in a Z-stream. Instead, we should understand why something that is fully expected to work (root over SSH) fails, and fix the underlying issue.

@ghost commented Apr 16, 2019

@agrare What do you think of the initial commit, 2b1a60c? It was only a start, as it didn't update the async task.

@agrare commented Apr 16, 2019

@fdupont-redhat yeah I'm good with that (minus the logs obviously)

@ghost commented Apr 17, 2019

@agrare thank you for pointing that out. I reverted to using ansible-playbook.
@agrare @djberg96 Could you please review again?

@agrare agrare self-assigned this Apr 18, 2019
@ghost ghost changed the title Run the playbook on the appliance with the conversion host in inventory [V2V] Run the playbook on the appliance with the conversion host in inventory Apr 18, 2019
@agrare (Member) left a comment:

@fdupont-redhat I'm really worried about that MiqTask lookup, and I think updating the task is making the ansible_playbook method more complex than it needs to be. It would be better IMO to just return the awesome_spawn CommandResult to the caller and let it handle updating the task.

Is there any way to get the miq_task_id to these methods calling ansible_playbook so that we don't need to do a loop over every task looking for ones on this conversion host?
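The reviewer's suggestion could look something like this sketch (names are illustrative; CommandResult here is a stand-in for the object AwesomeSpawn returns, not ManageIQ's actual class):

```ruby
# Illustrative sketch only: ansible_playbook just returns a result value;
# the caller, which already holds the MiqTask, updates it.
CommandResult = Struct.new(:exit_status, :output)

def ansible_playbook(playbook)
  # stand-in for AwesomeSpawn.run("ansible-playbook", ...)
  CommandResult.new(0, "PLAY RECAP: ok=3")
end

result = ansible_playbook("conversion_host_check.yml")

# The caller decides how to record the outcome, e.g. in the task context:
context = {}
context["conversion_host_check"] = result.output if result.exit_status.zero?
```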

app/models/conversion_host.rb
def ansible_playbook(playbook, extra_vars = {})
command = "ansible-playbook #{playbook} -i #{ipaddress}"
def ansible_playbook(playbook, extra_vars = {}, auth_type = nil)
task = MiqTask.all.select { |t| t.context_data.present? && t.context_data[:conversion_host_id] == id }.sort_by(&:created_on).last
Member:

This is where having proper associations would help, this is bringing back every miq_task and loading up the context_data hash in ruby 😱

Contributor:

We could use a .where clause here instead, couldn't we? Totally untested, but this is the general idea:

MiqTask.where.not(:context_data => nil)
       .where(:context_data => {:conversion_host_id => id})

Member:

I'd rather pass the task_id to the calling method and update the task there so we don't have to do any searching at all.

Author:

Indeed, we have the task id, so I changed the code to pass it to the *_conversion_host_role methods, which in turn pass it to ansible_playbook.
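A minimal sketch of that change, with hypothetical method names rather than the PR's exact code:

```ruby
# Hypothetical names, for illustration: the caller already holds the MiqTask
# id, so it is threaded down instead of searching every task.
def ansible_playbook(playbook, extra_vars, miq_task_id)
  { :playbook => playbook, :extra_vars => extra_vars, :miq_task_id => miq_task_id }
end

def enable_conversion_host_role(miq_task_id = nil)
  ansible_playbook("conversion_host_enable.yml", {}, miq_task_id)
end

p enable_conversion_host_role(42)
```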

app/models/conversion_host.rb
rescue => e
_log.error("Ansible playbook '#{playbook}' failed for '#{resource.name}' with [#{e.class}: #{e}]")
errormsg = "Ansible playbook '#{playbook}' failed for '#{resource.name}' with [#{e.class}: #{e}]"
Member:

Hm, if you just raise, I don't think there's going to be much meaningful information in the exception; it is just going to be a RuntimeError and you won't see the result stdout.

Author:

Good point. And we capture the result.output in the ensure part.
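The raise-with-message alternative can be sketched like this (values are illustrative):

```ruby
# Raising with a message that carries the exit status and command output
# gives the caller more than a bare RuntimeError.
exit_status = 2
output = "fatal: [conv-host]: UNREACHABLE!"
message = nil
begin
  raise "ansible-playbook failed (exit #{exit_status}): #{output}" unless exit_status.zero?
rescue RuntimeError => e
  message = e.message
end
puts message
```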

ansible_output_name = playbook.split('/').last.split('.').first
task&.update_context(task.context_data.merge!(ansible_output_name => result.output))
end
if !ssh_private_key_file.nil? && File.exist?(ssh_private_key_file.path)
Member:

Just a style thing, but we usually prefer ssh_private_key_file.present? instead of !ssh_private_key_file.nil?.
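Outside Rails, present? comes from ActiveSupport; this rough stand-in is illustrative only (ActiveSupport's real blank? also treats whitespace-only strings as blank):

```ruby
# Approximation of ActiveSupport's present?, for illustration only.
class Object
  def present?
    !nil? && (respond_to?(:empty?) ? !empty? : true)
  end
end
```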

task&.update_context(task.context_data.merge!(ansible_output_name => result.output))
end
if !ssh_private_key_file.nil? && File.exist?(ssh_private_key_file.path)
ssh_private_key_file.close unless ssh_private_key_file.closed?
Member:

What case is this handling? If there was an exception writing the file before it was closed? Probably better to handle this with a begin/rescue block right around writing the file then.

Author:

In case the SSH private key file creation fails on line 301.

Member:

Yeah, that's what I thought, so in that case I think it'd be better to have a more localized rescue like:

ssh_private_key_file = Tempfile.new('ansible_key')
begin
  ssh_private_key_file.write(auth.auth_key)
ensure
  ssh_private_key_file.close
end

Author:

Done.

ansible_output_name = playbook.split('/').last.split('.').first
task&.update_context(task.context_data.merge!(ansible_output_name => result.output))
end
File.delete(ssh_private_key_file) if ssh_private_key_file.present? && File.exist?(ssh_private_key_file.path)
Member:

This can be simplified to ssh_private_key_file&.unlink
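A quick illustration of Tempfile#unlink together with safe navigation, which covers the nil case without the explicit File.exist? guard:

```ruby
require 'tempfile'

# Tempfile#unlink removes the file from disk; &. makes the call a no-op
# when the Tempfile was never created.
ssh_private_key_file = Tempfile.new('ansible_key')
path = ssh_private_key_file.path
ssh_private_key_file.write("-----BEGIN KEY-----")   # placeholder content
ssh_private_key_file.close
ssh_private_key_file&.unlink

missing_file = nil
missing_file&.unlink    # nil receiver: nothing happens, no NoMethodError
```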

raise unless result.exit_status.zero?
ensure
unless result.nil?
ansible_output_name = playbook.split('/').last.split('.').first
Member (@agrare, Apr 18, 2019):

Pretty sure you can just do File.basename(playbook, ".yml")

>> File.basename("/usr/share/v2v-conversion-host-ansible/playbooks/conversion_host_check.yml", ".yml")
=> "conversion_host_check"

raise unless result.exit_status.zero?
ensure
task&.update_context(task.context_data.merge!(File.basename(playbook, '.yml') => result.output)) unless result.nil?
ssh_private_key_file&.unlink
Member:

Nice! Much better.

@miq-bot commented Apr 18, 2019

Checked commits fabiendupont/manageiq@2b1a60c~...4d18a99 with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0
3 files checked, 0 offenses detected
Everything looks fine. 🍰

@agrare (Member) left a comment:

👍 LGTM

@agrare agrare merged commit 1a2f07f into ManageIQ:master Apr 18, 2019
@agrare agrare added this to the Sprint 110 Ending Apr 29, 2019 milestone Apr 18, 2019
simaishi pushed a commit that referenced this pull request Apr 25, 2019
…ybook

[V2V] Run the playbook on the appliance with the conversion host in inventory

(cherry picked from commit 1a2f07f)

https://bugzilla.redhat.com/show_bug.cgi?id=1694229
@simaishi (Contributor) commented:

Hammer backport details:

$ git log -1
commit 14d5c668d3ad382b71ea02f14e833ac3cdc772a6
Author: Adam Grare <agrare@redhat.com>
Date:   Thu Apr 18 12:58:56 2019 -0400

    Merge pull request #18613 from fdupont-redhat/v2v_conversion_host_playbook
    
    [V2V] Run the playbook on the appliance with the conversion host in inventory
    
    (cherry picked from commit 1a2f07f84a71c88dfdcd2cec936b7aba4e9f768b)
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1694229

@jerryk55 commented May 2, 2019

@fdupont-redhat does this PR obviate the requirement to change to running ansible-runner on the appliance, as I was previously investigating? It was my impression that we were calling ansible-playbook on the remote host via SSH (not ansible-runner, as mentioned in the top-level description here). Thanks in advance.

@agrare commented May 3, 2019

@jerryk55 we still want to use ansible-runner, this was just intended for backport so we wanted to minimize the changes made

@jerryk55 commented May 3, 2019

@agrare maybe I'm just lost at sea but intending this for backport seems orthogonal to the question. Maybe we can talk face-to-face about this Monday?

@agrare commented May 3, 2019

does this PR obviate the requirement to change to running ansible-runner on the appliance as I was previously investigating?

No it doesn't

It was my impression that we were calling ansible-playbook on the remote host via SSH (not ansible-runner, as mentioned in the top-level description here). Thanks in advance.

This change calls ansible-playbook on the appliance passing the target host as inventory instead of calling ansible-playbook on the target host over ssh
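The difference can be sketched with the two command shapes (host and playbook path are illustrative):

```ruby
host = "conv-host.example.com"
playbook = "/usr/share/v2v-conversion-host-ansible/playbooks/conversion_host_check.yml"

# Before: SSH to the conversion host and run the playbook there
remote_cmd = "ssh root@#{host} ansible-playbook #{playbook}"

# After: run ansible-playbook on the appliance, with the conversion host
# as inline inventory (note the trailing comma)
local_cmd = "ansible-playbook #{playbook} --inventory #{host}, --become"
```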

@jerryk55 commented May 3, 2019

@agrare I understand that. What had been explained to me by @jameswnl was that the reason I was supposed to look at migrating to ansible-runner, was so that we could run it on the appliance instead of calling ansible-playbook on the target host over ssh. Now that we're calling ansible-playbook on the appliance, what is the justification for migrating to ansible-runner? Thanks.

@Fryguy commented May 3, 2019

@jerryk55 Because writing our own ansible-playbook wrapper goes against using ansible-runner, which is itself an ansible-playbook wrapper that does all the right stuff and is owned by the Ansible team.
