
Implement AnonymousAuthenticationProvider. #79985

Merged
merged 7 commits into elastic:master from azasypkin:issue-18331-anonymous-access on Nov 23, 2020

Conversation

azasypkin
Member

@azasypkin azasypkin commented Oct 8, 2020

Summary

Disclaimer: we don't leverage Elasticsearch's built-in anonymous access functionality (support for this option was later added in #84074).


The main goal is to provide a well-integrated, built-in alternative to a dedicated proxy in front of Kibana that our users have historically used to achieve anonymous-like access. This would reduce the number of moving parts in a Kibana setup and open up plenty of opportunities to integrate anonymous access with other parts of Kibana, leading to better security and user experience overall.

Requirements

Since this is a new feature, we can define whatever requirements we think are reasonable and that increase safety overall, even for anonymous access:

  1. It can only be configured using the new authc.providers format. We need this since anonymous access can be complementary to another non-anonymous authentication mechanism; for example, we can configure both SAML and anonymous at the same time, and we'll need to present both on the login screen using a different icon, description (e.g. Log in as an Employee and Log in as a Guest), hint, and maybe even accessAgreement.

  2. It can only be enabled if TLS between Kibana and Elasticsearch is set up. We'll likely need to rely on access tokens and/or API keys under the hood, and Elasticsearch requires TLS for that to work in production. (We abandoned this idea due to complexity and unnecessary overhead; TLS is still required if users want to use an ApiKey instead of username/password credentials.)

  3. Every anonymous user will have an anonymous session associated with them. Such sessions won't have an idle timeout, but will have a fixed lifespan by default (see the configuration sketch after this list).
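
For context, a minimal sketch of the session settings this interacts with, assuming the global keys in kibana.yml (per-provider session settings aren't covered here):

xpack.security.session.idleTimeout: "1h"   # not applied to anonymous sessions
xpack.security.session.lifespan: "30d"     # anonymous sessions still expire after the fixed lifespan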

Scenarios

We need to support many different scenarios:

  • When Anonymous access is complementary to another authentication mechanism and Login Selector is enabled (default)
  • When Anonymous access is complementary to another authentication mechanism, but Login Selector isn't enabled
  • When Anonymous access is the sole authentication mechanism (or we could even forbid that and require at least one other non-anonymous way of accessing Kibana?)
  • Reporting should work (it may not work if we rely on API keys, but we should be fine in 8.0 though); technically, AuthenticatedUser may give enough information to other plugins to decide whether certain functionality should be available or not.
  • When Anonymous access is enabled (with or without Login Selector) and Kibana is embedded.

Setup

Username and password

  1. Configure anonymous access in kibana.yml using a user with the username anonymous and password anonymous:
xpack.security.authc.providers:
  basic.basic1:
    order: 0
    description: "Log in as an Employee"
  anonymous.anonymous1:
    order: 1
    description: "Continue as guest"
    icon: "globe"
    credentials:
      username: "anonymous"
      password: "anonymous"
  2. Log in as a user who has the manage_security privilege, create that anonymous user with the specified password, assign any roles you wish anonymous users to have, and you're done.

Kibana will not store these credentials in the session, but will send them to Elasticsearch with every "anonymous" request.
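
For reference, a minimal sketch of step 2 via the Elasticsearch users API (the role name is a placeholder; assign whatever roles anonymous users should have):

POST http://localhost:9200/_security/user/anonymous
Authorization: Basic XXXX
Content-Type: application/json

{
  "password": "anonymous",
  "roles": ["anonymous_dashboard_reader"]
}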

API key

  1. Log in as a user who has a manage_api_key privilege and create an API key with the required privileges:
POST http://localhost:9200/_security/api_key
Authorization: Basic XXXX
Accept: application/json
Content-Type: application/json

{
  "name": "anonymous",
  "role_descriptors": {
    "role-a": {
      "indices" : [
        {
          "names" : [
            "kibana_sample_data_ecommerce"
          ],
          "privileges" : [
            "read",
            "view_index_metadata"
          ],
          "field_security" : {
            "grant" : ["*"],
            "except" : [ ]
          },
          "allow_restricted_indices" : false
        }
      ],
      "applications" : [
        {
          "application" : "kibana-.kibana",
          "privileges" : [
            "feature_dashboard.minimal_read",
            "feature_dashboard.url_create"
          ],
          "resources" : ["*"]
        }
      ]
    }
  }
}
  2. Configure anonymous access using the API key you've just created ($ echo -n 'id:api_key' | base64) and you're done:
xpack.security.authc.providers:
  basic.basic1:
    order: 0
  anonymous.anonymous1:
    order: 1
    description: "Continue as guest"
    icon: "globe"
    credentials:
      apiKey: "XXXXXQ=="
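
For illustration, the encoding step with the example id/key pair from the Elasticsearch docs (not a real credential) looks like this:

$ echo -n 'VuaCfGcBCdbkQm-e5aOx:ui2lp2axTNmsyakw9tvNnw' | base64
VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==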


Auto Login

Users may want to leverage anonymous access when they embed Kibana or share a link to it. This works out of the box when anonymous is the only authentication mechanism enabled in Kibana, or when it's configured as the first one in a multi-provider scenario with the Login Selector disabled. In all other scenarios, users will need to explicitly express their intention to use anonymous access through the auth_provider_hint=<provider-name> query string parameter. At the first stage this parameter must be added manually, but we'll add a UI for that in the next stage.
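
For example, a link that explicitly requests the anonymous provider from the configuration above might look like this (host and app path are placeholders):

https://kibana.example.com/app/dashboards?auth_provider_hint=anonymous1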


As you may have noticed, users will still go through the Login Selector, but the Continue as Guest option will be selected automatically. We do this to reuse lots of functionality we already implemented in the Login Selector to cover all the possible failure cases when a user cannot log in anonymously.


Fixes: #18331


Release note: it will now be possible to log in to Kibana anonymously without using any 3rd-party reverse proxy workarounds.

@azasypkin azasypkin added release_note:enhancement release highlight Team:Security Team focused on: Auth, Users, Roles, Spaces, Audit Logging, and more! Feature:Security/Authentication Platform Security - Authentication v7.11.0 labels Oct 8, 2020
@azasypkin azasypkin requested a review from a team as a code owner October 8, 2020 12:14
@elasticmachine
Contributor

Pinging @elastic/kibana-security (Team:Security)

@azasypkin azasypkin marked this pull request as draft October 8, 2020 12:15
@arisonl
Contributor

arisonl commented Oct 14, 2020

@azasypkin This looks solid! A couple of initial thoughts and questions:

  • Let's make sure the UX for all scenarios works well.
  • In the presence of other authentication mechanisms and in the absence of a selector, order of precedence according to the .yml file will be enforced, as you show in the example, correct? In that case, the anonymous provider would presumably be the last one for most use cases. Would there be any way for a user to access it in that case?
  • I've gone through a number of related ERs again, and they look pretty consistent that one of the major use cases is going to be anonymous dashboard access, but let's touch base with the field to check for usage patterns; I'll help with that.
  • This looks like it's based on the ES anonymous access for the most part, if I am not mistaken. Are there any other options and what would be the pros and cons?
  • Can we introduce UX improvements on top of this model? E.g.:
    • You can assign any role to the anon user. Does this create a risk that you may accidentally make it more permissive than you intend/should? Would visual indications that assets/spaces are accessible anonymously make sense here? E.g. places that this might be possible: in the list of saved objects, in spaces, a warning when assigning roles to the anon user. If you think that this might be helpful, let's touch base with design.
    • Can we get the anonymous user in Kibana OOtB once it is set up in the .yml file, so that admins do not need to create it in Kibana?
  • Does anything require special attention with regard to the cloud, except the usual white-listing?

@azasypkin
Member Author

azasypkin commented Oct 14, 2020

@azasypkin This looks solid! A couple of initial thoughts and questions:

Thanks for looking into this @arisonl !

  • Let's make sure the UX for all scenarios works well.

Yep, I don't anticipate any problems with the UX for the time being, except for the case when we have multiple providers and an explicitly disabled login selector.

  • In the presence of other authentication mechanisms and in the absence of a selector, order of precedence according to the .yml file will be enforced, as you show in the example, correct?

Correct.

In that case, the anonymous provider would presumably be the last one for most use cases. Would there be any way for a user to access it in that case?

There are only two possibilities here: either we provide a special URL users can go to (like we did with /login for saml+basic scenarios in pre-login-selector times; I don't like it), or we allow a login_hint-like query string parameter that would tell us which provider to use (I'm thinking about supporting something like this for the embedding scenario anyway). We support neither at the moment. We can also forbid this scenario if we cannot figure out a reasonable UX for it (2+ providers that include both anonymous and non-anonymous providers with the login selector explicitly disabled === disallow anonymous access).

  • I've gone through a number of related ERs again, and they look pretty consistent that one of the major use cases is going to be anonymous dashboard access, but let's touch base with the field to check for usage patterns; I'll help with that.

Awesome, thanks!

  • This looks like it's based on the ES anonymous access for the most part, if I am not mistaken. Are there any other options and what would be the pros and cons?

Well, actually I don't rely on the native ES anonymous access here at all. That means the other option is to somehow leverage it, but IMO that makes the overall setup a bit more complex:

  • Admins will need to configure it in ES as well, and as far as I know it's not allowed in ESS and explicitly forbidden in ECE.
  • A byproduct of this is that ES will be able to serve anonymous requests directly as well. It may not be a problem in fact, but it's still an additional area of concern to keep in mind.
  • It may not be trivial for Kibana to figure out if anonymous access is being used, to adapt the UI etc.
  • With the current approach we can also configure multiple different anonymous providers based on different permission sets. Not sure if there is any value in this though (e.g. embed Kibana in different places using different roles or something like this), but we could.
  • Can we introduce UX improvements on top of this model? E.g.:
    • You can assign any role to the anon user. Does this create a risk that you may accidentally make it more permissive than you intend/should? Would visual indications that assets/spaces are accessible anonymously make sense here? E.g. places that this might be possible: in the list of saved objects, in spaces, a warning when assigning roles to the anon user. If you think that this might be helpful, let's touch base with design.

Yeah, these are all interesting ideas! I'm not sure if we can do that technically in all cases, since the anonymous user/API key can be created before Kibana knows that it will be used for that purpose, but let's explore this too.

  • Can we get the anonymous user in Kibana OOtB once it is set up in the .yml file, so that admins do not need to create it in Kibana?

We can create the user, but that's actually the easiest part. The complex part is to define roles that we cannot figure out automatically. Also, if we create something automatically, we should remove it automatically as well when anonymous access is disabled, and that may not be easy to do, or not something Kibana admins want us to do.

If we get feedback that anonymous access in Kibana is used very frequently, we can even invest some time and introduce something like an Anonymous access setup wizard with all the bells and whistles.

  • Does anything require special attention with regard to the cloud, except the usual white-listing?

Can't think of anything at the moment except for the new entries to the provider allow-list.

@arisonl
Contributor

arisonl commented Oct 14, 2020

There are only two possibilities here: [...] I'm thinking about supporting something like this for the embedding scenario anyway

Exactly, the embedding scenario comes into play here.

The complex part is to define roles that we cannot figure out automatically.

What are you thinking in terms of roles? Because if we can frame it somehow, and if it is feasible, it sounds safer than creating an anon user and allowing admins to assign just any role.

@azasypkin
Member Author

What are you thinking in terms of roles?

I mean we cannot know the privileges admins want to give anonymous users; they may want to allow access to Dashboard, Discover, Canvas, ML, or any other permutation of the features Kibana offers. It doesn't seem like we can make a reasonable decision on our own. Moreover, they may want to give several roles (e.g. the reporting_user role or other reserved roles as well). Or what do you have in mind?

@ManuelKugelmann

Gogogo! :) Thanks for adding this - can't wait for this to be available in Elastic Cloud!

@kobelb
Contributor

kobelb commented Nov 16, 2020

Full disclosure: this question is based solely on reading the description in the PR; I haven't taken a look at the code or tried it out myself (I'm being super lazy, apologies). How do you anticipate anonymous access working for consumers of Kibana's HTTP APIs? Is my understanding correct that some credentials are still required as part of the HTTP request, but it'd be possible to hit the login endpoint with no credentials and get a cookie that can be used for subsequent requests?

@azasypkin
Member Author

but it'd be possible to hit the login endpoint with no credentials and get a cookie that can be used for subsequent requests?

Yep, that would certainly be possible, similarly to how you'd use PKI or Kerberos for API access. I'd treat this as a workaround though: if API access is needed, I'd expect users to leverage HTTP authentication instead (with Basic, Bearer, or ApiKey).

Using anonymous access purely to interact with Kibana APIs isn't something I prioritized just yet, based on the use cases we know about. But if we feel a strong need to support that, I'd probably leverage a custom X-Auth-Provider-Hint: anonymous1 HTTP header for a limited set of providers (to complement the auth_provider_hint query string parameter that we'll have on the "UI level").
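
For illustration, an API request using that hypothetical header could look like this (the header is the proposal above, not an implemented API):

GET http://localhost:5601/api/saved_objects/_find?type=dashboard
X-Auth-Provider-Hint: anonymous1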

What do you think? Would the cookie-based workaround be enough to cover the use cases you have in mind?

@kobelb
Contributor

kobelb commented Nov 17, 2020

Using anonymous access purely to interact with Kibana APIs isn't something I prioritized just yet, based on the use cases we know about. But if we feel a strong need to support that, I'd probably leverage a custom X-Auth-Provider-Hint: anonymous1 HTTP header for a limited set of providers (to complement the auth_provider_hint query string parameter that we'll have on the "UI level").

I think the approach you've outlined would be a lot easier to consume than the cookie-based approach.

My primary concern stemmed from the inherent differences between how Kibana's HTTP APIs will behave when using anonymous access vs. Elasticsearch's. AFAIK, when an HTTP request to Elasticsearch doesn't contain authorization credentials and Elasticsearch is configured for anonymous access, it will use the anonymous user. I don't think it's required for us to replicate the Elasticsearch approach, and I've struggled to think of any use-cases that would absolutely require we do so. The ability to specify the X-Auth-Provider-Hint does seem like it'd make this a lot easier for consumers of the HTTP APIs if they do want to take advantage of anonymous access, but I'm tempted to say that we should wait until the need is obvious before introducing this complexity.

@azasypkin
Member Author

I don't think it's required for us to replicate the Elasticsearch approach, and I've struggled to think of any use-cases that would absolutely require we do so.

I'd even argue that in the Kibana context the current ES behavior may be a bit surprising, since there is no clear indication of the user's intent to use anonymous access for a particular request (unlike requests with Authorization and Cookie headers).

I'm tempted to say that we should wait until the need is obvious before introducing this complexity.

Good, let's wait for the initial reaction/feedback then.

@ryankeairns
Contributor

+1 what Larry said. Will check it out tomorrow.

Member

@dmlemeshko dmlemeshko left a comment

Member

@legrego legrego left a comment

I tested a bunch of different scenarios (both auth modes, different access levels and provider orderings), and this is looking fantastic!

function isAPIKeyCredentials(
  credentials: UsernameAndPasswordCredentials | APIKeyCredentials
): credentials is APIKeyCredentials {
  const apiKey = ((credentials as unknown) as Record<string, any>).apiKey;
Member

nit: we can simplify this cast:

Suggested change
const apiKey = ((credentials as unknown) as Record<string, any>).apiKey;
const apiKey = (credentials as APIKeyCredentials).apiKey;
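
For readers skimming the thread, a self-contained sketch of the simplified type guard with the credential shapes it assumes (the interfaces are inferred from this discussion, not copied from the PR):

interface UsernameAndPasswordCredentials {
  username: string;
  password: string;
}

interface APIKeyCredentials {
  apiKey: string | { id: string; key: string };
}

function isAPIKeyCredentials(
  credentials: UsernameAndPasswordCredentials | APIKeyCredentials
): credentials is APIKeyCredentials {
  // the apiKey property only exists on API key credentials
  return (credentials as APIKeyCredentials).apiKey !== undefined;
}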

}),
schema.object({
username: schema.maybe(schema.string()),
apiKey: schema.string(),
Member

I worry about the usability of a single apiKey field. We've found that users have a hard time doing the base64(id + ':' + key) concatenation correctly, and I feel like we'd be in a better place doing that for them. What do you think about exposing two fields for the id and key, so that we can encode them properly? That would be more consistent with the username/password approach, where we don't require users to transform those credentials into their base64 equivalent.

Member Author

Sure, we can go this way too. The reasons why I picked the other approach initially are:

  • A single string for an API key is a widely known pattern
  • I was thinking that in our API key management UI we'll be displaying an already concatenated and encoded key that's ready to use
  • Users may already have existing concatenated and encoded keys; I thought most users will store them in that form.

What about a more flexible schema instead?

  apiKey: schema.oneOf([
    schema.object({ key: schema.string(), id: schema.string() }),
    schema.string(),
  ]),
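
With that schema, both of these configurations would be accepted (the key material is the illustrative pair from the Elasticsearch docs, not a real credential):

  credentials:
    apiKey: "VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw=="

or, letting Kibana do the concatenation and encoding:

  credentials:
    apiKey:
      id: "VuaCfGcBCdbkQm-e5aOx"
      key: "ui2lp2axTNmsyakw9tvNnw"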

Member

I was thinking that in our API key management UI we'll be displaying an already concatenated and encoded key that's ready to use

Yeah I think that's something we can and should do when the API Key is first created - we don't have anything like that just yet though.

A single string for an API key is a widely known pattern
Users may already have existing concatenated and encoded keys; I thought most users will store them in that form.

Both of these are fair points. I guess it'll come down to how familiar the administrator is with ES API Keys when they go to configure anonymous access.

What about a more flexible schema instead?

I like it!

Member Author

I like it!

Good 👍 Will update schema then.

Member Author

Done

this.logger.debug(
  `Request to ${request.url.pathname}${request.url.search} has been authenticated.`
);
return AuthenticationResult.succeeded(username ? { ...user, username } : user, {
Member

question: what is the use case for overriding the username of the user who created the API Key?

I worry this will give our users a false sense of confidentiality -- a simple call to /_security/_authenticate via dev tools (or a rogue plugin making their own API calls) will reveal the true identity.

They might also not understand the importance of the username -- for example, if you changed the displayed username to elastic, then (I think) you'll see all reports generated by the real elastic user. Likewise, the real elastic user will see all reports generated by their anonymous users.

^^ When trying to test the above, I found that it doesn't seem to be overriding the username consistently. For example, I'm seeing audit logs for the canonical username as opposed to the overridden username. Was that intentional?

Member Author

question: what is the use case for overriding the username of the user who created the API Key?

Yeah, I'm not entirely sure we need this either. But since the initial UX proposal I got implied that the Profile link/page should stay even for anonymous users, I wanted to display something there. And displaying the name of the user that created the API key would also be confusing, so it's purely a UI thing.

I'm wondering if the ES API keys API can be changed to allow us to specify the username that should be associated with the API key.

They might also not understand the importance of the username -- for example, if you changed the displayed username to elastic, then (I think) you'll see all reports generated by the real elastic user. Likewise, the real elastic user will see all reports generated by their anonymous users.

Hmmm, I'd be really surprised if we base our checks on the username (ignoring authentication realms) 🙈 I'll double-check whether that's the case if we happen to keep this username.

For example, I'm seeing audit logs for the canonical username as opposed to the overridden username. Was that intentional?

You mean Kibana audit logs? If so, then it's not intentional, and I'd say it's even concerning, since audit logging should get the username only from AuthenticatedUser. That's a good catch, let me see where this is coming from.

Member Author

You mean Kibana audit logs? If so, then it's not intentional, and I'd say it's even concerning, since audit logging should get the username only from AuthenticatedUser. That's a good catch, let me see where this is coming from.

Hmm, yeah, it's coming from the response of shield.hasPrivileges, and we use it in the audit events. This doesn't look like something that'd be easy to change. Do you think we can just drop the idea of overriding the username, see what our users say, and get back to it only if it becomes a problem?

Member

Do you think we can just drop the idea of overriding the username, see what our users say, and get back to it only if it becomes a problem?

++ yeah I think we should drop the username override for now, and re-evaluate if needed.

Member Author

Done.


// If authentication succeeded, we should return some state to create a session that will be reused for all
// subsequent requests and anonymous user interactions with Kibana.
return isAPIKeyCredentials(this.credentials)
Member

What do you think about doing a check after authenticating, to make sure that the anonymous user isn't too powerful? An anonymous user with the manage_security cluster privilege is potentially dangerous to have, so I wonder if we should take steps to prevent that from happening. Having manage_security is just a couple of clicks away from having superuser access -- and once the anonymous user has superuser, the cluster is essentially public/unprotected.

We can consider this for a followup as well. This might be something that we only check if other auth providers are enabled. If the only option is anonymous, then maybe they really don't care that everyone is a superuser 🤷
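
A minimal sketch of what such a safety net might check (the role list is illustrative; the real heuristics are left to the follow-up discussed below):

// Hypothetical post-login check, not part of this PR: flag anonymous
// sessions whose underlying user holds obviously dangerous roles.
const DANGEROUS_ANONYMOUS_ROLES = ['superuser'];

function isAnonymousUserTooPowerful(user: { roles: readonly string[] }): boolean {
  return user.roles.some((role) => DANGEROUS_ANONYMOUS_ROLES.includes(role));
}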

Member Author

First of all, I definitely agree that we should build a safety net around anonymous access!

But I was thinking about this as part of a broader initiative around the security center, where we could display things like this to the users who are actually authorized to act on them (e.g. administrators). If we offload that work to the security center or the like, we can implement more sophisticated heuristics, since even though manage_security is dangerous, it's not the only thing that can do harm.

I'm also hesitant to do that additional check for every request that requires authentication for performance reasons knowing that it may not actually prevent damage in the end...

I'd rather move the discussion of possible ideas and solutions to a follow-up. Does that sound good to you?

Member

I'd rather move the discussion of possible ideas and solutions to a follow-up. Does that sound good to you?

tl;dr Yes, let's take this to a followup!

But I was thinking about this as part of a broader initiative around the security center, where we could display things like this to the users who are actually authorized to act on them (e.g. administrators). If we offload that work to the security center or the like, we can implement more sophisticated heuristics, since even though manage_security is dangerous, it's not the only thing that can do harm.

I certainly see the benefit of including these types of checks within the security center. Keep in mind though that a powerful anonymous user could choose to dismiss these checks so the real administrator might not notice.

I'm also hesitant to do that additional check for every request that requires authentication for performance reasons knowing that it may not actually prevent damage in the end...

I agree - I was thinking we could only check on login, rather than on every call to authenticate. It's less than perfect, but would give us reasonable protection.

Member Author

Keep in mind though that a powerful anonymous user could choose to dismiss these checks so the real administrator might not notice.

That's a good point. We can also design security center in a way that important actions/warnings won't go unnoticed by others in the admin group though, but it's an orthogonal topic anyway.

I agree - I was thinking we could only check on login, rather than on every call to authenticate. It's less than perfect, but would give us reasonable protection.

I see, let's discuss pros and cons in the follow-up then.

@@ -120,6 +129,38 @@ const providersConfigSchema = schema.object(
schema.object({ ...getCommonProviderSchemaProperties(), realm: schema.string() })
)
),
anonymous: getUniqueProviderSchema(
Member

reminder: don't forget docs for these new settings

Member Author

For sure (and the allow-list update for Cloud as well)! Depending on how the review goes, I'll either add docs to this PR or move them to a separate one.

});

it('should fail if `Authorization` header is present, but not valid', async () => {
  const spnegoResponse = await supertest
Member

nit: is spnegoResponse a copy/paste error from the Kerberos suite?

Member Author

Yep, thanks! Will fix.

@@ -131,12 +131,18 @@ export class SecurityNavControl extends Component<Props, State> {
};
Member

question: Do you think it still makes sense to show the Profile link for anonymous users? This would only end up showing them an "implementation detail" of the cluster's anonymous access setup -- in other words, we show the underlying user, which might not always make sense to them.

Instead of the profile link, we could show a message indicating that they're using Kibana as an anonymous user/guest, which would give more context to the new "Log in" button that we show them here. (@ryankeairns thoughts?)

Contributor

+1 We can hide the Profile link.

As for an additional message, the header of the context menu should already show the user as Guest, so I don't think we need any further clarification. We will then end up with:

(screenshot: the nav control context menu showing the user as Guest)

Member Author

Good, I'll hide it then!

Member Author

Done.

@@ -360,6 +387,32 @@ export class LoginForm extends Component<Props, State> {
return null;
};

private renderAutoLoginOverlay = () => {
Member

For this overlay, what do you think about doing something a little simpler? The <EuiLoadingContent /> draws my attention away from the smaller "Authenticating..." text below, and almost gives the impression that I'm waiting on the login screen to finish loading.

Maybe something like this?
(screenshot: the proposed simpler loading state, a spinner next to the "Authenticating..." text)

diff --git a/x-pack/plugins/security/public/authentication/login/components/login_form/login_form.tsx b/x-pack/plugins/security/public/authentication/login/components/login_form/login_form.tsx
index 346bc6e7d2c..e8d84b5935e 100644
--- a/x-pack/plugins/security/public/authentication/login/components/login_form/login_form.tsx
+++ b/x-pack/plugins/security/public/authentication/login/components/login_form/login_form.tsx
@@ -25,7 +25,6 @@ import {
   EuiLoadingSpinner,
   EuiLink,
   EuiHorizontalRule,
-  EuiLoadingContent,
 } from '@elastic/eui';
 import { i18n } from '@kbn/i18n';
 import { FormattedMessage } from '@kbn/i18n/react';
@@ -389,27 +388,19 @@ export class LoginForm extends Component<Props, State> {
 
   private renderAutoLoginOverlay = () => {
     return (
-      <Fragment>
-        <EuiPanel data-test-subj="loginSelector" paddingSize="none">
-          {this.props.selector.providers.map(() => (
-            <EuiLoadingContent className="secLoginCard secLoginCard-autoLogin" lines={2} />
-          ))}
-        </EuiPanel>
-        <EuiSpacer />
-        <EuiFlexGroup alignItems="center" justifyContent="center" gutterSize="m" responsive={false}>
-          <EuiFlexItem grow={false}>
-            <EuiText size="s" className="eui-textCenter">
-              <FormattedMessage
-                id="xpack.security.loginPage.autoLoginAuthenticatingLabel"
-                defaultMessage="Authenticating…"
-              />
-            </EuiText>
-          </EuiFlexItem>
-          <EuiFlexItem grow={false}>
-            <EuiLoadingSpinner size="m" />
-          </EuiFlexItem>
-        </EuiFlexGroup>
-      </Fragment>
+      <EuiFlexGroup alignItems="center" justifyContent="center" gutterSize="m" responsive={false}>
+        <EuiFlexItem grow={false}>
+          <EuiLoadingSpinner size="l" />
+        </EuiFlexItem>
+        <EuiFlexItem grow={false}>
+          <EuiText size="m" className="eui-textCenter">
+            <FormattedMessage
+              id="xpack.security.loginPage.autoLoginAuthenticatingLabel"
+              defaultMessage="Authenticating..."
+            />
+          </EuiText>
+        </EuiFlexItem>
+      </EuiFlexGroup>
     );
   };
 

@ryankeairns might have better ideas when he reviews, so don't take mine as the "winning" suggestion.

Member Author

The current behavior is actually based on the UX proposed by @ryankeairns 🙂 I think the intention was to better support both the success and failure scenarios, where the overlay must transition to the full login selector with all the options, but let's wait for Ryan's input.

Contributor

@ryankeairns ryankeairns Nov 19, 2020

🤔 Do we anticipate the failure state to be infrequent? Perhaps I overestimated how often users would land on the login selector/list under the auto-login scenario. The benefit of the current loading design is that it foreshadows what is coming; however, if that doesn't materialize 99% of the time then I can see going with Larry's simpler suggestion.

Member Author

Do we anticipate the failure state to be infrequent?

I don't think anyone knows that yet. We definitely hope that the failure rate will be on the low side, but failures will definitely happen (ES unavailability, license issues, and so on). Both UI proposals work for me (I'm more concerned about the Loading Elastic screens before and after this screen, to be honest 🙂).

however, if that doesn't materialize 99% of the time then I can see going with Larry's simpler suggestion.

Okay, let's optimistically say that it'll be the case and see how it goes. I'll make that change then. Thanks everyone!

Member Author

@azasypkin azasypkin Nov 20, 2020

Done. And updated the gifs in the issue description.

public render() {
  if (this.isLoadingState(LoadingStateType.AutoLogin)) {
    return this.renderAutoLoginOverlay();
Member

Can we add unit tests to cover the auto-login scenario?

Member Author

Definitely, I expected a bit more back and forth on this particular UI change. Once we know how it should look, I'll add a test to cover it 👍

Member Author

Done.

@azasypkin
Member Author

Kibana-QA changes LGTM. I started flaky-test-runner to check tests stability:

Thanks @dmlemeshko! Looks like we're good here:

https://kibana-ci.elastic.co/job/kibana+flaky-test-suite-runner/1013/

Executions: 40, Failures: 0

https://kibana-ci.elastic.co/job/kibana+flaky-test-suite-runner/1014/

Executions: 20, Failures: 0

Contributor

@ryankeairns ryankeairns left a comment

Approving with two small recommendations.

@@ -41,6 +41,14 @@
+ .secLoginCard {
border-top: $euiBorderThin;
}

&.secLoginCard-autoLogin {
Contributor

Will this be removed once the loading state is simplified as proposed by Larry?

If it remains for some other use, then we should tweak this name. The single - is intended for modifiers/states (e.g. isLoading). If this is simply a variation on the layout, then we could simply change it to __autoLogin, for example.

Member Author

Will this be removed once the loading state is simplified as proposed by Larry?

Yep, we don't need this anymore.

If it remains for some other use, then we should tweak this name. The single - is intended for modifiers/states (e.g. isLoading). If this is simply a variation on the layout, then we could simply change it to __autoLogin, for example.

Good to know! I was torn on this; in the code, auto-login is treated as one of the loading states, but at the same time it's represented as a completely different layout too.

border-color: transparent;

+ .secLoginCard {
padding-top: unset;
Contributor

This seems to be supported well enough - since we no longer support IE - but setting this to 0 or inherit likely achieves the same result in a potentially more foolproof manner.

@azasypkin
Member Author

@legrego PR should be ready for another review round, thanks!

Member

@legrego legrego left a comment

LGTM - great work, Aleh!

`);
});

it('requires both `id` and `key` in extend `apiKey` format credentials', () => {
Member

nit:

Suggested change
it('requires both `id` and `key` in extend `apiKey` format credentials', () => {
it('requires both `id` and `key` in extended `apiKey` format credentials', () => {

- [credentials.0.username]: expected value of type [string] but got [undefined]
- [credentials.1.apiKey]: types that failed validation:
- [credentials.apiKey.0.key]: expected value of type [string] but got [undefined]
- [credentials.apiKey.1]: expected value of type [string] but got [Object]"
Member

Out of scope for this PR, but I'd love to see us provide more helpful error messages for some of these scenarios. We've all trained ourselves to read these, but they're not easy to understand once the schemas become moderately complex.

Member Author

++, these messages have gotten out of control. If there is no easy way to improve kbn/config-schema, then we'll use custom validation functions that return less cryptic and more actionable messages.
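
As a sketch of that fallback, kbn/config-schema already supports custom validate functions that can return a friendlier message (the message text here is illustrative):

import { schema } from '@kbn/config-schema';

// illustrative only: fail fast with a readable message instead of the generic oneOf output
const apiKeySchema = schema.string({
  // returning a string fails validation with that message
  validate(value) {
    if (value.trim().length === 0) {
      return 'must be a non-empty base64-encoded "id:api_key" pair';
    }
  },
});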

Member Author

Issue for kbn/config-schema: #84264

* @param request Request instance.
* @param state State value previously stored by the provider.
*/
private async authenticateViaAuthorizationHeader(request: KibanaRequest, state?: unknown) {
Member

I really like the simplifications you made to this provider in the second review round!

@azasypkin azasypkin merged commit e3ca8a9 into elastic:master Nov 23, 2020
@azasypkin azasypkin deleted the issue-18331-anonymous-access branch November 23, 2020 10:41
azasypkin added a commit to azasypkin/kibana that referenced this pull request Nov 23, 2020
# Conflicts:
#	test/functional/services/common/browser.ts
gmmorris added a commit to rudolf/kibana that referenced this pull request Nov 23, 2020
* master: (67 commits)
  [Observability] Load hasData call asynchronously (elastic#80644)
  Implement AnonymousAuthenticationProvider. (elastic#79985)
  Deprecate `visualization:colorMapping` advanced setting (elastic#83372)
  [TSVB] [Rollup] Table tab not working with rollup indexes (elastic#83635)
  Revert "[Search] Search batching using bfetch (elastic#83418)" (elastic#84037)
  skip flaky suite (elastic#83772)
  skip flaky suite (elastic#69849)
  create kbn-legacy-logging package (elastic#77678)
  [Search] Search batching using bfetch (elastic#83418)
  [Security Solution] Refactor Timeline flyout to take a full page (elastic#82033)
  Drop use of console-stamp (elastic#83922)
  skip flaky suite (elastic#84011 , elastic#84012)
  Fixed usage of `isReady` for usage collection of alerts and actions (elastic#83760)
  [maps] support URL drilldowns (elastic#83732)
  Revert "Added default dedupKey value as an {{alertInstanceId}} to provide grouping functionality for PagerDuty incidents. (elastic#83226)"
  [code coverage] Update jest config to collect more data (elastic#83804)
  Added default dedupKey value as an {{alertInstanceId}} to provide grouping functionality for PagerDuty incidents. (elastic#83226)
  [Security Solution] Give notice when endpoint policy is out of date (elastic#83469)
  [Security Solution] Sync url state on any changes to query string (elastic#83314)
  [CI] Initial TeamCity implementation (elastic#81043)
  ...
@azasypkin
Member Author

7.x/7.11.0: 751b7f2

@DeBaker1974

Hello, would this functionality enable a customer to display "end customer" specific information on dashboards on its customer portal?
Use case: I am a customer, I log in to my customer portal, and I see reporting (dashboards) about my monthly consumption, monthly billing, order information...
Thanks.

@azasypkin
Member Author

Hello, would this functionality enable a customer to display "end customer" specific information on dashboards on its customer portal?
Use case: I am a customer, I log in to my customer portal, and I see reporting (dashboards) about my monthly consumption, monthly billing, order information...
Thanks.

Hi @pboulanger74,

If I understand your use case correctly, then what you need isn't anonymous access, but SSO (e.g. SAML or OIDC). In this case, when a user logs in to the portal that embeds a Kibana dashboard, they will automatically log in to Kibana using the same Identity Provider session, and hence Kibana/Elasticsearch will know exactly who the user is.

The second part would be to set up field- and document-level security so that the relevant data is filtered based on the name of the current user.

It should be possible to achieve this since 6.x, I believe.
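
As a sketch of that second part, an Elasticsearch role with a templated document-level security query might look like this (the index and field names are placeholders):

PUT http://localhost:9200/_security/role/portal_customer
Authorization: Basic XXXX
Content-Type: application/json

{
  "indices": [
    {
      "names": ["billing-*"],
      "privileges": ["read"],
      "query": {
        "template": {
          "source": { "term": { "customer_name": "{{_user.username}}" } }
        }
      }
    }
  ]
}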

@kibanamachine
Contributor

kibanamachine commented Dec 14, 2020

💔 Build Failed

Failed CI Steps


Test Failures

Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/ml/anomaly_detection/annotations·ts.machine learning anomaly detection annotations displays error on broken annotation index and recovers after fix

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 3 times on tracked branches: https://github.com/elastic/kibana/issues/77289

[00:00:00]       │
[00:00:00]         └-: machine learning
[00:00:00]           └-> "before all" hook
[00:00:00]           └-> "before all" hook
[00:00:00]             │ debg creating role ft_ml_source
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source]
[00:00:00]             │ debg creating role ft_ml_source_readonly
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source_readonly]
[00:00:00]             │ debg creating role ft_ml_dest
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest]
[00:00:00]             │ debg creating role ft_ml_dest_readonly
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest_readonly]
[00:00:00]             │ debg creating role ft_ml_ui_extras
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_ui_extras]
[00:00:00]             │ debg creating role ft_default_space_ml_all
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_all]
[00:00:00]             │ debg creating role ft_default_space_ml_read
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_read]
[00:00:00]             │ debg creating role ft_default_space_ml_none
[00:00:00]             │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_none]
[00:00:00]             │ debg creating user ft_ml_poweruser
[00:00:00]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser]
[00:00:00]             │ debg created user ft_ml_poweruser
[00:00:00]             │ debg creating user ft_ml_poweruser_spaces
[00:00:00]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser_spaces]
[00:00:00]             │ debg created user ft_ml_poweruser_spaces
[00:00:00]             │ debg creating user ft_ml_viewer
[00:00:00]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer]
[00:00:00]             │ debg created user ft_ml_viewer
[00:00:00]             │ debg creating user ft_ml_viewer_spaces
[00:00:00]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer_spaces]
[00:00:00]             │ debg created user ft_ml_viewer_spaces
[00:00:00]             │ debg creating user ft_ml_unauthorized
[00:00:01]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized]
[00:00:01]             │ debg created user ft_ml_unauthorized
[00:00:01]             │ debg creating user ft_ml_unauthorized_spaces
[00:00:01]             │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized_spaces]
[00:00:01]             │ debg created user ft_ml_unauthorized_spaces
[00:09:50]           └-: anomaly detection
[00:09:50]             └-> "before all" hook
[00:35:14]             └-: annotations
[00:35:14]               └-> "before all" hook
[00:35:14]               └-> "before all" hook
[00:35:14]                 │ info [ml/farequote] Loading "mappings.json"
[00:35:14]                 │ info [ml/farequote] Loading "data.json.gz"
[00:35:14]                 │ info [ml/farequote] Skipped restore for existing index "ft_farequote"
[00:35:15]                 │ debg Searching for 'index-pattern' with title 'ft_farequote'...
[00:35:15]                 │ debg  > Found '0d3c50a0-3e04-11eb-b900-ff5f0ad6bd78'
[00:35:15]                 │ debg Index pattern with title 'ft_farequote' already exists. Nothing to create.
[00:35:15]                 │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:35:16]                 │ debg Creating anomaly detection job with id 'fq_single_1_smv'...
[00:35:16]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:35:16]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:35:16]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:35:16]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:35:16]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-config] creating index, cause [auto(bulk api)], templates [.ml-config], shards [1]/[1]
[00:35:16]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-config]
[00:35:16]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-notifications-000001] creating index, cause [auto(bulk api)], templates [.ml-notifications-000001], shards [1]/[1]
[00:35:16]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-notifications-000001]
[00:35:18]                 │ debg Waiting up to 5000ms for 'fq_single_1_smv' to exist...
[00:35:18]                 │ debg Creating datafeed with id 'datafeed-fq_single_1_smv'...
[00:35:19]                 │ debg Waiting up to 5000ms for 'datafeed-fq_single_1_smv' to exist...
[00:35:19]                 │ debg Opening anomaly detection job 'fq_single_1_smv'...
[00:35:19]                 │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Opening job [fq_single_1_smv]
[00:35:19]                 │ info [o.e.x.c.m.u.MlIndexAndAlias] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:35:19]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:35:19]                 │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:35:19]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:35:19]                 │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:35:19]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ml-size-based-ilm-policy]
[00:35:19]                 │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ml-size-based-ilm-policy]
[00:35:20]                 │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] [autodetect/216359] [CResourceMonitor.cc@74] Setting model memory limit to 10 MB
[00:35:20]                 │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Successfully set job state to [opened] for job [fq_single_1_smv]
[00:35:20]                 │ debg Starting datafeed 'datafeed-fq_single_1_smv' with start: '0', end: '1607949215011'...
[00:35:20]                 │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2020-12-14T12:33:35.011Z) with frequency [450000ms]
[00:35:20]                 │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:35:20]                 │ debg Fetching datafeed state for datafeed datafeed-fq_single_1_smv
[00:35:20]                 │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:35:20]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/Tougz03SRha551qW9VaPLw] update_mapping [_doc]
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:20]                 │ debg Fetching datafeed state for datafeed datafeed-fq_single_1_smv
[00:35:20]                 │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:20]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:21]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:21]                 │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:35:21]                 │ debg Fetching datafeed state for datafeed datafeed-fq_single_1_smv
[00:35:21]                 │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:35:21]                 │ debg Fetching datafeed state for datafeed datafeed-fq_single_1_smv
[00:35:21]                 │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:35:21]                 │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] Lookback has finished
[00:35:21]                 │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] attempt to stop datafeed [datafeed-fq_single_1_smv] for job [fq_single_1_smv]
[00:35:21]                 │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_single_1_smv] for job [fq_single_1_smv]...
[00:35:21]                 │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] stopping datafeed [datafeed-fq_single_1_smv] for job [fq_single_1_smv], acquired [true]...
[00:35:21]                 │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] datafeed [datafeed-fq_single_1_smv] for job [fq_single_1_smv] has been stopped
[00:35:22]                 │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Closing job [fq_single_1_smv], because [close job (api)]
[00:35:22]                 │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] [autodetect/216359] [CCmdSkeleton.cc@51] Handled 86274 records
[00:35:22]                 │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] [autodetect/216359] [CAnomalyJob.cc@1569] Pruning all models
[00:35:22]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/Tougz03SRha551qW9VaPLw] update_mapping [_doc]
[00:35:22]                 │ info [o.e.x.m.p.AbstractNativeProcess] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] State output finished
[00:35:22]                 │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] 480 buckets parsed from autodetect output
[00:35:22]                 │ debg Fetching datafeed state for datafeed datafeed-fq_single_1_smv
[00:35:22]                 │ debg Waiting up to 120000ms for job state to be closed...
[00:35:22]                 │ debg Fetching anomaly detection job stats for job fq_single_1_smv...
[00:35:22]                 │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:35:22]                 │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_single_1_smv] job closed
[00:35:22]                 │ debg Fetching anomaly detection job stats for job fq_single_1_smv...
[00:35:22]                 │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-annotations-6-wrong-mapping] creating index, cause [api], templates [], shards [1]/[1]
[00:35:22]                 │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-annotations-6-wrong-mapping/u6Kjh2GvTmSu5hvY-lWEAA] update_mapping [_doc]
[00:35:22]                 │ info [o.e.x.c.m.j.p.ElasticsearchMappings] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Version of mappings for [.ml-annotations-6-wrong-mapping] not found, recreating
[00:35:22]                 │ debg SecurityPage.forceLogout
[00:35:22]                 │ debg Find.existsByDisplayedByCssSelector('.login-form') with timeout=100
[00:35:22]                 │ info [o.e.x.m.MlInitializationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Error creating ML annotations index or aliases
[00:35:22]                 │      java.lang.IllegalArgumentException: mapper [modified_time] cannot be changed from type [long] to [date]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.FieldMapper.checkIncomingMergeType(FieldMapper.java:287) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.FieldMapper.merge(FieldMapper.java:272) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.FieldMapper.merge(FieldMapper.java:55) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.ObjectMapper.doMerge(ObjectMapper.java:526) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.RootObjectMapper.doMerge(RootObjectMapper.java:307) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.ObjectMapper.merge(ObjectMapper.java:486) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.RootObjectMapper.merge(RootObjectMapper.java:302) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.Mapping.merge(Mapping.java:106) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.index.mapper.DocumentMapper.merge(DocumentMapper.java:301) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.metadata.MetadataMappingService$PutMappingExecutor.applyRequest(MetadataMappingService.java:259) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.metadata.MetadataMappingService$PutMappingExecutor.execute(MetadataMappingService.java:227) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:697) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:319) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:214) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:674) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:35:22]                 │      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
[00:35:22]                 │      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
[00:35:22]                 │      	at java.lang.Thread.run(Thread.java:832) [?:?]
[00:35:23]                 │ debg --- retry.tryForTime error: .login-form is not displayed
[00:35:23]                 │ debg Redirecting to /logout to force the logout
[00:35:23]                 │ debg Waiting on the login form to appear
[00:35:23]                 │ debg Waiting for Login Page to appear.
[00:35:23]                 │ debg Waiting up to 100000ms for login page...
[00:35:23]                 │ debg browser[INFO] http://localhost:61191/logout?_t=1607949218373 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:35:23]                 │
[00:35:23]                 │ debg browser[INFO] http://localhost:61191/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:35:23]                 │ debg Find.existsByDisplayedByCssSelector('.login-form') with timeout=2500
[00:35:26]                 │ debg browser[INFO] http://localhost:61191/login?_t=1607949218373 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:35:26]                 │
[00:35:26]                 │ debg browser[INFO] http://localhost:61191/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:35:26]                 │ERROR browser[SEVERE] http://localhost:61191/internal/spaces/_active_space - Failed to load resource: the server responded with a status of 401 (Unauthorized)
[00:35:26]                 │ debg browser[INFO] http://localhost:61191/38302/bundles/core/core.entry.js 12:193817 "Detected an unhandled Promise rejection.
[00:35:26]                 │      Error: Unauthorized"
[00:35:26]                 │ERROR browser[SEVERE] http://localhost:61191/38302/bundles/core/core.entry.js 5:3002 
[00:35:26]                 │ debg --- retry.tryForTime error: .login-form is not displayed
[00:35:27]                 │ debg Find.existsByDisplayedByCssSelector('.login-form') with timeout=2500
[00:35:27]                 │ debg TestSubjects.exists(loginForm)
[00:35:27]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="loginForm"]') with timeout=2500
[00:35:27]                 │ debg Waiting for Login Form to appear.
[00:35:27]                 │ debg Waiting up to 100000ms for login form...
[00:35:27]                 │ debg TestSubjects.exists(loginForm)
[00:35:27]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="loginForm"]') with timeout=2500
[00:35:27]                 │ debg TestSubjects.setValue(loginUsername, ft_ml_poweruser)
[00:35:27]                 │ debg TestSubjects.click(loginUsername)
[00:35:27]                 │ debg Find.clickByCssSelector('[data-test-subj="loginUsername"]') with timeout=10000
[00:35:27]                 │ debg Find.findByCssSelector('[data-test-subj="loginUsername"]') with timeout=10000
[00:35:27]                 │ debg TestSubjects.setValue(loginPassword, mlp001)
[00:35:27]                 │ debg TestSubjects.click(loginPassword)
[00:35:27]                 │ debg Find.clickByCssSelector('[data-test-subj="loginPassword"]') with timeout=10000
[00:35:27]                 │ debg Find.findByCssSelector('[data-test-subj="loginPassword"]') with timeout=10000
[00:35:27]                 │ debg TestSubjects.click(loginSubmit)
[00:35:27]                 │ debg Find.clickByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:35:27]                 │ debg Find.findByCssSelector('[data-test-subj="loginSubmit"]') with timeout=10000
[00:35:27]                 │ debg Waiting for login result, expected: chrome.
[00:35:27]                 │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"] .app-wrapper:not(.hidden-chrome)') with timeout=20000
[00:35:27]                 │ proc [kibana]   log   [12:33:42.502] [info][plugins][routes][security] Logging in with provider "basic" (basic)
[00:35:29]                 │ debg browser[INFO] http://localhost:61191/app/home 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:35:29]                 │
[00:35:29]                 │ debg browser[INFO] http://localhost:61191/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:35:29]                 │ debg Finished login process currentUrl = http://localhost:61191/app/home#/
[00:35:29]                 │ debg Waiting up to 20000ms for logout button visible...
[00:35:29]                 │ debg TestSubjects.exists(userMenuButton)
[00:35:29]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenuButton"]') with timeout=2500
[00:35:29]                 │ debg TestSubjects.exists(userMenu)
[00:35:29]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"]') with timeout=2500
[00:35:32]                 │ debg --- retry.tryForTime error: [data-test-subj="userMenu"] is not displayed
[00:35:32]                 │ debg TestSubjects.click(userMenuButton)
[00:35:32]                 │ debg Find.clickByCssSelector('[data-test-subj="userMenuButton"]') with timeout=10000
[00:35:32]                 │ debg Find.findByCssSelector('[data-test-subj="userMenuButton"]') with timeout=10000
[00:35:33]                 │ debg TestSubjects.exists(userMenu)
[00:35:33]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"]') with timeout=120000
[00:35:33]                 │ debg TestSubjects.exists(userMenu > logoutLink)
[00:35:33]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="userMenu"] [data-test-subj="logoutLink"]') with timeout=2500
[00:35:33]               └-> displays error on broken annotation index and recovers after fix
[00:35:33]                 └-> "before each" hook: global before each
[00:35:33]                 │ debg === TEST STEP === loads from job list row link
[00:35:33]                 │ debg navigating to ml url: http://localhost:61191/app/ml
[00:35:33]                 │ debg navigate to: http://localhost:61191/app/ml
[00:35:33]                 │ debg browser[INFO] http://localhost:61191/app/ml?_t=1607949227975 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:35:33]                 │
[00:35:33]                 │ debg browser[INFO] http://localhost:61191/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:35:33]                 │ debg ... sleep(700) start
[00:35:34]                 │ debg ... sleep(700) end
[00:35:34]                 │ debg returned from get, calling refresh
[00:35:34]                 │ERROR browser[SEVERE] http://localhost:61191/38302/bundles/core/core.entry.js 12:192870 TypeError: Failed to fetch
[00:35:34]                 │          at _callee3$ (http://localhost:61191/38302/bundles/core/core.entry.js:6:43940)
[00:35:34]                 │          at l (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751406)
[00:35:34]                 │          at Generator._invoke (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751159)
[00:35:34]                 │          at Generator.forEach.e.<computed> [as throw] (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751763)
[00:35:34]                 │          at fetch_asyncGeneratorStep (http://localhost:61191/38302/bundles/core/core.entry.js:6:38998)
[00:35:34]                 │          at _throw (http://localhost:61191/38302/bundles/core/core.entry.js:6:39406)
[00:35:34]                 │ debg browser[INFO] http://localhost:61191/app/ml?_t=1607949227975 341 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:35:34]                 │
[00:35:34]                 │ debg browser[INFO] http://localhost:61191/bootstrap.js 42:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:35:34]                 │ debg currentUrl = http://localhost:61191/app/ml
[00:35:34]                 │          appUrl = http://localhost:61191/app/ml
[00:35:34]                 │ debg TestSubjects.find(kibanaChrome)
[00:35:34]                 │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:35:35]                 │ debg ... sleep(501) start
[00:35:35]                 │ debg ... sleep(501) end
[00:35:35]                 │ debg in navigateTo url = http://localhost:61191/app/ml/overview
[00:35:35]                 │ debg --- retry.try error: URL changed, waiting for it to settle
[00:35:36]                 │ debg ... sleep(501) start
[00:35:36]                 │ debg ... sleep(501) end
[00:35:36]                 │ debg in navigateTo url = http://localhost:61191/app/ml/overview
[00:35:36]                 │ debg TestSubjects.exists(statusPageContainer)
[00:35:36]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="statusPageContainer"]') with timeout=2500
[00:35:39]                 │ debg --- retry.tryForTime error: [data-test-subj="statusPageContainer"] is not displayed
[00:35:39]                 │ debg TestSubjects.exists(mlApp)
[00:35:39]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlApp"]') with timeout=2000
[00:35:39]                 │ debg TestSubjects.click(~mlMainTab & ~anomalyDetection)
[00:35:39]                 │ debg Find.clickByCssSelector('[data-test-subj~="mlMainTab"][data-test-subj~="anomalyDetection"]') with timeout=10000
[00:35:39]                 │ debg Find.findByCssSelector('[data-test-subj~="mlMainTab"][data-test-subj~="anomalyDetection"]') with timeout=10000
[00:35:39]                 │ debg TestSubjects.exists(~mlMainTab & ~anomalyDetection & ~selected)
[00:35:39]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="mlMainTab"][data-test-subj~="anomalyDetection"][data-test-subj~="selected"]') with timeout=120000
[00:35:40]                 │ debg TestSubjects.exists(mlPageJobManagement)
[00:35:40]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlPageJobManagement"]') with timeout=120000
[00:35:40]                 │ debg TestSubjects.exists(~mlJobListTable)
[00:35:40]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="mlJobListTable"]') with timeout=60000
[00:35:40]                 │ debg TestSubjects.exists(mlJobListTable loaded)
[00:35:40]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlJobListTable loaded"]') with timeout=30000
[00:35:40]                 │ debg TestSubjects.exists(~mlJobListTable)
[00:35:40]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="mlJobListTable"]') with timeout=60000
[00:35:40]                 │ debg TestSubjects.exists(mlJobListTable loaded)
[00:35:40]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlJobListTable loaded"]') with timeout=30000
[00:35:40]                 │ debg TestSubjects.find(mlJobListSearchBar)
[00:35:40]                 │ debg Find.findByCssSelector('[data-test-subj="mlJobListSearchBar"]') with timeout=10000
[00:35:40]                 │ debg TestSubjects.find(~mlJobListTable)
[00:35:40]                 │ debg Find.findByCssSelector('[data-test-subj~="mlJobListTable"]') with timeout=10000
[00:35:40]                 │ debg TestSubjects.click(~mlJobListTable > ~row-fq_single_1_smv > mlOpenJobsInSingleMetricViewerButton)
[00:35:40]                 │ debg Find.clickByCssSelector('[data-test-subj~="mlJobListTable"] [data-test-subj~="row-fq_single_1_smv"] [data-test-subj="mlOpenJobsInSingleMetricViewerButton"]') with timeout=10000
[00:35:40]                 │ debg Find.findByCssSelector('[data-test-subj~="mlJobListTable"] [data-test-subj~="row-fq_single_1_smv"] [data-test-subj="mlOpenJobsInSingleMetricViewerButton"]') with timeout=10000
[00:35:41]                 │ debg TestSubjects.exists(~mlPageSingleMetricViewer)
[00:35:41]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj~="mlPageSingleMetricViewer"]') with timeout=120000
[00:35:41]                 │ debg TestSubjects.missingOrFail(mlLoadingIndicator)
[00:35:41]                 │ debg Find.waitForDeletedByCssSelector('[data-test-subj="mlLoadingIndicator"]') with timeout=2500
[00:35:41]                 │ proc [kibana]   log   [12:33:56.648] [error][data][elasticsearch] [search_phase_execution_exception]: all shards failed
[00:35:41]                 │ proc [kibana]   log   [12:33:56.749] [error][data][elasticsearch] [search_phase_execution_exception]: all shards failed
[00:35:42]                 │ERROR browser[SEVERE] http://localhost:61191/api/ml/annotations - Failed to load resource: the server responded with a status of 400 (Bad Request)
[00:35:42]                 │ERROR browser[SEVERE] http://localhost:61191/api/ml/annotations - Failed to load resource: the server responded with a status of 400 (Bad Request)
[00:35:42]                 │ debg === TEST STEP === pre-fills the job selection
[00:35:42]                 │ debg TestSubjects.findAll(mlJobSelectionBadges > ~mlJobSelectionBadge)
[00:35:42]                 │ debg Find.allByCssSelector('[data-test-subj="mlJobSelectionBadges"] [data-test-subj~="mlJobSelectionBadge"]') with timeout=10000
[00:35:42]                 │ debg === TEST STEP === pre-fills the detector input
[00:35:42]                 │ debg TestSubjects.exists(mlSingleMetricViewerSeriesControls > mlSingleMetricViewerDetectorSelect)
[00:35:42]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlSingleMetricViewerSeriesControls"] [data-test-subj="mlSingleMetricViewerDetectorSelect"]') with timeout=120000
[00:35:42]                 │ debg TestSubjects.getAttribute(mlSingleMetricViewerSeriesControls > mlSingleMetricViewerDetectorSelect, value, tryTimeout=120000, findTimeout=10000)
[00:35:42]                 │ debg TestSubjects.find(mlSingleMetricViewerSeriesControls > mlSingleMetricViewerDetectorSelect)
[00:35:42]                 │ debg Find.findByCssSelector('[data-test-subj="mlSingleMetricViewerSeriesControls"] [data-test-subj="mlSingleMetricViewerDetectorSelect"]') with timeout=10000
[00:35:42]                 │ debg === TEST STEP === should display the annotations section showing an error
[00:35:42]                 │ debg TestSubjects.exists(mlAnomalyExplorerAnnotations error)
[00:35:42]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlAnomalyExplorerAnnotations error"]') with timeout=30000
[00:35:42]                 │ proc [kibana]   log   [12:33:57.286] [error][data][elasticsearch] [search_phase_execution_exception]: all shards failed
[00:35:42]                 │ERROR browser[SEVERE] http://localhost:61191/api/ml/annotations - Failed to load resource: the server responded with a status of 400 (Bad Request)
[00:35:42]                 │ debg === TEST STEP === should navigate to anomaly explorer
[00:35:42]                 │ debg TestSubjects.click(mlAnomalyResultsViewSelectorExplorer)
[00:35:42]                 │ debg Find.clickByCssSelector('[data-test-subj="mlAnomalyResultsViewSelectorExplorer"]') with timeout=10000
[00:35:42]                 │ debg Find.findByCssSelector('[data-test-subj="mlAnomalyResultsViewSelectorExplorer"]') with timeout=10000
[00:35:42]                 │ debg TestSubjects.exists(mlPageAnomalyExplorer)
[00:35:42]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlPageAnomalyExplorer"]') with timeout=120000
[00:35:43]                 │ERROR browser[SEVERE] http://localhost:61191/api/ml/anomaly_detectors - Failed to load resource: net::ERR_NETWORK_CHANGED
[00:35:43]                 │ debg browser[INFO] http://localhost:61191/38302/bundles/plugin/ml/ml.chunk.6.js 2:122392 "jobService error getting list of jobs:" TypeError: Failed to fetch
[00:35:43]                 │          at _callee3$ (http://localhost:61191/38302/bundles/core/core.entry.js:6:43940)
[00:35:43]                 │          at l (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751406)
[00:35:43]                 │          at Generator._invoke (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751159)
[00:35:43]                 │          at Generator.forEach.e.<computed> [as throw] (http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:321:1751763)
[00:35:43]                 │          at fetch_asyncGeneratorStep (http://localhost:61191/38302/bundles/core/core.entry.js:6:38998)
[00:35:43]                 │          at _throw (http://localhost:61191/38302/bundles/core/core.entry.js:6:39406)
[00:35:43]                 │ debg browser[INFO] http://localhost:61191/38302/bundles/plugin/ml/ml.chunk.6.js 2:119689 "Error loading jobs in route resolve." Object
[00:35:43]                 │ERROR browser[SEVERE] http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js 297:110227 Uncaught TypeError: Cannot read property 'datafeed_config' of undefined
[00:35:43]                 │ERROR browser[SEVERE] http://localhost:61191/38302/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js 297:110227 Uncaught TypeError: Cannot read property 'datafeed_config' of undefined
[00:35:43]                 │ debg === TEST STEP === should display the annotations section showing an error
[00:35:43]                 │ debg TestSubjects.exists(mlAnomalyExplorerAnnotationsPanel error)
[00:35:43]                 │ debg Find.existsByDisplayedByCssSelector('[data-test-subj="mlAnomalyExplorerAnnotationsPanel error"]') with timeout=30000
[00:35:45]                 │ debg --- retry.tryForTime error: [data-test-subj="mlAnomalyExplorerAnnotationsPanel error"] is not displayed
[00:35:48]                 │ debg --- retry.tryForTime failed again with the same message...
[00:35:51]                 │ debg --- retry.tryForTime failed again with the same message...
[00:35:54]                 │ debg --- retry.tryForTime failed again with the same message...
[00:35:57]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:00]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:03]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:06]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:09]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:13]                 │ debg --- retry.tryForTime failed again with the same message...
[00:36:13]                 │ info Taking screenshot "/dev/shm/workspace/parallel/19/kibana/x-pack/test/functional/screenshots/failure/machine learning anomaly detection annotations displays error on broken annotation index and recovers after fix.png"
[00:36:13]                 │ info Current URL is: http://localhost:61191/app/ml/explorer?_g=(ml%3A(jobIds%3A!(fq_single_1_smv))%2CrefreshInterval%3A(display%3AOff%2Cpause%3A!t%2Cvalue%3A0)%2Ctime%3A(from%3A'2016-02-07T00%3A00%3A00.000Z'%2Cto%3A'2016-02-11T23%3A59%3A54.000Z'))&_a=(mlExplorerFilter%3A()%2CmlExplorerSwimlane%3A())
[00:36:13]                 │ info Saving page source to: /dev/shm/workspace/parallel/19/kibana/x-pack/test/functional/failure_debug/html/machine learning anomaly detection annotations displays error on broken annotation index and recovers after fix.html
[00:36:13]                 └- ✖ fail: machine learning anomaly detection annotations displays error on broken annotation index and recovers after fix
[00:36:13]                 │      Error: expected testSubject(mlAnomalyExplorerAnnotationsPanel error) to exist
[00:36:13]                 │       at TestSubjects.existOrFail (/dev/shm/workspace/parallel/19/kibana/test/functional/services/common/test_subjects.ts:62:15)
[00:36:13]                 │       at Object.assertAnnotationsPanelExists (test/functional/services/ml/anomaly_explorer.ts:71:7)
[00:36:13]                 │       at Context.<anonymous> (test/functional/apps/ml/anomaly_detection/annotations.ts:83:7)
[00:36:13]                 │       at Object.apply (/dev/shm/workspace/parallel/19/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:36:13]                 │ 
[00:36:13]                 │ 

Stack Trace

Error: expected testSubject(mlAnomalyExplorerAnnotationsPanel error) to exist
    at TestSubjects.existOrFail (/dev/shm/workspace/parallel/19/kibana/test/functional/services/common/test_subjects.ts:62:15)
    at Object.assertAnnotationsPanelExists (test/functional/services/ml/anomaly_explorer.ts:71:7)
    at Context.<anonymous> (test/functional/apps/ml/anomaly_detection/annotations.ts:83:7)
    at Object.apply (/dev/shm/workspace/parallel/19/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
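
For context on the failure above: this suite deliberately seeds a broken annotations index, and the IllegalArgumentException in the Elasticsearch log (mapper [modified_time] cannot be changed from type [long] to [date]) is the conflict the test provokes. Below is a minimal sketch of how that conflict arises, assuming the 7.x-style @elastic/elasticsearch client; the client setup is illustrative, while the index and field names are taken from the log above.

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function reproduceMappingConflict() {
  // The test seeds the annotations index with modified_time typed as `long`...
  await client.indices.create({
    index: '.ml-annotations-6-wrong-mapping',
    body: { mappings: { properties: { modified_time: { type: 'long' } } } },
  });

  // ...so when MlInitializationService later recreates its own mappings,
  // which declare modified_time as `date`, Elasticsearch rejects the merge:
  // "mapper [modified_time] cannot be changed from type [long] to [date]".
  await client.indices.putMapping({
    index: '.ml-annotations-6-wrong-mapping',
    body: { properties: { modified_time: { type: 'date' } } },
  });
}

reproduceMappingConflict().catch((err) => console.error(err.message));

The test then expects the annotations panel to surface this error in the UI, which is what the existOrFail assertion that timed out above was checking for.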

X-Pack API Integration Tests.x-pack/test/api_integration/apis/ml/results/get_anomalies_table_data·ts.apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://dryrun

[00:00:00]       │
[00:00:00]         │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ds-ilm-history-5-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
[00:00:00]         │ info [o.e.c.m.MetadataCreateDataStreamService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-000001] and backing indices []
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-000001][0]]])." previous.health="YELLOW" reason="shards started [[.ds-ilm-history-5-000001][0]]"
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ilm-history-ilm-policy]
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:06:47]           └-: Machine Learning
[00:06:47]             └-> "before all" hook
[00:06:47]             └-> "before all" hook
[00:06:47]               │ debg creating role ft_ml_source
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source]
[00:06:47]               │ debg creating role ft_ml_source_readonly
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source_readonly]
[00:06:47]               │ debg creating role ft_ml_dest
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest]
[00:06:47]               │ debg creating role ft_ml_dest_readonly
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest_readonly]
[00:06:47]               │ debg creating role ft_ml_ui_extras
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_ui_extras]
[00:06:47]               │ debg creating role ft_default_space_ml_all
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_all]
[00:06:47]               │ debg creating role ft_default_space_ml_read
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_read]
[00:06:47]               │ debg creating role ft_default_space_ml_none
[00:06:47]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_none]
[00:06:47]               │ debg creating user ft_ml_poweruser
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser]
[00:06:47]               │ debg created user ft_ml_poweruser
[00:06:47]               │ debg creating user ft_ml_poweruser_spaces
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser_spaces]
[00:06:47]               │ debg created user ft_ml_poweruser_spaces
[00:06:47]               │ debg creating user ft_ml_viewer
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer]
[00:06:47]               │ debg created user ft_ml_viewer
[00:06:47]               │ debg creating user ft_ml_viewer_spaces
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer_spaces]
[00:06:47]               │ debg created user ft_ml_viewer_spaces
[00:06:47]               │ debg creating user ft_ml_unauthorized
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized]
[00:06:47]               │ debg created user ft_ml_unauthorized
[00:06:47]               │ debg creating user ft_ml_unauthorized_spaces
[00:06:47]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized_spaces]
[00:06:47]               │ debg created user ft_ml_unauthorized_spaces
[00:09:34]             └-: ResultsService
[00:09:34]               └-> "before all" hook
[00:09:34]               └-: GetAnomaliesTableData
[00:09:34]                 └-> "before all" hook
[00:09:34]                 └-> "before all" hook
[00:09:34]                   │ info [ml/farequote] Loading "mappings.json"
[00:09:34]                   │ info [ml/farequote] Loading "data.json.gz"
[00:09:34]                   │ info [ml/farequote] Skipped restore for existing index "ft_farequote"
[00:09:35]                   │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:09:35]                   │ debg Creating anomaly detection job with id 'fq_multi_1_ae'...
[00:09:35]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:09:35]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:09:35]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:09:35]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:09:35]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/EttSWWwzSCqjpjdzENMGmA] update_mapping [_doc]
[00:09:35]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-config] creating index, cause [auto(bulk api)], templates [.ml-config], shards [1]/[1]
[00:09:35]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-config]
[00:09:35]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-notifications-000001] creating index, cause [auto(bulk api)], templates [.ml-notifications-000001], shards [1]/[1]
[00:09:35]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-notifications-000001]
[00:09:36]                   │ debg Waiting up to 5000ms for 'fq_multi_1_ae' to exist...
[00:09:36]                   │ debg Creating datafeed with id 'datafeed-fq_multi_1_se'...
[00:09:37]                   │ debg Waiting up to 5000ms for 'datafeed-fq_multi_1_se' to exist...
[00:09:37]                   │ debg Opening anomaly detection job 'fq_multi_1_ae'...
[00:09:37]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Opening job [fq_multi_1_ae]
[00:09:37]                   │ info [o.e.x.c.m.u.MlIndexAndAlias] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:09:37]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:09:37]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:09:37]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:09:37]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ml-size-based-ilm-policy]
[00:09:37]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:09:37]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ml-size-based-ilm-policy]
[00:09:38]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/281203] [CResourceMonitor.cc@74] Setting model memory limit to 20 MB
[00:09:38]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Successfully set job state to [opened] for job [fq_multi_1_ae]
[00:09:38]                   │ debg Starting datafeed 'datafeed-fq_multi_1_se' with start: '0', end: '1607950112868'...
[00:09:38]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2020-12-14T12:48:32.868Z) with frequency [600000ms]
[00:09:38]                   │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:09:38]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:38]                   │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:09:38]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/EttSWWwzSCqjpjdzENMGmA] update_mapping [_doc]
[00:09:38]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:38]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:38]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:38]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:39]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:39]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:39]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:39]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:39]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:39]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:39]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:40]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:40]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:40]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:40]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:40]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:40]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:41]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:41]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:41]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:41]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:41]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ml-size-based-ilm-policy]
[00:09:42]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
[00:09:42]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:42]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:43]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:43]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:43]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Lookback has finished
[00:09:43]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] attempt to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]
[00:09:43]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]...
[00:09:43]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] stopping datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae], acquired [true]...
[00:09:43]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae] has been stopped
[00:09:43]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Closing job [fq_multi_1_ae], because [close job (api)]
[00:09:43]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/281203] [CCmdSkeleton.cc@51] Handled 86274 records
[00:09:43]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/281203] [CAnomalyJob.cc@1569] Pruning all models
[00:09:43]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/EttSWWwzSCqjpjdzENMGmA] update_mapping [_doc]
[00:09:43]                   │ info [o.e.x.m.p.AbstractNativeProcess] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] State output finished
[00:09:43]                   │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 120 buckets parsed from autodetect output
[00:09:43]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:43]                   │ debg Waiting up to 120000ms for job state to be closed...
[00:09:43]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:43]                   │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:09:44]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:44]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:44]                   │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] job closed
[00:09:44]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:44]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:45]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:45]                 └-> should fetch anomalies table data
[00:09:45]                   └-> "before each" hook: global before each
[00:09:45]                   └- ✖ fail: apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data
[00:09:45]                   │       Error: expected 13 to sort of equal 12
[00:09:45]                   │       + expected - actual
[00:09:45]                   │ 
[00:09:45]                   │       -13
[00:09:45]                   │       +12
[00:09:45]                   │       
[00:09:45]                   │       at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
[00:09:45]                   │       at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
[00:09:45]                   │       at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
[00:09:45]                   │       at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:09:45]                   │ 
[00:09:45]                   │ 

Stack Trace

Error: expected 13 to sort of equal 12
    at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
    at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16) {
  actual: '13',
  expected: '12',
  showDiff: true
}
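
For context, the failure above is a plain count assertion from `@kbn/expect` (the `Assertion.eql` frame in the stack trace). Below is a minimal sketch of what the failing check at `get_anomalies_table_data.ts:79` might look like; the endpoint path, request payload, and expected count are assumptions for illustration, not the actual test source.

```ts
// Hypothetical reconstruction of the failing FTR assertion. Only the
// expect(...).to.eql(...) mechanism and the file name come from the
// stack trace above; everything else is assumed.
import expect from '@kbn/expect';

export default function ({ getService }: { getService: (name: string) => any }) {
  const supertest = getService('supertest');

  describe('GetAnomaliesTableData', () => {
    it('should fetch anomalies table data', async () => {
      const { body } = await supertest
        .post('/api/ml/results/anomalies_table_data') // assumed endpoint
        .set('kbn-xsrf', 'foo')
        .send({ jobIds: ['fq_multi_1_ae'] }) // assumed payload
        .expect(200);

      // With 13 anomalies returned instead of 12, this line throws
      // "expected 13 to sort of equal 12" and fails the test.
      expect(body.anomalies.length).to.eql(12);
    });
  });
}
```

An off-by-one in an anomaly count like this usually points to a flaky boundary condition in the generated results rather than a deterministic regression, which is consistent with the Failed Tests Reporter note further down that the test has not failed recently on tracked branches.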

X-Pack API Integration Tests.x-pack/test/api_integration/apis/ml/results/get_anomalies_table_data·ts.apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:06:42]           └-: Machine Learning
[00:06:42]             └-> "before all" hook
[00:06:42]             └-> "before all" hook
[00:06:42]               │ debg creating role ft_ml_source
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source]
[00:06:42]               │ debg creating role ft_ml_source_readonly
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_source_readonly]
[00:06:42]               │ debg creating role ft_ml_dest
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest]
[00:06:42]               │ debg creating role ft_ml_dest_readonly
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_dest_readonly]
[00:06:42]               │ debg creating role ft_ml_ui_extras
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_ml_ui_extras]
[00:06:42]               │ debg creating role ft_default_space_ml_all
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_all]
[00:06:42]               │ debg creating role ft_default_space_ml_read
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_read]
[00:06:42]               │ debg creating role ft_default_space_ml_none
[00:06:42]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added role [ft_default_space_ml_none]
[00:06:42]               │ debg creating user ft_ml_poweruser
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser]
[00:06:42]               │ debg created user ft_ml_poweruser
[00:06:42]               │ debg creating user ft_ml_poweruser_spaces
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_poweruser_spaces]
[00:06:42]               │ debg created user ft_ml_poweruser_spaces
[00:06:42]               │ debg creating user ft_ml_viewer
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer]
[00:06:42]               │ debg created user ft_ml_viewer
[00:06:42]               │ debg creating user ft_ml_viewer_spaces
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_viewer_spaces]
[00:06:42]               │ debg created user ft_ml_viewer_spaces
[00:06:42]               │ debg creating user ft_ml_unauthorized
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized]
[00:06:42]               │ debg created user ft_ml_unauthorized
[00:06:42]               │ debg creating user ft_ml_unauthorized_spaces
[00:06:42]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] added user [ft_ml_unauthorized_spaces]
[00:06:43]               │ debg created user ft_ml_unauthorized_spaces
[00:09:28]             └-: ResultsService
[00:09:28]               └-> "before all" hook
[00:09:28]               └-: GetAnomaliesTableData
[00:09:28]                 └-> "before all" hook
[00:09:28]                 └-> "before all" hook
[00:09:28]                   │ info [ml/farequote] Loading "mappings.json"
[00:09:28]                   │ info [ml/farequote] Loading "data.json.gz"
[00:09:28]                   │ info [ml/farequote] Skipped restore for existing index "ft_farequote"
[00:09:29]                   │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:09:29]                   │ debg Creating anomaly detection job with id 'fq_multi_1_ae'...
[00:09:29]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:09:29]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:09:29]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:09:29]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:09:29]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/hlQ58R0vQ3CBy1CPoa3rIw] update_mapping [_doc]
[00:09:29]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-config] creating index, cause [auto(bulk api)], templates [.ml-config], shards [1]/[1]
[00:09:29]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-config]
[00:09:29]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-notifications-000001] creating index, cause [auto(bulk api)], templates [.ml-notifications-000001], shards [1]/[1]
[00:09:29]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-notifications-000001]
[00:09:30]                   │ debg Waiting up to 5000ms for 'fq_multi_1_ae' to exist...
[00:09:30]                   │ debg Creating datafeed with id 'datafeed-fq_multi_1_se'...
[00:09:31]                   │ debg Waiting up to 5000ms for 'datafeed-fq_multi_1_se' to exist...
[00:09:31]                   │ debg Opening anomaly detection job 'fq_multi_1_ae'...
[00:09:31]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Opening job [fq_multi_1_ae]
[00:09:31]                   │ info [o.e.x.c.m.u.MlIndexAndAlias] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:09:31]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:09:31]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:09:31]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:09:31]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:09:31]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ml-size-based-ilm-policy]
[00:09:31]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ml-size-based-ilm-policy]
[00:09:32]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/189508] [CResourceMonitor.cc@74] Setting model memory limit to 20 MB
[00:09:32]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Successfully set job state to [opened] for job [fq_multi_1_ae]
[00:09:32]                   │ debg Starting datafeed 'datafeed-fq_multi_1_se' with start: '0', end: '1607948586812'...
[00:09:32]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2020-12-14T12:23:06.812Z) with frequency [600000ms]
[00:09:32]                   │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:09:32]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:32]                   │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:09:32]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/hlQ58R0vQ3CBy1CPoa3rIw] update_mapping [_doc]
[00:09:32]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:32]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:32]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:32]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:33]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:33]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:33]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:33]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:33]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:33]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:33]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:34]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:34]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:34]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:34]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:34]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:34]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:35]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:35]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:35]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:35]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:35]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:36]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:36]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:36]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] Lookback has finished
[00:09:36]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] attempt to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]
[00:09:36]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]...
[00:09:36]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] stopping datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae], acquired [true]...
[00:09:36]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [no_realtime] datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae] has been stopped
[00:09:36]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] Closing job [fq_multi_1_ae], because [close job (api)]
[00:09:36]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/189508] [CCmdSkeleton.cc@51] Handled 86274 records
[00:09:36]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] [autodetect/189508] [CAnomalyJob.cc@1569] Pruning all models
[00:09:36]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [.ml-anomalies-shared/hlQ58R0vQ3CBy1CPoa3rIw] update_mapping [_doc]
[00:09:36]                   │ info [o.e.x.m.p.AbstractNativeProcess] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] State output finished
[00:09:36]                   │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] 120 buckets parsed from autodetect output
[00:09:36]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:36]                   │ debg Waiting up to 120000ms for job state to be closed...
[00:09:36]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:36]                   │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:09:37]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:37]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:37]                   │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [kibana-ci-immutable-ubuntu-16-tests-xxl-1607945303785455150] [fq_multi_1_ae] job closed
[00:09:37]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:37]                 └-> should fetch anomalies table data
[00:09:37]                   └-> "before each" hook: global before each
[00:09:37]                   └- ✖ fail: apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data
[00:09:37]                   │       Error: expected 13 to sort of equal 12
[00:09:37]                   │       + expected - actual
[00:09:37]                   │ 
[00:09:37]                   │       -13
[00:09:37]                   │       +12
[00:09:37]                   │       
[00:09:37]                   │       at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
[00:09:37]                   │       at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
[00:09:37]                   │       at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
[00:09:37]                   │       at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:09:37]                   │ 
[00:09:37]                   │ 

Stack Trace

Error: expected 13 to sort of equal 12
    at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
    at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16) {
  actual: '13',
  expected: '12',
  showDiff: true
}
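
The `Waiting up to 120000ms ...` and `--- retry.waitForWithTimeout ...` lines in both logs come from the functional test runner's retry service polling Elasticsearch until the ML job and datafeed reach the expected state. A rough sketch of that polling pattern is below; `waitForWithTimeout` is the retry-service method named in the log itself, while `getJobState` is a hypothetical helper standing in for the "Fetching anomaly detection job stats" call.

```ts
// Sketch of the state polling visible in the log, under the assumptions
// stated above. Throwing inside the block is what produces the
// "expected job state to be closed but got closing" retry errors.
type Retry = {
  waitForWithTimeout(
    description: string,
    timeoutMs: number,
    block: () => Promise<boolean>
  ): Promise<void>;
};

async function waitForJobState(
  retry: Retry,
  getJobState: (jobId: string) => Promise<string>, // hypothetical helper
  jobId: string,
  expectedState: string
): Promise<void> {
  await retry.waitForWithTimeout(
    `job state to be ${expectedState}`,
    120_000, // matches "Waiting up to 120000ms" in the log
    async () => {
      const state = await getJobState(jobId);
      if (state !== expectedState) {
        throw new Error(
          `expected job state to be ${expectedState} but got ${state}`
        );
      }
      return true;
    }
  );
}
```

Each thrown error is logged as a retry failure and the block runs again until it returns `true` or the timeout elapses, which is why the log interleaves repeated "failed again with the same message" lines with Elasticsearch's own progress output.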

Metrics [docs]

Module Count

Fewer modules lead to a faster build time.

| id | before | after | diff |
| --- | --- | --- | --- |
| security | 470 | 471 | +1 |

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app.

| id | before | after | diff |
| --- | --- | --- | --- |
| security | 784.5KB | 787.1KB | +2.6KB |

Distributable file count

| id | before | after | diff |
| --- | --- | --- | --- |
| default | 43023 | 43024 | +1 |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100KB.

| id | before | after | diff |
| --- | --- | --- | --- |
| security | 163.1KB | 163.5KB | +422.0B |

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

Labels: backported, Feature:Security/Authentication, needs_docs, release highlight, release_note:enhancement, Team:Security, v7.11.0, v8.0.0

Successfully merging this pull request may close these issues: Anonymous access