Add nativeTest for dynamically configuring jobs through Operation API in GraalVM Native Image #2426

Merged — 1 commit, Aug 11, 2024
Changes from all commits
22 changes: 19 additions & 3 deletions docs/content/user-manual/usage/operation-api/_index.cn.md
@@ -6,7 +6,15 @@ chapter = true

ElasticJob provides a Java API that controls the lifecycle of jobs in a distributed environment by operating directly on the registry center.

This module is still in incubation.
This module is still in incubation. A possible dependency configuration is as follows:

```xml
<dependency>
<groupId>org.apache.shardingsphere.elasticjob</groupId>
<artifactId>elasticjob-lifecycle</artifactId>
<version>${elasticjob.version}</version>
</dependency>
```

## Configuration API

@@ -43,11 +51,10 @@ ElasticJob provides a Java API that controls the lifecycle of jobs by operating directly on the

A job is triggered only when it does not conflict with the currently running job, and this flag is automatically cleared after the job starts.

Method signature: void trigger(Optional<String> jobName, Optional<String> serverIp)
Method signature: void trigger(Optional<String> jobName)

* **Parameters:**
* jobName — job name
* serverIp — IP address of the job server

### Disable job

@@ -83,6 +90,15 @@ ElasticJob provides a Java API that controls the lifecycle of jobs by operating directly on the
* jobName — job name
* serverIp — IP address of the job server

### Dump job

Method signature: String dump(String jobName, String instanceIp, int dumpPort)

* **Parameters:**
* jobName — job name
* instanceIp — IP address of the job instance
* dumpPort — dump port

## Sharding operation API

Class name: `org.apache.shardingsphere.elasticjob.lifecycle.api.ShardingOperateAPI`
22 changes: 19 additions & 3 deletions docs/content/user-manual/usage/operation-api/_index.en.md
@@ -6,7 +6,15 @@ chapter = true

ElasticJob provides a Java API, which can control the life cycle of jobs in a distributed environment by directly operating the registry.

The module is still in incubation.
The module is still in incubation. A possible dependency configuration is as follows:

```xml
<dependency>
<groupId>org.apache.shardingsphere.elasticjob</groupId>
<artifactId>elasticjob-lifecycle</artifactId>
<version>${elasticjob.version}</version>
</dependency>
```
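
For orientation, here is a minimal sketch of how a caller might obtain the `JobOperateAPI` described below. The `JobAPIFactory` entry point, its `(connectString, namespace, digest)` parameters, and the ZooKeeper address are assumptions for illustration and are not taken from this diff.

```java
import org.apache.shardingsphere.elasticjob.lifecycle.api.JobAPIFactory; // assumed factory class
import org.apache.shardingsphere.elasticjob.lifecycle.api.JobOperateAPI;

public final class JobOperateApiBootstrap {
    
    private JobOperateApiBootstrap() {
    }
    
    /**
     * Create a JobOperateAPI bound to the registry center.
     * JobAPIFactory and its parameters are assumed; adjust to the actual factory exposed by elasticjob-lifecycle.
     */
    public static JobOperateAPI create() {
        // "localhost:2181" and "elasticjob-demo" are placeholder registry settings.
        return JobAPIFactory.createJobOperateAPI("localhost:2181", "elasticjob-demo", null);
    }
}
```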

## Configuration API

@@ -43,11 +51,10 @@ Class name: `org.apache.shardingsphere.elasticjob.lifecycle.api.JobOperateAPI`

The job will only trigger execution if it does not conflict with the currently running job, and this flag will be automatically cleared after it is started.

Method signature: void trigger(Optional<String> jobName, Optional<String> serverIp)
Method signature: void trigger(Optional<String> jobName)

* **Parameters:**
* jobName — Job name
* serverIp — IP address of the job server
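
A hedged usage sketch of the new single-argument `trigger` signature; the API instance is assumed to be obtained as in the earlier sketch, and `myJob` is a placeholder job name.

```java
import java.util.Optional;

import org.apache.shardingsphere.elasticjob.lifecycle.api.JobOperateAPI;

public final class TriggerJobExample {
    
    private TriggerJobExample() {
    }
    
    /**
     * Mark a job to be triggered; the flag is cleared automatically once the job starts.
     *
     * @param jobOperateAPI lifecycle API instance (assumed to be created elsewhere)
     * @param jobName job name, for example "myJob" (placeholder)
     */
    public static void triggerOnce(final JobOperateAPI jobOperateAPI, final String jobName) {
        jobOperateAPI.trigger(Optional.of(jobName));
    }
}
```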

### Disable job

@@ -83,6 +90,15 @@ Method signature: void remove(Optional<String> jobName, Optional<String> server
* jobName — Job name
* serverIp — IP address of the job server

### Dump job

Method signature: String dump(String jobName, String instanceIp, int dumpPort)

* **Parameters:**
* jobName — Job name
* instanceIp — IP address of the job instance
* dumpPort — Dump port
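
A short sketch of calling `dump`; the instance IP and dump port are placeholders, and the port must match whatever dump/snapshot port the target job instance was configured with (an assumption about the deployment, not stated on this page).

```java
import org.apache.shardingsphere.elasticjob.lifecycle.api.JobOperateAPI;

public final class DumpJobExample {
    
    private DumpJobExample() {
    }
    
    /**
     * Fetch dump information from a running job instance.
     *
     * @param jobOperateAPI lifecycle API instance (assumed to be created elsewhere)
     * @return raw dump output as a string
     */
    public static String dumpJob(final JobOperateAPI jobOperateAPI) {
        // "127.0.0.1" and 9888 are placeholders for the job instance address and its configured dump port.
        return jobOperateAPI.dump("myJob", "127.0.0.1", 9888);
    }
}
```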

## Operate sharding API

Class name: `org.apache.shardingsphere.elasticjob.lifecycle.api.ShardingOperateAPI`
@@ -3,5 +3,11 @@
"condition":{"typeReachable":"org.apache.shardingsphere.elasticjob.kernel.tracing.yaml.YamlTracingConfiguration"},
"name":"org.apache.shardingsphere.elasticjob.kernel.tracing.yaml.YamlTracingConfiguration",
"allPublicMethods":true
},
{
"condition":{"typeReachable":"org.apache.shardingsphere.elasticjob.kernel.internal.schedule.LiteJob"},
"name":"org.apache.shardingsphere.elasticjob.kernel.internal.schedule.LiteJob",
"allDeclaredMethods": true,
"allDeclaredConstructors": true
}
]

Large diffs are not rendered by default.

@@ -9,6 +9,9 @@
}, {
"condition":{"typeReachable":"org.apache.shardingsphere.elasticjob.kernel.internal.setup.JobClassNameProviderFactory"},
"pattern":"\\QMETA-INF/services/org.apache.shardingsphere.elasticjob.kernel.internal.setup.JobClassNameProvider\\E"
}, {
"condition":{"typeReachable":"org.apache.shardingsphere.elasticjob.kernel.internal.sharding.ShardingService"},
"pattern":"\\QMETA-INF/services/org.apache.shardingsphere.elasticjob.kernel.internal.sharding.strategy.JobShardingStrategy\\E"
}, {
"condition":{"typeReachable":"org.apache.shardingsphere.elasticjob.reg.exception.RegExceptionHandler"},
"pattern":"\\QMETA-INF/services/org.apache.shardingsphere.elasticjob.reg.exception.IgnoredExceptionProvider\\E"
@@ -0,0 +1,12 @@
[
{
"condition":{"typeReachable":"org.hamcrest.internal.ReflectiveTypeFinder"},
"name":"org.hamcrest.core.StringStartsWith",
"queryAllDeclaredMethods":true
},
{
"condition":{"typeReachable":"org.hamcrest.internal.ReflectiveTypeFinder"},
"name":"org.hamcrest.core.SubstringMatcher",
"queryAllDeclaredMethods":true
}
]
7 changes: 6 additions & 1 deletion test/native/pom.xml
@@ -40,7 +40,12 @@
<version>${project.version}</version>
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.apache.shardingsphere.elasticjob</groupId>
<artifactId>elasticjob-lifecycle</artifactId>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.awaitility</groupId>
<artifactId>awaitility</artifactId>
@@ -39,7 +39,7 @@ public final class Foo implements Serializable {
private Status status;

public enum Status {
TODO,
UNFINISHED,
COMPLETED
}
}
@@ -39,7 +39,7 @@ public List<Foo> fetchData(final ShardingContext shardingContext) {
new SimpleDateFormat("HH:mm:ss").format(new Date()),
Thread.currentThread().getId(),
"DATAFLOW FETCH");
return fooRepository.findTodoData(shardingContext.getShardingParameter(), 10);
return fooRepository.findUnfinishedData(shardingContext.getShardingParameter(), 10);
}

@Override
@@ -47,7 +47,7 @@ public List<Foo> fetchData(final ShardingContext shardingContext) {
new SimpleDateFormat("HH:mm:ss").format(new Date()),
Thread.currentThread().getId(),
"DATAFLOW FETCH");
return springBootFooRepository.findTodoData(shardingContext.getShardingParameter(), 10);
return springBootFooRepository.findUnfinishedData(shardingContext.getShardingParameter(), 10);
}

@Override
@@ -39,7 +39,7 @@ public void execute(final ShardingContext shardingContext) {
new SimpleDateFormat("HH:mm:ss").format(new Date()),
Thread.currentThread().getId(),
"SIMPLE");
List<Foo> data = fooRepository.findTodoData(shardingContext.getShardingParameter(), 10);
List<Foo> data = fooRepository.findUnfinishedData(shardingContext.getShardingParameter(), 10);
data.stream().mapToLong(Foo::getId).forEach(fooRepository::setCompleted);
}
}
@@ -45,7 +45,7 @@ public void execute(final ShardingContext shardingContext) {
new SimpleDateFormat("HH:mm:ss").format(new Date()),
Thread.currentThread().getId(),
"SIMPLE");
springBootFooRepository.findTodoData(shardingContext.getShardingParameter(), 10)
springBootFooRepository.findUnfinishedData(shardingContext.getShardingParameter(), 10)
.forEach(each -> springBootFooRepository.setCompleted(each.getId()));
}
}
@@ -37,21 +37,21 @@ public FooRepository() {

private void addData(final long idFrom, final long idTo, final String location) {
LongStream.range(idFrom, idTo)
.forEachOrdered(i -> data.put(i, new Foo(i, location, Foo.Status.TODO)));
.forEachOrdered(i -> data.put(i, new Foo(i, location, Foo.Status.UNFINISHED)));
}

/**
* Find todoData.
* Find unfinished data.
* @param location location
* @param limit limit
* @return An ordered collection, where the user has precise control over where in the list each element is inserted.
*/
public List<Foo> findTodoData(final String location, final int limit) {
public List<Foo> findUnfinishedData(final String location, final int limit) {
List<Foo> result = new ArrayList<>(limit);
int count = 0;
for (Map.Entry<Long, Foo> each : data.entrySet()) {
Foo foo = each.getValue();
if (foo.getLocation().equals(location) && foo.getStatus() == Foo.Status.TODO) {
if (foo.getLocation().equals(location) && foo.getStatus() == Foo.Status.UNFINISHED) {
result.add(foo);
count++;
if (count == limit) {
@@ -32,28 +32,28 @@ public class SpringBootFooRepository {
private final Map<Long, Foo> data = new ConcurrentHashMap<>(300, 1);

public SpringBootFooRepository() {
addData(0L, 100L, "Beijing");
addData(100L, 200L, "Shanghai");
addData(200L, 300L, "Guangzhou");
addData(0L, 100L, "Norddorf");
addData(100L, 200L, "Bordeaux");
addData(200L, 300L, "Somerset");
}

private void addData(final long idFrom, final long idTo, final String location) {
LongStream.range(idFrom, idTo)
.forEachOrdered(i -> data.put(i, new Foo(i, location, Foo.Status.TODO)));
.forEachOrdered(i -> data.put(i, new Foo(i, location, Foo.Status.UNFINISHED)));
}

/**
* Find todoData.
* Find unfinished data.
* @param location location
* @param limit limit
* @return An ordered collection, where the user has precise control over where in the list each element is inserted.
*/
public List<Foo> findTodoData(final String location, final int limit) {
public List<Foo> findUnfinishedData(final String location, final int limit) {
List<Foo> result = new ArrayList<>(limit);
int count = 0;
for (Map.Entry<Long, Foo> each : data.entrySet()) {
Foo foo = each.getValue();
if (foo.getLocation().equals(location) && foo.getStatus() == Foo.Status.TODO) {
if (foo.getLocation().equals(location) && foo.getStatus() == Foo.Status.UNFINISHED) {
result.add(foo);
count++;
if (count == limit) {