
Easier configuration/running #44

Open
elect86 opened this issue Apr 23, 2024 · 1 comment

elect86 commented Apr 23, 2024

So, I got pulled into JDLL by the Spanish folks in Prague.

I was looking at the README, and the first thing I thought was that I might help you folks with some automation: a template ready to clone and start playing with from there.

It's based on Kotlin (with some or all of it in Gradle); I'm pretty confident I could provide something along these lines:

// 0. Setting Up JDLL
// no need, just clone the template repo

// 1. Downloading a model (optional)
downloadModel {
     // enum, statically typed
    model = Model.`B. Sutilist bacteria segmentation - Widefield microscopy - 2D UNet`
    // set with some default value, but customizable
    //dst = projectDir / "models"
}
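To make the `dst` default concrete, here is a minimal sketch of how the destination directory could be resolved, assuming a `projectDir / "models"` convention; `modelDownloadDir` and the folder-name sanitization are illustrative assumptions, not JDLL API:

```kotlin
import java.io.File

// Hypothetical sketch: resolve where a downloaded model would land.
// Falls back to projectDir/models when no explicit `dst` is given.
fun modelDownloadDir(projectDir: File, modelName: String, dst: File? = null): File {
    val base = dst ?: File(projectDir, "models")
    // turn the human-readable model name into a filesystem-safe folder name
    val folder = modelName.replace(Regex("[^A-Za-z0-9._-]+"), "_").trim('_')
    return File(base, folder)
}
```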

// 2. Installing DL engines
// we may also implement all the necessary logic expressing the compatibility among
// the different DL frameworks, OSes and architectures, printing errors if incompatible
// or warnings if a best-effort attempt is being made
framework {
    // if engine, cpu and gpu are not specified, then 
    // `EngineInstall::installEnginesForModelByNameinDir` will be called
    // engine = Tensorflow.`2.0` // also enum, statically typed
    // cpu = true
    // gpu = true
    // set with some default value, but customizable
    installationDir = projectDir / "engines"
}
// will automatically fail if `!installed`
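The compatibility logic mentioned above could look something like this toy sketch; the types (`Os`, `Arch`, `Compat`) and the example matrix entries are illustrative assumptions, not the real JDLL compatibility rules:

```kotlin
// Hypothetical framework/OS/arch compatibility check.
enum class Os { LINUX, WINDOWS, MACOS }
enum class Arch { X86_64, ARM64 }
enum class Compat { OK, BEST_EFFORT, INCOMPATIBLE }

// Toy matrix: e.g. assume TensorFlow GPU builds are unavailable on macOS,
// and macOS/arm64 engines run only on a best-effort basis.
fun checkCompat(framework: String, os: Os, arch: Arch, gpu: Boolean): Compat = when {
    framework == "tensorflow" && os == Os.MACOS && gpu -> Compat.INCOMPATIBLE
    os == Os.MACOS && arch == Arch.ARM64 -> Compat.BEST_EFFORT
    else -> Compat.OK
}

fun report(c: Compat): String = when (c) {
    Compat.OK -> "ok"
    Compat.BEST_EFFORT -> "warning: best-effort attempt, engine may not be fully supported"
    Compat.INCOMPATIBLE -> "error: incompatible engine/OS/arch combination"
}
```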

// 3. Creating the tensors
val img1 = model.create<FloatType>() // [1, 512, 512, 1] inferred from `model`
tensor {
    input = build(model.inputs.bxyc, img1) // "input_1" might be inferred
    outputEmpty = buildEmptyTensor(model.outputs.bxyc) // "conv2d_19" might be inferred
    outputBlankTensor = buildBlankTensor<FloatType>(model.outputs.bxyc) // [1, 512, 512, 3] inferred
}
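The shape inference hinted at above ("[1, 512, 512, 3] inferred") could be sketched like this, assuming the model spec exposes an axes string such as "bxyc" and per-axis sizes; `axesToShape` and `blankData` are illustrative names, not JDLL API:

```kotlin
// Hypothetical sketch: derive a tensor shape from an axes string and
// build a zero-filled backing buffer for a blank tensor.
fun axesToShape(axes: String, sizes: Map<Char, Int>): IntArray =
    IntArray(axes.length) { i -> sizes.getValue(axes[i]) }

fun blankData(shape: IntArray): FloatArray {
    val n = shape.fold(1) { acc, d -> acc * d } // total element count
    return FloatArray(n) // zero-filled, like an empty/blank tensor
}
```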

// 4. Loading the model
dlEngine { // or dlCompatibleEngine {
    framework = TensorFlow.`2.7.0`
    cpu = true
    gpu = true
    // engineDir inferred
}

// the rest of the steps can be created and executed automatically
// everything gets inferred:
// - model load
// - model run
// - cleanup

Following the Gradle philosophy of "convention over configuration", we could assume conventions for the framework and make that step completely optional as well. Something similar for cpu/gpu = true.
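The convention-over-configuration idea can be sketched as a small defaults-resolution step: anything the user leaves unset falls back to a convention. All names here (`EngineConfig`, `resolve`) are illustrative assumptions:

```kotlin
// Hypothetical sketch: unspecified settings fall back to conventions.
data class EngineConfig(
    val engine: String? = null, // null -> infer a compatible engine from the model
    val cpu: Boolean? = null,   // null -> default true
    val gpu: Boolean? = null    // null -> default true, downgraded if no GPU is available
)

fun resolve(cfg: EngineConfig, modelDefaultEngine: String, gpuAvailable: Boolean): Triple<String, Boolean, Boolean> {
    val engine = cfg.engine ?: modelDefaultEngine
    val cpu = cfg.cpu ?: true
    val gpu = (cfg.gpu ?: true) && gpuAvailable
    return Triple(engine, cpu, gpu)
}
```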


elect86 commented Apr 25, 2024

So, I quickly prototyped something like this:

plugins { bioimage.io.jdll }

setEngine {
    engine = Framework.torchscript.`1.13.1`
}

setModel {
    model = Models.EnhancerMitochondriaEM2D
}

execute = { model, inputsTensor, outputsTensor ->

    inputsTensor += buildTensor<FloatType>()
    outputsTensor += buildTensor<FloatType>(outputs = true)

    println(Util.average(Util.asDoubleArray(outputsTensor.first().data)))
    model.runModel(inputsTensor, outputsTensor)
    println(Util.average(Util.asDoubleArray(outputsTensor.first().data)))
}

It's a little hacky, to get something out ASAP, and it works up to the point where `Model::createDeepLearningModel` is called; beyond that, the classloader concept has to be fixed/reworked. But the idea is there.

You can massively cut down the required code (compared to the original) and make it a true script (right now it essentially runs during Gradle configuration time, with manual caching of the engine and the model).
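The manual caching mentioned above amounts to a simple check before downloading: skip the fetch if the engine/model directory already has content. A minimal sketch, with `needsDownload` being an illustrative name rather than real JDLL API:

```kotlin
import java.io.File

// Hypothetical sketch: only download when the cache directory is
// missing or empty.
fun needsDownload(dir: File): Boolean =
    !dir.isDirectory || dir.listFiles().isNullOrEmpty()
```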
