
3D Histopathology using Inviwo and OpenSlide

Motivation

Advances in technology have improved on the traditional approaches used across medical science, and histopathology is one more field that has benefitted, and can benefit further, from them. Traditionally, after slide preparation, pathologists had to examine each slide individually in order to make a diagnosis. Using volume rendering techniques, one can instead generate a three-dimensional model of the tissue from the slides, enabling better histopathologic analysis.

Goal 🚀

We aim to create an application which provides a user with the following functionalities:

  1. Provide a set of virtual slides (.svs, .tif, etc.)
  2. Generate a 3D volume using the provided set of slides
    1. Given that the slides can be very large in size, come up with a suitable architecture to manage data.
  3. Provide a set of UI based functionalities:
    1. Add a Slicing feature using an arbitrary plane or axis-aligned planes
      1. View the Cross-Section
    2. Select a sub-volume
      1. View certain artifacts based on their color. Refer to this for further information.
      2. Draw an ROI
    3. Zoom/scale the volume, replacing the current-resolution image with an appropriate higher-resolution section.
    4. Annotations in 3D → provide an option to annotate artifacts in 3D
      1. Some WSIs provide polylines → generate a marked region in the 3D rendering from a set of points.
  4. Alignment of different slides
    1. When we stack multiple slides, it is important to understand which section of each slide corresponds to a section in another.
    2. Furthermore, two slides created from the same tissue may not be aligned from corner to corner, so it is important to correct such misalignments.

Source Code

agam-kashyap/3Dhistopathology

Installation Guide on Ubuntu

Inviwo Setup

  1. Refer to the inviwo documentation.

Ensure that the Ubuntu version supports Qt.

  1. Use the CMake GUI to select the necessary modules and set CMAKE_PREFIX_PATH


  1. For any "module not found" error:

    1. Run apt-file search <module name>
    2. Install the suggested packages
    • Example

      Qt5Config.cmake not found

      $  apt-file search Qt5Config
      $  sudo apt-get install qtbase5-dev qtdeclarative5-dev
  2. If issues still persist, refer to the Inviwo Slack channel.

Openslide Setup

  1. Download the openslide source code.
  2. Dependencies for OpenSlide:
    • OpenJPEG: sudo apt install libopenjp2-7 libopenjp2-tools libopenjp2-7-dev
    • Cairo: sudo apt install libcairo2-dev
    • GDK pixbuf: sudo apt install libgdk-pixbuf2.0-dev
    • SQLite 3: sudo apt-get install sqlite3 libsqlite3-dev
  3. Go to the source code directory and run ./configure, then make, then make install.

Building Custom Processors and Modules

  1. The Inviwo documentation provides simple approaches for this.

  2. If the new module requires a dependency on another module, add it to depends.cmake

    Example

    Ensure that there are no circular dependencies: if Module A depends on Module B, Module B cannot depend on Module A. Inviwo expects the module dependency graph to be a DAG.

Current Network and Part of Goal Achieved ✅

Add an image of the network, and upload the .inv file along with it.

Currently a user gets the following features:

  1. A user can provide slides in a supported format, from which a 3D volume is generated. Currently, for demo purposes, we replicate a single slide and stack the copies.

Original Volume Render

3D volume rendering and axis-aligned volume cropping
  1. The user can choose Voxel-value classification, as seen above, or a Transfer Function classification, which has been modified from the original processor.

    This method allows the user to select colors from the original slide and render only the artifacts with those colors.

    A Threshold property lets the user select a radius around the mouse click from which to capture unique colors.

Color Selection GIF

Selecting the required colors and removing the rest to render only the cells
  1. A user can perform arbitrary plane slicing of the volume and view the cross-section.

    Arbitrary Volume Slice

    Defining the plane by specifying a point on the plane and the normal to the plane

Explanation for each Custom Processor

  • For understanding any of the classes, the Inviwo documentation provides an inheritance chart which makes them easier to follow.
  • Inviwo uses an indirect approach to setting up shader variables, as explained for the VolumeRaycaster in the section below.

Custom Volume Raycaster

Processor Code

  • Understanding the existing Volume Raycaster and the supporting fragment shader

    The existing processor uses a transfer function to determine how the output looks, but we had to come up with a different approach.

(Image: the transfer function editor, with the isovalue grid at the bottom)

The 4×256 grid shown at the bottom of the image represents the isovalues, and the isovalues represent intensity. Say we determine an *x-y* range of isovalues representing an artifact, say a cell. But the transfer function doesn't give us any information about the colour or the position, which is a major drawback. Another entity may have the same intensity, and the transfer function would then assign it the same colour even though the two should be distinguishable.

**A probable solution:**

For any color we have three channels (excluding the alpha value). Split the single transfer function into three transfer functions, each taking its values from one channel (R, G or B) of the original color, and assign the nodes the original color. This way we are able to specify the opacity for specific colors.
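One way this could be wired up in a processor is sketched below. It assumes three hypothetical `TransferFunctionProperty` members (`tfRed_`, `tfGreen_`, `tfBlue_`), each bound as its own texture in `raycast()` in the same way `isotfComposite_` is bound further down; the fragment shader would then combine the three per-channel lookups.

```cpp
// Sketch only: one transfer function per colour channel of the slide.
// tfRed_, tfGreen_ and tfBlue_ are assumed member names, not part of the repository.
TransferFunctionProperty tfRed_{"tfRed", "TF (red channel)"};
TransferFunctionProperty tfGreen_{"tfGreen", "TF (green channel)"};
TransferFunctionProperty tfBlue_{"tfBlue", "TF (blue channel)"};

// Constructor: expose all three in the property list.
addProperty(tfRed_);
addProperty(tfGreen_);
addProperty(tfBlue_);

// raycast(): bind each transfer function so the shader can sample it,
// mirroring how isotfComposite_ is bound for the default VolumeRaycaster.
utilgl::bindAndSetUniforms(shader_, units, tfRed_);
utilgl::bindAndSetUniforms(shader_, units, tfGreen_);
utilgl::bindAndSetUniforms(shader_, units, tfBlue_);
```

In the shader, the opacity of a sample could then be taken per channel, for example from `applyTF(tfRed, voxel.r)`, `applyTF(tfGreen, voxel.g)` and `applyTF(tfBlue, voxel.b)`, while the colour itself stays the original slide colour.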

---

**How is the transfer function applied?**

```cpp
void MultichannelRaycaster::initializeResources() {
    utilgl::addShaderDefines(shader_, raycasting_);
    utilgl::addShaderDefines(shader_, camera_);
    utilgl::addShaderDefines(shader_, lighting_);
    utilgl::addShaderDefines(shader_, positionIndicator_);
    utilgl::addShaderDefinesBGPort(shader_, backgroundPort_);

    if (volumePort_.hasData()) {
        size_t channels = volumePort_.getData()->getDataFormat()->getComponents();

        auto tfs = transferFunctions_.getPropertiesByType<TransferFunctionProperty>();
        for (size_t i = 0; i < tfs.size(); i++) {
            tfs[i]->setVisible(i < channels ? true : false);
        }

        std::stringstream ss;
        ss << channels;
        shader_.getFragmentShaderObject()->addShaderDefine("NUMBER_OF_CHANNELS", ss.str());

        std::stringstream ss2;
        for (size_t i = 0; i < channels; ++i) {
            ss2 << "color[" << i << "] = APPLY_CHANNEL_CLASSIFICATION(transferFunction" << i + 1
                << ", voxel, " << i << ");";
        }
        shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());
        shader_.build();
    }
}
```

Look at the loop at the end of `initializeResources()`. Here a string `ss2` is generated which assigns `color[i]` through the GLSL key `APPLY_CHANNEL_CLASSIFICATION(transferFunction, voxel, channel)`.
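For example, for a volume with three channels, the generated `SAMPLE_CHANNELS` define expands to the following statements (this is just the string produced by the loop above, shown on separate lines):

```glsl
color[0] = APPLY_CHANNEL_CLASSIFICATION(transferFunction1, voxel, 0);
color[1] = APPLY_CHANNEL_CLASSIFICATION(transferFunction2, voxel, 1);
color[2] = APPLY_CHANNEL_CLASSIFICATION(transferFunction3, voxel, 2);
```

The `APPLY_CHANNEL_CLASSIFICATION` key is in turn replaced by the following code, as defined in `shaderutils.cpp`.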

```cpp
// classification (default (red channel) or specific channel)
std::string_view value;
std::string_view valueMulti;
switch (property.classification_.get()) {
    case RaycastingProperty::Classification::None:
        value = "vec4(voxel.r)";
        valueMulti = "vec4(voxel[channel])";
        break;
    case RaycastingProperty::Classification::TF:
        value = "applyTF(transferFunc, voxel.r)";
        valueMulti = "applyTF(transferFunc, voxel, channel)";
        break;
    case RaycastingProperty::Classification::Voxel:
    default:
        value = "voxel";
        valueMulti = "voxel";
        break;
}
const std::string_view key = "APPLY_CLASSIFICATION(transferFunc, voxel)";
const std::string_view keyMulti =
    "APPLY_CHANNEL_CLASSIFICATION(transferFunc, voxel, channel)";
shader.getFragmentShaderObject()->addShaderDefine(key, value);
shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
```

Notice that `addShaderDefine()` maps the key generated above to a value that depends on the inputs. Similarly, a `bgKey` is defined and replaced by its value later:

```cpp
void addShaderDefinesBGPort(Shader& shader, const ImageInport& port) {
    std::string_view bgKey = "DRAW_BACKGROUND(result,t,tIncr,color,bgTDepth,tDepth)";
    if (port.isConnected()) {
        shader.getFragmentShaderObject()->addShaderDefine("BACKGROUND_AVAILABLE");
        shader.getFragmentShaderObject()->addShaderDefine(
            bgKey, "drawBackground(result,t,tIncr,color,bgTDepth,tDepth)");
    } else {
        shader.getFragmentShaderObject()->removeShaderDefine("BACKGROUND_AVAILABLE");
        shader.getFragmentShaderObject()->addShaderDefine(bgKey, "result");
    }
}
```

Let's look at the `applyTF` function definition.

```glsl
#ifndef IVW_CLASSIFICATION_GLSL
#define IVW_CLASSIFICATION_GLSL

vec4 applyTF(sampler2D transferFunction, vec4 voxel) {
    return texture(transferFunction, vec2(voxel.r, 0.5));
}

vec4 applyTF(sampler2D transferFunction, vec4 voxel, int channel) {
    return texture(transferFunction, vec2(voxel[channel], 0.5));
}

vec4 applyTF(sampler2D transferFunction, float intensity) {
    return texture(transferFunction, vec2(intensity, 0.5));
}

#endif  // IVW_CLASSIFICATION_GLSL
```

Here we get the value for each voxel based on the transfer function. Now that we have the calculated values, let's look at what `shader_.getFragmentShaderObject()->addShaderDefine("SAMPLE_CHANNELS", ss2.str());` does.

```glsl
while (t < tEnd) {
    samplePos = entryPoint + t * rayDirection;
    voxel = getNormalizedVoxel(volume, volumeParameters, samplePos);

    // macro defined in MultichannelRaycaster::initializeResources()
    // sets colors;
    SAMPLE_CHANNELS;

    result = DRAW_BACKGROUND(result, t, tIncr, backgroundColor, bgTDepth, tDepth);
    result = DRAW_PLANES(result, samplePos, rayDirection, tIncr, positionindicator, t, tDepth);

    if (color[0].a > 0 || color[1].a > 0 || color[2].a > 0 || color[3].a > 0) {
        // World space position
        vec3 worldSpacePosition = (volumeParameters.textureToWorld * vec4(samplePos, 1.0)).xyz;
        gradients = COMPUTE_ALL_GRADIENTS(voxel, volume, volumeParameters, samplePos);
        for (int i = 0; i < NUMBER_OF_CHANNELS; ++i) {
            color[i].rgb =
                APPLY_LIGHTING(lighting, color[i].rgb, color[i].rgb, vec3(1.0),
                               worldSpacePosition, normalize(-gradients[i]), toCameraDir);
            result = APPLY_COMPOSITING(result, color[i], samplePos, voxel, gradients[i], camera,
                                       raycaster.isoValue, t, tDepth, tIncr);
        }
    }

    // early ray termination
    if (result.a > ERT_THRESHOLD) {
        t = tEnd;
    } else {
        t += tIncr;
    }
}
```

Notice `SAMPLE_CHANNELS`. This is the define whose value is set to the `ss2` string in *multichannelraycaster.cpp*. So now, in the shader, we have the `color` array whose entries are set by the transfer functions.

**How is the Transfer Function set up in the code for a VolumeRaycaster?**

Its definition is in `volumeraycaster.cpp`. We don't define a transfer function directly, but use `IsoTFProperty`, which contains a transfer function as one of its attributes; in the code, its object is `isotfComposite_`. In the `initializeResources()` function, `utilgl::addShaderDefines()` is called, which (as defined at *line 489* of `shaderutils.cpp`) sets the property as follows:

```cpp
void addShaderDefines(Shader& shader, const IsoTFProperty& property) {
    addShaderDefines(shader, property.isovalues_);
}

void addShaderDefines(Shader& shader, const IsoValueProperty& property) {
    const auto isovalueCount = property.get().size();

    // need to ensure there is always at least one isovalue due to the use of the macro
    // as array size in IsovalueParameters
    shader.getFragmentShaderObject()->addShaderDefine(
        "MAX_ISOVALUE_COUNT", StrBuffer{"{}", std::max<size_t>(1, isovalueCount)});

    shader.getFragmentShaderObject()->setShaderDefine("ISOSURFACE_ENABLED",
                                                      !property.get().empty());
}
```

So it just sets `MAX_ISOVALUE_COUNT` and `ISOSURFACE_ENABLED`; at this point, the transfer function attribute of `isotfComposite_` hasn't been used yet.

Then, in `raycast()`, `utilgl::bindAndSetUniforms(shader_, units, isotfComposite_)` is called; this is where the transfer function comes into play. Its definition is in `textureutils.cpp`:

```cpp
void bindTexture(const IsoTFProperty& property, const TextureUnit& texUnit) {
    if (auto tfLayer = property.tf_.get().getData()) {
        auto transferFunctionGL = tfLayer->getRepresentation<LayerGL>();
        transferFunctionGL->bindTexture(texUnit.getEnum());
    }
}

void bindAndSetUniforms(Shader& shader, TextureUnitContainer& cont, const IsoTFProperty& property) {
    TextureUnit unit;
    bindTexture(property, unit);
    shader.setUniform(property.tf_.getIdentifier(), unit);
    cont.push_back(std::move(unit));
}
```
  • Understanding the CustomVolumeRaycaster

    Since the transfer function approach presented several issues, we decided on a different, simpler approach: render only the colors specified by the user. This doesn't provide control over opacity yet, but that can be a future improvement.

    First we wrote a custom fragment shader for this purpose. The important function here is getColorVal.

    vec4 getColorVal(vec4 colorArray[MAX_COLORS], vec4 voxel) {
        // No colours selected: render the voxel unchanged.
        if (colorLen == 0) return voxel;

        for (int i = 0; i < colorLen; i++) {
            // Match each channel of the voxel against the selected colour,
            // within a tolerance of 0.01.
            if (voxel.r <= colorArray[i].r + 0.01 && voxel.r >= colorArray[i].r - 0.01 &&
                voxel.g <= colorArray[i].g + 0.01 && voxel.g >= colorArray[i].g - 0.01 &&
                voxel.b <= colorArray[i].b + 0.01 && voxel.b >= colorArray[i].b - 0.01) {
                // A zero alpha marks a de-selected colour.
                return (colorArray[i].a != 0.0) ? voxel : vec4(0.0);
            }
        }
        // Not one of the selected colours: render nothing.
        return vec4(0.0);
    }

    One thing to take care of: we must either clear the memory created earlier in the fragment shader for storing the selected color values, or ensure that we never iterate over entries that are no longer needed (see the sketch below).
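    As an illustration of the second option, the processor could upload the number of currently selected colours as a colorLen uniform each frame, so that the getColorVal() loop never reads stale entries. This is a minimal sketch; selectedColors (a std::vector<vec4> of the user-selected colours) and the C++-side MAX_COLORS constant are assumed names, not identifiers from the repository.

    // Sketch: keep the shader-side colour array and its length in sync every frame.
    constexpr size_t MAX_COLORS = 100;  // must match the array size in the fragment shader
    const size_t count = std::min(selectedColors.size(), MAX_COLORS);
    shader_.setUniform("colorLen", static_cast<int>(count));
    for (size_t i = 0; i < count; ++i) {
        shader_.setUniform("colorArray[" + std::to_string(i) + "]", selectedColors[i]);
    }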

    Next we defined a new function in shaderutils.cpp to accommodate the changes, with reasoning similar to that explained earlier:

    void addShaderDefines(Shader& shader, const CustomRaycastingProperty& property) {
        {
            // rendering type
            switch (property.renderingType_.get()) {
                case CustomRaycastingProperty::RenderingType::Dvr:
                default:
                    shader.getFragmentShaderObject()->addShaderDefine("INCLUDE_DVR");
                    shader.getFragmentShaderObject()->removeShaderDefine("INCLUDE_ISOSURFACES");
                    break;
            }
        }
    
        {
            // classification (default (red channel) or specific channel)
            std::string_view value;
            std::string_view valueMulti;
            switch (property.classification_.get()) {
                case CustomRaycastingProperty::Classification::None:
                    value = "vec4(voxel.r)";
                    valueMulti = "vec4(voxel[channel])";
                    break;
                case CustomRaycastingProperty::Classification::TF:
                    value = "getColorVal(pointValue, voxel)";
                    valueMulti = "getColorVal(pointValue, voxel, channel)";
                    break;
                case CustomRaycastingProperty::Classification::Voxel:
                default:
                    value = "voxel";
                    valueMulti = "voxel";
                    break;
            }
            const std::string_view key = "APPLY_CLASSIFICATION(pointValue, voxel)";
            const std::string_view keyMulti =
                "APPLY_CHANNEL_CLASSIFICATION(pointValue, voxel, channel)";
            shader.getFragmentShaderObject()->addShaderDefine(key, value);
            shader.getFragmentShaderObject()->addShaderDefine(keyMulti, valueMulti);
        }
    }

    Keep in mind to add any new shaders to the CMakeLists.txt

Custom Pixel Value

Processor Code

  • Understanding Custom Pixel Value Processor

    The getRegionColors function is the main function of this processor. Given a pixel coordinate, it appends any new unique color to a vector.

    void CustomPixelValue::getRegionColors(size2_t pos )
    {
        auto img = inport_.getData();
        auto dims = img->getDimensions();
        auto numCh = img->getNumberOfColorLayers();
    
        for (size_t i = 0; i < numCh; i++) {
                img->getColorLayer(i)
                    ->getRepresentation<LayerRAM>()
                    ->dispatch<void, dispatching::filter::All>([&](const auto layer) {
                        using ValueType = util::PrecisionValueType<decltype(layer)>;
                        using Comp = typename util::value_type<ValueType>::type;
                        const auto data = layer->getDataTyped();
                        const auto im = util::IndexMapper2D(dims);
    
                        auto value = data[im(pos)];
                        auto v = util::glm_convert<glm::vec<4, Comp>>(value);
                        v = util::applySwizzleMask(v, img->getColorLayer(i)->getSwizzleMask());
    
                        auto vf = util::glm_convert<glm::vec<4, float>>(v);
                        if constexpr (std::is_integral_v<Comp>) {
                            vf /= std::numeric_limits<Comp>::max();
                        }
                        std::vector<inviwo::vec4> selectedColorVal;
                        for(auto p : selectedPixelsData_.getProperties())
                        {
                            auto t = static_cast<FloatVec4Property*>(p);
                            selectedColorVal.push_back(t->get());
                        }
                        if(std::find(selectedColorVal.begin(), selectedColorVal.end(), vf) == selectedColorVal.end())
                        {
                            std::string selectedPixelsIdentifier = "Color" + std::to_string(MaxColorVal_+1);
                            std::string selectedPixelsName = "Color " + std::to_string(MaxColorVal_+1); 
                            selectedPixelsData_.addProperty(new FloatVec4Property(selectedPixelsIdentifier, selectedPixelsName,
                                                            vf,vec4(0), vec4(1),
                                                            vec4(std::numeric_limits<float>::epsilon()), 
                                                            InvalidationLevel::Valid,PropertySemantics::Color));
                            MaxColorVal_++;
                        }
                        areaPixelsData_ = selectedColorVal;
                    });
            }
        return;
    }

    Depending on the GPU there is a memory limit, so we cannot store a large number of colors. Currently the maximum array_size defined is 100.
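    A simple guard for this limit is sketched below; it assumes the check sits where the new FloatVec4Property would otherwise be added, and the warning text is illustrative only.

    // Sketch: only add a new colour property while the shader-side array has room.
    constexpr int MAX_COLORS = 100;  // must match the array size in the fragment shader
    if (MaxColorVal_ < MAX_COLORS) {
        // ... create and add the FloatVec4Property as shown above ...
        MaxColorVal_++;
    } else {
        LogWarn("Maximum number of selectable colors reached; ignoring this selection");
    }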

  • Shortcomings and Possible Solutions

    1. As mentioned above, there is a memory limit on the number of color values that can be passed to the shader. Such a limitation affects the user experience.
    2. As of now, the most efficient solution is the method explained here, which can be achieved by modifying the already existing multichannelraycaster. The one issue it brings is a lot of work for the user, who has to set the color of each node of the transfer function in three separate transfer function widgets. That could be solved by creating a custom transfer function setter.

Custom Mesh Clipping

Processor Code

The aim was to show a visual plane that crops the volume. The issue I found was that the Volume Raycaster expects coordinates in [-1, 1], while the property value stored in the MeshClipping processor is in the original coordinates. However, testing with normalised coordinates still didn't produce the visual plane.


The plane appears at an offset from the cropping plane and isn't visible for all values
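A minimal sketch of the kind of mapping that is needed, assuming the volume's axis-aligned bounding box is available in the same coordinate system as the MeshClipping plane point (`bboxMin` and `bboxMax` are assumptions, not existing properties):

```cpp
// Sketch: map a plane point from the volume's original coordinates into the
// [-1, 1] range that the Volume Raycaster expects. The plane normal is a
// direction, so it only needs to be renormalised, not offset.
vec3 toRaycasterCoords(const vec3& p, const vec3& bboxMin, const vec3& bboxMax) {
    return 2.0f * (p - bboxMin) / (bboxMax - bboxMin) - 1.0f;
}
```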

A better solution would be to explore the VolumeSliceGL processor. As can be seen in the diagnosticlayout_head.inv example, this processor correctly shows the planes. The remaining task would be to allow arbitrary plane equations instead of only axis-aligned ones.


Custom Image Stack Volume Processor-Single

Processor Code

  • Understanding Custom Image Stack Volume Processor

    This processor is a modified version of the ImageStackVolumeSource processor built by the Inviwo community.

    The aim was to stack a bunch of Whole Slide Images (WSIs) and convert them into 3D volume data. A WSI contains millions of pixels at very high resolution and can be as big as 10 GB, so stacking multiple full-resolution images is not feasible.

    1. How to load such big data then?

      We can't load such big data, and we don't even need to! We only load the portion the user wants to see at a time; for a normal screen, a resolution of 1920x1080 is sufficient.

      A file pattern property is used to fetch and store the locations of the WSIs present on the local machine.

      // Constructor
      filePattern_("filePattern", "File Pattern", "####.svs", "")
      addProperty(filePattern_);
      filePattern_.onChange([&]() { isReady_.update(); });
      auto image_paths = filePattern_.getFileList();

      These image paths are then passed to SlideExtractor().

      // Process
      CustomImageStackVolumeProcessorSingle::process() {
        util::OnScopeExit guard{[&]() { outport_.setData(nullptr); }};
        myFile.open(CustomImageStackVolumeProcessorSingle::IMG_PATH, std::ios_base::out | std::ios_base::binary);
        myFile.close();
        if (filePattern_.isModified() || reload_.isModified() || skipUnsupportedFiles_.isModified() || isRectanglePresent_.isModified() || level_.isModified() || coordinateX_.isModified() || coordinateY_.isModified()) {
            myFile.open(CustomImageStackVolumeProcessorSingle::IMG_PATH, std::ios_base::out | std::ios_base::binary);
            auto image_paths = filePattern_.getFileList();
            std::cout << "Current image path is : " << image_paths[0] << std::endl;
            SlideExtractor(image_paths[0]);
            volume_ = load();
            if (volume_) {
                basis_.updateForNewEntity(*volume_, deserialized_);
                information_.updateForNewVolume(*volume_, deserialized_);
            }
            deserialized_ = false;
            myFile.close();
        }
      
        if (volume_) {
            basis_.updateEntity(*volume_);
            information_.updateVolume(*volume_);
        }
        outport_.setData(volume_);
        guard.release();
      }

      SlideExtractor() uses the OpenSlide library to extract a particular region of the data. A WSI stores the image at multiple levels, so the user can choose a particular level (0 to MAX_LEVEL).

      // SlideExtractor
      void CustomImageStackVolumeProcessorSingle::SlideExtractor(std::string PATH){
          openslide_t* op = openslide_open(PATH.c_str());
          int32_t level = level_.get();
          int64_t dim_lvlk[2];
          int64_t dim_lvl0[2];

          if(op != 0)
          {
              // Only query OpenSlide once the slide has been opened successfully
              level_.setMaxValue(openslide_get_level_count(op)-1); // Setting the max level for the given WSI
              openslide_get_level_dimensions(op,0,&dim_lvl0[0],&dim_lvl0[1]);
              openslide_get_level_dimensions(op,level,&dim_lvlk[0],&dim_lvlk[1]);
          }
          else
              std::cout << "Please enter a valid image path" << std::endl;
      }

      Given coordinates (top-left corner), a level, and a width and height, it can read the pixel data of a rectangular cross-section.

      // SlideExtractor
      start_x = coordinateX_.get(), start_y = coordinateY_.get(), w = 1920, h = 1080;
      uint32_t *dest = (uint32_t*)malloc(size_img); // size_img = w * h * sizeof(uint32_t): one ARGB value per pixel
      start_x *= (dim_lvl0[0]/(float)dim_lvlk[0]); // Converting to the level-0 frame
      start_y *= (dim_lvl0[1]/(float)dim_lvlk[1]); // Converting to the level-0 frame

      if(op != 0){
          openslide_read_region(op, dest, floor(start_x), floor(start_y), level, w, h);
          openslide_close(op);
      }

      The RGB pixel data is computed and written to a JPG file using the TooJpeg library.

      int bytesperpixel = 3;
      auto pixels = new unsigned char[w*h*bytesperpixel];

      for (int y = 0; y < h; y++) {
          for (int x = 0; x < w; x++) {
              uint32_t argb = dest[y * w + x];
              double red   = (argb >> 16) & 0xff;
              double green = (argb >> 8)  & 0xff;
              double blue  = (argb >> 0)  & 0xff;
              int offset = (y * w + x) * bytesperpixel;
              pixels[offset]     = red;
              pixels[offset + 1] = green;
              pixels[offset + 2] = blue;
          }
      }
      TooJpeg::writeJpeg(ImgOutput, pixels, w, h);

      The JPG images extracted from the different WSIs are then loaded and stacked together into 3D volume data using load(). Refer to the Inviwo documentation for this.

    2. Features in Inviwo UI

      There are sliders for the level (Level) and the coordinates (PointerX, PointerY). Using these sliders, the user can view a particular region of the 3D volume (the stacked WSIs) at the given level and coordinates.

      Whenever the level is changed, the old level's coordinates have to be mapped to the new level's coordinates.

      if(level!=zoom_in){
          std::cout << "Zoomed to level: " << level << std::endl;
          std::cout << "Previous coordinates: " << start_x << " " << start_y << std::endl;
          int64_t dim_lvl_prev[2];
          openslide_get_level_dimensions(op,zoom_in,&dim_lvl_prev[0],&dim_lvl_prev[1]);
          start_x*=(dim_lvlk[0]/(float)dim_lvl_prev[0]);
          start_y*=(dim_lvlk[1]/(float)dim_lvl_prev[1]);
          std::cout << "New coordinates: " << start_x << " " << start_y << std::endl;
          zoom_in = level;
          coordinateX_.set(start_x);
          coordinateY_.set(start_y);
          // std::cout << "Zoom value: " << zoom_in << std::endl;
      }
  • Things to do

    1. The current code reads only one WSI; it has to be modified to work with multiple WSIs (see the sketch below).
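      A rough sketch of the direction this could take is shown below. It assumes a hypothetical SlideExtractor overload that also takes an output JPG path per slide; the current implementation writes to a single fixed IMG_PATH.

      // Sketch: extract the selected region from every WSI instead of only the first one.
      auto image_paths = filePattern_.getFileList();
      for (size_t i = 0; i < image_paths.size(); ++i) {
          // Hypothetical overload that writes each slide's region to its own file,
          // e.g. slice0.jpg, slice1.jpg, ...
          SlideExtractor(image_paths[i], "slice" + std::to_string(i) + ".jpg");
      }
      volume_ = load();  // stack the extracted images into a single volume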

Custom Image Stack Volume Processor - Multi

Processor Code

  • Understanding the processor

    This processor is another version of the Custom Image Stack Volume Processor - Single. The goal was to enable a user to draw a rectangular region on a low-resolution image and get back a higher-resolution section of that region.

    As explained in the section above, we store the required images and then read them back. So it is clear that we cannot use the same image file for both views; hence, this processor required us to create two separate files.

    This is not the most efficient solution; because OpenSlide is not tightly integrated with Inviwo, we had to resort to this method. Once we are able to generate volume data directly from the WSI without storing intermediate images, the processor will become much more efficient.

    // SlideExtractor
    start_x = inport_.getData()->at(0).x;
    start_y = inport_.getData()->at(0).y;
    
    w = abs(inport_.getData()->at(0).x - inport_.getData()->at(1).x);
    h = abs(inport_.getData()->at(0).y - inport_.getData()->at(2).y);
    
    w *= (dim_lvl0[0]/(float)dim_lvlk[0]); // Mapping width to level 0
    h *= (dim_lvl0[1]/(float)dim_lvlk[1]); // Mapping height to level 0
    if(w > 1920) w = 1920;
    if(h > 1080) h = 1080;

Helpful Resources

Understanding Histopathology and Volume Rendering

Helpful Research Publications

Possible Further approaches and developments

  1. One major problem to be solved is determining which slices to pull from the OpenSlide API when the user is not looking orthogonally at the volume. Currently, we pass the coordinates of an ROI to the OpenSlide API and pull the higher-resolution slide from there. Consider the case where a user slices the volume and zooms in on the sliced region: we must figure out how to pull the higher-resolution image of that region.
  2. Annotations could be a great addition to the application. They could utilise processors such as DrawLines, TextOverlay, and PixelValue (if one wishes to highlight the region).