
The Medjed Optical design 🔬

Objective selection

Let's take the Olympus Plan N 10X 0.25NA objective, with an effective focal length of 18mm.

Tube lens for DMD

We want the image of the entire DMD to fit within the image circle of the objective lens (2.2mm). With the diagonal of the DLP300s (7.93mm), that means we need a demagnification ratio of 7.93/2.2 ≈ 3.6 or greater. Anywhere between 4 and 5 should be good. By increasing the magnification we avoid using the corners of the image circle, which have lower resolution, at the cost of exposing a smaller area in a single exposure.

If we choose an 80mm doublet lens as tube lens, we get a magnification of 80/18 ≈ 4.4. The projected image circle of the DMD would then be 7.93/4.4 ≈ 1.78mm; comfortably within the image circle of the objective.

The projected 'pixel size' of the DMD (5.4um mirror pitch on the DLP300s) is scaled by the same factor, to ~1.23um. So we could make superpixels of ~5x5um by combining 4x4 pixels. This would leave us with an effective pixel resolution of 1280x720 --> 320x180.

Alternatively, one could opt for a 2x2 superpixel strategy at ~2.5x2.5um pixel size and get 640x360 effective pixels per exposure field. But with increased resolution come increased ~~responsibility~~ requirements on the defocus accuracy.
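
To make these numbers easy to re-check when swapping lenses, here is a minimal Python sketch of the DMD projection maths above (all values are taken from this page):

```python
# Sketch of the DMD tube-lens maths; all values from this page.
F_OBJ = 18.0            # Olympus Plan N 10X effective focal length [mm]
F_TUBE = 80.0           # chosen achromatic doublet [mm]
IMAGE_CIRCLE = 2.2      # objective image circle diameter [mm]

DMD_DIAGONAL = 7.93     # DLP300s active-area diagonal [mm]
DMD_PITCH_UM = 5.4      # DLP300s micromirror pitch [um]
DMD_RES = (1280, 720)   # DLP300s native resolution

mag = F_TUBE / F_OBJ                    # 80/18 ~ 4.44
projected_circle = DMD_DIAGONAL / mag   # ~1.78 mm, inside the 2.2 mm circle
projected_pitch = DMD_PITCH_UM / mag    # ~1.22 um per mirror on the wafer

for n in (4, 2):  # superpixel binning options discussed above
    print(f"{n}x{n} superpixels: {projected_pitch * n:.2f} um pitch, "
          f"{DMD_RES[0] // n}x{DMD_RES[1] // n} effective pixels")
```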

Tube lens for autofocus and alignment camera

The camera working with red light (~630nm) serves both as the autofocus system and as the marker alignment camera. Both functions are essential for proper operation of the system.

To avoid rolling-shutter issues and make fast processing possible, the current sensor of choice is the Sony IMX296, as found in the Raspberry Pi Global Shutter camera. It only has a resolution of 1456x1088 pixels, but that is plenty for live video and marker alignment. Its main advantage is its global shutter: for contrast-based autofocus, or more importantly laser-based autofocus, there are no rolling-shutter artifacts that could result in misinterpretation of an image.

Note that while the Raspberry Pi Global Shutter camera uses the IMX296 sensor, it uses the colour (RGB) variant, which is suboptimal. A mono variant of the sensor would give better signal-to-noise performance, but unfortunately that would require paying about 3 times more for something like the Venus 161-61U3M.

The sensor itself is pretty small, a bit smaller than the DLP300 series, with a diagonal of 6.3mm. We would like the FOV of the sensor to be a bit bigger than the projected UV pattern from the DMD, so we can inspect its quality and patterns (useful for calibration purposes). A 60mm tube lens should do the trick, giving an image circle on the sample of 6.3/(60/18) ≈ 1.89mm; nicely inside the objective image circle, and still slightly bigger than the projected image circle of the DMD.

The IMX296 has a pixel size of 3.45um, meaning the sampled pixel size on the sample is 3.45/(60/18) ≈ 1.04um. This is not quite Nyquist sampled (see the inspection camera section), but that is not an issue for this camera, as it does not need to capture the maximum detail the objective can resolve.
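
The same infinity-corrected scaling in a short Python sketch, using the values from this page:

```python
# Alignment/autofocus camera: FOV and sampled pixel size on the sample.
F_OBJ = 18.0           # objective effective focal length [mm]
F_TUBE = 60.0          # chosen tube lens for this camera [mm]

SENSOR_DIAGONAL = 6.3  # IMX296 diagonal [mm]
PIXEL_UM = 3.45        # IMX296 pixel size [um]

mag = F_TUBE / F_OBJ                  # 60/18 ~ 3.33
fov = SENSOR_DIAGONAL / mag           # ~1.89 mm diagonal on the sample
sampled_pixel = PIXEL_UM / mag        # ~1.04 um per camera pixel

print(f"FOV diagonal: {fov:.2f} mm, sampled pixel size: {sampled_pixel:.2f} um")
```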

Tube lens for inspection camera

The Medjed system can have a white-light inspection module added to the optical system. Switching between lithography mode and inspection mode will be manual (inserting a mirror), and only one mode can be active at a time (see diagram). The white light is useful for inspection of samples after processing or in between processing steps. Since the Medjed system already has motorized Z and XY axes, adding a simple white-light microscope seems like a sensible option.

We want to Nyquist sample the objective with our detector, so that we can capture all the detail the objective can offer. The Olympus objective has an NA of 0.25, so its resolving power is 0.61*400(nm)/0.25 ≈ 1000nm. So we need to ensure the 'projected' pixel size of our sensor is 1000/2 = 500nm or smaller. The Sony IMX477 (found on e.g. the Raspberry Pi HQ camera) has a pixel size of 1.55um, so we need a demagnification of 1.55/0.5 ≈ 3.

The demagnification is equal to the ratio of focal lengths (since we have an infinity-corrected setup). So we would need a tube lens focal length of about 18 × 3.1 ≈ 56mm; a 60mm lens is the nearest standard option.

We also need to check whether that chosen magnification gets us in trouble with the (finite) field diameter of the objective lens (2.2mm). The IMX477 has a sensor diagonal of 7.86mm, which by coincidence is very similar to the DLP300s. The 'sample image circle' of the sensor would then be 7.86/(60/18) ≈ 2.36mm. This would be problematic, as we would get serious vignetting/edge contrast loss. If we instead choose a higher magnification, we start oversampling the objective and essentially waste pixels on the detector. Between the two options, I think the latter is preferred, as it means we can get 60fps video without vignetting.

So that means we go for the nearest size up that is available at Thorlabs: a 75mm lens. We get a projected pixel 'size' of 1.55/(75/18) ≈ 0.37um, and an image circle of 7.86/(75/18) ≈ 1.89mm. If 70mm were available, that might be an even better choice.
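
A short Python sketch tying the inspection-camera reasoning together, comparing the rejected 60mm lens against the chosen 75mm one (all values from this page):

```python
# Rayleigh resolution of the objective and the Nyquist-driven tube-lens choice.
NA = 0.25
WAVELENGTH_NM = 400.0    # shortest wavelength of interest, as used above
F_OBJ = 18.0             # objective effective focal length [mm]
IMAGE_CIRCLE = 2.2       # objective image circle diameter [mm]

SENSOR_DIAGONAL = 7.86   # IMX477 diagonal [mm]
PIXEL_UM = 1.55          # IMX477 pixel size [um]

resolution_nm = 0.61 * WAVELENGTH_NM / NA   # ~976 nm Rayleigh limit
nyquist_um = resolution_nm / 2 / 1000       # ~0.49 um max projected pixel size
min_mag = PIXEL_UM / nyquist_um             # ~3.2x minimum magnification

for f_tube in (60.0, 75.0):
    mag = f_tube / F_OBJ
    pixel = PIXEL_UM / mag              # projected pixel size on the sample
    circle = SENSOR_DIAGONAL / mag      # sample image circle of the sensor
    ok = "fits" if circle <= IMAGE_CIRCLE else "vignettes"
    print(f"f={f_tube:.0f} mm: mag {mag:.2f}x, pixel {pixel:.2f} um, "
          f"image circle {circle:.2f} mm ({ok})")
```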

Autofocus

It doesn't matter how good your NA is if your projected image is not in focus. The Medjed autofocus system is still a bit up in the air. Contrast-based approaches are risky, as plain silicon wafers are basically featureless wastelands (although tiny imperfections in the photoresist layers can reveal some contrasty features). Active focusing methods are probably required.

Let's run through some options.

  1. Ultrasonic autofocus ➡ resolution is not good enough

  2. Capacitive autofocus ➡ resolution good enough, but probably not reliable, as device layers would cause (spurious) changes in capacitance. The sensor would also not be in-line with the optics, but off to the side.

  3. Pneumatic autofocus ➡ resolution good enough, but the sensor needs to be close to the substrate (~150um; risk of collision). There do not seem to be any off-the-shelf parts for use in an optics context, and it requires a pressurised air supply, which is not very convenient.

  4. Inductive autofocus ➡ Only effective in good conductors (metals and such)

So that leaves pretty much only active optical autofocus methods. Here 'active' describes the fact that additional components and illumination are brought into the optical design to realise autofocus, in contrast to passive autofocus methods (like contrast-based autofocus) that could use the existing IR alignment system.

There seem to be 3 options that could be implemented realistically:

  1. Laser & pinhole-based methods, as used in this reference publication 🧾
  2. Laser triangulation/masking approach, as described in Figure 1 and 2 of this reference publication 🧾
  3. Astigmatic approach, similar to autofocus in CD optical pickup units using a quad photodiode for sensing, as used in this reference publication 🧾

Option 1 is probably the cheapest, but requires introducing two separate paths into the optical system: the laser-diode path and the sensing-diode path, separate from the main cameras. Option 3 is out by virtue of requiring two extra lens elements on top of what is required for setup 1. Option 2 seems attractive because it can be combined with the existing IR path and could make use of the same IR camera.
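
As a rough illustration of how the option-2 readout could work: the laser spot imaged on the camera shifts laterally as the substrate moves through focus, so after a one-time calibration scan the focus error reduces to a centroid measurement. This is a toy Python sketch; the function names and the constants `CAL_UM_PER_PX` and `REF_CENTROID_X` are hypothetical placeholders, not part of the Medjed design.

```python
import numpy as np

# Hypothetical calibration values, to be measured with a Z-stage scan:
CAL_UM_PER_PX = 0.8      # um of defocus per pixel of lateral spot shift
REF_CENTROID_X = 728.0   # spot x-position when in focus [px]

def spot_centroid_x(frame: np.ndarray) -> float:
    """Intensity-weighted centroid (x) of the laser spot in a mono frame."""
    frame = frame.astype(np.float64)
    frame -= frame.mean()              # crude background subtraction
    frame = np.clip(frame, 0, None)
    xs = np.arange(frame.shape[1])
    return float((frame.sum(axis=0) * xs).sum() / frame.sum())

def focus_error_um(frame: np.ndarray) -> float:
    """Signed defocus estimate; this would feed the Z-stage controller."""
    return (spot_centroid_x(frame) - REF_CENTROID_X) * CAL_UM_PER_PX
```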

Note

Implementing option 2 would prevent the simultaneous use of the IR imaging system (to align to markers) and autofocus, but this is not a huge problem, as switching between modes should be quick enough (switching illumination on and off and changing camera gain).