I found an error when trying to predict non-square portions of an image (for example --roi_x_y "500,1500,2500,2500").
```
ignacio@houdini:~/satelites/super_resolution/DSen2/testing$ python s2_tiles_supres.py /media/ignacio/Datos/datasets/satelites/S2_tiles/S2A_MSIL1C_20170608T105651_N0205_R094_T30TWM_20170608T110453.SAFE/MTD_MSIL1C.xml output_file.tif --roi_x_y "500,1500,2500,2500" --copy_original_bands --run_60
Using TensorFlow backend.
Selected UTM Zone: UTM 30N
Selected pixel region: xmin=498, ymin=1500, xmax=2495, ymax=2495:
Image size: width=1998 x height=996
Selected 10m bands: B4 B3 B2 B8
Selected 20m bands: B5 B6 B7 B8A B11 B12
Selected 60m bands: B1 B9
Loading selected data from: Bands B2, B3, B4, B8 with 10m resolution, UTM 30N
Loading selected data from: Bands B5, B6, B7, B8A, B11, B12 with 20m resolution, UTM 30N
Loading selected data from: Bands B1, B9, B10 with 60m resolution, UTM 30N
Super-resolving the 60m data into 10m bands
Symbolic Model Created.
2019-02-11 12:12:07.425593: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-11 12:12:07.509926: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-11 12:12:07.510305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.8225
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 7.29GiB
2019-02-11 12:12:07.510320: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-11 12:12:07.708667: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-11 12:12:07.708698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-02-11 12:12:07.708704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-02-11 12:12:07.708898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7040 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Predicting using file: ../models/s2_030_lr_1e-05.hdf5
72/72 [==============================] - 2s
(2, 1998, 996)
Super-resolving the 20m data into 10m bands
Symbolic Model Created.
Predicting using file: ../models/s2_032_lr_1e-04.hdf5
162/162 [==============================] - 1s
(6, 1998, 996)
Writing the original 10m bands and the super-resolved bands in output_file.tif
Traceback (most recent call last):
  File "s2_tiles_supres.py", line 407, in <module>
    write_band_data(sr[:, :, bi], "SR" + validated_descriptions[bn], "SR" + bn)
  File "s2_tiles_supres.py", line 380, in write_band_data
    result_dataset.GetRasterBand(bidx).WriteArray(data)
  File "/home/ignacio/anaconda3/lib/python3.6/site-packages/osgeo/gdal.py", line 2623, in WriteArray
    callback_data = callback_data )
  File "/home/ignacio/anaconda3/lib/python3.6/site-packages/osgeo/gdal_array.py", line 378, in BandWriteArray
    raise ValueError("array larger than output file, or offset off edge")
ValueError: array larger than output file, or offset off edge
```
Looking into the issue, it turns out that `sr` has its coordinates transposed; that is, the correct output shape (that of the `data10` array) is `(996, 1998, 4)`, while the shape of `sr` is `(1998, 996, 6)`.
If I do `sr = np.moveaxis(sr, 0, 1)` just after line 405, I get no errors and can save the output image. But then the output image is completely messed up for some reason.
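For reference, `np.moveaxis` performs a genuine transpose of the first two axes (it returns a strided view with the element positions swapped), not just a relabeling of the shape. A toy sketch:

```python
import numpy as np

# Toy non-square array: 2 rows x 3 cols, values laid out row-major.
sr = np.arange(6).reshape(2, 3)      # [[0, 1, 2], [3, 4, 5]]

# moveaxis(sr, 0, 1) swaps the first two axes, i.e. a true transpose:
swapped = np.moveaxis(sr, 0, 1)      # shape (3, 2)
print(swapped)
# [[0 3]
#  [1 4]
#  [2 5]]
```

Because `moveaxis` genuinely moves each pixel to its transposed position, the fact that the image is still scrambled afterwards suggests the pixel values were assembled in the wrong order earlier, rather than the axes merely being mislabeled.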
The point is that one shouldn't need to transpose `sr` at all, because the code works well (the output image makes sense) with square images; yet without the transpose the array cannot be saved, since its dimensions are wrong. One solution that would preserve the good behaviour for square crops would be `sr = sr.reshape(sr.shape[1], sr.shape[0], sr.shape[2])`, but that does not seem to work either:
```python
sr = sr.reshape(sr.shape[1], sr.shape[0], sr.shape[2], order='C')
sr = sr.reshape(sr.shape[1], sr.shape[0], sr.shape[2], order='F')
sr = sr.reshape(sr.shape[1], sr.shape[0], sr.shape[2], order='A')
```
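As a side note, `reshape` can never substitute for a transpose, whatever `order` is passed: it keeps the elements in their existing linear order and only reinterprets the shape, whereas a transpose actually moves elements. A minimal numpy sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

# reshape only re-slices the same element sequence into new rows:
r = a.reshape(3, 2)              # [[0, 1], [2, 3], [4, 5]]

# transpose actually swaps row/column positions:
t = a.T                          # [[0, 3], [1, 4], [2, 5]]

print(np.array_equal(r, t))      # False: reshape is not a transpose
```

So even if one of the `reshape` calls above produced the right dimensions, the pixel layout would still be wrong.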
Maybe the error originates further upstream, when the arrays are cropped, or in the way the image is segmented into patches and reassembled to feed the DNN.
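To illustrate the kind of upstream bug that would produce exactly this symptom, here is a hypothetical sketch (not the actual DSen2 code, and `split_patches`/`reassemble` are made-up helpers): if the patch grid is rebuilt with width and height swapped, a square image survives the round trip unchanged, but a rectangular one comes out scrambled rather than cleanly transposed.

```python
import numpy as np

def split_patches(img, p):
    """Split a (H, W) image into non-overlapping p x p patches, row by row."""
    h, w = img.shape
    return [img[i:i + p, j:j + p]
            for i in range(0, h, p) for j in range(0, w, p)]

def reassemble(patches, h, w, p):
    """Rebuild an (h, w) image from patches listed row by row."""
    out = np.zeros((h, w), dtype=patches[0].dtype)
    k = 0
    for i in range(0, h, p):
        for j in range(0, w, p):
            out[i:i + p, j:j + p] = patches[k]
            k += 1
    return out

p = 2
img = np.arange(4 * 6).reshape(4, 6)               # rectangular: H=4, W=6

ok = reassemble(split_patches(img, p), 4, 6, p)    # correct dimensions
bad = reassemble(split_patches(img, p), 6, 4, p)   # H/W swapped by mistake

print(np.array_equal(ok, img))     # True: correct round trip
print(np.array_equal(bad.T, img))  # False: not even a clean transpose
```

With a square crop the two calls are indistinguishable, which would explain why the bug only shows up for non-square ROIs.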
Have you already encountered this issue?