Artifacts in PyPlot vs. OpenCV imshow - python

I have a greyscale image in numpy array format (standard OpenCV format). Normal image, uint8, all values between 0 and 255. When I run:
import cv2
cv2.imshow('', image)
I get:
But when I run:
from matplotlib import pyplot
pyplot.imshow(image, cmap="gray")
pyplot.show()
I get:
And what's really weird is that if I resize the pyplot image window, those line artifacts change in width. What's up with this? I have no idea why it's showing these artifacts.
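The fact that the stripes change width when the window is resized is characteristic of resampling (moiré) artifacts: matplotlib interpolates the image down to the on-screen axes size, while cv2.imshow maps pixels more directly. A hedged sketch of the usual fix, `interpolation='none'` (the checkerboard test image and filename here are made up for illustration; they are a worst case for downsampling, not your actual image):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs without a display
import matplotlib.pyplot as plt

# A fine checkerboard pattern: downsampling it with interpolation
# produces exactly the kind of shifting line artifacts described above.
image = (np.indices((512, 512)).sum(axis=0) % 2 * 255).astype(np.uint8)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(image, cmap="gray")                        # default interpolation: moire stripes
ax2.imshow(image, cmap="gray", interpolation="none")  # no resampling artifacts
fig.savefig("comparison.png")
```

With `interpolation='none'` the figure backend draws the pixels without smoothing, so the artifacts no longer depend on the window size.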


How to display a grayscale image in matplotlib while scaling the display based on the range of pixel values [duplicate]

I'm trying to display a grayscale image using matplotlib.pyplot.imshow(). My problem is that the grayscale image is displayed as a colormap. I need the grayscale because I want to draw on top of the image with color.
I read in the image and convert to grayscale using PIL's Image.open().convert("L")
image = Image.open(file).convert("L")
Then I convert the image to a matrix so that I can easily do some image processing using
matrix = scipy.misc.fromimage(image, 0)
However, when I do
figure()
matplotlib.pyplot.imshow(matrix)
show()
it displays the image using a colormap (i.e. it's not grayscale).
What am I doing wrong here?
The following code will load an image from a file image.png and will display it as grayscale.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
fname = 'image.png'
image = Image.open(fname).convert("L")
arr = np.asarray(image)
plt.imshow(arr, cmap='gray', vmin=0, vmax=255)
plt.show()
If you want to display the inverse grayscale, switch the cmap to cmap='gray_r'.
Try to use a grayscale colormap?
E.g. something like
imshow(..., cmap=pyplot.cm.binary)
For a list of colormaps, see http://scipy-cookbook.readthedocs.org/items/Matplotlib_Show_colormaps.html
import matplotlib.pyplot as plt
You can also run, once in your code,
plt.gray()
This will show images in grayscale by default.
from numpy import array
from PIL import Image
im = array(Image.open('I_am_batman.jpg').convert('L'))
plt.imshow(im)
plt.show()
I would use the get_cmap method. Ex.:
import matplotlib.pyplot as plt
plt.imshow(matrix, cmap=plt.get_cmap('gray'))
try this:
import pylab
from scipy import misc
pylab.imshow(misc.lena(), cmap='gray')
pylab.show()
@unutbu's answer is quite close to the right answer.
By default, plt.imshow() will scale your (M, N) array data to the range 0.0-1.0 and then map it to the colormap. For most naturally captured images this is fine and you won't see a difference. But if your image has a narrow range of pixel values, say the minimum pixel is 156 and the maximum is 234, the gray image will look totally wrong.
The right way to show an image in gray is
from matplotlib.colors import NoNorm
...
plt.imshow(img, cmap='gray', norm=NoNorm())
...
Let's see an example:
this is the original image:
original
this is using the default norm setting, which is None:
wrong pic
this is using the NoNorm setting, which is NoNorm():
right pic
Use no interpolation and set to gray.
import matplotlib.pyplot as plt
plt.imshow(img[:, :, 1], cmap='gray', interpolation='none')
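The autoscaling behaviour described above can be sketched with plain numpy (this mirrors what the default normalization does to the data; it is an illustration, not matplotlib's actual code path):

```python
import numpy as np

# A narrow-range "image": pixel values only span 156..234.
img = np.linspace(156, 234, 5).astype(np.uint8)

# What imshow does by default: stretch [min, max] onto [0.0, 1.0],
# so the dimmest pixel becomes pure black and the brightest pure white.
autoscaled = (img - img.min()) / (img.max() - img.min())

# With vmin=0, vmax=255 (or NoNorm on already-scaled data) the values
# keep their true brightness instead of being stretched.
fixed = img / 255.0

print(autoscaled)  # stretched all the way from 0.0 to 1.0
print(fixed)       # stays roughly between 0.61 and 0.92 - correct mid-grays
```

This is why `vmin=0, vmax=255` (or `NoNorm`) matters for narrow-range images: without it, nearly uniform grays get stretched to full black-to-white.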

Matplotlib changes channels of image when saving

I am using matplotlib to read an image, but the number of channels changes after I save the original image with imsave. Here is the code:
import matplotlib.image as mpimg
img = mpimg.imread('sample.tiff')
print(img.shape)
mpimg.imsave('sample2.tiff', img)
img2 = mpimg.imread('sample2.tiff')
print(img2.shape)
And here is the output:
(2160, 2160)
(2160, 2160, 4)
The image becomes a 4-channel image while it was 1-channel originally. And it seems that the final channel is always 255.
What is happening here? Also, the original image looks meaningless as it is all black, but when I read and save it with imread and imsave, I can finally see some meaningful figures.
It looks like you are not the first person to have this problem - see here.
My suggestion would be to use imageio (or PIL) to save the image (in fact, to read it too) and it works fine:
import imageio
import matplotlib.image as mpimg
img = mpimg.imread('a.tif')
imageio.imwrite('result.tif',img)
The input image sample.tiff is a one-channel grayscale image. Why that is the case cannot be known from here; it simply depends on where you got that image from.
imread converts this image to a 2D numpy array.
When given a 2D numpy array as input imsave will apply a colormap to the array, and, without further arguments given, apply a normalization between the minimum and maximum data value. The resulting image is hence a color image with 4 channels.
imread then converts this image to a 3D numpy array.
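The normalize-then-colormap step described above can be sketched in plain numpy (an illustration of the transformation, not matplotlib's actual implementation):

```python
import numpy as np

# A tiny 2D, single-channel array standing in for the grayscale TIFF.
gray = np.array([[0, 128], [192, 255]], dtype=np.uint8)

# Step 1: imsave normalizes the data between its min and max.
norm = (gray.astype(float) - gray.min()) / np.ptp(gray)

# Step 2: a grayscale colormap maps each value v to (v, v, v, 1.0),
# i.e. three identical color channels plus a fully opaque alpha channel.
rgba = np.dstack([norm, norm, norm, np.ones_like(norm)])

print(gray.shape)  # (2, 2)    - what imread returned for the original
print(rgba.shape)  # (2, 2, 4) - what gets written and read back
```

The always-opaque alpha channel is why the fourth channel of the re-read image is always 255.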

matplotlib imshow distorting colors

I have tried the imshow function from matplotlib.pyplot and it works perfectly for showing grayscale images. But when I tried to display RGB images, it changes the colors, giving them a blue-ish tint.
See an example:
import cv2
import matplotlib.pyplot as plt
lena=cv2.imread("lena.jpg")
plt.imshow(lena)
plt.show()
The resulting image is something like this
While the original image is this
If it is something related to the colormap, is there any way to make it work with RGB images?
This worked for me:
plt.imshow(lena[:, :, ::-1])  # BGR -> RGB
The same idea, but a nicer and more robust approach, is to use an Ellipsis, as proposed by @rayryeng:
plt.imshow(lena[..., ::-1])
OpenCV represents images in BGR order, as opposed to the RGB that matplotlib expects. Since the channels are reversed, you tend to see a blue tint in the images.
Try using the following line (below comment in code) for converting from BGR to RGB:
import cv2
import matplotlib.pyplot as plt
lena=cv2.imread("lena.jpg")
#plt.imshow(lena)
#plt.axis("off")
#Converts from BGR to RGB
plt.imshow(cv2.cvtColor(lena, cv2.COLOR_BGR2RGB))
plt.show()
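That the slicing trick and the cvtColor conversion agree can be checked with plain numpy (a small illustrative array stands in for a real image here):

```python
import numpy as np

# A 2x2 "image" in OpenCV's BGR order: pure blue pixels.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255        # channel 0 is blue in BGR

# Reversing the last axis swaps B and R, which for 8-bit images
# is all that COLOR_BGR2RGB does.
rgb = bgr[..., ::-1]

print(bgr[0, 0])  # [255   0   0] - imshow would render this as red (wrong)
print(rgb[0, 0])  # [  0   0 255] - blue ends up in the blue slot
```

The slice creates a reversed view without copying the data, which is why it is both concise and cheap.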


Python PIL cut off my 16-bit grayscale image at 8-bit

I'm working on a Python program to display images of stars. The images are 16-bit grayscale TIFFs.
If I display them in an external program, e.g. ImageMagick, they are correct, but if I load them in Python and then use show() or place them on a canvas in Tkinter, they are, apart from a few pixels, totally white.
So I estimate Python sets every pixel above 255 to white, but I don't know why. If I load the image and then save it as a TIFF again, ImageMagick can show it correctly.
Thanks for help.
Try to convert the image to a numpy array and display that:
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
img = Image.open('image.tiff')
arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])
plt.imshow(arr)
plt.show()
You can change the color mapping too:
from matplotlib import cm
plt.imshow(arr, cmap=cm.gray)
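If you would rather keep using PIL's show() or a Tkinter canvas, one common workaround is to rescale the 16-bit data down to 8 bits yourself before display (sketched here on a made-up array; the divisor assumes the data really spans the full 16-bit range):

```python
import numpy as np

# Stand-in for a 16-bit star image: values up to 65535.
arr16 = np.array([[0, 1000], [30000, 65535]], dtype=np.uint16)

# 65535 / 255 == 257, so integer division by 257 maps the full
# 16-bit range onto 0..255 without overflow or clipping.
arr8 = (arr16 // 257).astype(np.uint8)

print(arr8)  # [[  0   3]
             #  [116 255]]
```

This avoids the clipping you see when an 8-bit display path simply truncates everything above 255 to white.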
