from pyrsgis import raster
from pyrsgis.convert import changeDimension
# Assign file names
greenareaband1 = 'Sentinel-2 (2)dense.tiff'
greenareaband2 = 'Sentinel-2 L1C (3)dense.tiff'
greenareaband3 = 'Sentinel-2 L1C (4)dense.tiff'
# Read the rasters as arrays
df, myimage = raster.read(greenareaband1, bands='all')
AttributeError: 'NoneType' object has no attribute 'ReadAsArray'
I keep getting this error, but I'm sure that I have uploaded these images using:
from google.colab import files
files.upload()
I had the same problem, and I discovered that I had made a mistake when assigning the file name. Maybe there is a mistake in yours too, so the file is not found, GDAL returns None instead of a dataset, and there is nothing to ReadAsArray(). Hope that is the only problem.
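A quick way to test that (using the first file name from the question) is to check whether Python can see the file at all before calling raster.read:
import os
# If this prints False, the file name or working directory is wrong
print(os.path.exists('Sentinel-2 (2)dense.tiff'))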
You have a couple of issues here. Two spaces and parentheses in a file name are the last thing you want in Python. Make sure that you have changed the working directory to where your file is or provide a relative path, and add an 'r' at the beginning. For example:
input_file = r'E:/path_to_your_file/raster_file.tif'
ds, data_arr = raster.read(input_file)
About working with Colab: I think the best option would be to upload your files to your Google Drive and then authenticate your Colab script to mount the drive. Then you just need to change the working directory like this:
# authenticate google drive
from google.colab import drive
drive.mount('/content/drive')
# change working directory
import os
os.chdir(r'/content/drive/My Drive/path_to_your_file')
Or, after mounting the drive simply do this:
input_file = r'/content/drive/My Drive/path_to_your_file/raster_file.tif'
ds, data_arr = raster.read(input_file)
# Read in the image
import cv2
from google.colab import drive
drive.mount('/gdrive')
filepath='gdrive/MyDrive/image.png'
image = cv2.imread(filepath)
print(type(image))
While running this I am getting class 'NoneType'. Where am I going wrong in reading the image file from Drive? Please correct me.
I think the image path should be an absolute path with a slash at the beginning: '/gdrive/MyDrive/image.png'.
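More generally, since cv2.imread returns None instead of raising an error when it cannot read a file, it helps to fail loudly (a sketch, using the corrected path from above):
import os
import cv2

filepath = '/gdrive/MyDrive/image.png'  # absolute path, note the leading slash
if not os.path.exists(filepath):
    raise FileNotFoundError(filepath)
image = cv2.imread(filepath)
if image is None:
    raise IOError('cv2 could not decode ' + filepath)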
I ran the code:
export_path = '/content/gdrive/My Drive' + '\\model\\' + '20191003053122'
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["myTag"], export_path)
    graph = tf.get_default_graph()
and got the error:
ValueError: The passed save_path is not a valid checkpoint: /content/gdrive/My Drive\model\20191003053122/variables/variables
What I don't understand is that I set the same path previously (export_path = '/content/gdrive/My Drive' + '\\model\\' + time.strftime("%Y%m%d%H%M%S", time.localtime())), but Google Colab still says the checkpoint is not valid. What does that mean, and what went wrong? I also changed both paths multiple times (for example, replacing '/content/gdrive/My Drive' with os.getcwd()) to make sure they match each other, but it didn't help.
I am wondering if it's because the code
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["myTag"], export_path)
    graph = tf.get_default_graph()
is deprecated; if that's the case, what equivalent should I use instead? Maybe Keras? Any contribution is appreciated. Thanks.
It looks like there is an access issue between Google Colab and Google Drive. Can you take a step back and verify that your Google Drive is mounted properly and that you are able to write and read files via Google Colab?
Here are the steps to be followed:
Import and mount Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
Verify you are able to list the files in your drive or the location you want to write into
!ls "/content/gdrive/My Drive"
Write a file to Google Drive. I have used "w" so that it creates the file if it doesn't exist (see Python's open() function for the other file modes):
with open("/content/gdrive/My Drive/myFile.txt", "w") as file:
    file.write("Your text goes here")
Read the file again to check that writing it went well:
!cat "/content/gdrive/My Drive/myFile.txt"
You can even check the official Google Colab notebook with these steps here.
Hope this helps :)
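As a side note on the original error: the export path mixes forward slashes with Windows-style backslashes ('/content/gdrive/My Drive' + '\\model\\' + ...), and Colab runs on Linux, where backslashes are not path separators, so they end up as literal characters inside a single directory name, which is exactly what the error message shows. A sketch of building the same path portably with os.path.join:
import os
import time

export_path = os.path.join('/content/gdrive/My Drive', 'model',
                           time.strftime("%Y%m%d%H%M%S", time.localtime()))
On the deprecation question: tf.Session and tf.saved_model.loader.load are TF1 APIs. In TensorFlow 2 they are still available under tf.compat.v1, and the TF2-native way to load a SavedModel is loaded = tf.saved_model.load(export_path).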
I'm new to Machine Learning and I'm following a Sentdex tutorial on Google Colab. It's supposed to be an ML program that distinguishes between cat and dog images. However, whenever I run my code, something's wrong with my 'file or directory':
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\atlgwc16\\PetImages/Dog'
I honestly don't know where Google Colab stores its files so I don't know where to put the folder of images.
Here is my full code so far:
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from tqdm import tqdm
DATADIR = "C:\Users\atlgwc16\PetImages"
CATEGORIES = ["Dog", "Cat"]
for category in CATEGORIES:
    path = os.path.join(DATADIR, category)
    for img in os.listdir(path):
        img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
        plt.imshow(img_array, cmap='gray')
        plt.show()
        break
Tutorial being followed as referenced in the question:
https://pythonprogramming.net/loading-custom-data-deep-learning-python-tensorflow-keras/
Since you are using Google Colab, you can upload the Kaggle dataset of dog and cat images to Google Drive. See the Google Colab Jupyter notebook provided by Google that explains how to do this:
https://colab.research.google.com/notebooks/io.ipynb#scrollTo=u22w3BFiOveA
You would then access files from your Google Drive (in this case, the training set after you upload it to Google Drive) much in the same way as accessing files locally on your computer.
This is the example provided in the link above:
with open('/content/gdrive/My Drive/foo.txt', 'w') as f:
    f.write('Hello Google Drive!')
!cat /content/gdrive/My\ Drive/foo.txt
So, since you are using Google Colab, you would need to adjust the code from the Sentdex tutorial to work better with the notebook you are creating. Google Colab uses Jupyter notebooks. Each cell in the notebook runs in the same session, so if you import a Python module in one cell, it can be used in the following cells. It's magic like that.
It would look like this:
[CELL 1]
from google.colab import drive
drive.mount('/content/gdrive')
You will then give permission for Google Colab to access your Google Drive.
[CELL 2]
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from tqdm import tqdm
DATADIR = '/content/gdrive/My Drive/PetImages/'
# ^ See? You would need to go to Google Drive and create the 'PetImages'
# folder at the top level of your Drive, then upload the data set into it,
# creating a 'Dog' subfolder and a 'Cat' subfolder.
CATEGORIES = ["Dog", "Cat"]
for category in CATEGORIES:  # do dogs and cats
    path = os.path.join(DATADIR, category)  # create path to dogs and cats
    for img in os.listdir(path):  # iterate over each image per dogs and cats
        img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)  # convert to array
        plt.imshow(img_array, cmap='gray')  # graph it
        plt.show()  # display!
        break  # we just want one for now so break
    break  # ...and one more!
After properly uploading the data set to Google Drive and using the special google.colab module, you should be able to easily access your training data. Google Colab is a cloud-based tool for creating Jupyter notebooks and running Python programs. So, while similar to running a Python program locally on your computer, it is not exactly the same. It would help to read through how Google Colab works more if you want to use it completely in the cloud--using GDrive to store files rather than your own computer. See the link I posted above from Google.
Happy coding.
I did it myself and it works for me. I used a data set from my local drive, such as a hard disk.
Note: your dataset folder must be in zip form.
Follow the method with me and you will access your dataset from your local drive. I use Google Colab: first, create a Jupyter notebook in Google Colab and run the code below step by step.
First step: run the code below in your notebook and upload your dataset from your hard drive or local drive:
from google.colab import files
uploaded = files.upload()
When the process is 100 percent complete, do the second step.
Second step: copy and run the code below; this step will unzip the dataset.
import zipfile
import io
zf = zipfile.ZipFile(io.BytesIO(uploaded['DogVsCat.zip']), "r")
zf.extractall()
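To confirm the extraction worked, you can list the working directory (a quick sanity check; Colab's default working directory is /content, and the folder name here is the one from my dataset):
import os
print(os.listdir('/content'))  # the extracted folder, e.g. 'DogVsCats', should appear here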
Third step: run the code below; it will import all the required libraries:
import numpy as np
import os
import cv2
import matplotlib.pyplot as plt
Fourth step: specify the path to the dataset. To do that, follow the steps below.
First, click the folder icon in the left corner and you will see your unzipped dataset; in my case, my dataset is "DogVsCat".
Note: you will see two versions of the dataset there, zipped and unzipped; copy the path of the unzipped one.
Right-click on it, copy its path, and run the code below:
DIRECTORY = '/content/DogVsCats'
CATEGORIES = ['cats', 'dogs']
Note: put your own path in DIRECTORY (this path is for my setup), not mine, and put your own folder names in CATEGORIES, not my folder names.
Fifth step: at the end, create the training data:
data = []
for category in CATEGORIES:
    path = os.path.join(DIRECTORY, category)
    for img in os.listdir(path):
        img_path = os.path.join(path, img)
        label = CATEGORIES.index(category)
        arr = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
        new_arr = cv2.resize(arr, (60, 60))
        data.append([new_arr, label])
Sixth step: print the data by running the cell below:
data
Seventh step: shuffle your data:
import random
random.shuffle(data)
Eighth step: specify the features and labels for training the model:
X = []
y = []
for features, label in data:
    X.append(features)
    y.append(label)
X = np.array(X)
y = np.array(y)
Ninth step: print the features:
X
Tenth step: print the labels:
y
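A common follow-up that this post stops short of (a sketch only, assuming you feed this into a Keras-style CNN; the 60s match the cv2.resize size from the fifth step) is reshaping and scaling the features:
X = X.reshape(-1, 60, 60, 1)  # add a channel dimension for grayscale images
X = X / 255.0  # scale pixel values to [0, 1]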
Note: I cannot share all the code with you for lack of time.
I'm currently having some issues removing a file in Python. I am creating a temporary file for PDF-to-image conversion: a folder holds a .ppm file, the code converts it to a .jpg, and then it deletes the temporary .ppm file. Here is the code:
import pdf2image
from PIL import Image
import os
images = pdf2image.convert_from_path('Path to pdf.pdf', output_folder='./folder name')
file = ''
for files in os.listdir('./folder name'):
    if files.endswith(".ppm"):
        file = files
path = os.path.join('folder name', file)
im = Image.open(path)
im.save("Path to image.jpg")
im.close()
os.remove(path)
The issue is at the end, in the os.remove(path) call. I get the following error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'path to ppm file'
I would appreciate any help and thanks in advance.
Not really the answer to your question, but you can just output in the correct format at the start and avoid the issue in the first place:
pdf2image.convert_from_path('Path to pdf.pdf', output_folder='./folder name', fmt='jpg')
To actually answer your question: I'm not sure why you're having the issue, because the close() really should prevent this problem. Perhaps check out this answer and try using a with statement? Or maybe the permission release is just delayed; I'm curious what putting that remove in a loop for as long as it throws an exception would do.
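Something like this, for instance (a rough sketch of that retry idea; the attempt count and delay are made up):
import os
import time

def remove_with_retry(path, attempts=10, delay=0.5):
    # Keep retrying while Windows still reports the file as in use
    for _ in range(attempts):
        try:
            os.remove(path)
            return True
        except PermissionError:
            time.sleep(delay)
    return False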
Edit: To set the name, you'll want to do something like:
images = pdf2image.convert_from_path('Path to pdf.pdf', output_folder='./folder name', fmt='jpg')
for i, image in enumerate(images):
    image.save('Path to image {}.jpg'.format(i))  # save each page under its own name
The pdf2image documentation looks like it recommends using a temporary folder, like in this example, and then you can just .save(...) the PIL image:
import tempfile
from pdf2image import convert_from_path

with tempfile.TemporaryDirectory() as path:
    images_from_path = convert_from_path('/home/kankroc/example.pdf', output_folder=path)
    # Do something here
Edit: I realized that the reason it was in use is probably that you need to close() all the images in images. You should read up on the pdf2image documentation and the PIL images it spits out for more details.
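In other words, something along these lines before the os.remove (a sketch; images is the list returned by convert_from_path):
for im in images:
    im.close()  # release any file handles still held on the temporary files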
I'm trying to save a user uploaded file directly to S3 without saving it locally. This project is using Django 1.9 and Boto3.
The relevant code is:
p = request.FILES['img'].read()
s3 = boto3.resource('s3',
                    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
b = s3.Bucket(settings.AWS_STORAGE_BUCKET_NAME)
b.put_object(Key="media/test.jpg", Body=p)
This correctly uploads a file called 'test.jpg' to the media folder.
However, if I download 'test.jpg' from Amazon and try to open it in an image viewer, I get the message: "Error interpreting JPEG image file (Not a JPEG file: starts with 0xf0 0xef)". The jpg file is also only 26kb whereas the original was 116kb.
What is going wrong? I assume I am passing the wrong data as Body in the put_object method. But what should p be instead?
Update and Solutions
With JordonPhilips's help, I realised that because I had already opened the uploaded image earlier in the view with Pillow, the request.FILES['img'] socket had already been read.
The solution I went with was to remove the Pillow code, leaving the boto upload as the first access of request.FILES['img'].
However, I also figured out a solution if you want to do something to the image first (e.g. in Pillow):
from PIL import Image
import cStringIO as StringIO
import boto3
and then in the view function:
im = Image.open(request.FILES['img'])
# whatever image analysis here
file2 = StringIO.StringIO()
im.save(file2,"jpeg",quality='keep')
s3 = boto3.resource('s3', aws_access_key_id=settings.AWS_ACCESS_KEY_ID, aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
b = s3.Bucket(settings.AWS_STORAGE_BUCKET_NAME)
b.put_object(Key="media/test.jpg", Body=file2.getvalue())
It looks like your problem was that you were trying to read the socket multiple times. You can only read the socket once, so you need to keep a reference to the important information.
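If you do want to read the upload twice, another option (a sketch; it relies on Django's UploadedFile being seekable, which it is for both in-memory and temporary-file uploads) is to rewind the stream between reads:
im = Image.open(request.FILES['img'])  # first pass: Pillow reads the stream
# ... whatever image analysis here ...
request.FILES['img'].seek(0)  # rewind so the next read starts from the beginning
p = request.FILES['img'].read()  # second pass now sees the complete file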