I am getting the following error when I try to unpickle the CIFAR-10 dataset. I need to train a model, but I cannot even load the data. How can I fix this problem?
dict=cPickle.load(fo)
UnpicklingError: invalid load key, '\x06'.
import tensorflow as tf
import os
import numpy as np
import dataset_class
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import glob
from PIL import Image
from scipy.spatial.distance import pdist
def cifar_10_reshape(batch_arg):
    output = np.reshape(batch_arg, (10000, 3, 32, 32)).transpose(0, 2, 3, 1)
    return output

def unpickle(file):
    import _pickle as cPickle
    fo = open(file, 'rb')
    dict = cPickle.load(fo)
    fo.close()
    return dict
# Loading cifar-10 data and reshaping it to be batch_size x 32 x 32 x 3
batch1=unpickle('cifar-10-batches-py/data_batch_1.bin')
batch2=unpickle('cifar-10-batches-py/data_batch_2.bin')
batch3=unpickle('cifar-10-batches-py/data_batch_3.bin')
batch4=unpickle('cifar-10-batches-py/data_batch_4.bin')
batch5=unpickle('cifar-10-batches-py/data_batch_5.bin')
batch1_data=cifar_10_reshape(batch1['data'])
batch2_data=cifar_10_reshape(batch2['data'])
batch3_data=cifar_10_reshape(batch3['data'])
batch4_data=cifar_10_reshape(batch4['data'])
batch5_data=cifar_10_reshape(batch5['data'])
batch1_labels=batch1['labels']
batch2_labels=batch2['labels']
batch3_labels=batch3['labels']
batch4_labels=batch4['labels']
batch5_labels=batch5['labels']
test_batch=unpickle('cifar-10-batches-py/test_batch')
test_images=cifar_10_reshape(test_batch['data'])
test_labels_data=test_batch['labels']
train_images=np.concatenate((batch1_data,batch2_data,batch3_data,batch4_data,batch5_data),axis=0)
train_labels_data=np.concatenate((batch1_labels,batch2_labels,batch3_labels,batch4_labels,batch5_labels),axis=0)
From what I understand of the CIFAR-10 dataset, the files you are trying to unpickle are in the binary format (the .bin files belong to the binary version of the dataset; the Python version's batch files carry no extension), and you are not giving the unpickler any information about the encoding. You might have more luck with the loading function provided on the CIFAR-10 website (https://www.cs.toronto.edu/~kriz/cifar.html) for Python 3.x:
def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict = pickle.load(fo, encoding='bytes')
    return dict
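A minimal usage sketch, assuming the Python-version archive has been extracted to cifar-10-batches-py/ (note its batch files have no .bin extension, and the dictionary keys come back as bytes because of encoding='bytes'):
batch1 = unpickle('cifar-10-batches-py/data_batch_1')
data = batch1[b'data']      # shape (10000, 3072); keys are bytes under encoding='bytes'
labels = batch1[b'labels']  # list of 10000 ints in [0, 9]
images = data.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1)  # to batch_size x 32 x 32 x 3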
I have a TensorFlow Keras deep learning model in the form of an h5 file.
How can I upload an image and return a NumPy array in FastAPI?
import numpy as np
import cv2
from fastapi import FastAPI, File, UploadFile
from tensorflow.keras.models import load_model
import tensorflow as tf

model = load_model("complete_model.h5")
app = FastAPI()

def prepare(image):
    IMG_SIZE = 224
    new_array = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)

@app.post("/")
async def root(file: UploadFile = File(...)):
    global model
    content = await file.read()
    nparr = np.fromstring(content, np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR).astype(np.float32)
    prediction = model.predict(prepare(img))
    return prediction
When uploading the image using Swagger UI, I get the following error:
line 137, in jsonable_encoder
data = dict(obj)
TypeError: 'numpy.float32' object is not iterable
Working code without FastAPI:
import numpy as np
from tensorflow.keras.models import load_model
import tensorflow as tf
import cv2

model = load_model("complete_model.h5")

def prepare(image):
    IMG_SIZE = 224
    new_array = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)

img = cv2.imread("./test.jpeg").astype(np.float32)
prediction = model.predict(prepare(img))
print(prediction)
Result in the terminal:
[[0.25442022 0.74557984]]
How can I get the same result while using FastAPI?
The error is thrown when returning the response (i.e., prediction in your case) from your endpoint. FastAPI tries to convert the NumPy array using the jsonable_encoder, which is applied internally to every value returned from an endpoint, and which attempts dict(obj) on the object, as shown in the traceback you provided. Thus, what you could do is convert the NumPy array into a Python list and then serialise it into a JSON string (this requires import json):
return json.dumps(prediction.tolist())
On OpenAPI (Swagger UI), you will still be able to see the expected result. However, if you need to convert it back to a NumPy array, you can parse the JSON string as shown below.
arr = np.asarray(json.loads(resp.json())) # resp.json() if using Python requests
If you would like to return the NumPy array as raw bytes and display the image in the browser or download it, have a look at this answer.
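Putting it together, a minimal sketch of the fixed endpoint, assuming the same model file and prepare helper as in the question (np.frombuffer is used here in place of the deprecated np.fromstring):
import json
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile
from tensorflow.keras.models import load_model

model = load_model("complete_model.h5")
app = FastAPI()

def prepare(image):
    IMG_SIZE = 224
    new_array = cv2.resize(image, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 3)

@app.post("/")
async def root(file: UploadFile = File(...)):
    content = await file.read()
    nparr = np.frombuffer(content, np.uint8)  # frombuffer replaces the deprecated fromstring
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR).astype(np.float32)
    prediction = model.predict(prepare(img))
    return json.dumps(prediction.tolist())    # list -> JSON string, safe for the encoder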
I'm trying to run the following code:
import os
import librosa
import IPython.display as ipd
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
import warnings
warnings.filterwarnings("ignore")
train_audio_path = 'train/audio/'
samples, sample_rate = librosa.load(train_audio_path+'yes/0a7c2a8d_nohash_0.wav', sr = 16000)
But get two errors:
Error opening 'train/audio/yes/0a7c2a8d_nohash_0.wav': File contains data in an unknown format
and
NoBackendError:
I've tried downloading ffmpeg and gstreamer to fix the second error, but with no luck. I'm not sure what to do about the first error, as I have imported libraries that should be able to handle .wav files.
Thank you for your help in advance.
I need to download satellite images using Python. I found the code below on GitHub, but there is one part I did not understand. Please help me with what it does exactly.
Visit https://github.com/kscottz/PythonFromSpace/blob/master/TheBasics.ipynb
import sys
import os
import json
import scipy
import urllib
import datetime
import urllib3
import rasterio
import subprocess
import numpy as np
import pandas as pd
import seaborn as sns
from osgeo import gdal
from planet import api
from planet.api import filters
from traitlets import link
import rasterio.tools.mask as rio_mask
from shapely.geometry import mapping, shape
from IPython.display import display, Image, HTML
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
urllib3.disable_warnings()
from ipyleaflet import (
    Map,
    Marker,
    TileLayer, ImageOverlay,
    Polyline, Polygon, Rectangle, Circle, CircleMarker,
    GeoJSON,
    DrawControl
)
%matplotlib inline
# will pick up api_key via environment variable PL_API_KEY
# but can be specified using `api_key` named argument
api_keys = json.load(open("apikeys.json",'r'))
client = api.ClientV1(api_key=api_keys["PLANET_API_KEY"])
What is the meaning of the last two lines? What should the file apikeys.json contain?
You should follow this link to get an API Key.
https://support.planet.com/hc/en-us/articles/212318178-What-is-my-API-key-
apikeys.json is a JSON file with the following format/content:
{"PLANET_API_KEY":"<Some API Key here>"}
json.load(...) reads this JSON file and returns its contents as a Python dictionary, so api_keys["PLANET_API_KEY"] looks up the key stored in the file.
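A minimal sketch of what those two lines do, assuming apikeys.json sits next to the script and holds the content shown above:
import json
from planet import api

# Read the JSON file into a dict, then look the key up by name.
with open("apikeys.json", "r") as f:
    api_keys = json.load(f)  # -> {"PLANET_API_KEY": "<Some API Key here>"}

client = api.ClientV1(api_key=api_keys["PLANET_API_KEY"])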
ImportError: cannot import name 'comb'
import pickle
import scipy
from scipy.misc import comb

# Loading the vectorizer and classifier
with open('classifier.pickle', 'rb') as f:
    classifier = pickle.load(f)

with open('tfidfmodel.pickle', 'rb') as f:
    vectorizer = pickle.load(f)
Both files, classifier.pickle and tfid.pickle, are kept in the same folder.
This has nothing to do with the location of the classifier or vectorizer files (note that tfid.pickle does not appear in your code; the file there is tfidfmodel.pickle).
This is an import error: comb was moved to scipy.special. This is easy to find by searching for "scipy.misc.comb", which surfaces "scipy.special.comb" near the top of the results.
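A minimal sketch of the fix; the try/except fallback is an assumption added here to keep the code working on older SciPy versions where comb still lived in scipy.misc:
try:
    from scipy.special import comb  # current SciPy location
except ImportError:
    from scipy.misc import comb     # fallback for older SciPy versions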
import os
import sys
import numpy as np
import scipy
import pylab
import pymorph
import mahotas
import matplotlib.pyplot as plt
import Image
from scipy import ndimage
from pymorph import regmax
from PIL import Image
path = 'all_images'
for file in os.listdir(path):
    current = os.path.join(path, file)
    extension = os.path.splitext(current)[-1]
    fileType = extension.upper()
    print(current)
    if os.path.isfile(current):
        img = mahotas.imread(current)
        imgf = ndimage.gaussian_filter(img, 8)
        pylab.gray()
        imgf.save('dnaa.gif')
I cannot save the file using the above Python code. Error: 'numpy.ndarray' object has no attribute 'save'. Can anyone help with how to save the file? I guess the last line of the code has an issue.
Use mahotas.imsave('dnaa.gif', imgf) instead. The NumPy array you get from gaussian_filter doesn't have save functionality built in.
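A minimal sketch of the corrected loop; the per-file output name is an assumption added here so each image is not overwritten by a fixed 'dnaa.gif':
for file in os.listdir(path):
    current = os.path.join(path, file)
    if os.path.isfile(current):
        img = mahotas.imread(current)
        imgf = ndimage.gaussian_filter(img, 8)
        # a NumPy array has no .save(); mahotas.imsave writes it to disk
        out_name = os.path.splitext(file)[0] + '_filtered.gif'  # hypothetical naming scheme
        mahotas.imsave(out_name, imgf)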