FFT Runtime Error when running GalSim - python

I keep receiving the following error when running a script to save an animation:
RuntimeError: SB Error: fourierDraw() requires an FFT that is too large, 6144
If you can handle the large FFT, you may update gsparams.maximum_fft_size.
So I went into /Galsim/include/galsim/GSparams.h
and changed maximum_fft_size(4096) to maximum_fft_size(16384),
i.e. from 2^12 to 2^14.
I still get the same error as before. Should I restart my machine or something?

That is not where to change the maximum_fft_size parameter. Editing the C++ header only takes effect if you recompile and reinstall GalSim, which is why the error persists; restarting your machine will not help. See demo7 for an example of how to use the GSParams object and update parameters. There is also an example in the docstring for GSObject:
>>> gal = galsim.Sersic(n=4, half_light_radius=4.3)
>>> psf = galsim.Moffat(beta=3, fwhm=2.85)
>>> conv = galsim.Convolve([gal,psf])
>>> im = galsim.Image(1000,1000, scale=0.05) # Note the very small pixel scale!
>>> im = conv.drawImage(image=im) # This uses the default GSParams.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "galsim/base.py", line 1236, in drawImage
    image.added_flux = prof.SBProfile.draw(imview.image, gain, wmult)
RuntimeError: SB Error: fourierDraw() requires an FFT that is too large, 6144
If you can handle the large FFT, you may update gsparams.maximum_fft_size.
>>> big_fft_params = galsim.GSParams(maximum_fft_size=10240)
>>> conv = galsim.Convolve([gal,psf],gsparams=big_fft_params)
>>> im = conv.drawImage(image=im) # Now it works (but is slow!)
>>> im.write('high_res_sersic.fits')
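For completeness, a minimal sketch of an alternative placement (assuming the GalSim Python API, where GSObject constructors also accept a gsparams argument): the relaxed FFT limit can be attached to the component profiles at construction time rather than at the Convolve step.
>>> big_fft_params = galsim.GSParams(maximum_fft_size=10240)
>>> gal = galsim.Sersic(n=4, half_light_radius=4.3, gsparams=big_fft_params)
>>> psf = galsim.Moffat(beta=3, fwhm=2.85, gsparams=big_fft_params)
>>> conv = galsim.Convolve([gal, psf])  # picks up the params from its components
>>> im = conv.drawImage(image=galsim.Image(1000, 1000, scale=0.05))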

Too many values to unpack? python-radar

I have a problem: when I run the code, it shows this error:
Traceback (most recent call last):
  File "C:\Users\server\PycharmProjects\Publictest2\main.py", line 19, in <module>
    Distance = radar.route.distance(Starts, End, modes='transit')
  File "C:\Users\server\PycharmProjects\Publictest2\venv\lib\site-packages\radar\endpoints.py", line 612, in distance
    (origin_lat, origin_lng) = origin
ValueError: too many values to unpack (expected 2)
My Code:
from radar import RadarClient
import pandas as pd
API_key = 'API'
radar = RadarClient(API_key)
file = pd.read_excel('files')
file['AntGeo'] = Sourced[['Ant_lat', 'Ant_long']].apply(','.join, axis=1)
file['BaseGeo'] = Sourced[['Base_lat', 'Base_long']].apply(','.join, axis=1)
antpoint = file['AntGeo']
basepoint = file['BaseGeo']
for antpoint in antpoint:
    dist= radar.route.distance(antpoint , basepoint, modes='transit')
    dist= dist['routes'][0]['distance']
    dist= dist / 1000
Firstly, your traceback does not match the code sample you posted.
It is apparent you are working with the Python library for the Radar API.
Your corresponding line 19 is dist= radar.route.distance(antpoint , basepoint, modes='transit')
From the radar-python PyPI manual, your route should be called as:
## Routing
radar.route.distance(origin=[lat,lng], destination=[lat,lng], modes='car', units='metric')
Without sight of your dataset, file, one can nonetheless deduce the following:
your antpoint and basepoint must each be a two-item list (or tuple).
For instance, your antpoint ought to be a coordinate pair like [40.7041029, -73.98706].
See the radar-python manual.
Lines 11 and 13 in your code,
file['AntGeo'] = Sourced[['Ant_lat', 'Ant_long']].apply(','.join, axis=1)
file['BaseGeo'] = Sourced[['Base_lat', 'Base_long']].apply(','.join, axis=1)
join each coordinate pair into a single comma-separated string, not the two-item list the API expects.
Your error is occurring at this part:
Distance = radar.route.distance(Starts, End, modes='transit')
(origin_lat, origin_lng) = origin
First of all, check how many values origin actually delivers: the unpacking expects exactly two, but iterating over a comma-joined string yields one value per character, which is where the mismatch comes from.
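A minimal sketch of the fix, assuming the column names and file path from the question, the call signature quoted from the radar-python manual, and the ['routes'][0]['distance'] response structure taken from the question's own code:
from radar import RadarClient
import pandas as pd

radar = RadarClient('API')
file = pd.read_excel('files')

distances_km = []
for _, row in file.iterrows():
    ant = [row['Ant_lat'], row['Ant_long']]      # two-item [lat, lng] list
    base = [row['Base_lat'], row['Base_long']]
    dist = radar.route.distance(origin=ant, destination=base, modes='transit')
    distances_km.append(dist['routes'][0]['distance'] / 1000)  # metres -> km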

TorchServe: How to convert bytes output to tensors

I have a model that is served using TorchServe, and I'm communicating with the TorchServe server using gRPC. The final postprocess method of the custom handler returns a list, which is converted into bytes for transfer over the network.
The postprocess method:
def postprocess(self, data):
    # data type - torch.Tensor
    # data shape - [1, 17, 80, 64] and data dtype - torch.float32
    return data.tolist()
The main issue is at the client, where converting the received bytes from TorchServe to a torch.Tensor is done inefficiently via ast.literal_eval:
# This takes 0.3 seconds
response = self.inference_stub.Predictions(
    inference_pb2.PredictionsRequest(model_name=model_name, input=input_data))
# This takes 0.84 seconds
predictions = torch.as_tensor(literal_eval(
    response.prediction.decode('utf-8')))
Using numpy.frombuffer or torch.frombuffer returns the following error (the payload is the UTF-8 bytes of the stringified list, not raw float data, so its length is not a multiple of 4):
import numpy as np
np.frombuffer(response.prediction)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: buffer size must be a multiple of element size
np.frombuffer(response.prediction, dtype=np.float32)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: buffer size must be a multiple of element size
Using torch
import torch
torch.frombuffer(response.prediction, dtype=torch.float32)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: buffer length (2601542 bytes) after offset (0 bytes) must be a multiple of element size (4)
Is there an alternative, more efficient way of converting the received bytes into a torch.Tensor?
One hack I've found that significantly increases performance when sending large tensors is to return the data as JSON.
In your handler's postprocess function:
def postprocess(self, data):
    output_data = {}
    output_data['data'] = data.tolist()
    return [output_data]
On the client side, when you receive the gRPC response, decode it using json.loads:
response = self.inference_stub.Predictions(
    inference_pb2.PredictionsRequest(model_name=model_name, input=input_data))
decoded_output = response.prediction.decode('utf-8')
preds = torch.as_tensor(json.loads(decoded_output))
preds should now hold the output tensor.
Update:
There's an even faster method that should completely remove the bottleneck: use tf.io.serialize_tensor from TensorFlow to serialize the tensor inside postprocess.
def postprocess(self, data):
    return [tf.io.serialize_tensor(data.cpu()).numpy()]
Decode it on the client using tf.io.parse_tensor:
response = self.inference_stub.Predictions(
    inference_pb2.PredictionsRequest(model_name=model_name, input=input_data))
prediction = response.prediction
torch.as_tensor(tf.io.parse_tensor(prediction, out_type=tf.float32).numpy())
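A TensorFlow-free alternative sketch, assuming you control both sides and can hard-code the dtype and shape ([1, 17, 80, 64], float32, from the question): sending the raw buffer avoids both the string round-trip and the extra dependency.
import numpy as np
import torch

# Handler side: send the raw float32 bytes instead of a stringified list
def postprocess(self, data):
    return [data.cpu().numpy().astype(np.float32).tobytes()]

# Client side: rebuild the tensor from the raw buffer (shape is assumed known)
arr = np.frombuffer(response.prediction, dtype=np.float32).reshape(1, 17, 80, 64)
preds = torch.from_numpy(arr.copy())  # copy() gives a writable array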

Uber Ludwig: Issue Making Predictions

I decided to mess with Uber Ludwig again. I wanted to make a simple demo, using the Python API, that learns to add 1 to the input number. I have successfully produced a model, but the issue arises when predicting. I am running the newest release from GitHub on Pop!_OS 19.10 with CPU TensorFlow.
Thank you for any help.
Edit: I have reproduced the issue on Windows as well.
The error is as follows:
Traceback (most recent call last):
  File "predict.py", line 3, in <module>
    x = model.predict({"numberIn":[1]}, return_type='dict')
  File "/home/user/.local/lib/python3.7/site-packages/ludwig/api.py", line 914, in predict
    gpu_fraction=gpu_fraction,
  File "/home/user/.local/lib/python3.7/site-packages/ludwig/api.py", line 772, in _predict
    self.model_definition['preprocessing']
  File "/home/user/.local/lib/python3.7/site-packages/ludwig/data/preprocessing.py", line 159, in build_data
    preprocessing_parameters
  File "/home/user/.local/lib/python3.7/site-packages/ludwig/data/preprocessing.py", line 180, in handle_missing_values
    dataset_df[feature['name']] = dataset_df[feature['name']].fillna(
AttributeError: 'list' object has no attribute 'fillna'
Here is my prediction script
from ludwig.api import LudwigModel
model = LudwigModel.load("/home/user/Documents/ludwig-test/plus1/results/api_experiment_run_0/model")
x = model.predict({"numberIn":[1]}, return_type='dict')
#x = model.predict({"numberIn":[1]}, return_type=<class 'dict'>) I tried this with no success
print(x)
Here are the contents of my training script.
mydata = {"numberIn":[], "value":[]}
for x in range(10000):
    mydata["numberIn"].append(x)
    mydata["value"].append(x + 1)
from ludwig.api import LudwigModel
print("Imported Ludwig")
modelobject = LudwigModel(model_definition_file="modeldef.yaml")
stats = modelobject.train(data_dict=mydata)
modelobject.close()
modeldef.yaml:
input_features:
    -
        name: numberIn
        type: numerical
output_features:
    -
        name: value
        type: numerical
Solution: the input argument of the predict function is not positional; data_dict needs to be specified as a keyword in this case.
x = modelobject.predict(data_dict=mydictionary)
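A corrected prediction script sketch: the first positional parameter of predict in this Ludwig release is a DataFrame argument, so a plain dict passed positionally falls through to the list/fillna failure above; passing it as data_dict avoids that. The paths are the ones from the question:
from ludwig.api import LudwigModel

# load the trained model and predict with the dict passed by keyword
model = LudwigModel.load("/home/user/Documents/ludwig-test/plus1/results/api_experiment_run_0/model")
x = model.predict(data_dict={"numberIn": [1]}, return_type='dict')
print(x)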

I need to FFT wav files?

I saw this question and answer about using fft on wav files and tried to implement it like this:
import matplotlib.pyplot as plt
from scipy.io import wavfile # get the api
from scipy.fftpack import fft
from pylab import *
import sys

def f(filename):
    fs, data = wavfile.read(filename) # load the data
    a = data.T[0] # this is a two channel soundtrack, I get the first track
    b=[(ele/2**8.)*2-1 for ele in a] # this is 8-bit track, b is now normalized on [-1,1)
    c = fft(b) # create a list of complex number
    d = len(c)/2 # you only need half of the fft list
    plt.plot(abs(c[:(d-1)]),'r')
    savefig(filename+'.png',bbox_inches='tight')

files = sys.argv[1:]
for ele in files:
    f(ele)
quit()
But whenever I call it:
$ python fft.py 0.0/4515-11057-0058.flac.wav-16000.wav
I get the error:
Traceback (most recent call last):
  File "fft.py", line 18, in <module>
    f(ele)
  File "fft.py", line 10, in f
    b=[(ele/2**8.)*2-1 for ele in a] # this is 8-bit track, b is now normalized on [-1,1)
TypeError: 'numpy.int16' object is not iterable
How can I create a script that generates frequency distributions for each file in the list of arguments?
Your error message states that you are trying to iterate over an integer (a). When you define a via
a = data.T[0]
you grab the first value of data.T. Since your data files are single channel, data.T is a 1-D array of samples, so data.T[0] is a single integer rather than a channel. Changing this to
a = data.T
will fix your problem.
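For reference, a full mono-safe sketch: the int16 normalization is an assumption based on the numpy.int16 in the traceback, and integer division is used in the slice so it also runs on Python 3.
import sys
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.fftpack import fft

def f(filename):
    fs, data = wavfile.read(filename)          # load the data
    a = data if data.ndim == 1 else data.T[0]  # mono, or first channel of stereo
    b = a / 2.0**15                            # int16 samples -> [-1, 1)
    c = fft(b)                                 # complex spectrum
    d = len(c) // 2                            # only half of the fft is needed
    plt.plot(np.abs(c[:d]), 'r')
    plt.savefig(filename + '.png', bbox_inches='tight')
    plt.close()

for ele in sys.argv[1:]:
    f(ele)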

Memory error while slicing an array

I have an object d connected to an HDF5 dataset:
>>> data = d[:, :, 0].astype(np.float32)
>>> data.shape
(17201, 10801)
>>> data[data==-32768] = data[data>0].min()
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
MemoryError
Can I do some other slicing trick to avoid this error?
OK, I'm writing the answer myself, as there is an acceptable solution, reached after @mgilson questioned the data type.
If the data allows it, the memory error can be avoided by using a smaller data type while operating on the array, since the temporary copy created by data[data>0] scales with the element size. Considering the initial question, this worked for me:
>>> data = d[:, :, 0].astype(np.short)
>>> data[data==-32768] = data[data>0].min()
>>> data = data.astype(np.float32)
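A back-of-the-envelope check of why the dtype matters here (the sizes are exact for the shape in the question; whether they fit depends on your machine):
import numpy as np

n = 17201 * 10801                                 # ~186 million elements
print(n * np.dtype(np.float32).itemsize / 2**20)  # ~708 MiB as float32
print(n * np.dtype(np.int16).itemsize / 2**20)    # ~354 MiB as int16
# data[data>0] additionally materializes a copy of the positive elements,
# so peak usage with float32 can approach twice the array size.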
