I would like to convert a dataframe pulled from an Oracle DB, including a 'SHAPE' column that holds 'oracle.sql.STRUCT#' information, into something more accessible (GeoJSON, shapefile, or dataframe) using Python, R, or SQL.
Any ideas?
Create your frame with a query that uses one of the SDO_UTIL functions to convert the shape (SDO_GEOMETRY type) into a type easily consumed by Python/R, i.e. WKB, WKT, or GeoJSON; for example, SDO_UTIL.TO_WKTGEOMETRY(shape). See info on the conversion functions here: https://docs.oracle.com/en/database/oracle/oracle-database/19/spatl/SDO_UTIL-reference.html
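A minimal sketch of how that might look from Python, assuming the python-oracledb and shapely packages; the connection details and the table/column names are placeholders:

import oracledb            # the maintained successor to cx_Oracle
from shapely import wkt    # parses WKT strings into geometry objects

conn = oracledb.connect(user="user", password="pw", dsn="host/service")
cur = conn.cursor()
# TO_WKTGEOMETRY returns a CLOB, so read() each LOB before parsing
cur.execute("SELECT id, SDO_UTIL.TO_WKTGEOMETRY(shape) FROM my_spatial_table")
geoms = [(row_id, wkt.loads(lob.read())) for row_id, lob in cur]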
I'm building a GUI with GTK-3 in Python using Glade. There is also a ListStore with its columns defined in the Glade file. Now I want to fill the ListStore with numeric values (there are int & float columns) that I receive as strings, so I need to convert them according to the column types.
I would like to use the column types as defined in the ListStore object for the conversion instead of a hardcoded list to maintain the column types only once (preferably in the Glade file) and to keep this part modular for other similar ListStores that will be added later. Unfortunately, the column types I get out of the ListStore by calling TreeModel.get_column_type(index_) are GObject.GTypes and I've no idea how to create an appropriate value using the GType or get the related Python type from it.
I have already tried playing around with a gint type object and the GObject.GType class in the interactive Python shell and read through the online documentation, but without getting any closer to a solution. Creating a GObject.Value of type gint from a string like "123", or using transform() to turn a gchararray value into a gint one, didn't work either.
How can I convert my strings into the proper numeric type based on a GType? Is what I want to do actually achievable, or do I need a different approach? I could use a static mapping of GTypes to Python types instead (a sketch of that fallback follows below), but if I'm just missing something about the GType system I would like to understand it.
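For concreteness, the static-mapping fallback mentioned above might look something like this; it is a minimal sketch keyed on GType names, and the helper name is made up:

# Hypothetical helper: map GType names (as returned by PyGObject) to
# Python constructors; extend the table as column types are added.
_CONVERTERS = {
    "gint": int,
    "guint": int,
    "gfloat": float,
    "gdouble": float,
    "gchararray": str,
}

def convert_for_column(model, index, text):
    # TreeModel.get_column_type() returns a GObject.GType; its .name
    # is the type's registered name, e.g. "gint"
    type_name = model.get_column_type(index).name
    return _CONVERTERS.get(type_name, str)(text)

A row of strings could then be appended with liststore.append([convert_for_column(liststore, i, s) for i, s in enumerate(strings)]).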
I'm calling a Python function using the UiPath Python Activities Pack (Get Python Object) and it returns a DataFrame so that I can use it within UiPath. Unfortunately, UiPath is not able to convert the DataFrame to a .NET data type like a DataTable.
Even when I try to convert the DataFrame to another format (string, numpy array, HTML, etc.) it does not work, although the documentation explicitly states that all data types are supported. The Python script does its work and stores the content of the DataFrame in an Excel file, so I could, of course, just read the Excel file. I was wondering whether there is a way to pass the data directly to UiPath instead of saving it first and reading it again.
Actually, I spent quite some time on this but finally figured out how to pass the pandas DataFrame to UiPath and make it available there as a DataTable. Here is how I did it:
Python Script:
I let the Python function that I call in the UiPath 'Invoke Python Method' activity return the pandas DataFrame as a JSON string, i.e.
return df.to_json(orient='records')
Get Python Object:
Save the JSON string in a variable of type string
Deserialize JSON:
Choose 'System.Data.DataTable' as TypeArgument and store the result in a variable of type dataTable
Now the data from the pandas DataFrame is available as a .NET DataTable in UiPath.
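Putting the Python side together, a minimal sketch of such a function might look like this (the function name and the DataFrame contents are illustrative):

import pandas as pd

def get_table():
    df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})
    # orient='records' serialises to [{"col1": 1, "col2": "a"}, ...],
    # which Deserialize JSON can map onto a System.Data.DataTable
    return df.to_json(orient="records")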
I'm new to python and even newer to SQL and have just run into the following problem:
I want to insert a list (or actually, a list containing one or more dictionaries) into a single cell in my SQL database. This is one row of my data:
[a,b,c,[{key1: int, key2: int},{key1: int, key2: int}]]
As the number of dictionaries inside the lists varies, and I want to iterate through the elements of the list later on, I thought it would make sense to keep it in one place (thus not splitting the list into its individual elements). However, when trying to insert the list as it is, I get the following error:
sqlite3.InterfaceError: Error binding parameter 2 - probably unsupported type.
How can this kind of list be inserted into a single cell of my SQL database?
SQLite has no facility for a 'nested' column; you'd have to store your list as text or a binary data blob: serialise it on the way in, deserialise it again on the way out.
How you serialise to text or binary data depends on your use cases. JSON (via the json module) could be suitable if your lists and dictionaries contain only text, numbers, booleans and None (with the dictionaries using only strings as keys). JSON is supported by a wide range of other languages, so your data stays reasonably portable. Alternatively, you could use pickle, which serialises to a binary format and can handle just about anything Python can throw at it, but it is specific to Python.
You can then register an adapter to handle converting between the serialisation format and Python lists:
import json
import sqlite3

def adapt_list_to_JSON(lst):
    return json.dumps(lst).encode('utf8')

def convert_JSON_to_list(data):
    return json.loads(data.decode('utf8'))

sqlite3.register_adapter(list, adapt_list_to_JSON)
sqlite3.register_converter("json", convert_JSON_to_list)
Then connect with detect_types=sqlite3.PARSE_DECLTYPES and declare your column type as json, or use detect_types=sqlite3.PARSE_COLNAMES and put [json] in a column alias (SELECT datacol AS "datacol [json]" FROM ...) to trigger the conversion on loading.
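A minimal usage sketch of the declared-type route, using a throwaway in-memory database (the table and column names are made up):

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE rows (a TEXT, b TEXT, c TEXT, extras json)")
conn.execute("INSERT INTO rows VALUES (?, ?, ?, ?)",
             ("a", "b", "c", [{"key1": 1, "key2": 2}, {"key1": 3, "key2": 4}]))
extras = conn.execute("SELECT extras FROM rows").fetchone()[0]
print(extras[0]["key1"])   # 1 -- the value comes back as a real Python list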
How can I store python 'list' values into MySQL and access it later from the same database like a normal list?
I tried storing the list as a VARCHAR and it did store it. However, when accessing the data from MySQL, the stored value behaves as a string rather than a list, so accessing the list by index was no longer possible. Would it perhaps be easier to store the data as MySQL's SET datatype? I see MySQL has a SET type, but I'm unable to use it from Python: when I try to store a Python set in MySQL, it throws the following error: 'MySQLConverter' object has no attribute '_set_to_mysql'. Any help is appreciated.
P.S. I have to store the coordinates of an image along with the image number, so it will be in the form [1,157,421]
Use a serialization library like json:
import json

l1 = [1, 157, 421]
s = json.dumps(l1)    # '[1, 157, 421]' -- store this string in a TEXT/VARCHAR column
l2 = json.loads(s)    # parse the string back into a real list after fetching
l2[1]                 # indexing works again: 157
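A hedged round-trip sketch with mysql-connector-python; the table name, columns and credentials are placeholders:

import json
import mysql.connector   # pip install mysql-connector-python

conn = mysql.connector.connect(user="user", password="pw", database="mydb")
cur = conn.cursor()
cur.execute("INSERT INTO images (image_no, coords) VALUES (%s, %s)",
            (1, json.dumps([1, 157, 421])))
conn.commit()
cur.execute("SELECT coords FROM images WHERE image_no = %s", (1,))
coords = json.loads(cur.fetchone()[0])   # back to a list: [1, 157, 421]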
Are you using an ORM like SQLAlchemy?
Anyway, to answer your question directly: you can use json or pickle to convert your list to a string and store that. To retrieve it, parse the string (as JSON or a pickle) and you have the list back.
However, if your list is always a 3 point coordinate, I'd recommend making separate x, y, and z columns in your table. You could easily write functions to store a list in the correct columns and convert the columns to a list, if you need that.
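A minimal sketch of that separate-columns idea (the helper names and the data layout are made up for illustration):

def point_to_row(image_no, point):
    x, y, z = point              # unpack the 3-element coordinate list
    return (image_no, x, y, z)   # one value per column

def row_to_point(row):
    return list(row[1:4])        # rebuild the list from the x, y, z columns

row = point_to_row(1, [1, 157, 421])   # -> (1, 1, 157, 421)
coords = row_to_point(row)             # -> [1, 157, 421]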
I have a scientific model which I am running in Python which produces a lookup table as output. That is, it produces a many-dimensional 'table' where each dimension is a parameter in the model and the value in each cell is the output of the model.
My question is how best to store this lookup table in Python. I am running the model in a loop over every possible parameter combination (using the fantastic itertools.product function), but I can't work out how best to store the outputs.
It would seem sensible to simply store the output as an ndarray, but I'd really like to be able to access the outputs based on the parameter values, not just the indices. For example, rather than accessing the values as table[16][5][17][14], I'd prefer to access them somehow using variable names/values, for example:
table[solar_z=45, solar_a=170, type=17, reflectance=0.37]
or something similar to that. It'd be brilliant if I were able to iterate over the values and get their parameter values back - that is, being able to find out that table[16]... corresponds to the outputs for solar_z = 45.
Is there a sensible way to do this in Python?
Why don't you use a database? I have found MongoDB (and the official Python driver, Pymongo) to be a wonderful tool for scientific computing. Here are some advantages:
Easy to install - simply download the executables for your platform (2 minutes tops, seriously).
Schema-less data model
Blazing fast
Provides map/reduce functionality
Very good querying functionalities
So, you could store each model run as a MongoDB document, for example:
{"_id":"run_unique_identifier",
"param1":"val1",
"param2":"val2" # etcetera
}
Then you could query the entries as you will:
import pymongo

client = pymongo.MongoClient("localhost", 27017)   # MongoClient superseded the old Connection class
data = client["mydb"]["mycollection"]
for entry in data.find():      # find() with no filter yields every document
    print(entry["param1"])     # do something with param1
Whether or not MongoDB/pymongo are the answer to your specific question, I don't know. However, you could really benefit from checking them out if you are into data-intensive scientific computing.
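To tie this back to the question, here is a hedged sketch (reusing data from the snippet above) of storing one model run per document and looking it up by parameter values; the field names mirror the question and the numbers are purely illustrative:

run = {"solar_z": 45, "solar_a": 170, "type": 17, "reflectance": 0.37,
       "output": 0.123}
data.insert_one(run)
match = data.find_one({"solar_z": 45, "solar_a": 170,
                       "type": 17, "reflectance": 0.37})
print(match["output"])   # 0.123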
If you want to access the results by name, you could use a Python nested dictionary instead of an ndarray and serialize it to a .json text file using the json module.
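A minimal sketch of that idea with made-up parameter values; note that json turns numeric keys into strings on the round trip:

import json

# table[solar_z][solar_a] -> model output (two levels shown for brevity)
table = {45: {170: 0.123, 270: 0.456}}
print(table[45][170])          # 0.123

with open("table.json", "w") as f:
    json.dump(table, f)
with open("table.json") as f:
    loaded = json.load(f)
print(loaded["45"]["170"])     # keys come back as strings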
One option is to use a numpy ndarray for the data (as you do now), and write a parser function to convert the query values into row/column indices.
For example:
solar_z_dict = {...}
solar_a_dict = {...}
...

def lookup(dataArray, solar_z, solar_a, type, reflectance):
    return dataArray[solar_z_dict[solar_z], solar_a_dict[solar_a], ...]
You could also convert the query to a string and eval it, if you want some of the fields to be given as "None" and translated to ":" (to give the full table along that variable's axis).
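A concrete, self-contained version of the index-mapping idea, with made-up parameter grids:

import numpy as np

solar_z_vals = [0, 15, 30, 45]
solar_a_vals = [0, 90, 170, 270]
solar_z_dict = {v: i for i, v in enumerate(solar_z_vals)}   # value -> axis index
solar_a_dict = {v: i for i, v in enumerate(solar_a_vals)}

data = np.arange(16.0).reshape(len(solar_z_vals), len(solar_a_vals))

def lookup(arr, solar_z, solar_a):
    return arr[solar_z_dict[solar_z], solar_a_dict[solar_a]]

print(lookup(data, 45, 170))   # row 3, column 2 -> 14.0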
For example, rather than accessing the values as table[16][5][17][14]
I'd prefer to access them somehow using variable names/values
That's what numpy's dtypes are for:
import numpy as np
from sys import argv

dt = [('L', 'float64'), ('T', 'float64'), ('NMSF', 'float64'), ('err', 'float64')]
data = np.loadtxt(argv[1], dtype=dt)   # plb.loadtxt in the original; numpy's loadtxt is the same function
Now you can access the data elements by field name, e.g. data['T'], data['L'], data['NMSF'].
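A self-contained sketch of that named access, using inline data instead of a file:

import numpy as np

dt = [('L', 'float64'), ('T', 'float64'), ('NMSF', 'float64'), ('err', 'float64')]
data = np.array([(1.0, 2.0, 3.0, 0.1),
                 (4.0, 5.0, 6.0, 0.2)], dtype=dt)
print(data['T'])         # [2. 5.] -- the whole T column
print(data[0]['NMSF'])   # 3.0 -- one field of the first row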
More info on dtypes:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html