I have a scrolled list in my window into which I am going to insert 2 entries for each row, and I am trying to understand how I can catch the entry that has been changed and update my array with its value.
Let me explain my code:
I have an array whose objects have 2 fields: Name and Description
Each row has 2 entries, Name and Description
When I modify row number 2, I want to update the object in my array:
rows[1].name = XXX, rows[1].description = YYY
You might also want to consider using Gtk.TreeView with editable cells. The underlying Gtk.ListStore could replace your array.
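A minimal sketch of that alternative (assuming GTK 3 via PyGObject; the two columns mirror the Name/Description fields from the question):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# the ListStore replaces the array: one str column per field
store = Gtk.ListStore(str, str)
store.append(["name 1", "description 1"])

view = Gtk.TreeView(model=store)
for col_index, title in enumerate(["Name", "Description"]):
    renderer = Gtk.CellRendererText()
    renderer.set_property("editable", True)

    # "edited" fires when the user confirms an in-place edit
    def on_edited(cell, path, new_text, column=col_index):
        store[path][column] = new_text

    renderer.connect("edited", on_edited)
    view.append_column(Gtk.TreeViewColumn(title, renderer, text=col_index))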
But you can also use your existing entries and pass any data you want as "user data" to the callback for the "changed" signal.
def on_entry_changed(entry, data):
    # data is the (row, column) tuple passed at connect time
    print("Row %d, Column %s - %s" % (data[0], data[1], entry.get_text()))

for i in range(10):
    name = Gtk.Entry()
    name.connect("changed", on_entry_changed, (i, "name"))
    description = Gtk.Entry()
    description.connect("changed", on_entry_changed, (i, "description"))
    # add your entries to a box or whatever
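To update your array instead of printing, the same callback can write straight back to it (a sketch, assuming rows is the list of objects described in the question):

def on_entry_changed(entry, data):
    index, field = data  # e.g. (1, "name")
    setattr(rows[index], field, entry.get_text())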
I have a function that checks whether a barcode is known to the warehouse. If so, the function grabs the matching row from the dataframe (populated from an imported Excel file) and inserts it into a treeview of known items. If the barcode is unknown, it is inserted into a listbox.
The function works and does what it is supposed to do, but I want to expand it: when the same barcode is scanned again, the existing row in the treeview should have its quantity increased by 1 instead of a new row being added. (Screenshot of the known-items treeview omitted.)
# Function to process newly entered barcodes by filtering known and unknown items and adding them to the treeview
def scan_check(event):
    scanned_item = scan_entry.get()
    for code in df.iloc[:, 1]:  # column with barcodes
        if code == scanned_item:
            for row in df.to_numpy().tolist():  # dataframe with item / barcode / item description / size / quantity
                if scanned_item in row:
                    quantity_count = 1
                    row.insert(4, quantity_count)
                    scanTree.insert(parent='', index='end', values=row)
                    for child in scanTree.get_children():
                        # this is the attempt that does not work (see question below)
                        if scanTree.item(child, option='values'[3]) in scanTree.get_children():
                            quantity_count += 1
                            scanTree.set(child, 'Quantity', quantity_count)
            scan_entry.delete(0, tkinter.END)
            break  # to prevent adding item to unknown products listbox as well
    else:
        unknown_listbox.insert(tkinter.END, scanned_item)
        scan_entry.delete(0, tkinter.END)
My question is: how would I write the if clause, after iterating through the children, to check whether the row added from the dataframe is already in my treeview?
My attempts at the if clause obviously did not work. I was hoping anyone could help me with my problem. Thanks for reading.
You can simplify the logic:
Search the treeview first for the barcode; if found, update the quantity.
If not found, search the dataframe. If found there, insert a new record into the treeview; otherwise insert the barcode into the unknown listbox.
def scan_check(event):
    scanned_item = scan_entry.get().strip()
    if scanned_item == '':
        # do nothing if empty string is input
        return
    # search treeview
    for child in scanTree.get_children():
        row = scanTree.set(child)
        if row['Barcode'] == scanned_item:
            # update quantity
            scanTree.set(child, 'Quantity', int(row['Quantity'])+1)
            break  # prevent executing else block
    else:
        # search dataframe
        result = df.loc[df['Barcode'].astype(str) == scanned_item]
        if result.empty:
            # should check whether barcode already exists?
            unknown_listbox.insert('end', scanned_item)
        else:
            scanTree.insert('', 'end', values=result.iloc[0].to_list()+[1])
    scan_entry.delete(0, 'end')
I am attempting to get the results of a stored procedure and populate a model dynamically, or at a minimum, generate a model based on the result.
My intent is to create a reusable function that is agnostic to the data: I will not know the fields being returned, and I wish to take what's returned from the stored procedure, get the field names, and put the data in an object with said field names.
How can I dynamically discover the columns in a result set returned from a stored procedure and then create an object to match?
I was able to figure this out. I got a list of the column names from the returned data, created an object by name, and set the object's properties/attributes by string.
import sys

def callProc(sqlString, clsName):
    cursor = connection.cursor()
    dataResults = []
    try:
        cursor.execute(sqlString)
        # get data results
        results = cursor.fetchall()
        # get column names from the cursor metadata
        columns = [column[0] for column in cursor.description]
        # populate class
        for row in results:
            # look up the class by name and instantiate it for each row
            p = getattr(sys.modules[__name__], clsName)()
            for name, value in zip(columns, row):
                # set property value of matching column name
                setattr(p, name, value)
            dataResults.append(p)
    except Exception as ex:
        print(ex)
    finally:
        cursor.close()
    return dataResults
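A hypothetical usage sketch (the class name CarRow and the procedure call string are assumptions, not part of the original code):

class CarRow:
    # bare container class; attributes are set dynamically by callProc
    pass

rows = callProc("CALL get_car_sales()", "CarRow")
for r in rows:
    # each instance carries one row's columns as attributes
    print(r.__dict__)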
So I have a SQL query run in Python that adds data to a database, and I am wondering whether, when there is a duplicate key, I can just update a couple of fields instead. The data that I am using is around 30 columns, and I am wondering if there is a way to do this.
data = [3, "hello", "this", "is", "random", "data", ..., 44]  # this being 30 items long
car_placeholder = ",".join(['%s'] * len(data))
qry = (f"INSERT INTO car_sales_example VALUES ({car_placeholder}) "
       f"ON DUPLICATE KEY UPDATE Price = {data[15]}, IdNum = {data[29]}")
cursor.execute(qry, data)
conn.commit()
I want to be able to add an entry if the key doesn't exist, but if it does, update some of the columns within the entry, namely Price and IdNum, which are at odd locations in the dataset. Is this even possible?
If not, is there a way to update every column within the database without listing each one explicitly? For example:
qry = (f"INSERT INTO car_sales_example VALUES ({car_placeholder}) "
       f"ON DUPLICATE KEY UPDATE car_sales_example VALUES ({car_placeholder})")
instead of going column by column:
ON DUPLICATE KEY UPDATE Id = %s, Name = %s, Number = %s, etc...  # for 30 columns
In ON DUPLICATE KEY UPDATE you can use the VALUES() function with the name of a column to get the value that would have been inserted into that column.
ON DUPLICATE KEY UPDATE price = VALUES(price), idnum = VALUES(idnum)
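Applied to the snippet from the question, a minimal sketch (reusing the data list from above; Price and IdNum are the column names given in the question):

placeholders = ",".join(["%s"] * len(data))
qry = (f"INSERT INTO car_sales_example VALUES ({placeholders}) "
       "ON DUPLICATE KEY UPDATE Price = VALUES(Price), IdNum = VALUES(IdNum)")
cursor.execute(qry, data)
conn.commit()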
I've got an ESRI point shapefile with (amongst others) a nMSLINK field and a DIAMETER field. The MSLINK is not unique, because of a spatial join. What I want to achieve is to keep only the features in the shapefile that have a unique MSLINK and the smallest DIAMETER value, together with the corresponding values in the other fields. I can use a search cursor to achieve this (looping through all features and removing each feature that does not comply), but this takes ages (> 75,000 features). I was wondering if e.g. numpy could do the trick faster in ArcMap/arcpy.
I think that kind of processing would definitely be a lot faster if you work in memory instead of interacting with ArcGIS for every row. For example, put all the rows into a Python object first (a namedtuple would probably be a good option here). Then you can work out which rows you want to delete or insert.
The fastest approach depends on your data: a) if you have a lot of repeated (MSLINK) rows, the fastest is to insert just the ones you need into a new layer; or b) if the rows to be deleted are just a few compared to the total, then deleting is faster.
For a) you'll need to fetch all fields into the tuple, including the point coordinates, so that you can create a new feature class and insert the new rows.
# Example of variant a:
import arcpy
from collections import namedtuple

# placeholder values (substitute your own):
source_fc = 'source'    # name of the source feature class
the_path = r'C:\data'   # path to the workspace containing the shape
cleaned_fc = 'cleaned'  # name of the cleaned feature class

# use all fields of source_fc plus the shape token to get a tuple with xy
# coordinates (using 'mslink' and 'diam' here to simplify the example)
fields = ['mslink', 'diam', 'field3', ...]
all_fields = fields + ['SHAPE@XY']

# define a namedtuple to hold and work with the rows; use the name 'point' to
# hold the coordinates tuple
Row = namedtuple('Row', fields + ['point'])

data = []
with arcpy.da.SearchCursor(source_fc, all_fields) as sc:
    for r in sc:
        # unpack the values from each row into a new Row (namedtuple) and
        # append it to data
        data.append(Row(*r))

# now just drop the rows we don't want; the easiest way is probably to sort
# the list first by MSLINK and then by the diameter...
data = sorted(data, key=lambda x: (x.mslink, x.diam))

# ... then keep only the first row for each mslink
to_keep = []
last_mslink = None
for d in data:
    if last_mslink != d.mslink:
        last_mslink = d.mslink
        to_keep.append(d)

# create a new feature class with the same fields as the source_fc
arcpy.CreateFeatureclass_management(
    out_path=the_path, out_name=cleaned_fc, template=source_fc)

with arcpy.da.InsertCursor(cleaned_fc, all_fields) as ic:
    for r in to_keep:
        ic.insertRow(r)
And for alternative b) I would fetch just 3 fields: a unique ID, MSLINK, and the diameter. Then build a delete list (here you only need the unique IDs), loop through the feature class again, and delete the rows whose ID is on your delete list. Just to be sure, I would duplicate the feature class first and work on a copy.
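A minimal sketch of variant b (the field names 'mslink' and 'diam' are placeholders as above, 'OID@' is the arcpy token for the object ID, and copy_fc is assumed to be the duplicated feature class):

# first pass: sort by mslink/diameter, then mark everything after the first
# (smallest-diameter) row of each mslink for deletion
with arcpy.da.SearchCursor(copy_fc, ['OID@', 'mslink', 'diam']) as sc:
    rows = sorted(sc, key=lambda r: (r[1], r[2]))

to_delete = set()
last_mslink = None
for oid, mslink, diam in rows:
    if mslink == last_mslink:
        to_delete.add(oid)
    last_mslink = mslink

# second pass: delete the marked rows
with arcpy.da.UpdateCursor(copy_fc, ['OID@']) as uc:
    for (oid,) in uc:
        if oid in to_delete:
            uc.deleteRow()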
There are a few steps you can take to accomplish this task more efficiently. First and foremost, using the data access (arcpy.da) cursors as opposed to the older cursors will increase the speed of your process. This assumes you are working in 10.1 or beyond. Then you can employ the Summary Statistics tool, namely its ability to find a minimum value based on a case field. For you, the case field would be nMSLINK.
The code below first creates a statistics table with all unique 'nMSLINK' values and their corresponding minimum 'DIAMETER' values. I then use a table select to pull out only the rows whose 'FREQUENCY' field is not 1. From there I iterate through the new table and build a list of strings that will make up the final SQL statement. After this iteration, I use Python's join function to create an SQL string that looks something like this:
("nMSLINK" = 'value1' AND "DIAMETER" <> 624.0) OR ("nMSLINK" = 'value2' AND "DIAMETER" <> 1302.0) OR ("nMSLINK" = 'value3' AND "DIAMETER" <> 1036.0) ...
The sql selects rows where nMSLINK values are not unique and where DIAMETER values are not the minimum. Using this SQL, I select by attribute and delete selected rows.
This SQL statement is written assuming your feature class is in a file geodatabase and that 'nMSLINK' is a string field and 'DIAMETER' is a numeric field.
The code has the following inputs:
Feature: the feature class to be analyzed
Workspace: a folder that will temporarily store a couple of intermediate tables
TempTableName1: a name for the first temporary table
TempTableName2: a name for the second temporary table
Field1: the non-unique field
Field2: the field with the numeric values whose minimum you wish to find
Code:
# Import modules
from arcpy import *
import os

# Local variables
# Feature class to analyze
Feature = r"C:\E1B8\ScriptTesting\Workspace\Workspace.gdb\testfeatureclass"
# Workspace to export table of identicals
Workspace = r"C:\E1B8\ScriptTesting\Workspace"
# Names of temp DBF table files
TempTableName1 = "Table1"
TempTableName2 = "Table2"
# Field names
Field1 = "nMSLINK"   # non-unique field
Field2 = "DIAMETER"  # field with numeric values

# Make layer to allow selection
MakeFeatureLayer_management(Feature, "lyr")

# Path for first temp table
Table = os.path.join(Workspace, TempTableName1)

# Create statistics table with min value per unique Field1 value
Statistics_analysis(Feature, Table, [[Field2, "MIN"]], [Field1])

# SQL to select rows with frequency not equal to one
sql = '"FREQUENCY" <> 1'

# Path for second temp table
Table2 = os.path.join(Workspace, TempTableName2)

# Select rows with frequency not equal to one
TableSelect_analysis(Table, Table2, sql)

# Empty list for sql bits
li = []

# Iterate through second table
cursor = da.SearchCursor(Table2, [Field1, "MIN_" + Field2])
for row in cursor:
    # Add SQL bit to list
    sqlbit = '("' + Field1 + '" = \'' + row[0] + '\' AND "' + Field2 + '" <> ' + str(row[1]) + ")"
    li.append(sqlbit)
del row
del cursor

# Create SQL for selection of unwanted features
sql = " OR ".join(li)
print(sql)

# Select based on SQL
SelectLayerByAttribute_management("lyr", "NEW_SELECTION", sql)

# Delete selected features
DeleteFeatures_management("lyr")

# Delete temp files
Delete_management("lyr")
Delete_management(Table)
Delete_management(Table2)
This should be quicker than a straight-up cursor. Let me know if this makes sense. Good luck!
I am trying to create a new unique ID field in an Access table. I already have one field called SITE_ID_FD, but it is historical. The format of the unique values in that field isn't our current format, so I am creating a new field with the new format.
Old Format = M001, M002, K003, K004, S005, M006, etc
New format = 12001, 12002, 12003, 12004, 12005, 12006, etc
I wrote the following script:
import arcpy

fc = r"Z:\test.gdb\testfc"
x = 12001

cursor = arcpy.UpdateCursor(fc)
for row in cursor:
    row.setValue("SITE_ID", x)
    cursor.updateRow(row)
    x += 1
This works fine, but it populates the new ID field based on the default ObjectID order. I need to sort by 2 fields first and then populate the new ID field based on that order (I want to sort by a field called SITE and then by the old ID field SITE_ID_FD).
I tried manually sorting the 2 fields in hopes that Python would honor the sort, but it doesn't. I'm not sure how to do this in Python. Can anyone suggest a method?
A possible solution is available when you are creating your update cursor: you can specify the fields by which you wish the cursor to be sorted. This is explained in the documentation: http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#//000v0000003m000000
so it goes like this:
UpdateCursor(dataset, {where_clause}, {spatial_reference}, {fields}, {sort_fields})
You are interested only in sort_fields, so assuming that your code works well on a sorted table and that you want the table ordered ascending, the second part of your code should look like this:
fc = r"Z:\test.gdb\testfc"
x = 12001

cursor = arcpy.UpdateCursor(fc, "", "", "", "SITE A; SITE_ID_FD A")
# if you want to sort descending, write it with a D instead:
# cursor = arcpy.UpdateCursor(fc, "", "", "", "SITE D; SITE_ID_FD D")
for row in cursor:
    row.setValue("SITE_ID", x)
    cursor.updateRow(row)
    x += 1
I hope this helps.
Added a link to the arcpy docs in a comment, but from what I can tell, this will create a new, sorted dataset:
import arcpy
from arcpy import env

env.workspace = r"z:\test.gdb"
arcpy.Sort_management("testfc", "testfc_sort",
                      [["SITE", "ASCENDING"], ["SITE_ID_FD", "ASCENDING"]])
And this will, on the sorted dataset, do what you want:
fc = r"Z:\test.gdb\testfc_sort"
x = 12001

cursor = arcpy.UpdateCursor(fc)
for row in cursor:
    row.setValue("SITE_ID", x)
    cursor.updateRow(row)
    x += 1
I'm assuming there's some way to just copy the sorted/modified dataset back over the original, so it's all good?
I'll admit, I don't use arcpy, and the docs could be a lot more explicit.
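For the copy-back step, one possibility (an untested sketch; back up the data first) would be to delete the original and rename the sorted copy in its place:

import arcpy

arcpy.Delete_management(r"Z:\test.gdb\testfc")
arcpy.Rename_management(r"Z:\test.gdb\testfc_sort", r"Z:\test.gdb\testfc")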