Iterating through attribute tables (ArcGIS 10.1, Python)

I'm hoping to duplicate my techniques for looping through tables in R using Python in the ArcGIS/arcpy framework. Specifically, is there a practical way to loop through the rows of an attribute table with Python and copy rows based on a comparison with values from previous rows?
For example, using R I would use code similar to the following to copy rows of data from one table that have unique values for a specific variable:
## table name: data
## variable of interest: variable
## new table: new.data
for (i in 2:nrow(data)) {
  if (data$variable[i] != data$variable[i - 1]) {
    new.data <- rbind(new.data, data[i, ])
  }
}
If I've written the above code correctly then, in words, this for-loop starts at the second row, checks whether the current value differs from the previous row's value, and appends all column values for that row to the new table when it does. Any help with this thought process would be great.
Thanks!

To just get the unique values of a field in a table in arcpy:
import arcpy
table = "mytable"
field = "my_field"
# ArcGIS 10.0
unique_values = set(row.getValue(field) for row in iter(arcpy.SearchCursor(table).next, None))
# ArcGIS 10.1+
unique_values = {row[0] for row in arcpy.da.SearchCursor(table, field)}
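The same pattern extends to unique combinations of several fields, since da.SearchCursor returns each row as a tuple; a short sketch, with hypothetical field names:
import arcpy

table = "mytable"
# A set of tuples gives the unique (field_a, field_b) combinations
unique_pairs = {(row[0], row[1]) for row in arcpy.da.SearchCursor(table, ["field_a", "field_b"])}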

Yes, to loop through values in a table using arcpy you want to use a cursor. It's been a while since I've used arcpy, but if I recall correctly the one you want is a search cursor. In its simplest form, this is what it would look like:
import arcpy

curObj = arcpy.SearchCursor(r"C:/shape.shp")
row = curObj.next()
while row:
    columnValue = row.getValue("columnName")
    row = curObj.next()
As of version 10.1, they introduced data access (da) cursors, which are orders of magnitude faster. Data access cursors require you to declare which columns you want returned when you create the cursor. Example:
import arcpy

columns = ['column1', 'something', 'someothercolumn']
curObj = arcpy.da.SearchCursor(r"C:/somefile.shp", columns)
for row in curObj:
    print 'column1 is', row[0]
    print 'someothercolumn is', row[2]
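To mirror the R loop from the question (copy a row into a new table whenever its value differs from the previous row's), you could pair a search cursor with an insert cursor. A minimal sketch, assuming a hypothetical input table, an output table that already exists with matching fields, and the field of interest listed first:
import arcpy

in_table = r"C:/data/source.shp"    # hypothetical input
out_table = r"C:/data/output.shp"   # hypothetical output, pre-created with matching fields
fields = ["variable", "col1", "col2"]

previous = object()  # sentinel that never equals a real field value
with arcpy.da.SearchCursor(in_table, fields) as search:
    with arcpy.da.InsertCursor(out_table, fields) as insert:
        for row in search:
            if row[0] != previous:  # value changed from the previous row
                insert.insertRow(row)
            previous = row[0]
Note that rows come back in data-source order, so sort the table first if the comparison depends on a particular ordering.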

Related

How to store the queried MySQL values from different Columns of the same row in Python?

I'm trying to get values from selected columns of the same row. I managed to get the data from the desired row, but I'm having issues with storing and displaying the data. How can I store the values from the different columns (4333, 3444, 2222, 1) in variables for further computations? Thanks.
deviceEUI = "00137a10000129a9"
command = ("SELECT `Current_1`, `Current_2`, `Current_3`, `Multiplier_1` from `Current Data` WHERE `Device EUI` =%s ORDER BY `Timestamp` DESC LIMIT 1")
cursor.execute(command, (deviceEUI,))
results = cursor.fetchall()
print(type(results))
print(results)
Output:
<class 'list'>
[(Decimal('4333.00'), Decimal('3444.00'), Decimal('2222.00'), Decimal('1.00'))]
This is just an idea, anyway:
Retrieve the data of the specific columns you want,
Create an INSERT statement,
Write a conditional statement to insert the retrieved data into the new table where you want it stored.
Think about this and do your manipulation.
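For the storing-in-variables part, fetchall() returns a list of tuples, so the single row from LIMIT 1 can be unpacked directly; a short sketch, continuing from the query above:
current_1, current_2, current_3, multiplier_1 = results[0]
# The values are Decimal objects, so arithmetic works directly
total = (current_1 + current_2 + current_3) * multiplier_1
print(total)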

Create a new SQLite table in python with for-loop

Say I have 100 different integers that I want to store as a row with 100 columns.
I am trying it like this:
import sqlite3

db = sqlite3.connect("test.db")
c = db.cursor()
c.execute('''
    CREATE TABLE IF NOT EXISTS nums(
    id INTEGER PRIMARY KEY,
''')
for i in range(100):
    c.execute('''
        ALTER TABLE nums
        ADD ''' + 'column_' + i + '''INTEGER''')
db.commit()
Someone told me that when you are using numbers as column names there is probably a better way to do it. But if, for example, I have a list of strings in Python and I want to loop through them and store every individual string in its own column, the approach would be the same, right?
However, this code runs without errors for me, yet no new table is created. How come?
Your CREATE TABLE statement is incomplete (the column list is never closed), so the table is never created, and the ALTER statement is malformed: 'column_' + i concatenates a string with an integer, and there is no space before INTEGER. You can use the following:
c.execute('CREATE TABLE IF NOT EXISTS nums(id INTEGER PRIMARY KEY)')
for i in range(100):
    c.execute(f'ALTER TABLE nums ADD COLUMN column_{i} INTEGER')
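Rather than issuing 100 ALTER statements, you could also build the whole column list up front and create the table in one statement; a sketch assuming integer columns named column_0 through column_99:
import sqlite3

db = sqlite3.connect("test.db")
c = db.cursor()
# Build the full column list once instead of altering the table 100 times
cols = ", ".join(f"column_{i} INTEGER" for i in range(100))
c.execute(f"CREATE TABLE IF NOT EXISTS nums(id INTEGER PRIMARY KEY, {cols})")
db.commit()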

How to replace values in an attribute table for one column?

I need to replace values in one column of an attribute table (change zeroes in a column named "label" to 100). Is this possible using ogr or Python? I have to do this for 500+ shapefiles.
In the Esri ArcGIS realm, Update Cursors are typically used for this type of operation.
For example:
import arcpy

# Your input feature class
fc = r'C:\path\to\your.gdb\feature_class'

# Start an update cursor and change values from 0 to 100 in a field called "your_field"
with arcpy.da.UpdateCursor(fc, "your_field") as cursor:
    for row in cursor:
        if row[0] == 0:
            row[0] = 100
            cursor.updateRow(row)
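Since the question mentions ogr and 500+ shapefiles, the same edit is also possible without ArcGIS using the GDAL/OGR Python bindings; a minimal sketch, assuming all shapefiles sit in one hypothetical folder:
import glob
from osgeo import ogr

for path in glob.glob(r"C:/shapefiles/*.shp"):  # hypothetical folder
    ds = ogr.Open(path, 1)  # 1 opens the datasource for update
    layer = ds.GetLayer()
    for feature in layer:
        if feature.GetField("label") == 0:
            feature.SetField("label", 100)
            layer.SetFeature(feature)  # write the change back
    ds = None  # closing flushes edits to disk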

Updating a cell based with the value that came before it

I want to develop a script to update individual cells (row of a specific column) of an attribute table based on the value of the cell that comes immediately before it as well as data in other columns but in the same row. I'm sure that this can be done with cursors but I'm having trouble conceptualizing exactly how to tackle this.
Essentially what I want to do is this:
If Column A, row 13 = a certain value AND Column B, row 13 = a certain value (but different from A), then change Column A, row 13 to be the same value as Column A, row 12.
If this can't be done with cursors, then maybe some kind of array or matrix, or a list of lists, would be the way to go? I'm basically looking for the best direction to take with this. EDIT: My files are shapefiles; I also have them in .csv format. My code is really basic right now:
import arcpy
from arcpy import env

env.workspace = "C:/All Data Files/My Documents All/My Documents/wrk"
inputLyr = "C:/All Data Files/My Documents All/My Documents/wrk/file.lyr"
fields = ["time", "lon", "activityIn", "time", "fixType"]
cursor180 = arcpy.da.SearchCursor(inputLyr, fields, """"lon" = -180""")
for row in cursor180:
    # Print the rows that have no data, along with activity intensity
    print row[0], row[1], row[2]
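One way to conceptualize this with cursors: step through an UpdateCursor while keeping the previous row's Column A value in an ordinary Python variable. A minimal sketch, with hypothetical field names ("colA", "colB") and hypothetical trigger values:
import arcpy

fc = r"C:/All Data Files/My Documents All/My Documents/wrk/file.shp"  # hypothetical shapefile
prev_a = None
with arcpy.da.UpdateCursor(fc, ["colA", "colB"]) as cursor:
    for row in cursor:
        # If both columns match their trigger values, copy the previous row's colA
        if row[0] == "valueA" and row[1] == "valueB" and prev_a is not None:
            row[0] = prev_a
            cursor.updateRow(row)
        prev_a = row[0]  # remember this row's (possibly updated) colA
Rows are returned in data-source order, so make sure the table is sorted the way "previous" is meant before running this.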

python and cx_Oracle - dynamic cursor.setinputsizes

I'm using cx_Oracle to select rows from one database and then insert those rows to a table in another database. The 2nd table's columns match the first select.
So I have (simplified):
db1_cursor.execute('select col1, col2 from tab1')
rows = db1_cursor.fetchall()
db2_cursor.bindarraysize = len(rows)
db2_cursor.setinputsizes(cx_Oracle.NUMBER, cx_Oracle.BINARY)
db2_cursor.executemany('insert into tab2 values (:1, :2)', rows)
This works fine, but my question is how to avoid the hard coding in setinputsizes (I have many more columns).
I can get the column types from db1_cursor.description, but I'm not sure how to feed those into setinputsizes. i.e. how can I pass a list to setinputsizes instead of arguments?
Hope this makes sense - new to python and cx_Oracle
Just use argument unpacking.
For example:
db_types = (d[1] for d in db1_cursor.description)
db2_cursor.setinputsizes(*db_types)
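Putting it together, the copy then needs no hard-coded types at all; a short sketch, continuing from the cursors above:
db1_cursor.execute('select col1, col2 from tab1')
rows = db1_cursor.fetchall()

# description is a sequence of 7-item tuples; index 1 is the column's type object
db_types = tuple(d[1] for d in db1_cursor.description)

db2_cursor.bindarraysize = len(rows)
db2_cursor.setinputsizes(*db_types)
db2_cursor.executemany('insert into tab2 values (:1, :2)', rows)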
