I need to replace values in an attribute table for one column (replace zeroes in a column named "label" with 100). Is this possible using OGR or Python? I have to do this for 500+ shapefiles.
In the Esri ArcGIS realm, Update Cursors are typically used for this type of operation.
For example:

import arcpy

# Your input feature class
fc = r'C:\path\to\your.gdb\feature_class'

# Start an update cursor and change values from 0 to 100 in a field called "your_field"
with arcpy.da.UpdateCursor(fc, "your_field") as cursor:
    for row in cursor:
        if row[0] == 0:
            row[0] = 100
            cursor.updateRow(row)
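Since the question also mentions OGR and batching over 500+ shapefiles, here is a minimal sketch of the same fix using GDAL/OGR's Python bindings, looped over a folder with glob. The folder path is a made-up example; the field name and values come from the question.

```python
import glob

def replace_field_value(shp_path, field="label", old=0, new=100):
    # Requires GDAL's Python bindings; imported here so the helper stays optional
    from osgeo import ogr

    ds = ogr.Open(shp_path, update=1)  # open the shapefile for writing
    layer = ds.GetLayer()
    changed = 0
    for feature in layer:
        if feature.GetField(field) == old:
            feature.SetField(field, new)
            layer.SetFeature(feature)  # write the edit back to the layer
            changed += 1
    ds = None  # releasing the datasource flushes the edits to disk
    return changed

# hypothetical folder holding the 500+ shapefiles
for shp in glob.glob(r"C:\data\shapefiles\*.shp"):
    replace_field_value(shp)
```

Running this over a directory avoids opening ArcGIS at all, which tends to be faster for large batches.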
I'm trying to get some values from selected columns of the same row. I managed to fetch the desired row, but I'm having issues with storing and displaying the data. How can I store the values from the different columns (4333, 3444, 2222, 1) in variables for further computations? Thanks.
deviceEUI = "00137a10000129a9"
command = ("SELECT `Current_1`, `Current_2`, `Current_3`, `Multiplier_1` from `Current Data` WHERE `Device EUI` =%s ORDER BY `Timestamp` DESC LIMIT 1")
cursor.execute(command, (deviceEUI,))
results = cursor.fetchall()
print(type(results))
print(results)
Output:
<class 'list'>
[(Decimal('4333.00'), Decimal('3444.00'), Decimal('2222.00'), Decimal('1.00'))]
This is just an idea, anyway:

Retrieve the data from the specific columns you want.
Create an INSERT statement.
Write a conditional statement that inserts the retrieved data into the new table where you want it stored.

Think about this and adapt it to your manipulation.
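For the "store into variables" part of the question: fetchall() returns a list of row tuples, and with LIMIT 1 there is exactly one row, so you can unpack it directly. A minimal sketch using the exact output shown above (the variable names are made up):

```python
from decimal import Decimal

# the shape returned by cursor.fetchall() in the question
results = [(Decimal('4333.00'), Decimal('3444.00'), Decimal('2222.00'), Decimal('1.00'))]

# results[0] is the single row tuple; unpack it into one variable per column
current_1, current_2, current_3, multiplier_1 = results[0]

# the variables are Decimal objects, ready for arithmetic
total = current_1 + current_2 + current_3
print(total)  # 9999.00
```

If the query could return zero rows, check `if results:` before unpacking to avoid an IndexError.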
I'm trying to store my JSON file in the popNames DB, but this error pops up.
My JSON file is a dictionary with the country as the key and the person names as the value. In my DB table I want the country in the first column as the primary key and the names in the subsequent columns.
Could anyone help me with this?
Every INSERT call creates a new row in the PopNamesDB table. Your code creates many such rows: the first row has a country but NULL for all the other columns. The next N rows each have a null country, a value for colName, and NULL for all the other columns.
An easy way to fix your code is to change your follow-up INSERT calls (on line 109) so they modify the row you created earlier instead of creating new rows. The query will look something like
cur.execute(''' UPDATE PopNamesDB SET ''' + colName + ''' = ? WHERE country = ?''', (y, c))
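To illustrate the insert-then-update pattern end to end, here is a small stand-in using SQLite in memory (the table layout and the name1/name2 column names are assumptions for the sketch; the original code builds the column name dynamically the same way):

```python
import sqlite3

# in-memory stand-in for the PopNamesDB table
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE PopNamesDB (country TEXT PRIMARY KEY, name1 TEXT, name2 TEXT)")

data = {"Iceland": ["Jon", "Gudrun"]}  # made-up sample of the JSON dictionary
for country, names in data.items():
    # one INSERT creates the row for this country...
    cur.execute("INSERT INTO PopNamesDB (country) VALUES (?)", (country,))
    # ...then each follow-up UPDATE fills one column of that same row
    for i, name in enumerate(names, start=1):
        col = "name%d" % i  # column name built dynamically, as in the original code
        cur.execute("UPDATE PopNamesDB SET " + col + " = ? WHERE country = ?", (name, country))

print(cur.execute("SELECT * FROM PopNamesDB").fetchall())
# [('Iceland', 'Jon', 'Gudrun')]
```

Note that only the values go through `?` placeholders; the column name is concatenated into the SQL string, so make sure it comes from your own code, never from user input.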
I have a table in PostGIS with several rasters, which have the same spatial reference, but the tiffs are from different dates. Now I am trying to access the column "rast" to detect changes between rows. My aim is to subtract the pixel value of the first row from the second and then from the third row's pixel values, and so on.
How can I iterate over the rows and subtract the pixel values of each row from the following row?
#!/usr/bin/python
# -*- coding: utf-8 -*-
import psycopg2
import sys

conn = psycopg2.connect(database="postgres", user="postgres", host="localhost", password="password")
cur = conn.cursor()
cur.execute('SELECT * from my_table')
while True:
    row = cur.fetchone()
    if row is None:
        break
    rast_col = row[1]
I imported several rasters, which cover the same spatial area but different dates, via the following command:
C:\Program Files\PostgreSQL\9.6\bin>raster2pgsql -s 4326 -F -I "C:\User\Desktop\Data\*.tif" public.all_data|psql -U User -h localhost -p 5432
This is the table that was created in PostgreSQL after importing the data: https://i.stack.imgur.com/uBHX3.jpg
Each row represents one raster image in TIFF format. The column "rast" contains the pixel values. My aim is to calculate the difference between adjacent rows, much like the lag window function, but that does not work on the raster column type...
The only thing I managed was calculating the difference between two raster images. For that I had to create a separate table for each row, as you can see below:
CREATE TABLE table1 AS SELECT * FROM my_table WHERE rid=1;
CREATE TABLE table2 AS SELECT * FROM my_table WHERE rid=2;
And then I did a simple MapAlgebra operation on both tables like this:
SELECT ST_MapAlgebra(t1.rast, t2.rast, '([rast1]-[rast2])') AS rast INTO difference FROM table1 t1, table2 t2;
But this is just the difference between two rasters, and for the MapAlgebra operation I had to create extra tables for each raster image. I have more than 40 raster images in one table, and I want to detect the change between all adjacent rows of my table.
The lag() window function should work on raster columns just like on any other column. It simply selects the value from a row that precedes the current row by some offset in the window frame.
Of course you cannot just subtract rasters using PostgreSQL operators, at least not without operator overloading.
In order to calculate the differences between adjacent rasters ordered by rid you should pass the lagged raster as an argument to ST_MapAlgebra
SELECT ST_MapAlgebra(rast, lag(rast) OVER (ORDER BY rid DESC),
                     '[rast1] - [rast2]')
FROM my_table;
Since lag() selects rows before the current row in the partition, the rows are ordered by rid in descending order; 2 comes before 1 etc. Also because a window frame by default consists only of rows that come before the current row, this is easier than using lead() and a frame clause that selects rows following the current.
Disclaimer
I've not used rasters and you may have to fine tune the query to suit your specific needs.
I'm hoping to duplicate my techniques for looping through tables in R using python in the ArcGIS/arcpy framework. Specifically, is there a practical way to loop through the rows of an attribute table using python and copy that data based on the values from previous table values?
For example, using R I would use code similar to the following to copy rows of data from one table that have unique values for a specific variable:
## table name: data
## variable of interest: variable
## new table: new.data
## table name: data
## variable of interest: variable
## new table: new.data
for (i in 2:nrow(data))
{
  if (data$variable[i] != data$variable[i-1])
  {
    new.data <- rbind(new.data, data[i, ])
  }
}
If I've written the above code correctly then in words, this for-loop simply checks to see if the current value in a table is different from the previous value and adds all column values for that row to the new table if it is in fact a new value. Any help with this thought process would be great.
Thanks!
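The row-comparison logic described above is independent of arcpy, so it can be sketched in plain Python first: keep a row only when its value differs from the previous row's value. The sample rows below are made up for illustration.

```python
# sample "table" as a list of row dictionaries
rows = [
    {"variable": "a", "x": 1},
    {"variable": "a", "x": 2},
    {"variable": "b", "x": 3},
    {"variable": "b", "x": 4},
    {"variable": "c", "x": 5},
]

new_rows = []
previous = object()  # sentinel that compares unequal to everything, so the first row is kept
for row in rows:
    if row["variable"] != previous:
        new_rows.append(row)
    previous = row["variable"]

print([r["variable"] for r in new_rows])  # ['a', 'b', 'c']
```

With arcpy the same loop runs over a SearchCursor instead of a list, carrying the previous value in a variable between iterations.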
To just get the unique values in a table in a field in arcpy:
import arcpy
table = "mytable"
field = "my_field"
# ArcGIS 10.0
unique_values = set(row.getValue(field) for row in iter(arcpy.SearchCursor(table).next, None))
# ArcGIS 10.1+
unique_values = {row[0] for row in arcpy.da.SearchCursor(table, field)}
Yes, to loop through values in a table using arcpy you want to use a cursor. It's been a while since I've used arcpy, but if I recall correctly the one you want is a search cursor. In its simplest form, this is what it would look like:
import arcpy
curObj = arcpy.SearchCursor(r"C:/shape.shp")
row = curObj.next()
while row:
    columnValue = row.getValue("columnName")
    row = curObj.next()
As of version 10 (I think) they introduced a data access cursor, which is orders of magnitude faster. Data access (da) cursors require you to declare which columns you want returned when you create the cursor. Example:
import arcpy
columns = ['column1', 'something', 'someothercolumn']
curObj = arcpy.da.SearchCursor(r"C:/somefile.shp", columns)
for row in curObj:
    print 'column1 is', row[0]
    print 'someothercolumn is', row[2]
I want to develop a script to update individual cells (row of a specific column) of an attribute table based on the value of the cell that comes immediately before it as well as data in other columns but in the same row. I'm sure that this can be done with cursors but I'm having trouble conceptualizing exactly how to tackle this.
Essentially what I want to do is this:
If Column A, row 13 = a certain value AND Column B, row 13 = a certain value (but different from A), then change Column A, row 13 to be the same value as Column A, row 12.
If this can't be done with cursors, then maybe some kind of array, matrix, or list of lists would be the way to go? I'm basically looking for the best direction to take with this. EDIT: My files are shapefiles, and I also have them in .csv format. My code is really basic right now:
import arcpy
from arcpy import env
env.workspace = "C:/All Data Files/My Documents All/My Documents/wrk"
inputLyr = "C:/All Data Files/My Documents All/My Documents/wrk/file.lyr"
fields = ["time", "lon", "activityIn", "time", "fixType"]
cursor180 = arcpy.da.SearchCursor(inputLyr, fields, """"lon" = -180""")
for row in cursor180:
    # Print the rows that have no data, along with activity intensity
    print row[0], row[1], row[2]
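The update rule in the question (if column A and column B hold certain values, copy column A from the row above) can be prototyped in plain Python before wiring it into an arcpy.da.UpdateCursor. The trigger values "bad" and "noisy" below are made up for illustration.

```python
# rows as [column_A, column_B] pairs; made-up sample data
rows = [
    ["good", "clean"],
    ["bad", "noisy"],   # A == "bad" and B == "noisy" -> copy A from the row above
    ["good", "clean"],
]

# start at 1 so every row has a predecessor to copy from
for i in range(1, len(rows)):
    a, b = rows[i]
    if a == "bad" and b == "noisy":
        rows[i][0] = rows[i - 1][0]  # take column A's value from the previous row

print([r[0] for r in rows])  # ['good', 'good', 'good']
```

With an UpdateCursor the idea is the same: hold the previous row's column A value in a variable, test the current row against it, and call updateRow when the condition fires.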