Trying to separate a Blender object using a program - Python

In Blender 3.4.1 I need a way to divide a huge object into many separate objects that each contain fewer than 10,000 triangles. So far this is what I have:
import bpy

def separate_objects(obj, max_triangles):
    # Get the object's mesh data
    mesh = obj.data
    # Get the number of triangles in the mesh
    triangle_count = sum(len(poly.vertices) - 2 for poly in mesh.polygons)
    # If the number of triangles is less than the max, just return the object as is
    if triangle_count <= max_triangles:
        return [obj]
    # Create a new collection to store the separated objects
    collection = bpy.data.collections.new(obj.name + "_separated")
    bpy.context.scene.collection.children.link(collection)
    # Separate the mesh into multiple meshes, each with less than max_triangles triangles
    separate_meshes = []
    for poly in mesh.polygons:
        # Skip this polygon if we've already processed it
        if poly.use_smooth:
            continue
        poly.use_smooth = True
        # Create a new mesh and add the polygon to it
        new_mesh = obj.data.copy()
        new_mesh.clear_geometry()
        new_mesh.polygons.new(poly.vertices)
        # Create a new object using the new mesh
        new_obj = bpy.data.objects.new(obj.name + "_part", new_mesh)
        separate_meshes.append(new_obj)
        # Link the new object to the collection
        collection.objects.link(new_obj)
    return separate_meshes

# Get the selected object
obj = bpy.context.selected_objects[0]
# Separate the object into multiple objects, each with less than 10,000 triangles
separated_objs = separate_objects(obj, 10000)
# Select the separated objects
bpy.ops.object.select_all(action='DESELECT')
for obj in separated_objs:
    obj.select_set(True)
This is the error:
Python: Traceback (most recent call last):
  File "\Text", line 44, in <module>
  File "\Text", line 29, in separate_objects
AttributeError: 'bpy_prop_collection' object has no attribute 'new'
This program was made with ChatGPT and it can't get past this point.
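For what it's worth, the AttributeError comes from the fact that Mesh.polygons is a read-only collection in the Blender Python API: it has no new() method, so faces cannot be added to a mesh one polygon at a time that way. One possible approach is to group faces into chunks that stay under the triangle budget and build each chunk as a fresh mesh with Mesh.from_pydata(). The sketch below is untested and deliberately naive: it chunks faces in index order (so the parts are not guaranteed to be connected) and does not carry over materials, UVs or custom normals.

import bpy

def separate_by_triangle_budget(obj, max_triangles):
    # Hedged sketch: build each part as a new mesh via from_pydata() instead of
    # the non-existent polygons.new()
    mesh = obj.data
    collection = bpy.data.collections.new(obj.name + "_separated")
    bpy.context.scene.collection.children.link(collection)

    parts = []
    chunk_faces = []      # per-face vertex index tuples collected for the current part
    chunk_triangles = 0   # triangle budget already used by the current part

    def flush_chunk():
        # Turn the collected faces into a new mesh/object, remapping vertex indices
        nonlocal chunk_faces, chunk_triangles
        if not chunk_faces:
            return
        used = sorted({i for face in chunk_faces for i in face})
        remap = {old: new for new, old in enumerate(used)}
        verts = [tuple(mesh.vertices[i].co) for i in used]
        faces = [tuple(remap[i] for i in face) for face in chunk_faces]
        new_mesh = bpy.data.meshes.new(obj.name + "_part")
        new_mesh.from_pydata(verts, [], faces)
        new_obj = bpy.data.objects.new(obj.name + "_part", new_mesh)
        collection.objects.link(new_obj)
        parts.append(new_obj)
        chunk_faces = []
        chunk_triangles = 0

    for poly in mesh.polygons:
        tris = len(poly.vertices) - 2  # same n-gon-to-triangle count as in the question
        if chunk_triangles + tris > max_triangles:
            flush_chunk()
        chunk_faces.append(tuple(poly.vertices))
        chunk_triangles += tris
    flush_chunk()
    return parts

separated_objs = separate_by_triangle_budget(bpy.context.selected_objects[0], 10000)

Blender's built-in Separate operators (bpy.ops.mesh.separate with type 'LOOSE', 'MATERIAL' or 'SELECTED') are another option if the object naturally splits along those lines, but they do not enforce a triangle count by themselves.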

Related

Is it possible to add code to a list in Python?

import PySimpleGUI as sg

rows_needed = 2
result = [['text1', 'text2'], ['text3']]
menu_layout = []
for x in range(0, rows_needed):
    temp = []
    try:
        temp.append(sg.Button(c) for c in result[x])
    finally:
        pass
    menu_layout.append(temp)
    print(menu_layout)
layout = [[sg.Button(c) for c in result]]
window = sg.Window('', menu_layout)
window.read()
So I'm attempting to create a nested list for the menu layout. The result I want would be, for example:
menu_layout = [[sg.Button('text1'), sg.Button('text2')], [sg.Button('text3')],]
I'm using PySimpleGUI.
My current code above gives the following result in PowerShell:
[[<generator object menu.<locals>.<genexpr> at 0x0000016308733AE0>]]
[[<generator object menu.<locals>.<genexpr> at 0x0000016308733AE0>], [<generator object menu.<locals>.<genexpr> at 0x0000016308733C30>]]
Traceback (most recent call last):
  File "C:\Users\cafemax\projects\POS\POS\Client_Posv2.py", line 713, in <module>
    menu()
  File "C:\Users\cafemax\projects\POS\POS\Client_Posv2.py", line 532, in menu
    window = sg.Window('', menu_layout)
  File "C:\Users\cafemax\.venvs\stockcontrol\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9604, in __init__
    self.Layout(layout)
  File "C:\Users\cafemax\.venvs\stockcontrol\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9783, in layout
    self.add_rows(new_rows)
  File "C:\Users\cafemax\.venvs\stockcontrol\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9753, in add_rows
    self.add_row(*row)
  File "C:\Users\cafemax\.venvs\stockcontrol\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9708, in add_row
    if element.ParentContainer is not None:
AttributeError: 'generator' object has no attribute 'ParentContainer'
The reason I'm trying to do this is that I need to generate a variable number of buttons based on the size of a list, so I can't hard-code this.
Any help on how to fix this, or on what change I must make to my append?
temp doesn't contain sg.Button objects, it contains generators because that's what you appended to it. You don't want to create temp and then append the generator to it, you want to extend your list with your generator. See What is the difference between Python's list methods append and extend?
for x in range(0, rows_needed):
    temp = []
    try:
        temp.extend(sg.Button(c) for c in result[x])
    finally:
        pass
    menu_layout.append(temp)
Alternatively, you can simply create temp using a list comprehension:
for x in range(0, rows_needed):
    temp = []
    try:
        temp = [sg.Button(c) for c in result[x]]
    finally:
        pass
    menu_layout.append(temp)
I'm not sure what the try..finally is for; it doesn't seem to be doing anything, but I left it in because that's not the question you're asking.
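If it helps to see the append/extend difference in isolation, here is a small plain-Python illustration (hypothetical data, no PySimpleGUI needed):

labels = ['text1', 'text2']

with_append = []
with_append.append(len(c) for c in labels)   # appends ONE item: the generator object itself
print(with_append)                           # [<generator object <genexpr> at 0x...>]

with_extend = []
with_extend.extend(len(c) for c in labels)   # consumes the generator, adding each result
print(with_extend)                           # [5, 5]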

Indicating a population structure in EggLib Python

In Python, I am using EggLib. I am trying to calculate Jost's D value per SNP found in a VCF file.
Data
Data is here in VCF format. The data set is small, there are 2 populations, 100 individuals per population and 6 SNPs (all on chromosome 1).
Each individual is named Pp.Ii, where p is the population index it belongs to and i is the individual index.
Code
My difficulties concern the specification of the population structure. Here is my attempt:
import egglib

### Read the vcf file ###
vcf = egglib.io.VcfParser("MyData.vcf")

### Create the `Structure` object ###
# Dictionary for a given cluster. There is only one cluster.
dcluster = {}
# Loop through each population
for popIndex in [0, 1]:
    # Dictionary for a given population. There are two populations.
    dpop = {}
    # Loop through each individual
    for IndIndex in range(popIndex * 100, (popIndex + 1) * 100):
        # A single list to define an individual
        dpop[IndIndex] = [IndIndex * 2, IndIndex * 2 + 1]
    dcluster[popIndex] = dpop
struct = {0: dcluster}

### Define the population structure ###
Structure = egglib.stats.make_structure(struct, None)

### Configure the 'ComputeStats' object ###
cs = egglib.stats.ComputeStats()
cs.configure(only_diallelic=False)
cs.add_stats('Dj')  # Jost's D

### Isolate a SNP ###
vcf.next()
site = egglib.stats.site_from_vcf(vcf)

### Calculate Jost's D ###
cs.process_site(site, struct=Structure)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/egglib/stats/_cstats.py", line 431, in process_site
    self._frq.process_site(site, struct=struct)
  File "/Library/Python/2.7/site-packages/egglib/stats/_freq.py", line 159, in process_site
    if sum(struct) != site._obj.get_ning(): raise ValueError, 'invalid structure (sample size is required to match)'
ValueError: invalid structure (sample size is required to match)
The documentation indicates here
[The Structure object] is a tuple containing two items, each being a dict. The first one represents the ingroup and the second represents the outgroup.
The ingroup dictionary is itself a dictionary holding more dictionaries, one for each cluster of populations. Each cluster dictionary is a dictionary of populations, populations being themselves represented by a dictionary. A population dictionary is, again, a dictionary of individuals. Fortunately, individuals are represented by lists.
An individual list contains the index of all samples belonging to this individual. For haploid data, individuals will be one-item lists. In other cases, all individual lists are required to have the same number of items (consistent ploidy). Note that, if the ploidy is more than one, nothing enforces that samples of a given individual are grouped within the original data.
The keys of the ingroup dictionary are the labels identifying each cluster. Within a cluster dictionary, the keys are population labels. Finally, within a population dictionary, the keys are individual labels.
The second dictionary represents the outgroup. Its structure is simpler: it has individual labels as keys, and lists of corresponding sample indexes as values. The outgroup dictionary is similar to any ingroup population dictionary. The ploidy is required to match over all ingroup and outgroup individuals.
but I fail to make sense of it. The example provided is for the FASTA format, and I don't understand how to extend the logic to the VCF format.
There are two errors.
First error
The function make_structure returns the Structure object but does not save it within stats. You therefore have to save this output and use it in the function process_site.
Structure = egglib.stats.make_structure(struct, None)
Second error
The Structure object must designate haploids. Therefore, create the dictionary as
dcluster = {}
for popIndex in [0, 1]:
    dpop = {}
    for IndIndex in range(popIndex * 100, (popIndex + 1) * 100):
        dpop[IndIndex] = [IndIndex]
    dcluster[popIndex] = dpop
struct = {0: dcluster}
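For completeness, a sketch of the whole workflow with both fixes applied might look like the following. It is untested, reuses the file name and labels from the question, and assumes that process_site returns the requested statistics directly (check the behaviour of your EggLib version):

import egglib

vcf = egglib.io.VcfParser("MyData.vcf")

# One cluster, two populations of 100 haploid-style samples each
dcluster = {}
for popIndex in [0, 1]:
    dpop = {}
    for IndIndex in range(popIndex * 100, (popIndex + 1) * 100):
        dpop[IndIndex] = [IndIndex]
    dcluster[popIndex] = dpop
Structure = egglib.stats.make_structure({0: dcluster}, None)

cs = egglib.stats.ComputeStats()
cs.configure(only_diallelic=False)
cs.add_stats('Dj')  # Jost's D

# The example data holds 6 SNPs; process them one after the other
for _ in range(6):
    vcf.next()  # advance to the next variant
    site = egglib.stats.site_from_vcf(vcf)
    stats = cs.process_site(site, struct=Structure)
    print(stats)  # expected to be a dict such as {'Dj': ...}; check your EggLib version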

nhlscrapi - data download error

I am trying to get the statistics and game information for every game of an NHL season. I am working with Stata. I found the package nhlscrapi and have written code to get all the data and statistics of a particular season:
# Import statements
# Notice how I import the whole modules and not the functions explicitly as given in the online example (good practice)
from nhlscrapi.games import game, cumstats
from nhlscrapi import constants
import csv

# Define season being considered:
season = 2012

# Get all stats they have defined
# Googled "get all methods of a class python" and found this:
# http://stackoverflow.com/questions/34439/finding-what-methods-an-object-has
# Also, needed to exclude some methods (ABCMeta, ...) after I checked what they do
# (I did that with: "help(cumstats.METHODNAME)") and saw that they did not contain stats
methods = [method for method in dir(cumstats) if callable(getattr(cumstats, method)) and
           method != 'ABCMeta' and
           method != 'AccumulateStats' and
           method != 'ShotEventTallyBase' and
           method != 'abstractmethod' and
           method != 'TeamIncrementor' and
           method != 'EF' and
           method != 'St']

# Set up dictionary with all stats
cum_stats = {method: getattr(cumstats, method)() for method in methods}
print('All the stats:', cum_stats.keys())

# Now, look up how many games were in the regular season of the year 2012
maxgames = constants.GAME_CT_DICT[season]

# If one is interested in all the home coaches (as an example), one would first set up an empty list,
# and gradually fill it:
thingswewant_keys = ['home_coach', 'away_coach', 'home', 'away', 'attendance', 'Score', 'Fenwick']
thingswewant_values = {key: [] for key in thingswewant_keys if not key in cum_stats.keys()}
thingswewant_values.update({key + '_home': [] for key in cum_stats.keys()})
thingswewant_values.update({key + '_away': [] for key in cum_stats.keys()})

# Now, loop over all games in this season
for i in range(12):
    # Set up object which queries database
    # If one enters the following command in ipython: "help(game.Game)", one sees also alternative ways to set up
    # the query other than the one given in the example
    ggames = game.Game(game.GameKey(season, game.GameType.Regular, i + 1), cum_stats=cum_stats)
    # This object 'ggames' now contains all the information of 1 specific game.
    # To concatenate all the home coaches for example, one would do it like this
    for key in thingswewant_keys:
        if not key in cum_stats.keys():
            # First case: Information is attribute of ggames (e.g. home_coach)
            if not key in ['home', 'away', 'attendance']:
                thingswewant_values[key] += [getattr(ggames, key)]
            # Second case: Information is key of ggames.matchup (e.g. home)
            if key in ['home', 'away', 'attendance']:
                thingswewant_values[key] += [ggames.matchup[key]]
    # Third case: Information is a cum_stat
    # Figure out home team and away team
    hometeam = ggames.matchup['home']
    awayteam = ggames.matchup['away']
    for key in cum_stats.keys():
        thingswewant_values[key + '_home'] += [ggames.cum_stats[key].total[hometeam]]
        thingswewant_values[key + '_away'] += [ggames.cum_stats[key].total[awayteam]]

# Make one single table out of all the columns
results = [tuple([key for key in thingswewant_values.keys()])]
results += zip(*[thingswewant_values[key] for key in thingswewant_values.keys()])

# Write to csv
with open('brrr.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(results)
The problem now is that in every season, after a certain game, the code stops and spits out the following error:
Traceback (most recent call last):
  File "C:/Users/Dennis/Downloads/AllStatsExcell.py", line 67, in <module>
    thingswewant_values[key+'_home'] += [ggames.cum_stats[key].total[hometeam]]
  File "C:\Python27\lib\site-packages\nhlscrapi\games\game.py", line 211, in cum_stats
    return self.play_by_play.compute_stats()
  File "C:\Python27\lib\site-packages\nhlscrapi\games\playbyplay.py", line 95, in compute_stats
    for play in self._rep_reader.parse_plays_stream():
  File "C:\Python27\lib\site-packages\nhlscrapi\scrapr\rtss.py", line 56, in parse_plays_stream
    p_obj = parser.build_play(p)
  File "C:\Python27\lib\site-packages\nhlscrapi\scrapr\rtss.py", line 130, in build_play
    p['vis_on_ice'] = self.__skaters(skater_tab[0][0]) if len(skater_tab) else { }
  File "C:\Python27\lib\site-packages\nhlscrapi\scrapr\rtss.py", line 159, in __skaters
    if pl[0].text.isdigit():
AttributeError: 'NoneType' object has no attribute 'isdigit'
In the 2012 season, this occurs after game 12. Therefore I ran it for just game 12 of the 2012 season:
ggames1 = game.Game(game.GameKey(2012, game.GameType.Regular, 12), cum_stats=cum_stats)
ggames1.cum_stats['ShootOut'].total
In ShootOut, for example, it crashes. But if I run this line again, I get the results.
I don't know how to fix this.
If I just could get the csv file of all the games, even if there are some missing values I would be very happy.
First, you need to do some debugging yourself. The error explicitly states:
File "C:/Users/Dennis/Downloads/AllStatsExcell.py", line 67, in
thingswewant_values[key+'_home'] += [ggames.cum_stats[key].total[hometeam]]
That means on line 67 of your program there is an error. At the bottom it shows you what that error is:
AttributeError: 'NoneType' object has no attribute 'isdigit'
This means that you are attempting to get the attribute isdigit on the value of an object that is NoneType. As you might surmise, NoneType objects don't have any contents.
This is the offending line, along with the preceding for block:
for key in cum_stats.keys():
    thingswewant_values[key+'_home'] += [ggames.cum_stats[key].total[hometeam]]
What you want to do is probably the following:
for key in cum_stats.keys():
    try:
        thingswewant_values[key+'_home'] += [ggames.cum_stats[key].total[hometeam]]
    except Exception as e:
        print(e)
        print("key={}".format(key))
        print("hometeam={}".format(hometeam))
        print("ggames.cum_stats={}".format(ggames.cum_stats[key].total[hometeam]))
This is a basic error catching block. The first print line should tell you the exception. The following ones inform you as to the state of various things you're utilizing in the offending line. Your job is to figure out which thing is NoneType (it may not be one of the ones I provided) and then, after that, figure out why it is NoneType. Essentially: look at the data you have and are trying to manipulate in that block. Something is missing in it.
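If the immediate goal is just to get a CSV for the whole season even when some game reports cannot be parsed, one hedged option is to wrap the entire per-game block in a try/except and skip the offending games. A rough sketch reusing the variables defined in the question (season, maxgames, cum_stats, thingswewant_keys, thingswewant_values):

# Sketch: skip games whose play-by-play report cannot be parsed, so the season still completes
for i in range(maxgames):
    try:
        ggames = game.Game(game.GameKey(season, game.GameType.Regular, i + 1),
                           cum_stats=cum_stats)
        hometeam = ggames.matchup['home']
        awayteam = ggames.matchup['away']
        row = {}
        for key in thingswewant_keys:
            if key not in cum_stats:
                if key in ['home', 'away', 'attendance']:
                    row[key] = ggames.matchup[key]
                else:
                    row[key] = getattr(ggames, key)
        for key in cum_stats:
            row[key + '_home'] = ggames.cum_stats[key].total[hometeam]
            row[key + '_away'] = ggames.cum_stats[key].total[awayteam]
    except Exception as e:
        print('Skipping game {}: {}'.format(i + 1, e))
        continue
    # Append only after the whole game parsed, so every column keeps the same length
    for key, value in row.items():
        thingswewant_values[key] += [value]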

Recursive function to search through a dictionary linked to parallel arrays?

I am currently working on a recursive function to search through a dictionary that represents locations along a river. The dictionary indexes 4 parallel arrays using start as the key.
Parallel Arrays:
start = location of the endpoint with the smaller flow accumulation,
end = location of the other endpoint (with the larger flow accumulation),
length = segment length, and
shape = actual shape, oriented to run from start to end.
Dictionary:
G = {}
for (st, sh, le, en) in zip(start, shape, length, end):
    G[st] = (sh, le, en)
My goal is to search down the river from one of its start points (represented by p) and select locations at 2000-metre intervals (the interval represented by x) until the end. This is the recursive function I'm working on in Python:
def Downstream (p, x, G):
    e = G[p]
    if (IsNull(e)):
        return ("Not Found")
    if (x < 0):
        return ("Invalid")
    if (x < e.length):
        return (Along (e.shape, x))
    return (Downstream (e.end, x - e.length, G))
Currently, when I enter Downstream ("(1478475.0, 12065385.0)", 2000, G), it returns a KeyError. I have checked key in G and it returns False, but when I search G.keys() it returns all the keys represented by start, including the ones that return False.
For example, one key is (1478475.0, 12065385.0). I've used this key both as text and as a tuple of two double values, and a KeyError was raised both times.
Error:
Runtime error
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 1, in Downstream
KeyError: (1478475.0, 12065385.0)
What is causing the keyerror and how can I solve this issue to reach my goal?
I am using Python in ArcGIS as this is using an attribute table from a shapefile of polylines and this is my first attempt at using a recursive function.
This question and answer is how I've reached this point in organizing my data and writing this recursive function.
https://gis.stackexchange.com/questions/87649/select-points-approx-2000-metres-from-another-point-along-a-river
Examples:
>>> G.keys ()
[(1497315.0, 11965605.0), (1502535.0, 11967915.0), (1501785.0, 11968665.0)...
>>> print G
{(1497315.0, 11965605.0): (([1499342.3515172896, 11967472.92330054],), (7250.80302528,), (1501785.0, 11968665.0)), (1502535.0, 11967915.0): (([1502093.6057616705, 11968248.26139775],), (1218.82250994,), (1501785.0, 11968665.0)),...
Your function is not working for five main reasons:
The syntax is off - indentation is important in Python;
You don't check whether p in G each time Downstream gets called (the first key may be present, but what about later recursive calls?);
You have too many return points, for example your last line will never run (what should the function output be?);
You seem to be accessing a 3-tuple e = (sh, le, en) by attribute (e.end); and
You are subtracting from the interval length when you call recursively, so the function can't keep track of how far apart points should be - you need to separate the interval and the offset from the start.
Instead, I think you need something like (untested!):
def Downstream(start, interval, data, offset=0, out=None):
    if out is None:
        out = []
    if start in data:
        shape, length, end = data[start]
        length = length[0]
        if interval < length:
            distance = offset
            while distance < length:
                out.append(Along(shape, distance))
                distance += interval
            distance -= interval
            Downstream(end, interval, data, interval - (length - distance), out)
        else:
            Downstream(end, interval, data, offset - length, out)
    return out
This will give you a list of whatever Along returns. If the original start is not in data, it will return an empty list.
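Assuming Along is available (from the geometry tooling you are already using) and the start point is passed as a tuple that really is a key of G (not a string), a call would look something like this; the key below is taken from the G.keys() output shown in the question:

start_point = (1497315.0, 11965605.0)  # must be a tuple key from G, not a string
points = Downstream(start_point, 2000, G)
print(len(points), "sampled locations along the river")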

Python: Using the multiprocessing module as a possible solution to increase the speed of my function

I wrote a function in Python 2.7 (on Windows, 64-bit) to calculate the mean value of the intersection area between a reference polygon (Ref) and one or more segmented (Seg) polygons in ESRI shapefile format. The code is quite slow because I have more than 2000 reference polygons, and for each reference polygon the function runs over all segmented polygons (more than 7000). I am sorry, but the function is a prototype.
I wish to know if multiprocessing can help me increase the speed of my loop, or whether there are better performance solutions. If multiprocessing is a possible solution, I wish to know the best way to apply it to the following function:
import os
import numpy as np
import osgeo.ogr
import osgeo.gdal
from shapely.geometry import Polygon, Point

def AreaInter(reference, segmented, outFile):
    # open shapefiles
    ref = osgeo.ogr.Open(reference)
    if ref is None:
        raise SystemExit('Unable to open %s' % reference)
    seg = osgeo.ogr.Open(segmented)
    if seg is None:
        raise SystemExit('Unable to open %s' % segmented)
    ref_layer = ref.GetLayer()
    seg_layer = seg.GetLayer()
    # create outfile
    if not os.path.split(outFile)[0]:
        file_path, file_name_ext = os.path.split(os.path.abspath(reference))
        outFile_filename = os.path.splitext(os.path.basename(outFile))[0]
        file_out = open(os.path.abspath("{0}\\{1}.txt".format(file_path, outFile_filename)), "w")
    else:
        file_path_name, file_ext = os.path.splitext(outFile)
        file_out = open(os.path.abspath("{0}.txt".format(file_path_name)), "w")
    # For each reference object i
    for index in xrange(ref_layer.GetFeatureCount()):
        ref_feature = ref_layer.GetFeature(index)
        # get FID (= Feature ID)
        FID = str(ref_feature.GetFID())
        ref_geometry = ref_feature.GetGeometryRef()
        pts = ref_geometry.GetGeometryRef(0)
        points = []
        for p in xrange(pts.GetPointCount()):
            points.append((pts.GetX(p), pts.GetY(p)))
        # convert to a shapely polygon
        ref_polygon = Polygon(points)
        # get the area
        ref_Area = ref_polygon.area
        # create empty lists
        seg_Area, intersect_Area = [], []
        # For each segmented object j
        for segment in xrange(seg_layer.GetFeatureCount()):
            seg_feature = seg_layer.GetFeature(segment)
            seg_geometry = seg_feature.GetGeometryRef()
            pts = seg_geometry.GetGeometryRef(0)
            points = []
            for p in xrange(pts.GetPointCount()):
                points.append((pts.GetX(p), pts.GetY(p)))
            seg_polygon = Polygon(points)
            seg_Area.append(seg_polygon.area)
            # intersection (overlap) of reference object with the segmented object
            intersect_polygon = ref_polygon.intersection(seg_polygon)
            # area of intersection (= 0 if no intersection)
            intersect_Area.append(intersect_polygon.area)
        # Average over all segmented objects (because 1 or more segmented polygons can intersect the reference polygon)
        seg_Area_average = np.average(seg_Area)
        intersect_Area_average = np.average(intersect_Area)
        file_out.write(" ".join(["%s" % i for i in [FID, ref_Area, seg_Area_average, intersect_Area_average]]) + "\n")
    file_out.close()
You can use the multiprocessing package, and especially the Pool class. First create a function that does all the stuff you want to do within the for loop, and that takes as an argument only the index:
def process_reference_object(index):
    ref_feature = ref_layer.GetFeature(index)
    # all your code goes here
    return (" ".join(["%s" % i for i in [FID, ref_Area, seg_Area_average, intersect_Area_average]]) + "\n")
Note that this doesn't write to a file itself; that would be messy, because you'd have multiple processes writing to the same file at the same time. Instead, it returns the string that needs to be written. Also note that there are objects in this function, like ref_layer or ref_geometry, that will need to reach it somehow; that's up to you how to do it (you could make process_reference_object a method of a class initialized with them, or it could be as ugly as just defining them globally).
Then, you create a pool of process resources, and run all of your indices using Pool.imap_unordered (which will itself allocate each index to a different process as necessary):
from multiprocessing import Pool

p = Pool()  # run multiple processes
for l in p.imap_unordered(process_reference_object, range(ref_layer.GetFeatureCount())):
    file_out.write(l)
This will parallelize the independent processing of your reference objects across multiple processes, and write them to the file (in an arbitrary order, note).
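On the question of how ref_layer and the other handles reach the worker function: GDAL/OGR dataset objects generally cannot be pickled and sent to child processes, so one common pattern is to let every worker open the shapefiles itself through a Pool initializer. A rough, untested sketch (the per-feature logic from AreaInter is elided, and the file names are placeholders):

from multiprocessing import Pool
import osgeo.ogr

# Globals that every worker process fills in for itself via the initializer
ref_ds = seg_ds = ref_layer = seg_layer = None

def init_worker(reference, segmented):
    # Runs once per worker: open the shapefiles locally and keep the
    # datasources alive so the layers stay valid
    global ref_ds, seg_ds, ref_layer, seg_layer
    ref_ds = osgeo.ogr.Open(reference)
    seg_ds = osgeo.ogr.Open(segmented)
    ref_layer = ref_ds.GetLayer()
    seg_layer = seg_ds.GetLayer()

def process_reference_object(index):
    ref_feature = ref_layer.GetFeature(index)
    # ... same per-feature logic as in AreaInter, using ref_layer and seg_layer ...
    return "%s\n" % index  # placeholder for the real output line

if __name__ == '__main__':
    reference, segmented = 'reference.shp', 'segmented.shp'  # placeholder paths
    count = osgeo.ogr.Open(reference).GetLayer().GetFeatureCount()
    pool = Pool(initializer=init_worker, initargs=(reference, segmented))
    with open('AreaInter_out.txt', 'w') as file_out:
        for line in pool.imap_unordered(process_reference_object, range(count)):
            file_out.write(line)
    pool.close()
    pool.join()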
Threading can help to a degree, but first you should make sure you can't simplify the algorithm. If you're checking each of 2000 reference polygons against 7000 segmented polygons (perhaps I misunderstood), then you should start there. Stuff that runs in O(n^2) is going to be slow, so maybe you can prune away things that will definitely not intersect or find some other way to speed things up. Otherwise, running multiple processes or threads will only improve things linearly when your data grows geometrically.
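As an illustration of the pruning idea, a spatial index such as Shapely's STRtree can cut the candidate set from all 7000 segmented polygons down to the few whose bounding boxes overlap each reference polygon. This is only a sketch with toy data; note that in Shapely 1.x (which matches the Python 2 code above) query() returns the candidate geometries, while in Shapely 2.x it returns integer indices instead:

from shapely.geometry import box
from shapely.strtree import STRtree

# Toy stand-ins for the real reference and segmented polygons
seg_polygons = [box(0, 0, 1, 1), box(5, 5, 6, 6), box(0.5, 0.5, 1.5, 1.5)]
ref_polygon = box(0.2, 0.2, 1.2, 1.2)

tree = STRtree(seg_polygons)  # build the spatial index once, outside the reference loop

# Shapely 1.x: query() yields the candidate geometries whose bounding boxes
# intersect ref_polygon (Shapely 2.x returns integer indices instead)
for seg_polygon in tree.query(ref_polygon):
    print(ref_polygon.intersection(seg_polygon).area)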
