I have a large panel dataset of countries. I want to drop most of the countries and keep only a few, for a certain range of years. What would be the appropriate command?
I can't see your code, but you might be able to use boolean masks:
newdf = df[(df["Countries"] == "Ireland") | (df["Countries"] == "South Africa")]
newdf = newdf[newdf["Year"] == 2011]
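If there are many countries or years to keep, isin expresses the same filter more compactly. A minimal sketch, assuming columns named "Countries" and "Year" as above (the country names and year range are placeholders):

keep_countries = ["Ireland", "South Africa"]        # placeholder country names
keep_years = list(range(2010, 2015))                # placeholder year range

newdf = df[df["Countries"].isin(keep_countries) & df["Year"].isin(keep_years)]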
OpCo_code = spark.sql(f""" SELECT OPCOCode
FROM delta./{SALES_ORG_REFERENCE_OBJECT}
WHERE SalesOrganisationCode = "{SalesOrg_row['SalesOrganisationCode']}" AND RegionID ="{SalesOrg_row['RegionID']}" """).collect()[0][0]
Output: HP
I am unable to pick up the data in the OPCOCode column when it spans multiple rows.
I am expecting the output:
HP, IC, UF
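The query itself is probably fine; .collect()[0][0] keeps only the first column of the first row, which is why only HP comes back. A minimal sketch of collecting every matching OPCOCode instead, assuming a PySpark session and the same table and variables as above (the backtick-quoted Delta path is an assumption about how the table is referenced):

rows = spark.sql(f"""
    SELECT OPCOCode
    FROM delta.`{SALES_ORG_REFERENCE_OBJECT}`
    WHERE SalesOrganisationCode = "{SalesOrg_row['SalesOrganisationCode']}"
      AND RegionID = "{SalesOrg_row['RegionID']}"
""").collect()

# Keep every matching value instead of only the first row
opco_codes = [row["OPCOCode"] for row in rows]
print(", ".join(opco_codes))   # e.g. HP, IC, UF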
Is it possible to somehow put before.roles and after.roles into two lists, then compare them and subtract not just their lengths but the elements themselves, so that the remaining roles can be output later?
example
To compare the lengths of two lists, you can:
>>> print(len(after.roles)-len(before.roles))
3
In this example, 3 new roles were gained.
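To get the roles themselves rather than just the count, you can diff the two lists. A minimal sketch, assuming before and after are discord.py Member objects (as in an on_member_update handler):

gained = [role for role in after.roles if role not in before.roles]
lost = [role for role in before.roles if role not in after.roles]

print("Gained:", [role.name for role in gained])
print("Lost:", [role.name for role in lost])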
I have a column with this kind of data:
I want to count how many times a value occurs in a row. The data is a string, so I want to convert '63,63,63,63,63,63,63,63,63,63,63' into ['63','63','63'...].
Is there any way to do this quickly?
Thanks
If the given string is s:
l = s.split(',')
l is the required list.
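If the values live in a pandas column (the question mentions rows and a column), the same split can be applied column-wise and the elements counted per row. A minimal sketch with a hypothetical column name "values":

import pandas as pd

df = pd.DataFrame({"values": ["63,63,63", "63,63,63,63,63"]})   # placeholder data

# Split each string into a list, then count how many times '63' appears per row
df["value_list"] = df["values"].str.split(",")
df["count_63"] = df["value_list"].apply(lambda xs: xs.count("63"))
print(df)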
What is the best way to replace all the null (NaN) values in a pandas DataFrame with the value 0?
Also, is it possible to do this with a for loop?
You can simply use:
df.fillna(0)
Note that fillna returns a new DataFrame by default, so reassign the result (or pass inplace=True).
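A minimal runnable sketch of both approaches the question asks about (the column names and values are placeholders); the loop works, but the vectorised fillna is preferable:

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 5.0, np.nan]})

# Vectorised: fillna returns a new DataFrame, so reassign it
df = df.fillna(0)

# For-loop equivalent (works, but slower and unnecessary here):
# for col in df.columns:
#     df[col] = df[col].fillna(0)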
df['Total']= df.iloc[3:5].sum(axis=1)
returns NaN for most values, why is this? They are all integers.
Also is there a better way of doing this?
You need to slice columns, not rows. df.iloc[3:5] selects rows 3 and 4, so the row sums exist only for those two index labels and every other row becomes NaN when the result is assigned back. Use a column slice instead:
df['Total'] = df.iloc[:, 3:5].sum(axis=1)
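A minimal sketch of the difference, with placeholder column names:

import pandas as pd

df = pd.DataFrame({"a": range(6), "b": range(6), "c": range(6),
                   "d": range(6), "e": range(6)})

df.iloc[3:5]                                # rows 3 and 4, every column
df.iloc[:, 3:5]                             # every row, columns 'd' and 'e'

df["Total"] = df.iloc[:, 3:5].sum(axis=1)   # one value per row, no NaN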