Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 days ago.
I want to build a Python dictionary from a multiline string. Given the multiline string below, I expect the dictionary shown after it.
ABC 100
ABC 1.1.1.127
ABC 1.1.1.109
MNO 200
MNO 1.1.1.140
MNO 1.1.1.127
vpls_dict = {
"1.1.1.127" : { "routing_instances" : ["ABC", "MNO"], "vlans" : ["100", "200"] },
"1.1.1.109" : { "routing_instances" : ["ABC"], "vlans" : ["100"] },
"1.1.1.140" : { "routing_instances" : ["MNO"], "vlans" : ["200"] }
}
You can make use of pandas here: first read your file as a CSV, then filter and group the DataFrame by IP, and finally export it as a dict:
import pandas as pd
df = pd.read_csv("your_file.txt", sep=r"\s+", engine="python", header=None)
This will give you:
0 1
0 ABC 100
1 ABC 1.1.1.127
2 ABC 1.1.1.109
3 MNO 200
4 MNO 1.1.1.140
5 MNO 1.1.1.127
Then filter the IPs using a regex and merge the IP and non-IP parts before grouping by IP:
import re
ip_pat = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}') # regex pattern to match IPs
ip_match = df[1].str.match(ip_pat)
df_out = df[ip_match].merge(df[~ip_match], on=0)
df_out.columns = ["routing_instances", "ip", "vlans"]
df_out.groupby("ip").agg(list).to_dict(orient='index')
Output:
{'1.1.1.109': {'routing_instances': ['ABC'], 'vlans': ['100']},
'1.1.1.127': {'routing_instances': ['ABC', 'MNO'], 'vlans': ['100', '200']},
'1.1.1.140': {'routing_instances': ['MNO'], 'vlans': ['200']}}
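If you would rather avoid pandas, the same grouping can be done with the standard library. Here is a minimal sketch, assuming the input is already in a string named data and reusing the same IP regex:
import re
from collections import defaultdict

data = """ABC 100
ABC 1.1.1.127
ABC 1.1.1.109
MNO 200
MNO 1.1.1.140
MNO 1.1.1.127"""

ip_pat = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}')
rows = [line.split() for line in data.splitlines()]

# first pass: remember the VLAN declared for each routing instance
vlan_of = {name: value for name, value in rows if not ip_pat.match(value)}

# second pass: group routing instances and VLANs under each IP
vpls_dict = defaultdict(lambda: {"routing_instances": [], "vlans": []})
for name, value in rows:
    if ip_pat.match(value):
        vpls_dict[value]["routing_instances"].append(name)
        vpls_dict[value]["vlans"].append(vlan_of[name])

print(dict(vpls_dict))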
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I have a DataFrame as follows. The values in each column are separated by ';', and each item is assigned a numeric value. I want to sort the items based on that numeric value.
import pandas as pd

ab = {
'Category': ['AD', 'AG'],
'data1': ['a, b=4; b=3; dk=1; kc/d2=8', 'km=4; df,md=8; lko=10; cog=12'],
'data2': ['a=9; kd=1; mn=1; fg=3', 'kl=6; md=1; jhk=3, b &j=4; ghg=7']
}
df1 = pd.DataFrame(ab)
df1
Category data1 data2
0 AD a, b=4; b=3; dk=1; kc/d2=8 a=9; kd=1; mn=1; fg=3
1 AG km=4; df,md=8; lko=10; cog=12 kl=6; md=1; jhk=3, b &j=4; ghg=7
I want to sort the items in each column based on the value assigned to them. The expected output is:
Category data1 data2
0 AD kc/d2=8; a, b=4; b=3; dk=1 a=9; fg=3; kd=1; mn=1;
1 AG cog=12; lko=10; df,md=8; km=4 ghg=7; kl=6; b &j=4; jhk=3; md=1
You can try:
df1[df1.filter(like='data').columns] = df1.filter(like='data').applymap(
    lambda s: '; '.join(sorted(s.split('; '), key=lambda x: x[-1], reverse=True)))
If there is a possibility that you have numbers > 9:
df1[df1.filter(like='data').columns] = df1.filter(like='data').applymap(
    lambda s: '; '.join(sorted(s.split('; '), key=lambda x: int(x.rsplit('=', 1)[-1]), reverse=True)))
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have an input file with the extension .in. The file is as follows:
% x R delta Uc
0.9800000 0.4040404 0.1306061 1.0000000
1.9800000 0.3393939 0.2311111 1.0000000
2.9800000 0.2585859 0.3517172 0.9924242
3.9800000 0.1696970 0.4723232 0.9924242
4.9800000 0.0808081 0.5929293 0.9924242
5.9800000 0.0000000 0.7135354 0.9696970
6.9800000 0.0000000 0.7738384 0.9015152
7.9800000 0.0000000 0.8341414 0.8333333
8.9800000 0.0000000 0.9145455 0.7575758
10 0 1.0133 .7064
11 0 1.1105 .6654
12 0 1.2077 .6312
13 0 1.3049 .6023
14 0 1.4021 .5775
15 0 1.4993 .5561
16 0 1.5965 .5373
17 0 1.6937 .5207
18 0 1.7909 .5060
19 0 1.8881 .4928
How can I put this information into arrays like:
x = [0.980000, 1.980000, 2.980000 .....]
R = [0.40404, 0.33939, 0.231111 .....]
delta = ....
Uc = ....
This is a simple way to do it:
import pandas as pd

# '%' marks the header line; r'\s+' handles the runs of spaces between columns
df = pd.read_csv('your_file.in', sep=r'\s+', comment='%', header=None,
                 names=['x', 'R', 'delta', 'Uc'])
# one list per column, keyed by column name
columns = {name: list(df[name]) for name in df.columns}
This layout seems to be easily parseable with Python's csv module.
import csv

with open("file.in") as f:
    reader = csv.reader(f, delimiter=" ", skipinitialspace=True)
    next(reader)  # skip the '% x R delta Uc' header line
    # transpose rows into columns and convert each value to float
    x, R, delta, Uc = (list(map(float, col)) for col in zip(*reader))
No dependencies needed!
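If numpy is already available, numpy.loadtxt can read this layout in one call; a small sketch, assuming the file is named file.in:
import numpy as np

# comments='%' skips the '% x R delta Uc' header line;
# unpack=True transposes the columns into separate arrays
x, R, delta, Uc = np.loadtxt("file.in", comments="%", unpack=True)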
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Hello all, I am just learning dictionaries in Python. I have some data and would like to know how to create a nested dictionary from it; please explain using a for loop. The data, which contains duplicate values, is in an Excel file and is shown below.
Name Account Dept
John AC Lab1
Dev AC Lab1
Dilip AC Lab1,Lab2
Sat AC Lab1,Lab2
Dina AC Lab3
Surez AC Lab4
I need the result in the format below:
{
'AC': {
'Lab1': ['John', 'Dev', 'Dilip', 'Sat'],
'Lab2': ['Dilip','Sat'],
'Lab3': ['Dina'],
'Lab4': ['Surez']
}
}
Something like this should get you closer to an answer but I'd need your input file to optimize it:
import xlrd
from collections import defaultdict

wb = xlrd.open_workbook("<your filename>")
sheet = wb.sheet_by_name(wb.sheet_names()[0])

# {account: {lab: [names]}}
d = defaultdict(lambda: defaultdict(list))

for row_idx in range(1, sheet.nrows):  # start at 1 to skip the header row
    name = sheet.cell_value(row_idx, 0)
    account = sheet.cell_value(row_idx, 1)
    depts = sheet.cell_value(row_idx, 2)
    # a person can be in several labs, separated by commas
    for lab in depts.split(","):
        d[account][lab.strip()].append(name)
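To see the nested-dictionary logic on its own, here is the same loop run over the sample rows hard-coded as tuples instead of read from the Excel file (a sketch of the for-loop part only):
from collections import defaultdict

rows = [
    ("John", "AC", "Lab1"),
    ("Dev", "AC", "Lab1"),
    ("Dilip", "AC", "Lab1,Lab2"),
    ("Sat", "AC", "Lab1,Lab2"),
    ("Dina", "AC", "Lab3"),
    ("Surez", "AC", "Lab4"),
]

result = defaultdict(lambda: defaultdict(list))
for name, account, depts in rows:
    # a person can belong to several labs, separated by commas
    for lab in depts.split(","):
        result[account][lab].append(name)

print({acct: dict(labs) for acct, labs in result.items()})
# {'AC': {'Lab1': ['John', 'Dev', 'Dilip', 'Sat'], 'Lab2': ['Dilip', 'Sat'],
#         'Lab3': ['Dina'], 'Lab4': ['Surez']}}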
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have a txt file with several columns. See the sample data below.
25 180701 1 12
25 180701 2 15
25 180701 3 11
25 180702 1 11
25 180702 2 14
25 180722 2 14
14 180701 1 11
14 180701 2 13
There are no column headers. Column 1 is ID, column 2 is date, column 3 is hour, and column 4 is value. I am trying to look up the ID 25 in column 1 and extract the data for all hours during the period 180701 to, say, 180705, with all values. The result would be a new text file with the following data:
25 180701 1 12
25 180701 2 15
25 180701 3 11
25 180702 1 11
25 180702 2 14
Any help in R or Python is appreciated. Thanks!
When reading the file with read.table (or read.csv for comma-separated files), set header = FALSE and supply col.names:
df1 <- read.table("file.txt", header = FALSE,
                  col.names = c("ID", "date", "Hour", "value"))
and then subset the values:
subset(df1, ID == 25 & (date %in% 180701:180705), select = 1:4)
In R, readr::read_delim() has a col_names parameter that you can set to F:
> readr::read_delim('hi;1;T\nbye;2;F', delim = ';', col_names = F)
# A tibble: 2 x 3
X1 X2 X3
<chr> <int> <lgl>
1 hi 1 TRUE
2 bye 2 FALSE
In Python, try this:
import pandas as pd

# Read the whitespace-separated file without headers ('header=None' is explicit)
df = pd.read_csv('test.csv', sep=r'\s+', header=None)
df

# Then rename the generated columns
df2 = df.rename({0: 'ID', 1: 'Date', 2: 'Hours', 3: 'Value'}, axis='columns')
df2
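To actually extract the rows the question asks for and write them out, a short follow-up sketch (assuming the renamed df2 from above; the output file name filtered.txt is my own choice):
# keep ID 25 for dates 180701 through 180705
result = df2[(df2['ID'] == 25) & (df2['Date'].between(180701, 180705))]
result.to_csv('filtered.txt', sep=' ', header=False, index=False)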
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
I have parsed the JSON with json.load.
Now I want to query that JSON dict using SQL-like commands. Does anything like this exist in Python? I tried Pynq (https://github.com/heynemann/pynq), but that didn't work too well, and I've also looked into pandas, though I'm not sure that's what I need.
Here is a simple pandas example with Python 2.7 to get you started...
import json
import pandas as pd
jsonData = '[{"name": "Frank", "age": 39}, {"name": "Mike", "age": 18}, {"name": "Wendy", "age": 45}]'
# using json.loads because I'm working with a string for example
d = json.loads(jsonData)
# convert to pandas dataframe
dframe = pd.DataFrame(d)
# Some example queries
# calculate mean age
mean_age = dframe['age'].mean()
# output - mean_age
# 34.0
# select under 40 participants
young = dframe.loc[dframe['age']<40]
# output - young
# age name
#0 39 Frank
#1 18 Mike
# select Wendy from data
wendy = dframe.loc[dframe['name']=='Wendy']
# output - wendy
# age name
# 2 45 Wendy
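If you want the filters to read a bit more like SQL, pandas also offers DataFrame.query(), which takes the condition as a string; a small sketch using the same dframe as above:
# same filters expressed as query strings
young = dframe.query('age < 40')
wendy = dframe.query('name == "Wendy"')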