Is it possible to reuse import code in Python?

There are several imports that are common to some of the files in my project. I would like to reuse this code by concentrating it in a single file and having just one import in the other files. Is that possible?
Or is there another way to avoid replicating the same import list across multiple files?

Yes, it's possible. You can create a Python file containing the imports and then import that file in your code.
For example:
ImportFile.py:
import pandas as pd
import numpy as np
import os

MainCode.py:
from ImportFile import *
# Here you can use pd, np, and os to complete your code

or:
from ImportFile import pd, np
# And then use pd and np
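A third option is to import the module itself and reach the libraries through it as attributes, which avoids wildcard imports entirely. A minimal sketch, using the ImportFile.py above:

import ImportFile

df = ImportFile.pd.DataFrame({"a": [1, 2, 3]})  # pandas via the shared module
arr = ImportFile.np.arange(10)                  # numpy likewise
print(ImportFile.os.getcwd())                   # and os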

Related

Can someone explain the code in the second-to-last and last lines? I am new to Python.

import os
import geopandas as gpd
import pandas as pd

folder = r"\\IP_Address\winshare\Affected_Area_Files"
file = os.listdir(folder)
# Keep only the shapefiles, building their full paths
path = [os.path.join(folder, i) for i in file if ".shp" in i]

# The two statements asked about: read every shapefile in path, reproject
# each one to the CRS (coordinate reference system) of the first shapefile,
# concatenate them into one GeoDataFrame, and write the result out as a
# single combined shapefile. (GeoDataFame was a typo for GeoDataFrame.)
gdf = gpd.GeoDataFrame(
    pd.concat([gpd.read_file(i).to_crs(gpd.read_file(path[0]).crs) for i in path],
              ignore_index=True),
    crs=gpd.read_file(path[0]).crs)
gdf.to_file(r"\\IP_Address\winshare\affected_area\combined-AffectedArea_Test.shp")
I am new to Python, and the code needs to be optimized for better performance.
It was running fine, but now it is taking quite long to execute.
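One likely culprit for the slowdown is visible in the comprehension: gpd.read_file(path[0]) runs once per shapefile (plus once more for the crs= argument) just to look up the target CRS. A minimal sketch of the fix, reading the first file a single time and caching its CRS, with the same folder and paths as above:

import os
import geopandas as gpd
import pandas as pd

folder = r"\\IP_Address\winshare\Affected_Area_Files"
path = [os.path.join(folder, i) for i in os.listdir(folder) if ".shp" in i]

# Read the first shapefile once and remember its CRS,
# instead of re-reading it for every file in the loop
target_crs = gpd.read_file(path[0]).crs

frames = [gpd.read_file(p).to_crs(target_crs) for p in path]
gdf = gpd.GeoDataFrame(pd.concat(frames, ignore_index=True), crs=target_crs)
gdf.to_file(r"\\IP_Address\winshare\affected_area\combined-AffectedArea_Test.shp")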

Pandas shows a bunch of errors when running simple code

I am trying to write a program that chooses a recipe for me to cook at random, so I found a CSV file to get the recipes from, but when I tried running some code this happened.
I'm new to pandas, so sorry if this is a stupid question.
import random
import pandas as pd
import numpy as np

df = pd.read_csv('recipes.csv')
# pd.isnull() called with no argument raises a TypeError;
# it needs something to check, such as the DataFrame itself:
print(pd.isnull(df))
https://github.com/cweber/cookbook/blob/master/recipes.csv (this is the link to the csv file)
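For the stated goal of choosing a recipe at random, a minimal sketch once the CSV loads cleanly (assuming it is saved locally as recipes.csv):

import pandas as pd

df = pd.read_csv('recipes.csv')
print(df.sample(n=1))  # DataFrame.sample returns randomly chosen rows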

Python master import file

I have several scripts in a project folder, most of which use the same handful of standard libraries and modules. Instead of having to reiterate in every single script
import pandas as pd
import numpy as np
import datetime
import re
etc
etc
is it possible for me to place all import statements in a masterImports.py file and simply import masterImports at the top of each script ?
Yes, you can. The basic idea is to import all the libraries in one file, then import that file.
An example:
masterImports.py
import pandas as pd
import numpy as np
import datetime
import re
etc
etc
otherFile.py
import masterImports as mi
print(mi.datetime.datetime(2021,7,20))
Or you could use wildcard imports -
from masterImports import * # OR from masterImports import important_package
print(datetime.datetime(2021,7,20))
Be careful with wildcard (asterisk) imports, though, because they can cause name clashes. Try the explicit import masterImports as mi form above, and you will see that there is no error.
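As a hypothetical illustration of such a clash (a.py, b.py, and main.py are made-up files):

# a.py
def process():
    return "a"

# b.py
def process():
    return "b"

# main.py
from a import *
from b import *
print(process())  # prints "b": the second wildcard import silently shadowed a's process()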
It's possible, although not really the done thing. To use it, you'd need to do this at the top of each script:
from master_imports import *

CSV file does not exist

I am trying to pull the file "house_date.csv" and I am being unsuccessful because Python is stating that the file cannot be found. Is there a better way for me to figure out how I can load the file? I am also not sure which directory it is located in.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import math
pd.read_csv('house_data.csv')
This is the error message I am getting.
When you write pd.read_csv('house_data.csv'), pandas expects the CSV file to be in Python's current working directory.
Try replacing house_data.csv with the complete path (for example C:/house_data.csv if the file is at the root of C:).
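A quick way to see where that working directory actually is, and a sketch of loading by full path (the C:/ location is just an example):

import os
import pandas as pd

print(os.getcwd())  # the directory relative paths are resolved against

# If house_data.csv is not there, pass its full path instead:
df = pd.read_csv(r"C:/house_data.csv")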

How to put many numpy files in one big numpy file, file by file?

I have 166600 numpy files, and I want to put them into one numpy file, file by file.
I mean that the big file must be built up from the beginning: the first file is read and written, so the big file contains only the first file; then the second file is read and appended, so the big file contains the first two files, and so on.
import matplotlib.pyplot as plt
import numpy as np
import glob
import os, sys

fpath = "path_Of_my_final_Big_File"
npyfilespath = "path_of_my_numpy_files"
os.chdir(npyfilespath)
npfiles = glob.glob("*.npy")
npfiles.sort()
all_arrays = np.zeros((166601, 8000))
for i, npfile in enumerate(npfiles):
    all_arrays[i] = np.load(os.path.join(npyfilespath, npfile))
np.save(fpath, all_arrays)
If I understand your question correctly, you can use numpy.concatenate for this:
import matplotlib.pyplot as plt
import numpy as np
import glob
import os, sys

fpath = "path_Of_my_final_Big_File"
npyfilespath = "path_of_my_numpy_files"
os.chdir(npyfilespath)
npfiles = glob.glob("*.npy")
npfiles.sort()
all_arrays = []
for i, npfile in enumerate(npfiles):
    all_arrays.append(np.load(os.path.join(npyfilespath, npfile)))
np.save(fpath, np.concatenate(all_arrays))
Depending on the shape of your arrays and the intended concatenation, you might need to specify the axis parameter of concatenate.
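If the file-by-file requirement is about memory rather than just ordering, a hedged alternative sketch: numpy.lib.format.open_memmap creates the output .npy on disk up front, so each source array is written through as soon as it is read, without holding all 166600 arrays in RAM. This assumes every file holds a 1-D array of length 8000; the float64 dtype is an assumption to adjust to your data.

import glob
import os
import numpy as np

fpath = "path_Of_my_final_Big_File.npy"
npyfilespath = "path_of_my_numpy_files"

npfiles = sorted(glob.glob(os.path.join(npyfilespath, "*.npy")))

# Allocate the combined .npy on disk, then fill it one row per source file
big = np.lib.format.open_memmap(fpath, mode="w+",
                                dtype=np.float64,  # assumption: match your files
                                shape=(len(npfiles), 8000))
for i, npfile in enumerate(npfiles):
    big[i] = np.load(npfile)  # written to disk as we go
big.flush()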
