I am developing a project which has about a dozen different files. At the top of each file I have almost identical lines that import the same libraries and initialize a connection to my DB:
import re
import urllib2
import datetime
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.sql import *
from sqlalchemy.orm.collections import *
from table_def import Team, Player, Box_Score, Game, Name_Mapper
from datetime import timedelta
from bs4 import BeautifulSoup as bs
from datetime import date, datetime, timedelta
import numpy as np
import argparse
engine = create_engine('sqlite:///ncaaf.db', echo=False)
md = MetaData(bind=engine)
Session = sessionmaker(bind=engine)
s = Session()
teams_table = Table("teams", md, autoload=True)
games_table = Table("games", md, autoload=True)
box_scores_table = Table("box_scores", md, autoload=True)
players_table = Table("players", md, autoload=True)
names_table = Table("names", md, autoload=True)
Can I make a module that imports all these modules and initializes this DB connection? Is that standard? Or is it dumb for some reason I am not realizing?
When you import something into your module, it becomes available as if it were declared in your module itself. So you can do what you want like this:
In common_imports.py:
from datetime import date, datetime, timedelta
import numpy as np
import argparse
...
In main_module.py:
from common_imports import *
a = np.array([]) # works fine
However, this is not recommended, since explicit is better than implicit. If you do this, someone else (or even your future self) won't understand where all these imported names come from. Instead, try to either organize your imports better or decompose your module into several smaller ones. For example, your import list mixes argparse, SQL machinery, and numpy, and I can't imagine a single module that would need all of these unrelated libraries.
If you create a package, you can put the imports in its __init__.py file, although I would suggest leaving them where they are to keep the code readable.
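A common middle ground is to keep the library imports explicit in each file and move only the database setup into one shared module. A minimal sketch, assuming the same ncaaf.db layout and the same (pre-1.4, bind/autoload style) SQLAlchemy API as in the question; the module name db.py is just an example:
In db.py:
# Shared database setup, created once and imported wherever it is needed
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///ncaaf.db', echo=False)
md = MetaData(bind=engine)
Session = sessionmaker(bind=engine)

# Reflect the existing tables here instead of in every script
teams_table = Table("teams", md, autoload=True)
games_table = Table("games", md, autoload=True)
box_scores_table = Table("box_scores", md, autoload=True)
players_table = Table("players", md, autoload=True)
names_table = Table("names", md, autoload=True)
In any other file:
import numpy as np                    # library imports stay explicit per file
from db import Session, teams_table   # only the DB objects come from the shared module

s = Session()
teams = s.execute(teams_table.select()).fetchall()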
import streamlit as st
import pandas as pd
import numpy as np
import requests
import tweepy
import config
import psycopg2
import psycopg2.extras
import plotly.graph_objects as go
auth = tweepy.OAuthHandler(config.TWITTER_CONSUMER_KEY,
config.TWITTER_CONSUMER_SECRET)
auth.set_access_token(config.TWITTER_ACCESS_TOKEN,
config.TWITTER_ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
There is a problem with my code: it does not run with the Twitter keys. The error says the config module has no such attributes.
The config module that you are attempting to import and read from does not contain what you want.
TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET are not constants that you can get from a module; they are values that you must supply yourself. There is probably a piece of code missing at the start of your application that looks like this:
config = {
'TWITTER_CONSUMER_KEY': 'ENTER YOUR TWITTER CONSUMER KEY',
'TWITTER_CONSUMER_SECRET': 'ENTER YOUR TWITTER CONSUMER SECRET'
}
Take a look at this article for more help. Good luck!
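Since the code in the question does import config and then reads config.TWITTER_CONSUMER_KEY as a module attribute, another option (a sketch; the placeholder strings are yours to replace with real credentials) is to create a config.py file next to the script and define the values at module level:
In config.py:
# Module-level constants read by the main script via "import config"
TWITTER_CONSUMER_KEY = 'your-consumer-key'
TWITTER_CONSUMER_SECRET = 'your-consumer-secret'
TWITTER_ACCESS_TOKEN = 'your-access-token'
TWITTER_ACCESS_TOKEN_SECRET = 'your-access-token-secret'
With that file in place, the import config line and the attribute lookups in the question work unchanged.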
I'm trying to import the sqlalchemy library:
import pandas as pd
import sqlalchemy as sa
engine = sa.create_engine('postgresql://{login}:{password}@database.org:5432/database'.format(login='login', password='password'))
engine_ch = sa.create_engine('clickhouse://database@ip')
but this ends up with an error
ImportError: cannot import name '_literal_as_label_reference'
I've checked this post and installed the right versions of both libraries; there were no errors during installation, but the import error remains.
sqlalchemy version is 1.3.24, sqlalchemy-clickhouse version is 0.1.5.post0
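The missing name lives in SQLAlchemy's internals, so this error usually means the interpreter is actually loading a newer SQLAlchemy than the 1.3.24 you installed, for example from a different environment. A small diagnostic sketch to confirm which installation is really being imported:
# Check which SQLAlchemy the running interpreter actually picks up
import sqlalchemy
print(sqlalchemy.__version__)   # expected: 1.3.24, per the versions above
print(sqlalchemy.__file__)      # the path shows which environment it comes from
If the printed version or path is not the one you installed, the problem is the environment rather than the code.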
I have the code below which connects to a MongoDB database, selects the specified JSON documents, flattens them, and exports them as a CSV.
My problem is that some of the JSON documents in the MongoDB database are absolutely huge, with thousands of rows, so what I am trying to do is filter the table down so that I only bring in data from the last 7 days.
from pymongo import MongoClient
import pandas as pd
import os, uuid, sys
import collections
from azure.storage.filedatalake import DataLakeServiceClient
from azure.core._match_conditions import MatchConditions
from azure.storage.filedatalake._models import ContentSettings
from pandas import json_normalize
mongo_client = MongoClient("connstring")
db = mongo_client.nhdb
table = db.Report
document = table.find()
mongo_docs = list(document)
mongo_docs = json_normalize(mongo_docs)
mongo_docs.to_csv("Report.csv", sep = ",", index=False)
Any help will be much appreciated.
Note: I know a way to do it in Azure Data Factory using the expression below; however, I am not sure how to go about it in Python:
{"createdDatetime":{$gt: ISODate("#{adddays(utcnow(),-7)}")}}
In Python, create a datetime object to use as a filter; for example, this selects the last 7 days:
from datetime import datetime, timedelta
document = table.find({'createdDatetime': {'$gt': datetime.utcnow() - timedelta(days=7)}})
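Dropped into the script from the question, the filter replaces the unrestricted find(); a sketch under the same assumptions (the same connection string and nhdb.Report collection, and a createdDatetime field stored as a BSON date):
from datetime import datetime, timedelta
from pymongo import MongoClient
from pandas import json_normalize

mongo_client = MongoClient("connstring")
table = mongo_client.nhdb.Report

# Only fetch documents created in the last 7 days
cutoff = datetime.utcnow() - timedelta(days=7)
document = table.find({"createdDatetime": {"$gt": cutoff}})

mongo_docs = json_normalize(list(document))
mongo_docs.to_csv("Report.csv", sep=",", index=False)
Because the filtering now happens server-side in MongoDB, only the last week's documents are transferred and flattened.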
I am running this query in Python. I get the error "Invalid string literal" in the regular expression part. I know the regex itself is fine; I am just not sure what syntax is missing here. Any help would be appreciated.
import pandas as pd
import numpy as np
import datetime
from time import gmtime, strftime
import smtplib
import sys
from IPython.core.display import HTML
PROJECT = 'server'
queryString = '''
SELECT
mn as mName,
dt as DateTime,
ip as LocalIPAddress,
REGEXP_EXTRACT(path, 'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName,
FROM Agent.Logs
'''
You're missing an r (a raw string prefix) before the regex:
REGEXP_EXTRACT(path, r'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName
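In context, the r goes inside the query string itself so that BigQuery treats the backslashes in the character classes literally; a sketch of the corrected queryString (the stray trailing comma before FROM is also dropped, and the outer Python string is made raw so the backslashes reach BigQuery unchanged):
queryString = r'''
SELECT
  mn as mName,
  dt as DateTime,
  ip as LocalIPAddress,
  REGEXP_EXTRACT(path, r'ActName:([\s\S\w\W]*?)ActDomain:') AS ActName
FROM Agent.Logs
'''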
I am trying to understand how to manage a module with __all__. For example, I have the following code structure:
main.py
|=> /database
|=> __init__.py
|=> engine.py (with variables engine, session, etc.)
Now I want to be able to import the session and engine instances directly from the database package, like:
from database import session
I tried adding the line __all__ = ['session'] or __all__ = ['engine.session'] to __init__.py, but when I try to do the import I get an exception: AttributeError: 'module' object has no attribute 'engine.session'.
Is there any way to achieve the desired behavior?
Listing names in __all__ does not, by itself, import anything into a module. All it does is list the names to import from that module when you use the from database import * syntax.
Import session into database/__init__.py:
from .engine import session
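Putting it together, a minimal sketch of the package, assuming the submodule is database/engine.py as the layout above suggests (the connection URL is just a placeholder); __all__ is optional and only controls what from database import * exposes:
In database/engine.py:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///example.db')   # placeholder URL
Session = sessionmaker(bind=engine)
session = Session()
In database/__init__.py:
from .engine import engine, session

__all__ = ['engine', 'session']   # only affects "from database import *"
In main.py:
from database import session     # now works as desired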