Python: Can't insert into MongoDB

I have been trying to insert data into my MongoDB collection but it's not working:
try:
    client = MongoClient(uri,
                         connectTimeoutMS=30000,
                         socketTimeoutMS=None)
    print("Connection successful")
    print()
except:
    print("Unsuccessful")

print(client)
print()
db = client["<database>"]
collection = db["<collection>"]
print(db)
print()
print(collection)
print()
doc = {"test": "success"}
collection.insert_one(doc)
print("success")
The uri variable is my connection string copied from MongoDB.
Everything works fine (even the db and collection variables print out fine) until I get to the line collection.insert_one(doc).
When I run it, execution just stops at that line and I get a timeout error after a while. I am using the latest versions of Python and PyMongo.

So I resolved the issue:
1) I needed to configure the IP whitelist entries (https://docs.atlas.mongodb.com/security-whitelist/).
2) I needed to get off my university's Wi-Fi, because it blocks certain things.
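For anyone hitting the same hang, a quick way to tell a blocked network from a code bug is to bound PyMongo's server selection. The helper below is a sketch of my own (the function name is mine; serverSelectionTimeoutMS and the ping command are standard PyMongo/MongoDB features):

```python
def check_connection(uri, timeout_ms=5000):
    # serverSelectionTimeoutMS bounds how long PyMongo waits to find a
    # reachable server (the default is 30 s, which is why the hang looks
    # like a freeze). Imports are kept local so the sketch is self-contained.
    from pymongo import MongoClient
    from pymongo.errors import ServerSelectionTimeoutError
    client = MongoClient(uri, serverSelectionTimeoutMS=timeout_ms)
    try:
        client.admin.command("ping")  # forces an actual round trip to the server
        return True
    except ServerSelectionTimeoutError:
        return False
```

If this returns False quickly, the problem is reachability (whitelist, firewall, campus Wi-Fi) rather than the insert itself.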

There might be a few things at play:
check whether the URI is correct
check whether you have write rights on the DB
from pymongo import MongoClient

try:
    client = MongoClient(uri,
                         connectTimeoutMS=30000,
                         socketTimeoutMS=None)
    print("Connection successful")
except:
    print("Unsuccessful")

db = client["<database>"]
doc = {"test": "success"}
db["<collection>"].insert_one(doc)


How to safely store MySQL database credentials in my code

try:
    connection = mysql.connector.connect(host='localhost', database='USER',
                                         user='root', password='password')
    sql_select_Query = "select * from AuthSys WHERE mac = '%s'" % mac
    cursor = connection.cursor()
    cursor.execute(sql_select_Query)
    row_headers = [x[0] for x in cursor.description]
    records = cursor.fetchall()
except mysql.connector.Error as e:
    return [e]
finally:
    if connection.is_connected():
        cursor.close()
        connection.close()
        print("MySQL connection is closed")
I want to store host='localhost', database='USER', user='root', password='password' securely in my Python project, so that nobody who uses my script gets access to my database.
Note: I am new to Stack Overflow. If I wrote something wrong, please correct me. Thanks in advance.
You should probably put the credentials in a separate config file that isn't deployed with the project, and pass the path of this file to the main entry point of the application, something like this:
python main.py --config=/your-path/to/your-config-file.ini
You will also need to parse this --config argument and then read and parse the your-config-file.ini file.
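A minimal sketch of that parsing step (the section name [mysql] and the flag handling here are my assumptions, not part of the question):

```python
import argparse
import configparser

def load_db_config(argv=None):
    # Parse --config, then read the credentials from the ini file it points to.
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True)
    args = parser.parse_args(argv)
    cfg = configparser.ConfigParser()
    cfg.read(args.config)
    # Return the section as a plain dict, e.g. {"host": ..., "user": ...}
    return dict(cfg["mysql"])
```

mysql.connector.connect(**load_db_config()) then works with no credentials in source control (remember to keep the ini file out of version control, e.g. via .gitignore).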
If you don't have too many such settings, one common option is to get them from system environment variables:
user = os.environ["myuser"]
password = os.environ["mypassword"]
connection = mysql.connector.connect(host='localhost', database='USER', user=user, password=password)
See https://12factor.net/ factor 3.
I’d prefix all app settings environment names with something common, giving bkapp_user, bkapp_password.
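That prefixing idea can be wrapped in a tiny helper (the names here are illustrative):

```python
import os

def app_setting(name, prefix="bkapp_"):
    # Read one prefixed environment variable, e.g. bkapp_user, bkapp_password.
    # Raises KeyError if it is not set, which fails fast on misconfiguration.
    return os.environ[prefix + name]
```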

Where does the connection to the database need to be made?

I am creating a Twitter bot that searches Twitter for ticker symbols and then sends them to a database. The problem I have is that the list being sent never changes.
Maybe I have to keep reconnecting to the database, but I cannot figure out where my problem is. Can anyone figure out how to make the list of tickers be different every time?
def searchTwit():
    tweets = api.search("#stocks", count=100)
    return tweets

print("connecting to database")
# connecting to the database
conn = pyodbc.connect(
    "Driver={SQL Server};"
    "Server=..............;"
    "Database=master;"
    "Trusted_Connection=yes;")
cursor = conn.cursor()

tickList = []

def getTicker(tweets):
    for tweet in tweets:
        if "$" in tweet.text:
            x = tweet.text.split()
            for i in x:
                if i.startswith("$") and i[1].isalpha():
                    # str methods return new strings, so keep the results
                    i = i.strip(".").upper()
                    tickList.append(i)
                    # print(var_string)

def retrieveTickers():
    for i in tickList:
        cursor.execute('INSERT INTO master.dbo.TickerTable (TickerName) VALUES (?);', (i,))
        conn.commit()

# thing to run
print("about to do while")
while True:
    sleep(60 - time() % 60)
    print("searchtwit")
    theTweets = searchTwit()
    getTicker(theTweets)
    print("getting Tickers")
    retrieveTickers()
    print("sending tickers")
    print(tickList)
    tickList = []
    print(tickList)
You can connect to a remote database or to one on your local machine. Define which database you want to use: if the server address is 127.0.0.1:PORT, the database is running on your own machine (the port depends on which DBMS you use).
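As a sketch of what that looks like with pyodbc (the default server value and port 1433, SQL Server's default, are assumptions):

```python
def build_conn_str(server="127.0.0.1,1433", database="master"):
    # pyodbc/SQL Server use "host,port" (comma, not colon) in the Server
    # field; 1433 is the default port of a default SQL Server instance.
    return ("Driver={SQL Server};"
            "Server=%s;"
            "Database=%s;"
            "Trusted_Connection=yes;" % (server, database))
```

conn = pyodbc.connect(build_conn_str()) would then target the local machine; swap in a hostname to reach a remote server.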

How do I get Data to 'commit' from Python to SQL Server?

I have a local SQL Server instance running and am able to connect to it successfully. However, data is not transferring over from temp CSV files. I am using pyodbc for the server connection.
I've also tried pymssql but had worse results, so I've stuck with pyodbc. I've tried closing the cursor each time, or just at the end, but without luck.
Here is the code I am using. Toward the bottom are two different CSV writes. One is a temp file used to fill the SQL Server table. The other is for my personal use, to make sure I am actually gathering information at the moment; in the long term it will be removed so that only the temp CSV is used.
#_retry(max_retry=1, timeout=1)
def blocked_outbound_utm_scada():
    # OTHER CODE EXISTS HERE!!!
    # GET search results, add them to a temp CSV file, then send to MS SQL Server
    service_search_results_str = '/services/search/jobs/%s/results?output_mode=csv&count=0' % sid
    search_results = (_service.request(_host + service_search_results_str, 'GET',
                                       headers={'Authorization': 'Splunk %s' % session_key},
                                       body={})[1]).decode('utf-8')
    with tempfile.NamedTemporaryFile(mode='w+t', suffix='.csv', delete=False) as temp_csv:
        temp_csv.writelines(search_results)
    try:
        cursor.execute("BULK INSERT Blocked_Outbound_UTM_Scada FROM '%s' WITH ("
                       "FIELDTERMINATOR='\t', ROWTERMINATOR='\n', FirstRow = 2);" % temp_csv.name)
        conn.commit()
    except pyodbc.ProgrammingError:
        cursor.execute("CREATE TABLE Blocked_Outbound_UTM_Scada ("
                       "Date_Time varchar(25),"
                       "Src_IP varchar(225),"
                       "Desktop_IP varchar(225));")
        conn.commit()
    finally:
        cursor.execute("BULK INSERT Blocked_Outbound_UTM_Scada FROM '%s' WITH ("
                       "FIELDTERMINATOR='\t', ROWTERMINATOR='\n', FirstRow = 2);" % temp_csv.name)
        conn.commit()
        os.remove(temp_csv.name)
    with open(_global_path + '/blocked_outbound_utm_scada.csv', 'a', newline='') as w:
        w.write(search_results)
I'm just trying to get the information into SQL Server, but the code seems to be ignoring conn.commit(). Any help is appreciated in figuring out what is wrong.
Thanks in advance!
Try it without the conn.commit().
I do not understand why or how it works, but it seems to me as well that pyodbc ignores the commit clause.
Try changing the autocommit parameter in pymssql.connect():
conn = pymssql.connect(host=my_host, user=my_user, password=my_password, database=my_database, autocommit=True)
conn = pymssql.connect(host=my_host, user=my_user, password=my_password, database=my_database, autocommit=False)
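pyodbc accepts the same autocommit=True keyword at connect time. The effect can be illustrated with stdlib sqlite3, where isolation_level=None plays the same role (the table name below is made up for the sketch):

```python
import sqlite3

def insert_autocommit(db_path, value):
    # isolation_level=None puts sqlite3 in autocommit mode, analogous to
    # pymssql/pyodbc connect(..., autocommit=True): each statement becomes
    # durable immediately, with no explicit conn.commit() needed.
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("CREATE TABLE IF NOT EXISTS t (v TEXT)")
    conn.execute("INSERT INTO t (v) VALUES (?)", (value,))
    conn.close()  # closed without commit; the row is already persisted
```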

MySQL server has gone away python MySQLdb

In my Python script, I've subscribed to a web socket. Whenever data is received, I insert it into a MySQL db. There are about 100-200 queries per second. The problem is that it works for some time and then gives the error "error 2006: MySQL server has gone away".
I've increased max_allowed_packet up to 512M, but it didn't work.
Here's my code.
def db_entry(threadName, _data):
    _time = time.strftime('%Y-%m-%d %H:%M:%S')
    # print("starting new thread...")
    for data in _data:
        # print(data)
        sql = "INSERT INTO %s (Script_Name, Lot_Size, Date, Time, Last_Price, Price_Change, Open, High, Low, Close, Volume, Buy_Quantity, Sell_Quantity) VALUES ('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s')" % ("_" + str(data['instrument_token']), data['instrument_token'], 1, datetime.datetime.today().strftime("%Y-%m-%d"), _time, data['last_price'], data['change'], data['ohlc']['open'], data['ohlc']['high'], data['ohlc']['low'], data['ohlc']['close'], data['volume'], data['buy_quantity'], data['sell_quantity'])
        cursor.execute(sql)
        # Commit the changes in the database
        db.commit()

def on_tick(tick, ws):
    thread_name = "Thread" + str(thread_count + 1)
    try:
        _thread.start_new_thread(db_entry, (thread_name, tick))
    except Exception as e:
        print(e)
        raise

def on_connect(ws):
    # Subscribe to a list of instrument_tokens (RELIANCE and ACC here).
    ws.subscribe(instrument_token)
    # Set RELIANCE to tick in `full` mode.
    ws.set_mode(ws.MODE_FULL, instrument_token)

# Assign the callbacks.
kws.on_tick = on_tick
kws.on_connect = on_connect
kws.enable_reconnect(reconnect_interval=5, reconnect_tries=50)
# Infinite loop on the main thread. Nothing after this will run.
# You have to use the pre-defined callbacks to manage subscriptions.
kws.connect()
Thanks in advance. :)
The MySQL developer documentation is very clear on this point. Odds are, some of those queries run slower than others because they are waiting for their turn to insert data. If a connection waits too long, MySQL will simply close it. By default, MySQL's wait_timeout is eight hours (28800 s). Has the MySQL configuration been tweaked? How much hardware is allocated to MySQL?
Generally, look at all the timeout configurations. Read them and understand them. Do not simply copy and paste every performance tweak that bloggers like blogging about.
Finally, it's solved.
I was keeping the db connection open, which was causing the problem.
Now I close the db connection after each query is fired, and open it again when I want to insert something.
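That open-per-query discipline can be sketched with stdlib sqlite3 standing in for MySQLdb (same pattern, different driver; the helper name is mine):

```python
import sqlite3
from contextlib import closing

def run_query(db_path, sql, params=()):
    # Open a fresh connection per operation and always close it, so no
    # connection sits idle long enough for the server to drop it (the
    # cause of "MySQL server has gone away" with a long-lived MySQLdb
    # connection).
    with closing(sqlite3.connect(db_path)) as conn:
        with conn:  # commits on success, rolls back on error
            cur = conn.execute(sql, params)
            return cur.fetchall()
```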
You need to create an object with its own connection handling methods. I use this and works well.
class DB():
def __init__(self, **kwargs):
self.conn = MySQLdb.connect(‘host’, ‘user’, ‘pass’, ‘db’)
try:
if (self.conn):
status = "DB init success"
else:
status = "DB init failed"
self.conn.autocommit(True)
# self.conn.select_db(DB_NAME)
self.cursor = self.conn.cursor()
except Exception as e:
status = "DB init fail %s " % str(e)
def execute(self, query):
try:
if self.conn is None:
self.__init__()
else:
self.conn.ping(True)
self.cursor.execute(query)
return self.cursor.fetchall()
except Exception as e:
import traceback
traceback.print_exc()
# error ocurs,rollback
self.conn.rollback()
return False
Usage
data = DB().execute("SELECT * FROM Users")
print(data)

Can I use a config file to hold connection string parameters?

I have a script which connects to a database. I'd like to pull the server, user, password, and db out of the script and put them in a config file. I'm successfully pulling in the values from the config file. The problem I'm having is that pymssql.connect fails when I pass variables in the call. I pasted my sample code below. Is this possible?
### database connection
config = configparser.ConfigParser()
config.read('test.config')
server = config['DEFAULT']['SQLServer']
db = config['DEFAULT']['Database']
user = config['DEFAULT']['User']
password = config['DEFAULT']['Password']

### this works
# conn = pymssql.connect(host='Server1', user='Joe', password='MyPass', database='MyDB')

### this doesn't
try:
    conn = pymssql.connect(host=server, user=user, password=password, database=db)
except Exception as e:
    print(str(e))
    sys.exit()
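For reference, a test.config that matches the keys the script reads might look like this (the values are placeholders):

```ini
[DEFAULT]
SQLServer = Server1
Database = MyDB
User = Joe
Password = MyPass
```

One classic failure mode when moving values into a file is stray whitespace or quotes around the values: configparser keeps surrounding quotes literally, so Password = "MyPass" would send the quote characters to the server.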
This is how I retrieved connection properties and connected to a SQL Server database:
**App.ini file**
[CoreContext]
host=servername.database.windows.net
user=dbuser#servername
password=password
database=DeltaXCore
**Connection.py file**
appConfig = ConfigParser.ConfigParser()
appConfig.read("App.ini")
CoreConnectionString.HOST = appConfig.get("CoreContext", "host")
CoreConnectionString.USER = appConfig.get("CoreContext", "user")
CoreConnectionString.PASSWORD = appConfig.get("CoreContext", "password")
CoreConnectionString.DATABASE = appConfig.get("CoreContext", "database")
pymssql.connect(host=CoreConnectionString.HOST, user=CoreConnectionString.USER, password=CoreConnectionString.PASSWORD, database=CoreConnectionString.DATABASE)
