How to read a JSON static file in web2py - python

I'm putting together my first web2py app, and I've run into a bit of a problem. I have some data stored in static/mydata.json that I'd like to access in a couple places (specifically, in one of my models, as well as a module).
If this were a normal python script, obviously I'd do something like:
import json
with open('/path/to/mydata.json') as f:
    mydata = json.load(f)
In the context of web2py, I can get the URL of the file from URL('static', 'mydata.json'), but I'm not sure how to load mydata - can I just do mydata = json.load(URL('static', 'mydata.json'))? Or is there another step required to open the file?

It's advisable to use os.path.join with request.folder to build paths to files.
import os
filepath = os.path.join(request.folder,'static','mydata.json')
From that point on, you should be able to use that filepath to open the json file as per usual.

import json
import os
filepath = os.path.join(request.folder, 'static', 'mydata.json')
# json.load() expects a file object, not a path string
with open(filepath) as f:
    mydata = json.load(f)
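One note for the module case mentioned in the question: inside a web2py module (as opposed to a model or controller), request is not in the global environment, so the usual pattern is to go through gluon's current object. A minimal sketch, assuming the module is imported from within a running web2py application so current.request is populated:
# in modules/mydata_loader.py (hypothetical module name)
import json
import os
from gluon import current  # exposes the current request inside modules

def load_mydata():
    # current.request.folder points at the running application's folder
    filepath = os.path.join(current.request.folder, 'static', 'mydata.json')
    with open(filepath) as f:
        return json.load(f)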

Related

How to have persistent storage for a PYPI package

I have a PyPI package called collectiondbf which connects to an API with a user-entered API key. It is used in a directory to download files like so:
python -m collectiondbf [myargumentshere..]
I know this should be basic knowledge, but I'm really stuck on the question:
How can I save the keys users give me in a meaningful way so that they do not have to enter them every time?
I would like to use the following solution using a config.json file, but how would I know the location of this file if my package will be moving directories?
Here is how I would like to use it, but obviously it won't work since the working directory will change:
import json
if user_inputed_keys:
    with open('config.json', 'w') as f:
        json.dump({'api_key': api_key}, f)
Most common operating systems have the concept of an application directory that belongs to every user who has an account on the system. This directory allows said user to create and read, for example, config files and settings.
So, all you need to do is make a list of all distros that you want to support, find out where they like to put user application files, and have a big old if..elif..else chain to open the appropriate directory.
Or use appdirs, which does exactly that already:
from pathlib import Path
import json
import appdirs
CONFIG_DIR = Path(appdirs.user_config_dir(appname='collectiondbf'))  # platform-appropriate per-user config dir
CONFIG_DIR.mkdir(parents=True, exist_ok=True)
config = CONFIG_DIR / 'config.json'
if not config.exists():
    with config.open('w') as f:
        json.dump(get_key_from_user(), f)  # json.dump (not json.dumps) when writing to a file object
with config.open('r') as f:
    keys = json.load(f)  # now 'keys' can safely be imported from this module

How to iterate over JSON files in a directory and upload to mongodb

So I have a folder with about 500 JSON files. I need to upload all of them to a local MongoDB database. I tried using MongoDB Compass, but Compass can only upload one file at a time. In Python I tried to write some simple code to iterate through the folder and upload them one by one, but I ran into some problems. First of all, the JSON files are not comma-separated; they are line-separated, with one JSON object per line. So the files look like:
{ some JSON object }
{ some JSON object }
...
I wrote the following code to iterate through the folder and upload it:
import os
import pymongo
import json
import pandas as pd
import numpy as np
myclient = pymongo.MongoClient("mongodb://localhost:27017/")
mydb = myclient['Test']
mycol = mydb['data']
directory = os.fsencode("C:/Users/PB/Desktop/test/")
for file in os.listdir(directory):
    filename = os.fsdecode(file)
    if filename.endswith(".json"):
        mycol.insert_many(filename)
The code basically goes through a folder, checks if it's a .json file, then inserts it into the database. That is what should happen. However, I get this error:
TypeError: document must be an instance of dict, bson.son.SON,
bson.raw_bson.RawBSONDocument, or a type that inherits from
collections.MutableMapping
I cannot seem to upload them through Python. I tried multiple variations of the code, but for some reason Python does not accept the JSON files.
The problem with these files seems to be that Python only allows for comma-separated JSON files.
How could I fix this to upload all the files?
You're inserting the names of the files into Mongo, not the contents of the files.
Assuming you have multiple JSON files in a directory, where each file contains one JSON object per line...
You need to go through all the files, filter them, open them, read them line by line, parse each line into a dict, and then insert. Something like below:
import json
import os
os.chdir(directory)
for file in os.listdir(directory):
    if file.endswith(".json"):
        with open(file) as f:
            for line in f:
                mongo_obj = json.loads(line)   # parse one JSON object per line
                mycol.insert_one(mongo_obj)    # insert() is deprecated in newer pymongo; use insert_one()
I did a chdir first to avoid having to pass the whole path to open().
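If the files are large, one variation is to parse each file into a list first and use insert_many for a single batched round trip per file. A rough sketch under the same assumptions as above (a directory of .json files with one object per line, mycol already connected), except that here directory is a plain string path rather than the os.fsencode'd value from the question:
import json
import os
for file in os.listdir(directory):
    if file.endswith(".json"):
        with open(os.path.join(directory, file)) as f:
            docs = [json.loads(line) for line in f if line.strip()]  # skip blank lines
        if docs:
            mycol.insert_many(docs)  # one batched insert per file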

Python - Flask: Importing a .json file from a specific folder and returning on GET/POST request

I am getting failures importing the JSON file from the folder. I have tried the convention:
from web_scraper import data
and also
from web_scraper.data import *
and neither worked.
Also, how do I return the JSON file fetched? Is my method
return jsonify(bank_list)
the correct one?
Here is the screenshot from my PC.
Your import is wrong.
First of all, you cannot import a JSON file in Python; only Python modules can be imported.
If it were a Python file, you'd have to use from ..web_scraper import data, as it's in the parent directory (assuming you didn't modify the PYTHONPATH).
To load JSON, you can use the built-in json module.
import json
import os
with open(os.path.join(os.path.dirname(__file__), "web_scraper", "data.json")) as file:
    data = json.load(file)
# data is a dictionary that you can use in jsonify just fine
This would load the file's content and parse the JSON for later use, e.g. in jsonify. It's a normal dictionary.
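To answer the second part of the question: yes, returning jsonify(...) from a view function is the usual approach. A minimal sketch (the route name and the bank_list variable are placeholders, not taken from the original project):
from flask import Flask, jsonify
import json
import os

app = Flask(__name__)

with open(os.path.join(os.path.dirname(__file__), "web_scraper", "data.json")) as f:
    bank_list = json.load(f)

@app.route("/banks", methods=["GET"])
def banks():
    # jsonify serializes the dict/list and sets the application/json content type
    return jsonify(bank_list)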

Automating through ZNC logs?

I am currently an oper for multiple IRC servers, and I am trying to find a reliable way to log our channels due to a high amount of abuse. For the time being I have been using pierc, but I need all the functionality of ZNC.
My question is, using python what would be a simple way to loop through the ZNC log directory to parse the logs into a mysql database. The directory looks like the following:
username_ircnetwork_channel_20160209.log username2_ircnetwork2_channel_20160209.log
I know I can iterate through each file with something to this effect:
fileOpen = open("~/.znc/moddata/log/")
fileOpen = fileOpen.read().splitlines()
for line in fileOpen:
    # do something with each line
However I am at a loss at a clean way to cycle through the log directory to check each file. Is there a decent way in python to accomplish this?
You could use Python's os module with listdir and loop through the files:
import os
path = '/path/to/logs/'
listing = os.listdir(path)
for infile in listing:
    with open(os.path.join(path, infile), 'rb') as f:  # os.path.join is safer than string concatenation
        content = f.read()
        # parse however you need
See the os module docs: https://docs.python.org/2/library/os.html
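Since the filenames follow the username_network_channel_YYYYMMDD.log pattern shown in the question, they can also be split apart while looping so each line can be tagged before it goes into MySQL. A rough sketch, assuming that underscore-delimited naming holds and leaving the actual MySQL insert to whichever database library you use:
import glob
import os

log_dir = os.path.expanduser("~/.znc/moddata/log/")
for filepath in glob.glob(os.path.join(log_dir, "*.log")):
    name = os.path.splitext(os.path.basename(filepath))[0]
    # e.g. "username_ircnetwork_channel_20160209"
    date = name.rsplit("_", 1)[1]                              # trailing YYYYMMDD
    user, network, channel = name.rsplit("_", 1)[0].split("_", 2)
    with open(filepath) as f:
        for line in f:
            # insert (user, network, channel, date, line) into MySQL here
            pass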

How to create a new shapefile name for each scheduled task using a Python script

I just created a simple python script that goes through a folder with polyline shapefiles and merges them. Through the Windows 8 Task Scheduler I have scheduled the script to run when I want.
All I would like to do now is modify my script so I can slightly change the name of each shapefile output. For example, the output name for Week 1 would be MergedTracks_1, for Week 2 it would be MergedTracks_2, for Week 3 MergedTracks_3, etc.
Does anybody have an idea on how to do this with the current script I have? I am running ArcGIS 10.2. I would appreciate any insight if possible. Below is the script I am currently using in PythonWin. Thanks so much in advance!!!
import arcpy, os
outPut = r"C:\Users\student2\Desktop\WeedTracksMergeScript\Output" # Output
arcpy.env.workspace = r"C:\Users\student2\Desktop\WeedTracksMergeScript"
shplist = arcpy.ListFeatureClasses('*.shp')
print shplist # prints polyline .shp list in current arcpy.env.workspace
arcpy.Merge_management(shplist, os.path.join(outPut, "MergedTracks_1.shp"))
print "Done"
You can use pickle to keep track of the last filename that was written, and then use that as part of your logic to determine what the next filename format should be.
Check out the tutorial here:
https://wiki.python.org/moin/UsingPickle
The main idea, in pseudocode:
if pickle file exists:
    load the last filename from the pickle file
    use logic to increment it into the new filename
else:
    use the default file name
do the merge work, writing to the new filename
save the new filename back to the pickle file
Here is a quick example on how to use pickle:
import pickle
from os.path import exists

pickle_file = "current_file_name.pickle"
if exists(pickle_file):
    with open(pickle_file, "rb") as pkr:
        current_filename = pickle.load(pkr)
else:
    current_filename = "current_file_1"
with open(pickle_file, "wb") as pkw:
    pickle.dump(current_filename, pkw)
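To connect that to the MergedTracks_N naming from the question, one way to do the "increment" step is to store just the week number in the pickle and bump it on every run. A sketch only, with the merge call left commented out since arcpy, shplist, and outPut come from the original script:
import os
import pickle
from os.path import exists

pickle_file = "week_number.pickle"
week = 1
if exists(pickle_file):
    with open(pickle_file, "rb") as pkr:
        week = pickle.load(pkr) + 1   # bump the stored week number
out_name = "MergedTracks_{}.shp".format(week)
# arcpy.Merge_management(shplist, os.path.join(outPut, out_name))
with open(pickle_file, "wb") as pkw:
    pickle.dump(week, pkw)            # remember this run's week number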
I think datetime stamps (strings) would be a lot less complicated and arguably more useful.
Many formatting options are documented at http://strftime.org
import datetime
import os
dd = datetime.datetime.now().strftime("%Y%m%d")
shpfile = os.path.join(outPut, "MergedTracks_{}.shp".format(dd))  # outPut is the output folder from the question's script
arcpy.Merge_management(shplist, shpfile)
