How to share a variable between two python scripts run separately - python

I'm an extreme noob to Python, so if there's a better way to do what I'm asking please let me know.
I have one file which works with Flask to create markers on a map, and it has an array which stores those markers. I'm starting the file through the command prompt, and opening it multiple times. Basically, how would one open a file multiple times and have the instances share a variable? (Not the same as having a subfile that shares variables with a superfile.) I'm okay with creating another file that starts the instances if needed, but I'm not sure how I'd do that.
Here is an example of what I'd like to accomplish. I have a file called, let's
say, test.py:
global number
number += 1
print(number)
I'd like it so that when I start this through command prompt (python test.py) multiple times, it'd print the following:
1
2
3
4
5
The only difference between the above and what I have is that my script is non-terminating and continuously running.

What you seem to be looking for is some form of inter-process communication. In Python, each process has its own memory space and its own variables, meaning that if I ran
number += 1
print(number)
multiple times as separate processes, each run would start from scratch; I would not get 1, 2, ..., 5 on new lines. The variable is only global within a single process, no matter how many times I start the script.
There are a few ways where you can keep consistency.
Writing To A File (named pipe)
One of your scripts can have (generator.py):
import os
import time

num = 1
try:
    os.mkfifo("temp.txt")
except FileExistsError:
    pass  # In case one of your other scripts already created the pipe
while True:
    file = open("temp.txt", "w")
    file.write(str(num))  # write() needs a string, not an int
    file.close()  # Important: if you don't close the file, the
                  # operating system may lock it and your other
                  # scripts won't have access
    num += 1
    time.sleep(1)  # adjust the number of seconds as needed
In your other scripts (consumer.py):
import time

while True:
    file = open("temp.txt", "r")
    number = int(file.read())
    file.close()
    print(number)
    time.sleep(1)  # adjust the number of seconds as needed
You would start one generator and as many consumers as you want. Note: this does have a race condition that can't really be avoided. When you write to the file, you should use a serializer like pickle or json to properly encode and decode your array object.
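For instance, here is a minimal sketch of the file-sharing idea with JSON instead of a pipe; the filename markers.json is made up for illustration. Writing to a temporary file first and then renaming it with os.replace means readers never see a half-written file, which softens the race condition:

```python
import json
import os
import tempfile

def write_array(path, array):
    # Write to a temporary file in the same directory, then atomically
    # replace the target, so a concurrent reader never sees partial data.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(array, f)
    os.replace(tmp, path)

def read_array(path):
    # Load the most recently written array back from disk.
    with open(path) as f:
        return json.load(f)
```

The generator would call write_array on each update and the consumers would call read_array in their loop. This only works for JSON-serializable data; for arbitrary objects you would swap in pickle.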
Other Ways
You can also look up how to use pipes (both named and unnamed), databases, AMQP (IMHO the best way to do it, but there is a learning curve and added dependencies), and if you are feeling bold, mmap.
Design Change
If you are willing to listen to a design change: since you are making a Flask application that holds the variable in memory, why not just add an endpoint that serves up your array, and have the other processes check that endpoint every so often?
import json  # or pickle
from flask import Flask

app = Flask(__name__)
array = [...]  # your marker objects
converted = method_to_convert_to_array_of_dicts(array)

@app.route("/array")
def hello():
    return json.dumps(converted)
You will need to convert your objects to something JSON-serializable, but then the web server can be hosted and your clients would just need something like:
import time

import requests

while True:
    result = requests.get("http://localhost:5000/array")  # default Flask dev-server port
    array = result.json()  # or json.loads(result.text)
    time.sleep(1)  # adjust the poll interval as needed

Your description is kind of confusing, but if I understand you correctly, one way of doing this would be to keep the value of the variable in a separate file.
When a script needs the value, read the value from the file and add one to it. If the file doesn't exist, use a default value of 1. Finally, rewrite the file with the new value.
However you said that this value would be shared among two python scripts, so you'd have to be careful that both scripts don't try to access the file at the same time.
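A minimal sketch of that read-increment-rewrite cycle might look like this (the filename counter.txt is just an example; as noted above, concurrent access would still need some form of locking):

```python
def next_number(path="counter.txt"):
    # Read the previous value, defaulting to 0 if the file
    # doesn't exist yet, then add one and write it back.
    try:
        with open(path) as f:
            number = int(f.read())
    except FileNotFoundError:
        number = 0
    number += 1
    with open(path, "w") as f:
        f.write(str(number))
    return number
```

Each separately started script that calls next_number() will see and continue the shared count.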

I think you could use pickle.dump(your_array, file) to serialize the data (your array) into a file. The next time the script runs, you can load the data back with pickle.load(file).
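As a sketch (the filename markers.pkl is made up):

```python
import pickle

markers = [(51.5, -0.1), (40.7, -74.0)]  # example array of map markers

# Save the array to disk before the script exits ...
with open("markers.pkl", "wb") as f:
    pickle.dump(markers, f)

# ... and restore it the next time the script runs.
with open("markers.pkl", "rb") as f:
    restored = pickle.load(f)
```

Note that pickle files are binary and Python-specific; use json instead if you want a file you can read or share with other tools.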

Related

Intercept (using Python) data being written to a file from another process

I am working on something where this could come in handy in the future.
Does anyone know of a way I can intercept data (using Python) being written to a file (via some other language/process)?
I would know the path of the file I want to intercept and I preferably want to find a solution that would work on Windows. I know watchdog can watch for file changes but my goal would be to intercept the write before it touches the file.
For example, say I have the following script running on my computer that just constantly writes to a file:
import time

filename = "testfile"
i = 1
while True:
    with open(filename, 'a') as out:
        out.write(str(i) + '\n')
    time.sleep(1)
    i += 1
Note: This is just an example. The data I want to intercept is not being written with Python. I don't know what it is written with.
In another script, I want to intercept everything being written to testfile.
I don't believe this is possible but I figured I would ask.
Using os.walk you can build a list of the files in your whole directory tree, keep checking it, and compare it against a previous snapshot of the file count; when there is a difference, you can open the new file with os.open. Note that this detects changes after they reach the disk rather than intercepting the write itself.
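A rough sketch of that polling idea, using os.listdir on a single directory for brevity (os.walk would extend it to a whole tree):

```python
import os

def snapshot(path):
    # Record the current set of filenames in the directory.
    return set(os.listdir(path))

def new_files(path, seen):
    # Return the filenames that appeared since `seen` was taken,
    # plus an updated snapshot for the next check.
    current = set(os.listdir(path))
    return current - seen, current
```

In practice you would call new_files in a loop with a short time.sleep between checks. As noted, this only sees changes after they hit the disk.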

Python using same date/time in multiple files

I'm currently generating three different xml files, and I would like to have the second and third file have the same date/time as the first file.
In the first file, I do
import datetime
time = datetime.datetime.utcnow().strftime('%y%m%d%H%M%S')
This gives me the format I would like. I've tried multiple approaches, such as storing it in a different variable and importing it into the second and third files, but it always keeps the actual current time rather than the time of the first file. I don't know if there's a solution to my problem using the datetime module, but if anyone has any ideas that would be wonderful.
Whenever you call that function, whether directly or through an import, it will run again and give a new "now".
If the same program just uses that string for 3 times there shouldn't be a problem, but if you're running 3 different scripts you will get 3 different dates!
To avoid this, I would save the first generated string to a file:
with open('.tmpdate', 'w') as f:  # 'w' mode is needed to write
    f.write(time)
And read it in the next two files:
with open('.tmpdate') as f:
    time = f.read()
And finally, just to clean up after yourself, you can delete that file after it was used for the 3rd time with os.remove('.tmpdate') (you need to import os before that, of course)

How to append a list so that the changes are persistent?

I am new to Python and relatively new to coding in general. I was just trying to write a test bit of code to see if a list could be appended to, with the changes saved so they persist when that .py is reloaded.
I'm not sure if this is possible with lists without a decent amount more work, but none of my attempts have managed to get me there.
The test code I have at the moment is:
lDUn = []
lDPw = []

def enterUn():
    usrName = input(str("Enter your desired username:\n"))
    if usrName in lDUn:
        print("Username is already in use, please try again")
        enterUn()
    else:
        print("Username created!")
        lDUn.append(usrName)

enterUn()

def enterPw():
    pW1 = input(str("Enter your desired password:\n"))
    pW2 = input(str("please re-enter your password:\n"))
    if pW1 == pW2:
        print("Password created!")
        lDPw.append(pW1)
    else:
        print("Passwords do not match, please try again")
        enterPw()

enterPw()
When lDUn and lDPw are checked for inclusion later on they register no problem, but when the program is closed and re-opened later they are gone.
I have tried writing the 'pW' and 'usrName' inputs into a .txt, but have failed to get it to search those .txts for the specific strings later on (but that is for a different question). Other than that I'm not sure what to do with my current knowledge of Python.
The values of variables (in this case, the contents of your lists) are never automatically saved when you restart the program. You would need to write some code that saves the values to a file before the program exits, and also some code that reads the values in from the file when the program starts up again.
It's up to you just how you convert the lists into bytes that you can put in a file. One common way would be to just write the strings from the lists into the file, one per line, and then read them again. You can do this using the methods of file objects.
Another option is to use the pickle module. This may save you a bit of effort, since you just give it a Python object and it handles the conversion into bytes for you, but you won't be able to view and edit the resulting file. (Well, you can, but it will look like gibberish since it's a binary file.)
Here's basically what you do:
You set a file or files where you will store those lists.
At the beginning of your code you load the data from files into lists.
Each time you modify one of the lists you also modify the related file (the most basic implementation would be to recreate the file).
You work with in-memory data in other cases.
That will work fine as long as you deal with small files and small "traffic". If either of those grows, you will run into serious performance and/or consistency issues. To properly solve those problems you should use a proper database, starting with sqlite (which doesn't require a separate server).
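The steps above could be sketched like this, using JSON and a made-up filename users.json:

```python
import json
import os

DATA_FILE = "users.json"  # hypothetical storage file

def load_users(path=DATA_FILE):
    # Step 2: load the list from disk at startup (empty on first run).
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def save_users(users, path=DATA_FILE):
    # Step 3: the most basic implementation - rewrite the whole file
    # whenever the list changes.
    with open(path, "w") as f:
        json.dump(users, f)

def add_user(users, name, path=DATA_FILE):
    users.append(name)   # step 4: work with the in-memory list
    save_users(users, path)
```

Your enterUn function would call add_user instead of lDUn.append, and the script would start with lDUn = load_users().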

Change variable permanently

I'm writing a small program that helps you keep track of what page you're on in your books. I'm not a fan of bookmarks, so I thought "What if I could create a program that would take user input, and then display the number or string of text that they wrote, which in this case would be the page number they're on, and allow them to change it whenever they need to?" It would only take a few lines of code, but the problem is, how can I get it to display the same number the next time I open the program? The variables would reset, would they not? Is there a way to permanently change a variable in this way?
You can store the values in a file, and then load them at start up.
The code would look somewhat like this
variable1 = "fi"  # start the variables; they can come from the main program instead
variable2 = 2
datatowrite = str(variable1) + "\n" + str(variable2)  # converts the variables to strings, separated by a newline
f = open("file.txt", 'w')
f.write(datatowrite)  # writes the packed variables to the file
f.close()  # closes the file - important to do!
The code to read the data would be:
f = open("file.txt", 'r')  # opens the file
data = f.read()  # reads the file into data
f.close()
if not len(data) > 4:  # checks if anything is in the file; if not, creates the variables (doesn't have to be four)
    variable1 = "fi"
    variable2 = 2
else:
    data = data.split("\n")  # splits the file contents on the newline
    variable1 = data[0]  # the first part of the split
    variable2 = int(data[1])  # converts the second part to the type needed
Bear in mind that with this method the variable file is stored in plaintext alongside the application. Any user could edit the variables and change the program's behaviour.
You need to store it on disk. Unless you want to be really fancy, you can just use something like CSV, JSON, or YAML to make structured data easier.
Also check out the python pickle module.
Variables have several lifetimes:
If they're inside of a block of code, their value only exists for that block of code. This covers functions, loops, and conditionals.
If they're inside of an object, their value only exists for the life of that object.
If the object is dereferenced, or you leave your block of code early, the value of the variable is lost.
If you want to maintain the value of something in particular, you have to persist it. Persistence allows you to write a variable out to disk (and yes, databases are technically out-to-disk), and retrieve it at a later time - after the lifetime of the variable has expired, or when the program is restarted.
You've got several options for how to persist the reader's page location: a heavy-handed approach would be to use SQLite; a slightly less heavy-handed approach would be to pickle the object, or simply write to a text file.
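For the page-number case, a minimal persistence sketch might look like this (bookmark.json is a made-up filename):

```python
import json

BOOKMARK_FILE = "bookmark.json"  # hypothetical file name

def load_page(path=BOOKMARK_FILE):
    # Return the saved page number, or 1 if nothing was saved yet.
    try:
        with open(path) as f:
            return json.load(f)["page"]
    except FileNotFoundError:
        return 1

def save_page(page, path=BOOKMARK_FILE):
    # Persist the page number so it survives a restart.
    with open(path, "w") as f:
        json.dump({"page": page}, f)
```

The program would call load_page() at startup to show the last saved page, and save_page() whenever the user updates it.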

Python, run commands in specific order

I'm writing a script that gets the most recently modified file from a unix directory.
I'm certain it works, but I have to create a unittest to prove it.
The problem is the setUp function. I want to be able to predict the order the files are created in.
self.filenames = ["test1.txt", "test2.txt", "test3.txt", "filename.txt", "test4"]
newest = ''
for fn in self.filenames:
    if pattern.match(fn): newest = fn
    with open(fn, "w") as f: f.write("some text")
The pattern is "test.*.txt" so it just matches the first three in the list. In multiple tests, newest sometimes returns 'test3.txt' and sometimes 'test1.txt'.
Use os.utime to explicitly set modified time on the files that you have created. That way your test will run faster.
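A hypothetical setUp helper along those lines, stamping each file with an explicitly increasing modification time:

```python
import os

def create_files_in_order(filenames, start=1_000_000_000):
    # Create each file, then set an explicitly increasing modification
    # time so "newest" is deterministic regardless of the filesystem's
    # timestamp resolution, and without any sleeping.
    for i, fn in enumerate(filenames):
        with open(fn, "w") as f:
            f.write("some text")
        os.utime(fn, (start + i, start + i))  # (atime, mtime)
```

The test can then assert that the last matching filename in the list is the one the code under test reports as newest.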
I doubt that the filesystem you are using supports fractional seconds on file create time.
I suggest you insert a call to time.sleep(1) in your loop so that the filesystem actually has a different timestamp on each created file.
It could be due to syncing. Just because you call write() on files in a certain order, it doesn't mean the data will be updated by the OS in that order.
Try calling f.flush() followed by os.fsync() on your file object before moving on to the next file. Giving some time between calls (using sleep()) might also help.
